What Is Azure Pipelines?

Azure Pipelines is a cloud service that can be used to automatically build, test, and deploy code projects. It supports continuous integration (CI) and continuous delivery (CD) to constantly test, build, and deliver code to any target. Azure Pipelines works with many languages and project types and combines CI and CD capabilities.


Contents

Azure Pipelines
What is Azure Pipelines?
CI, CD, YAML & Classic
Get started
Sign up for Azure Pipelines
Create your first pipeline
Create your first pipeline from the CLI
Clone or import a pipeline
Customize your pipeline
Multi-stage pipelines user experience
Pipeline basics
Key concepts
Repositories
Supported repositories
Azure Repos Git
GitHub
GitHub Enterprise Server
Bitbucket Cloud
Bitbucket Server
TFVC
Subversion
Multiple repositories
Triggers
Types of triggers
Scheduled triggers
Pipeline completion triggers
Release triggers (classic)
Tasks & templates
Task types & usage
Task groups
Template types & usage
Add a custom task extension
Jobs & stages
Specify jobs in your pipeline
Define container jobs
Add stages, dependencies & conditions
Deployment jobs
Author a custom pipeline decorator
Pipeline decorator context
Specify conditions
Specify demands
Library, variables & secure files
Library & shared resources
Define variables
Use predefined variables
Use runtime parameters
Use classic release and artifacts variables
Use secrets from Azure Key Vault
Approvals, checks, & gates
Release approval and gates overview
Define approvals & checks
Define a gate
Use approvals and gates
Use approvals for release deployment control
Pipeline runs
Pipeline run sequence
Job access tokens
Pipeline reports
View pipeline reports
Add pipeline widgets to a dashboard
Test Results Trend (Advanced)
Ecosystems & integration
Ecosystem support
.NET Core
.NET Framework
JavaScript and Node.js apps
Python
Python to web app
Anaconda
C/C++
Java
Java apps
Java to web App
Java to web app with MicroProfile
Java to Azure Function
Android
Go
PHP
PHP to web app
Ruby
Xamarin
Xcode
GitHub Actions
Build apps
Build multiple branches
Build on multiple platforms
Use service containers
Cross-platform scripts
Run a PowerShell script
Run Git commands
Reduce build time using caching
Configure build run numbers
Classic Build options
Run pipeline tests
About pipeline tests
Set up parallel testing (VSTest)
Set up parallel testing (Test Runner)
Enable Test Impact Analysis (TIA)
Enable flaky test management
Run UI tests
Run UI tests with Selenium
Requirements traceability
Review test results
Review test results
Review test Analytics
Review code coverage
Review pull request code coverage
Deploy apps
Deploy apps to environments
Define and target environments
Kubernetes resource
Virtual machine resource
Deploy apps using VMs
Linux virtual machines
Deploy apps to Azure
Deploy a Linux web app - ARM template
Deploy a data pipeline with Azure
Data pipeline overview
Build a data pipeline
Azure Government Cloud
Azure Resource Manager
Azure SQL database
Azure App Service
Azure Stack
Function App on Container
Function App on Linux
Function App on Windows
Web App on Linux
Web App on Linux Container
Web App on Windows
Deploy apps (Classic)
Release pipelines
Deploy from multiple branches
Deploy pull request builds
Classic CD pipelines
Pipelines with PowerShell DSC
Stage templates in Azure Pipelines
Deploy apps to Azure (Classic)
Azure Web App (Classic)
Azure Web App for Containers (Classic)
Azure Kubernetes Service (Classic)
Azure IoT Edge (Classic)
Azure Cosmos DB CI/CD (Classic)
Azure Policy Compliance (Classic)
Deploy apps to VMs (Classic)
Linux VMs (Classic)
Windows VMs (Classic)
IIS servers (WinRM) (Classic)
Extend IIS Server deployments (Classic)
SCVMM (Classic)
VMware (Classic)
Deploy apps using containers
Build images
Push images
Content trust
Kubernetes
Deploy manifests
Bake manifests
Multi-cloud deployments
Deployment strategies
Azure Container Registry
Azure Kubernetes Service
Kubernetes canary deployments
Azure Machine Learning
Consume & publish artifacts
About artifacts
Publish & download artifacts
Build artifacts
Releases in Azure Pipelines
Release artifacts and artifact sources
Maven
npm
NuGet
Python
Symbols
Universal
Restore NuGet packages
Restore & publish NuGet packages (Jenkins)
Create & use resources
About resources
Add resources to a pipeline
Add & use variable groups
Secure files
Manage service connections
Manage agents & agent pools
About agents & agent pools
Add & manage agent pools
Microsoft-hosted agents
Self-hosted Linux agents
Self-hosted macOS agents
Self-hosted Windows agents
Windows agents (TFS 2015)
Scale set agents
Run an agent behind a web proxy
Run an agent in Docker
Use a self-signed certificate
Create & use deployment groups
Provision deployment groups
Provision agents for deployment groups
Add a deployment group job to a release pipeline
Deploying to Azure VMs using deployment groups in Azure Pipelines
Configure security & settings
Set retention policies
Configure and pay for parallel jobs
Pipeline permissions and security roles
Add users to contribute to pipelines
Grant version control permissions to the build service
Integrate with 3rd party software
Microsoft Teams
Slack
Integrate with ServiceNow (Classic)
Integrate with Jenkins (Classic)
Automate infrastructure deployment with Terraform
Migrate
Migrate from Jenkins
Migrate from Travis
Migrate from XAML builds
Migrate from Lab Management
Pipeline tasks
Task index
Build tasks
.NET Core CLI
Android build
Android signing
Ant
Azure IoT Edge
CMake
Docker
Docker Compose
Go
Gradle
Grunt
gulp
Index Sources & Publish Symbols
Jenkins Queue Job
Maven
MSBuild
Visual Studio Build
Xamarin.Android
Xamarin.iOS
Xcode
Xcode Package iOS
Utility tasks
Archive files
Azure Network Load Balancer
Bash
Batch script
Command line
Copy and Publish Build Artifacts
Copy Files
cURL Upload Files
Decrypt File
Delay
Delete Files
Download Build Artifacts
Download Fileshare Artifacts
Download GitHub Release
Download Package
Download Pipeline Artifact
Download Secure File
Extract Files
File Transform
FTP Upload
GitHub Release
Install Apple Certificate
Install Apple Provisioning Profile
Install SSH Key
Invoke Azure Function
Invoke REST API
Jenkins Download Artifacts
Manual Intervention
PowerShell
Publish Build Artifacts
Publish Pipeline Artifact
Publish to Azure Service Bus
Python Script
Query Azure Monitor Alerts
Query Work Items
Service Fabric PowerShell
Shell script
Update Service Fabric Manifests
Test tasks
App Center Test
Cloud-based Apache JMeter Load Test
Cloud-based Load Test
Cloud-based Web Performance Test
Container Structure Test Task
Publish Code Coverage Results
Publish Test Results
Run Functional Tests
Visual Studio Test
Visual Studio Test Agent Deployment
Package tasks
CocoaPods
Conda Environment
Maven Authenticate
npm
npm Authenticate
NuGet
NuGet Authenticate
PyPI Publisher
Python Pip Authenticate
Python Twine Upload Authenticate
Universal Packages
Xamarin Component Restore
Previous versions
NuGet Installer 0.*
NuGet Restore 1.*
NuGet Packager 0.*
NuGet Publisher 0.*
NuGet Command 0.*
Pip Authenticate 0.*
Twine Authenticate 0.*
Deploy tasks
App Center Distribute
Azure App Service Deploy
Azure App Service Manage
Azure App Service Settings
Azure CLI
Azure Cloud PowerShell Deployment
Azure File Copy
Azure Function App
Azure Function App for Container
Azure Key Vault
Azure Monitor Alerts
Azure MySQL Deployment
Azure Policy
Azure PowerShell
Azure Resource Group Deployment
Azure SQL Database Deployment
Azure Web App
Azure virtual machine scale set deployment
Azure Web App for Container
Build Machine Image (Packer)
Chef
Chef Knife
Copy Files Over SSH
Docker
Docker Compose
Helm Deploy
IIS Web App Deploy (Machine Group)
IIS Web App Manage (Machine Group)
Kubectl task
Kubernetes manifest task
PowerShell on Target Machines
Service Fabric App Deployment
Service Fabric Compose Deploy
SSH
Windows Machine File Copy
WinRM SQL Server DB Deployment
MySQL Database Deployment On Machine Group
Tool tasks
Docker Installer
Go Tool Installer
Helm Installer
Java Tool Installer
Kubectl Installer
Node.js Tool Installer
NuGet Tool Installer
Use .NET Core
Use Python Version
Use Ruby Version
Visual Studio Test Platform Installer
Troubleshooting
Troubleshoot pipeline runs
Review logs
Debug deployment issues
Troubleshoot Azure connections
Reference
YAML schema
Expressions
File matching patterns
File and variable transform
Logging commands
Artifact policy checks
Case studies & best practices
Pipelines security walkthrough
Overview
Approach to securing YAML pipelines
Repository protection
Pipeline resources
Project structure
Security through templates
Variables and parameters
Shared infrastructure
Other security considerations
Add continuous security validation
Build & deploy automation
Progressively expose releases using deployment rings
Progressively expose features in production
Developer resources
REST API reference
Azure DevOps CLI
Microsoft Learn
Create a build pipeline
Implement a code workflow in your build pipeline by using Git and GitHub
Run quality tests in your build pipeline
Manage build dependencies with Azure Artifacts
Automated testing
What is Azure Pipelines?
2/26/2020 • 2 minutes to read

Azure DevOps Services | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2017
Azure Pipelines is a cloud service that you can use to automatically build and test your code project and make it
available to other users. It works with just about any language or project type.
Azure Pipelines combines continuous integration (CI) and continuous delivery (CD) to constantly and consistently
test and build your code and ship it to any target.

Does Azure Pipelines work with my language and tools?


Languages
You can use many languages with Azure Pipelines, such as Python, Java, JavaScript, PHP, Ruby, C#, C++, and Go.
Version control systems
Before you use continuous integration and continuous delivery practices for your applications, you must have your
source code in a version control system. Azure Pipelines integrates with GitHub, GitHub Enterprise, Azure Repos Git
& TFVC, Bitbucket Cloud, and Subversion.
Application types
You can use Azure Pipelines with most application types, such as Java, JavaScript, Node.js, Python, .NET, C++, Go, PHP, and Xcode.
Deployment targets
Use Azure Pipelines to deploy your code to multiple targets. Targets include container registries, virtual machines,
Azure services, or any on-premises or cloud target.
Package formats
To produce packages that can be consumed by others, you can publish NuGet, npm, or Maven packages to the
built-in package management repository in Azure Pipelines. You also can use any other package management
repository of your choice.

What do I need to use Azure Pipelines?


To use Azure Pipelines, you need:
An organization in Azure DevOps.
To have your source code stored in a version control system.
Pricing
If you use public projects, Azure Pipelines is free. To learn more, see What is a public project? If you use private
projects, you can run up to 1,800 minutes (30 hours) of pipeline jobs for free every month. Learn more about how
the pricing works based on parallel jobs.

Why should I use Azure Pipelines?


Implementing CI and CD pipelines helps to ensure consistent and quality code that's readily available to users. And,
Azure Pipelines provides a quick, easy, and safe way to automate building your projects and making them available
to users.
Use Azure Pipelines because it supports the following scenarios:
Works with any language or platform
Deploys to different types of targets at the same time
Integrates with Azure deployments
Builds on Windows, Linux, or Mac machines
Integrates with GitHub
Works with open-source projects.

Try this next


Get started with Azure Pipelines guide
Use Azure Pipelines
11/2/2020 • 6 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Azure Pipelines supports continuous integration (CI) and continuous delivery (CD) to constantly and consistently
test and build your code and ship it to any target. You accomplish this by defining a pipeline. You define pipelines
using the YAML syntax or through the user interface (Classic).
Azure Pipelines supports continuous integration (CI) and continuous delivery (CD) to constantly and consistently
test and build your code and ship it to any target. You accomplish this by defining a pipeline using the user
interface, also referred to as Classic.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Automate tests, builds, and delivery


Continuous integration automates tests and builds for your project. CI helps to catch bugs or issues early in the
development cycle, when they're easier and faster to fix. Items known as artifacts are produced from CI systems.
They're used by the continuous delivery release pipelines to drive automatic deployments.
Continuous delivery automatically deploys and tests code in multiple stages to help drive quality. Continuous
integration systems produce deployable artifacts, which includes infrastructure and apps. Automated release
pipelines consume these artifacts to release new versions and fixes to the target of your choice.

Continuous integration (CI):
- Increase code coverage
- Build faster by splitting test and build runs
- Automatically ensure you don't ship broken code
- Run tests continually

Continuous delivery (CD):
- Automatically deploy code to production
- Ensure deployment targets have latest code
- Use tested code from the CI process

Define pipelines using YAML syntax


You define your pipeline in a YAML file called azure-pipelines.yml with the rest of your app.

The pipeline is versioned with your code. It follows the same branching structure. You get validation of your
changes through code reviews in pull requests and branch build policies.
Every branch you use can modify the build policy by modifying the azure-pipelines.yml file.
A change to the build process might cause a break or result in an unexpected outcome. Because the change is in
version control with the rest of your codebase, you can more easily identify the issue.
Follow these basic steps:
1. Configure Azure Pipelines to use your Git repo.
2. Edit your azure-pipelines.yml file to define your build.
3. Push your code to your version control repository. This action kicks off the default trigger to build and deploy
and then monitor the results.
Your code is now updated, built, tested, and packaged. It can be deployed to any target.
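For orientation, a minimal azure-pipelines.yml might look like the following sketch (the branch name, agent image, and commands are illustrative assumptions, not values this article prescribes):

# Minimal illustrative pipeline (assumed example; adjust for your project)
trigger:
- master                           # run CI for commits to this branch

pool:
  vmImage: 'ubuntu-latest'         # Microsoft-hosted agent image

steps:
- script: echo "Building..."       # replace with your real build command
  displayName: Build
- script: echo "Running tests..."  # replace with your real test command
  displayName: Test
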
YAML pipelines aren't available in TFS 2018 and earlier versions.

Define pipelines using the Classic interface


Create and configure pipelines in the Azure DevOps web portal with the Classic user interface editor. You define a
build pipeline to build and test your code, and then to publish artifacts. You also define a release pipeline to
consume and deploy those artifacts to deployment targets.

Follow these basic steps:


1. Configure Azure Pipelines to use your Git repo.
2. Use the Azure Pipelines classic editor to create and configure your build and release pipelines.
3. Push your code to your version control repository. This action triggers your pipeline and runs tasks such as
building or testing code.
The build creates an artifact that's used by the rest of your pipeline to run tasks such as deploying to staging or
production.
Your code is now updated, built, tested, and packaged. It can be deployed to any target.

Feature availability
Certain pipeline features are only available when using YAML or when defining build or release pipelines with the
Classic interface. The following table indicates which features are supported and for which tasks and methods.

FEATURE                  YAML   CLASSIC BUILD   CLASSIC RELEASE   NOTES
Agents                   Yes    Yes             Yes               Specifies a required resource on which the pipeline runs.
Approvals                Yes    No              Yes               Defines a set of validations required prior to completing a deployment stage.
Artifacts                Yes    Yes             Yes               Supports publishing or consuming different package types.
Caching                  Yes    Yes             No                Reduces build time by allowing outputs or downloaded dependencies from one run to be reused in later runs. In Preview, available with Azure Pipelines only.
Conditions               Yes    Yes             Yes               Specifies conditions to be met prior to running a job.
Container jobs           Yes    No              No                Specifies jobs to run in a container.
Demands                  Yes    Yes             Yes               Ensures pipeline requirements are met before running a pipeline stage. Requires self-hosted agents.
Dependencies             Yes    Yes             Yes               Specifies a requirement that must be met in order to run the next job or stage.
Deployment groups        Yes    No              Yes               Defines a logical set of deployment target machines.
Deployment group jobs    No     No              Yes               Specifies a job to release to a deployment group.
Deployment jobs          Yes    No              No                Defines the deployment steps.
Environment              Yes    No              No                Represents a collection of resources targeted for deployment. Available with Azure Pipelines only.
Gates                    No     No              Yes               Supports automatic collection and evaluation of external health signals prior to completing a release stage. Available with Classic Release only.
Jobs                     Yes    Yes             Yes               Defines the execution sequence of a set of steps.
Service connections      Yes    Yes             Yes               Enables a connection to a remote service that is required to execute tasks in a job.
Service containers       Yes    No              No                Enables you to manage the lifecycle of a containerized service.
Stages                   Yes    No              Yes               Organizes jobs within a pipeline.
Task groups              No     Yes             Yes               Encapsulates a sequence of tasks into a single reusable task. If using YAML, see templates.
Tasks                    Yes    Yes             Yes               Defines the building blocks that make up a pipeline.
Templates                Yes    No              No                Defines reusable content, logic, and parameters.
Triggers                 Yes    Yes             Yes               Defines the event that causes a pipeline to run.
Variables                Yes    Yes             Yes               Represents a value to be replaced by data to pass to the pipeline.
Variable groups          Yes    Yes             Yes               Use to store values that you want to control and make available across multiple pipelines.

TFS 2015 through TFS 2018 supports the Classic interface only. The following table indicates which pipeline
features are available when defining build or release pipelines.
FEATURE                  CLASSIC BUILD   CLASSIC RELEASE   NOTES
Agents                   Yes             Yes               Specifies a required resource on which the pipeline runs.
Approvals                No              Yes               Defines a set of validations required prior to completing a deployment stage.
Artifacts                Yes             Yes               Supports publishing or consuming different package types.
Conditions               Yes             Yes               Specifies conditions to be met prior to running a job.
Demands                  Yes             Yes               Ensures pipeline requirements are met before running a pipeline stage. Requires self-hosted agents.
Dependencies             Yes             Yes               Specifies a requirement that must be met in order to run the next job or stage.
Deployment groups        No              Yes               Defines a logical set of deployment target machines.
Deployment group jobs    No              Yes               Specifies a job to release to a deployment group.
Jobs                     Yes             Yes               Defines the execution sequence of a set of steps.
Service connections      Yes             Yes               Enables a connection to a remote service that is required to execute tasks in a job.
Stages                   No              Yes               Organizes jobs within a pipeline.
Task groups              Yes             Yes               Encapsulates a sequence of tasks into a single reusable task. If using YAML, see templates.
Tasks                    Yes             Yes               Defines the building blocks that make up a pipeline.
Triggers                 Yes             Yes               Defines the event that causes a pipeline to run.
Variables                Yes             Yes               Represents a value to be replaced by data to pass to the pipeline.
Variable groups          Yes             Yes               Use to store values that you want to control and make available across multiple pipelines.

Try this next


Create your first pipeline

Related articles
Key concepts for new Azure Pipelines users
Sign up for Azure Pipelines
11/2/2020 • 4 minutes to read

Azure Pipelines
Sign up for an Azure DevOps organization and Azure Pipelines to begin managing CI/CD to deploy your code
with high-performance pipelines.
For more information on Azure Pipelines, see What is Azure Pipelines.

Sign up with a personal Microsoft account


If you have a Microsoft account, follow these steps to sign up for Azure Pipelines.
1. Open Azure Pipelines and choose Start free.

2. Enter your email address, phone number, or Skype ID for your Microsoft account. If you're a Visual Studio
subscriber and you get Azure DevOps as a benefit, use the Microsoft account associated with your
subscription. Select Next .
3. Enter your password and select Sign in .

4. To get started with Azure Pipelines, select Continue .


An organization is created based on the account you used to sign in. Use the following URL to sign in to
your organization at any time:
https://dev.azure.com/{yourorganization}

Your next step is to create a project.

Sign up with a GitHub account


If you have a GitHub account, follow these steps to sign up for Azure Pipelines.

IMPORTANT
If your GitHub email address is associated with an Azure AD-backed organization in Azure DevOps, you can't sign in with
your GitHub account, rather you must sign in with your Azure AD account.

1. Choose Start free with GitHub. If you're already part of an Azure DevOps organization, choose Start free.
2. Enter your GitHub account credentials, and then select Sign in .

3. Select Authorize Microsoft-corp .


4. Choose Continue .

An organization is created based on the account you used to sign in. Use the following URL to sign in to
your organization at any time:
https://dev.azure.com/{yourorganization}

For more information about GitHub authentication, see FAQs.


Your next step is to create a project.

Create a project
If you signed up for Azure DevOps with an existing MSA or GitHub identity, you're automatically prompted to
create a project. Create either a public or private project. To learn more about public projects, see What is a public
project?.
1. Enter a name for your project, select the visibility, and optionally provide a description. Then choose
Create project .

Special characters aren't allowed in the project name (such as / : \ ~ & % ; @ ' " ? < > | # $ * } { , + = [ ]).
The project name also can't begin with an underscore, can't begin or end with a period, and must be 64
characters or less. Set your project visibility to either public or private. Public visibility allows for anyone on
the internet to view your project. Private visibility is for only people who you give access to your project.
2. When your project is created, the Kanban board automatically appears.

You're now set to create your first pipeline, or invite other users to collaborate with your project.

Invite team members


You can add and invite others to work on your project by adding their email address to your organization and
project.

1. From your project web portal, choose the Azure DevOps icon, and then select Organization
settings .
2. Select Users > Add users .

3. Complete the form by entering or selecting the following information:
Users: Enter the email addresses (Microsoft accounts) or GitHub IDs for the users. You can add several
email addresses by separating them with a semicolon (;). An email address appears in red when it's
accepted.
Access level: Assign one of the following access levels:
Basic: Assign to users who must have access to all Azure Pipelines features. You can grant up to
five users Basic access for free.
Stakeholder: Assign to users for limited access to features to view, add, and modify work items.
You can assign Stakeholder access to an unlimited number of users for free.
Add to project: Select the project you named in the preceding procedure.
Azure DevOps groups: Select one of the following security groups, which determine the
permissions the users have to perform certain tasks. To learn more, see Azure Pipelines resources.
Project Readers: Assign to users who only require read-only access.
Project Contributors: Assign to users who will contribute fully to the project.
Project Administrators: Assign to users who will configure project resources.

NOTE
Add email addresses for personal Microsoft accounts and IDs for GitHub accounts unless you plan to use Azure
Active Directory (Azure AD) to authenticate users and control organization access. If a user doesn't have a Microsoft
or GitHub account, ask the user to sign up for a Microsoft account or a GitHub account.

4. When you're done, select Add to complete your invitation.


For more information, see Add organization users for Azure DevOps Services.

Change organization or project settings


You can rename and delete your organization, or change the organization location. To learn more, see the
following articles:
Manage organizations
Rename an organization
Change the location of your organization
You can rename your project or change its visibility. To learn more about managing projects, see the following
articles:
Manage projects
Rename a project
Change the project visibility, public or private

Next steps
Create your first pipeline

Related articles
What is Azure Pipelines?
Key concepts for new Azure Pipelines users
Create your first pipeline
Create your first pipeline
11/2/2020 • 26 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
This is a step-by-step guide to using Azure Pipelines to build a GitHub repository.

Prerequisites - Azure DevOps


A GitHub account, where you can create a repository. If you don't have one, you can create one for free.
An Azure DevOps organization. If you don't have one, you can create one for free. (An Azure DevOps
organization is different from your GitHub organization. Give them the same name if you want
alignment between them.)
If your team already has one, then make sure you're an administrator of the Azure DevOps project that
you want to use.

NOTE
If you want to create a new pipeline by copying another pipeline, see Clone or import a pipeline.

Create your first pipeline


Java
.NET
Python
JavaScript
Get the Java sample code
To get started, fork the following repository into your GitHub account.

https://github.com/MicrosoftDocs/pipelines-java

Create your first Java pipeline


1. Sign in to your Azure DevOps organization and navigate to your project.
2. In your project, navigate to the Pipelines page. Then choose the action to create a new pipeline.
3. Walk through the steps of the wizard by first selecting GitHub as the location of your source code.
4. You might be redirected to GitHub to sign in. If so, enter your GitHub credentials.
5. When the list of repositories appears, select your desired sample app repository.
6. Azure Pipelines will analyze your repository and recommend a Maven pipeline template. Select Save
and run , then select Commit directly to the master branch , and then choose Save and run
again.
7. A new run is started. Wait for the run to finish.
Learn more about working with Java in your pipeline.
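For reference, the Maven template committed in step 6 produces an azure-pipelines.yml roughly like the following sketch (the agent image and Maven inputs shown are assumptions based on common defaults, not an exact copy of the generated file):

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: Maven@3                  # build and test the project with Maven
  inputs:
    mavenPomFile: 'pom.xml'
    goals: 'package'
    publishJUnitResults: true
    testResultsFiles: '**/surefire-reports/TEST-*.xml'
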
Add a status badge to your repository
Many developers like to show that they're keeping their code quality high by displaying a status badge in
their repo.

To copy the status badge to your clipboard:


1. In Azure Pipelines, go to the Pipelines page to view the list of pipelines. Select the pipeline you
created in the previous section.
2. In the context menu for the pipeline, select Status badge .
3. Copy the sample Markdown from the status badge panel.
Now with the badge Markdown in your clipboard, take the following steps in GitHub:
1. Go to the list of files and select Readme.md . Select the pencil icon to edit.
2. Paste the status badge Markdown at the beginning of the file.
3. Commit the change to the master branch.
4. Notice that the status badge appears in the description of your repository.
To configure anonymous access to badges:
1. Navigate to Project Settings
2. Open the Settings tab under Pipelines
3. Toggle the Disable anonymous access to badges slider under General

NOTE
Even in a private project, anonymous badge access is enabled by default. With anonymous badge access enabled,
users outside your organization might be able to query information such as project names, branch names, job names,
and build status through the badge status API.

Because you just changed the Readme.md file in this repository, Azure Pipelines automatically builds your
code, according to the configuration in the azure-pipelines.yml file at the root of your repository. Back in
Azure Pipelines, observe that a new run appears. Each time you make an edit, Azure Pipelines starts a new
run.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called
definitions, runs are called builds, service connections are called service endpoints, stages are called environments, and
jobs are called phases.

NOTE
This guidance applies to TFS version 2017.3 and newer.

We'll show you how to use the classic editor in Azure DevOps Server 2019 to create a build and release that
prints "Hello world".
We'll show you how to use the classic editor in TFS to create a build and a release that prints "Hello world".

Prerequisites
A self-hosted Windows agent.

Initialize your repository


If you already have a repository in your project, you can skip to the next step: add a script to your repo.

1. Go to Azure Repos . (The Code hub in the previous navigation)

2. If your project is empty, you will be greeted with a screen to help you add code to your repository.
Choose the bottom choice to initialize your repo with a readme file:
1. Navigate to your repository by clicking Code in the top navigation.
2. If your project is empty, you will be greeted with a screen to help you add code to your repository.
Choose the bottom choice to initialize your repo with a readme file:

Add a script to your repository


Create a PowerShell script that prints Hello world .
1. Go to Azure Repos .
2. Add a file.

3. In the dialog box, name your new file and create it.

HelloWorld.ps1

4. Copy and paste this script.

Write-Host "Hello world"

5. Commit (save) the file.


1. Go to the Code hub.
2. Add a file.
TFS 2018.2
TFS 2018 RTM
1. In the dialog box, name your new file and create it.

HelloWorld.ps1

2. Copy and paste this script.

Write-Host "Hello world"

3. Commit (save) the file.

In this tutorial, our focus is on CI/CD, so we're keeping the code part simple. We're working in an Azure
Repos Git repository directly in your web browser.
When you're ready to begin building and deploying a real app, you can use a wide range of version
control clients and services with Azure Pipelines CI builds. Learn more.

Create a build pipeline


Create a build pipeline that prints "Hello world."
1. Select Azure Pipelines . It should automatically take you to the Builds page.

2. Create a new pipeline.


For new Azure DevOps users, this will automatically take you to the YAML pipeline creation experience.
To get to the classic editor and complete this guide, you must turn off the preview feature for the
New YAML pipeline creation experience:

3. Make sure that the source, project, repository, and default branch match the location in which you created the script.
4. Start with an Empty job .
5. On the left side, select Pipeline and specify whatever Name you want to use. For the Agent pool ,
select Hosted VS2017 .
6. On the left side, select the plus sign ( + ) to add a task to Job 1 . On the right side, select the Utility
category, select the PowerShell task from the list, and then choose Add .
7. On the left side, select your new PowerShell script task.
8. For the Script Path argument, select the ... button to browse your repository and select the script you
created.

9. Select Save & queue , and then select Save .


10. Select Build and Release , and then choose Builds .

11. Create a new pipeline.

12. Start with an empty pipeline


13. Select Pipeline and specify whatever Name you want to use. For the Agent pool , select Default .
14. On the left side, select + Add Task to add a task to the job, and then on the right side select the Utility
category, select the PowerShell task, and then choose Add .
15. On the left side, select your new PowerShell script task.
16. For the Script Path argument, select the ... button to browse your repository and select the script you
created.

17. Select Save & queue , and then select Save .


1. Select Azure Pipelines , and then the Builds tab.

2. Create a new pipeline.

3. Start with an empty pipeline .


4. Select Pipeline and specify whatever Name you want to use.
5. On the Options tab, select Default for the Agent pool , or select whichever pool you want to use that
has Windows build agents.
6. On the Tasks tab, make sure that Get sources is set with the Repository and Branch in which you created the script.
7. On the left side select Add Task , and then on the right side select the Utility category, select the
PowerShell task, and then select Add .
8. On the left side, select your new PowerShell script task.
9. For the Script Path argument, select the ... button to browse your repository and select the script you
created.

10. Select Save & queue , and then select Save .

A build pipeline is the entity through which you define your automated build pipeline. In the build
pipeline, you compose a set of tasks, each of which perform a step in your build. The task catalog
provides a rich set of tasks for you to get started. You can also add PowerShell or shell scripts to your
build pipeline.

Publish an artifact from your build


A typical build produces an artifact that can then be deployed to various stages in a release. Here to
demonstrate the capability in a simple way, we'll simply publish the script as the artifact.
1. On the Tasks tab, select the plus sign ( + ) to add a task to Job 1 .
2. Select the Utility category, select the Publish Build Artifacts task, and then select Add.
Path to publish: Select the ... button to browse and select the script you created.
Artifact name: Enter drop.
Artifact publish location: Select Azure Artifacts/TFS.
1. On the Tasks tab, select Add Task.
2. Select the Utility category, select the Publish Build Artifacts task, and then select Add.

Path to Publish: Select the ... button to browse and select the script you created.
Artifact Name: Enter drop.
Artifact Type: Select Server.

Artifacts are the files that you want your build to produce. Artifacts can be nearly anything your team
needs to test or deploy your app. For example, a C# or C++ .NET Windows app might produce .DLL and .EXE
executable files and a .PDB symbols file.
To enable you to produce artifacts, we provide tools such as copying with pattern matching, and a staging
directory in which you can gather your artifacts before publishing them. See Artifacts in Azure Pipelines.
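Although this walkthrough uses the classic editor, the same gather-then-publish pattern in a YAML pipeline might look like the following sketch (the file pattern and artifact name are illustrative assumptions):

steps:
- task: CopyFiles@2                                       # stage files that match a pattern
  inputs:
    contents: '**/HelloWorld.ps1'
    targetFolder: '$(Build.ArtifactStagingDirectory)'
- task: PublishBuildArtifacts@1                           # publish the staged files as an artifact
  inputs:
    pathToPublish: '$(Build.ArtifactStagingDirectory)'
    artifactName: 'drop'
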
Enable continuous integration (CI)
1. Select the Triggers tab.
2. Enable Continuous integration .

A continuous integration trigger on a build pipeline indicates that the system should automatically queue
a new build whenever a code change is committed. You can make the trigger more general or more
specific, and also schedule your build (for example, on a nightly basis). See Build triggers.
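In a YAML pipeline, the equivalent CI trigger (and an optional nightly schedule) is declared in the file itself; a minimal sketch, with assumed branch and cron values:

trigger:
  branches:
    include:
    - master               # queue a build for every commit to master

schedules:
- cron: '0 3 * * *'        # optional: also build nightly at 03:00 UTC
  displayName: Nightly build
  branches:
    include:
    - master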

Save and queue the build


Save and queue a build manually and test your build pipeline.
1. Select Save & queue , and then select Save & queue .
2. On the dialog box, select Save & queue once more.
This queues a new build on the Microsoft-hosted agent.
3. You see a link to the new build on the top of the page.

Choose the link to watch the new build as it happens. Once the agent is allocated, you'll start seeing
the live logs of the build. Notice that the PowerShell script is run as part of the build, and that "Hello
world" is printed to the console.

4. Go to the build summary. On the Artifacts tab of the build, notice that the script is published as an
artifact.
1. Select Save & queue , and then select Save & queue .
2. On the dialog box, select Save & queue once more.
This queues a new build on the Microsoft-hosted agent.
3. You see a link to the new build on the top of the page.

Choose the link to watch the new build as it happens. Once the agent is allocated, you'll start seeing
the live logs of the build. Notice that the PowerShell script is run as part of the build, and that "Hello
world" is printed to the console.
TFS 2018.2
TFS 2018 RTM
4. Go to the build summary.

5. On the Artifacts tab of the build, notice that the script is published as an artifact.

You can view a summary of all the builds or drill into the logs for each build at any time by navigating to
the Builds tab in Azure Pipelines . For each build, you can also view a list of commits that were built and
the work items associated with each commit. You can also run tests in each build and analyze the test
failures.
1. Select Save & queue , and then select Save & queue .
2. On the dialog box, select the Queue button.
This queues a new build on the agent. Once the agent is allocated, you'll start seeing the live logs of
the build. Notice that the PowerShell script is run as part of the build, and that "Hello world" is printed
to the console.

3. Go to the build summary.

4. On the Artifacts tab of the build, notice that the script is published as an artifact.
You can view a summary of all the builds or drill into the logs for each build at any time by navigating to
the Builds tab in Build and Release . For each build, you can also view a list of commits that were built
and the work items associated with each commit. You can also run tests in each build and analyze the test
failures.

Add some variables and commit a change to your script


We'll pass some build variables to the script to make our pipeline a bit more interesting. Then we'll commit a
change to a script and watch the CI pipeline run automatically to validate the change.
1. Edit your build pipeline.
2. On the Tasks tab, select the PowerShell script task.
3. Add these arguments.

TFS 2018.2
TFS 2018 RTM
Arguments

-greeter "$(Build.RequestedFor)" -trigger "$(Build.Reason)"

Finally, save the build pipeline.


Next you'll add the arguments to your script.
1. Go to your Files in Azure Repos (the Code hub in the previous navigation and TFS).
2. Select the HelloWorld.ps1 file, and then Edit the file.
3. Change the script as follows:
Param(
    [string]$greeter,
    [string]$trigger
)
Write-Host "Hello world" from $greeter
Write-Host Trigger: $trigger

4. Commit (save) the script.


Now you can see the results of your changes. Go to Azure Pipelines and select Queued . Notice under the
Queued or running section that a build is automatically triggered by the change that you committed.
Now you can see the results of your changes. Go to the Build and Release page and select Queued . Notice
under the Queued or running section that a build is automatically triggered by the change that you
committed.
1. Select the new build that was created and view its log.
2. Notice that the person who changed the code has their name printed in the greeting message. You also
see printed that this was a CI build.

We just introduced the concept of build variables in these steps. We printed the value of a variable that is
automatically predefined and initialized by the system. You can also define custom variables and use
them either in arguments to your tasks, or as environment variables within your scripts. To learn more
about variables, see Build variables.
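In a YAML pipeline, a custom variable and predefined variables can be used together like this sketch (the variable name and values are illustrative):

variables:
  greeting: 'Hello world'        # custom variable defined in the pipeline

steps:
- powershell: |
    Write-Host "$(greeting) from $(Build.RequestedFor)"   # macro syntax is replaced before the step runs
    Write-Host "Trigger: $(Build.Reason)"                  # predefined variable
  displayName: Print variables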

You've got a build pipeline. What's next?


You've created a build pipeline that automatically builds and validates whatever code is checked in by your
team. At this point, you can continue to the next section to learn about release pipelines. Or, if you prefer, you
can skip ahead to create a build pipeline for your app.

Create a release pipeline


Define the process for running the script in two stages.
1. Go to the Pipelines tab, and then select Releases .
2. Select the action to create a New pipeline . If a release pipeline is already created, select the plus sign (
+ ) and then select Create a release pipeline .
3. Select the action to start with an Empty job .
4. Name the stage QA .
5. In the Artifacts panel, select + Add and specify a Source (Build pipeline) . Select Add .
6. Select the Lightning bolt to trigger continuous deployment and then enable the Continuous
deployment trigger on the right.

7. Select the Tasks tab and select your QA stage.


8. Select the plus sign ( + ) for the job to add a task to the job.
9. On the Add tasks dialog box, select Utility , locate the PowerShell task, and then select its Add
button.
10. On the left side, select your new PowerShell script task.
11. For the Script Path argument, select the ... button to browse your artifacts and select the script you
created.
12. Add these Arguments :

-greeter "$(Release.RequestedFor)" -trigger "$(Build.DefinitionName)"

13. On the Pipeline tab, select the QA stage and select Clone .
14. Rename the cloned stage Production .
15. Rename the release pipeline Hello world .

16. Save the release pipeline.


1. Go to the Build and Release tab, and then select Releases .
2. Select the action to create a New pipeline . If a release pipeline is already created, select the plus sign (
+ ) and then select Create a release definition .
3. Select the action to start with an Empty definition .
4. Name the stage QA .
5. In the Artifacts panel, select + Add and specify a Source (Build pipeline) . Select Add .
6. Select the Lightning bolt to trigger continuous deployment and then enable the Continuous
deployment trigger on the right.
TFS 2018.2
TFS 2018 RTM
7. Select the Tasks tab and select your QA stage.
8. Select the plus sign ( + ) for the job to add a task to the job.
9. On the Add tasks dialog box, select Utility , locate the PowerShell task, and then select its Add
button.
10. On the left side, select your new PowerShell script task.
11. For the Script Path argument, select the ... button to browse your artifacts and select the script you
created.
12. Add these Arguments :

-greeter "$(Release.RequestedFor)" -trigger "$(Build.DefinitionName)"

13. On the Pipeline tab, select the QA stage and select Clone .

14. Rename the cloned stage Production .


15. Rename the release pipeline Hello world .
16. Save the release pipeline.
1. Go to Azure Pipelines , and then to the Releases tab.
2. Select the action to create a New pipeline .
3. On the dialog box, select the Empty template and select Next .
4. Make sure that your Hello world build pipeline that you created above is selected. Select
Continuous deployment , and then select Create .
5. Select Add tasks in the stage.
6. On the Task catalog dialog box, select Utility , locate the PowerShell task, and then select its Add
button. Select the Close button.
7. For the Script Path argument, select the ... button to browse your artifacts and select the script you
created.
8. Add these Arguments :

-greeter "$(Release.RequestedFor)" -trigger "$(Build.DefinitionName)"

9. Rename the stage QA .

10. Clone the QA stage.


Leave Automatically approve and Deploy automatically... selected, and select Create .
11. Rename the new stage Production .
12. Rename the release pipeline Hello world .

13. Save the release pipeline.

A release pipeline is a collection of stages to which the application build artifacts are deployed. It also
defines the actual deployment pipeline for each stage, as well as how the artifacts are promoted from one
stage to another.
Also, notice that we used some variables in our script arguments. In this case, we used release variables
instead of the build variables we used for the build pipeline.

Deploy a release
Run the script in each stage.
1. Create a new release.
When Create new release appears, select Create .
2. Open the release that you created.

3. View the logs to get real-time data about the release.

4. Create a new release.


When Create new release appears, select Create (TFS 2018.2) or Queue (TFS 2018 RTM).
5. Open the release that you created.

6. View the logs to get real-time data about the release.

7. Create a new release.

8. Open the release that you created.


9. View the logs to get real-time data about the release.

You can track the progress of each release to see if it has been deployed to all the stages. You can track
the commits that are part of each release, the associated work items, and the results of any test runs that
you've added to the release pipeline.

Change your code and watch it automatically deploy to production


We'll make one more change to the script. This time it will automatically build and then get deployed all the
way to the production stage.
1. Go to the Code hub, Files tab, edit the HelloWorld.ps1 file, and change it as follows:

Param(
    [string]$greeter,
    [string]$trigger
)
Write-Host "Hello world" from $greeter
Write-Host Trigger: $trigger
Write-Host "Now that you've got CI/CD, you can automatically deploy your app every time your team checks in code."

2. Commit (save) the script.


3. Select the Builds tab to see the build queued and run.
4. After the build is completed, select the Releases tab, open the new release, and then go to the Logs .
Your new code automatically is deployed in the QA stage, and then in the Production stage.
In many cases, you probably would want to edit the release pipeline so that the production deployment
happens only after some testing and approvals are in place. See Approvals and gates overview.

Next steps
You've just learned how to create your first Azure Pipeline. Learn more about configuring pipelines in the
language of your choice:
.NET Core
Go
Java
Node.js
Python
Containers
Or, you can proceed to customize the pipeline you just created.
To run your pipeline in a container, see Container jobs.
For details about building GitHub repositories, see Build GitHub repositories.
To learn what else you can do in YAML pipelines, see YAML schema reference.
Clean up
If you created any test pipelines, they are easy to delete when you are done with them.
Browser
Azure DevOps CLI
To delete a pipeline, navigate to the summary page for that pipeline, and choose Delete from the ... menu at
the top-right of the page. Type the name of the pipeline to confirm, and choose Delete .

You've learned the basics of creating and running a pipeline. Now you're ready to configure your build
pipeline for the programming language you're using. Go ahead and create a new build pipeline, and this time,
use one of the following templates.

L A N GUA GE T EM P L AT E TO USE

.NET ASP.NET

.NET Core ASP.NET Core

C++ .NET Desktop

Go Go

Java Gradle

JavaScript Node.js
L A N GUA GE T EM P L AT E TO USE

Xcode Xcode

FAQ
Where can I read articles about DevOps and CI/CD?
What is Continuous Integration?
What is Continuous Delivery?
What is DevOps?
What kinds of version control can I use?
When you're ready to get going with CI/CD for your app, you can use the version control system of your
choice:
Clients
Visual Studio Code for Windows, macOS, and Linux
Visual Studio with Git for Windows or Visual Studio for Mac
Eclipse
Xcode
IntelliJ
Command line
Services
Azure Pipelines
Git service providers such as GitHub and Bitbucket Cloud
Subversion
Clients
Visual Studio Code for Windows, macOS, and Linux
Visual Studio with Git for Windows or Visual Studio for Mac
Visual Studio with TFVC
Eclipse
Xcode
IntelliJ
Command line
Services
Azure Pipelines
Git service providers such as GitHub and Bitbucket Cloud
Subversion
How do I replicate a pipeline?
If your pipeline has a pattern that you want to replicate in other pipelines, clone it, export it, or save it as a
template.
After you clone a pipeline, you can make changes and then save it.
After you export a pipeline, you can import it from the All pipelines tab.
After you create a template, your team members can use it to follow the pattern in new pipelines.

TIP
If you're using the New Build Editor , then your custom templates are shown at the bottom of the list.
How do I work with drafts?
If you're editing a build pipeline and you want to test some changes that are not yet ready for production, you
can save it as a draft.

You can edit and test your draft as needed.


When you're ready you can publish the draft to merge the changes into your build pipeline.

Or, if you decide to discard the draft, you can delete it from the All pipelines tab shown above.
How can I delete a pipeline?
To delete a pipeline, navigate to the summary page for that pipeline, and choose Delete from the ... menu in
the top-right of the page. Type the name of the pipeline to confirm, and choose Delete .
What else can I do when I queue a build?
You can queue builds automatically or manually.
When you manually queue a build, you can, for a single run of the build:
Specify the pool into which the build goes.
Add and modify some variables.
Add demands.
In a Git repository
Build a branch or a tag.
Build a commit.
In a TFVC repository
Specify the source version as a label or changeset.
Run a private build of a shelveset. (You can use this option on either a Microsoft-hosted agent or
a self-hosted agent.)
You can queue builds automatically or manually.
When you manually queue a build, you can, for a single run of the build:
Specify the pool into which the build goes.
Add and modify some variables.
Add demands.
In a Git repository
Build a branch or a tag.
Build a commit.
Where can I learn more about build pipeline settings?
To learn more about build pipeline settings, see:
Getting sources
Tasks
Variables
Triggers
Options
Retention
History
To learn more about build pipeline settings, see:
Getting sources
Tasks
Variables
Triggers
Retention
History
How do I programmatically create a build pipeline?
REST API Reference: Create a build pipeline

NOTE
You can also manage builds and build pipelines from the command line or scripts using the Azure Pipelines CLI.
Create your first pipeline from the CLI
11/2/2020 • 10 minutes to read

Azure Pipelines
This is a step-by-step guide to using Azure Pipelines from the Azure CLI (command-line interface) to build a GitHub
repository. You can use Azure Pipelines to build an app written in any language. For this quickstart, you'll use Java.

Prerequisites
A GitHub account, where you can create a repository. If you don't have one, you can create one for free.
An Azure DevOps organization. If you don't have one, you can create one for free. (An Azure DevOps
organization is different from your GitHub organization. Give them the same name if you want alignment
between them.)
If your team already has one, then make sure you're an administrator of the Azure DevOps project that you
want to use.
Azure CLI version 2.0.49 or newer.
To install, see Install the Azure CLI.
To check the version from the command prompt:

az --version

The Azure DevOps extension.


To install from the command prompt:

az extension add --name azure-devops

To confirm installation from the command prompt:

az extension show --name azure-devops

Make sure your Azure DevOps defaults include the organization and project from the command prompt:

az devops configure --defaults organization=https://dev.azure.com/your-organization project=your-project

Get your first run


1. From a command prompt, sign in to the Azure CLI.

az login

2. Fork the following repository into your GitHub account:


https://github.com/MicrosoftDocs/pipelines-java

After you've forked it, clone it to your dev machine. Learn how: Fork a repo.
3. Navigate to the cloned directory.
4. Create a new pipeline:

az pipelines create --name "First-Java.CI"

The repository and branch details are picked up from the git configuration available in the cloned directory.
5. Enter your GitHub user name and password to authenticate Azure Pipelines.

Enter your GitHub username (Leave blank for using already generated PAT): Contoso

Enter your GitHub password:

6. Provide a name for the service connection created to enable Azure Pipelines to communicate with the
GitHub Repository.

Enter a service connection name to create? ContosoPipelineServiceConnection

7. Select the Maven pipeline template from the list of recommended templates.

Which template do you want to use for this pipeline?


[1] Maven
[2] Maven package Java project Web App to Linux on Azure
[3] Android
[4] Ant
[5] ASP.NET
[6] ASP.NET Core
[7] ASP .NET Core (.NET Framework)
[8] Starter pipeline
[9] C/C++ with GCC
[10] Go
[11] Gradle
[12] HTML
[13] Jekyll site
[14] .NET Desktop
[15] Node.js
[16] Node.js with Angular
[17] Node.js with Grunt
[18] Node.js with gulp
[19] Node.js with React
[20] Node.js with Vue
[21] Node.js with webpack
[22] PHP
[23] Python Django
[24] Python package
[25] Ruby
[26] Universal Windows Platform
[27] Xamarin.Android
[28] Xamarin.iOS
[29] Xcode
Please enter a choice [Default choice(1)]:

8. The pipeline YAML is generated. You can open the YAML in your default editor to view and make changes.
Do you want to view/edit the template yaml before proceeding?
[1] Continue with the generated yaml
[2] View or edit the yaml
Please enter a choice [Default choice(1)]:2

9. Provide where you want to commit the YAML file that is generated.

How do you want to commit the files to the repository?


[1] Commit directly to the master branch.
[2] Create a new branch for this commit and start a pull request.
Please enter a choice [Default choice(1)]:

10. A new run is started. Wait for the run to finish.

Manage your pipelines


You can manage the pipelines in your organization using these az pipelines commands:
az pipelines run: Run an existing pipeline
az pipelines update: Update an existing pipeline
az pipelines show: Show the details of an existing pipeline
These commands require either the name or ID of the pipeline you want to manage. You can get the ID of a pipeline
using the az pipelines list command.
Run a pipeline
You can queue (run) an existing pipeline with the az pipelines run command. To get started, see Get started with
Azure DevOps CLI.

az pipelines run [--branch]


[--commit-id]
[--folder-path]
[--id]
[--name]
[--open]
[--org]
[--project]
[--variables]

Parameters
branch : Name of the branch on which the pipeline run is to be queued, for example, refs/heads/master.
commit-id : Commit-id on which the pipeline run is to be queued.
folder-path : Folder path of pipeline. Default is root level folder.
id : Required if name is not supplied. ID of the pipeline to queue.
name : Required if ID is not supplied, but ignored if ID is supplied. Name of the pipeline to queue.
open : Open the pipeline results page in your web browser.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .
variables : Space separated "name=value" pairs for the variables you would like to set.
Example
The following command runs the pipeline named myGithubname.pipelines-java in the branch pipeline and
shows the result in table format.

az pipelines run --name myGithubname.pipelines-java --branch pipeline --output table

Run ID    Number      Status      Result    Pipeline ID    Pipeline Name                Source Branch    Queued Time                 Reason
--------  ----------  ----------  --------  -------------  ---------------------------  ---------------  --------------------------  --------
123       20200123.2  notStarted            12             myGithubname.pipelines-java  pipeline         2020-01-23 11:55:56.633450  manual
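If your pipeline defines variables that can be set at queue time, you can also pass values with --variables. The following sketch assumes a variable named BuildConfiguration is defined in the pipeline and marked as settable at queue time:

az pipelines run --id 12 --branch pipeline --variables BuildConfiguration=Release --output table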

Update a pipeline
You can update an existing pipeline with the az pipelines update command. To get started, see Get started with
Azure DevOps CLI.

az pipelines update [--branch]
                    [--description]
                    [--id]
                    [--name]
                    [--new-folder-path]
                    [--new-name]
                    [--org]
                    [--project]
                    [--queue-id]
                    [--yaml-path]

Parameters
branch : Name of the branch on which the pipeline run is to be configured, for example, refs/heads/master.
description : New description for the pipeline.
id : Required if name is not supplied. ID of the pipeline to update.
name : Required if ID is not supplied. Name of the pipeline to update.
new-folder-path : New full path of the folder to which the pipeline is moved, for example,
user1/production_pipelines.
new-name : New updated name of the pipeline.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .
queue-id : Queue ID of the agent pool where the pipeline needs to run.
yaml-path : Path of the pipeline's yaml file in the repo.
Example
The following command updates the pipeline with the ID of 12 with a new name and description and shows the
result in table format.
az pipelines update --id 12 --description "rename pipeline" --new-name updatedname.pipelines-java --output table

ID    Name                        Status    Default Queue
----  --------------------------  --------  ------------------
12    updatedname.pipelines-java  enabled   Hosted Ubuntu 1604

Show pipeline
You can view the details of an existing pipeline with the az pipelines show command. To get started, see Get started
with Azure DevOps CLI.

az pipelines show [--folder-path]
                  [--id]
                  [--name]
                  [--open]
                  [--org]
                  [--project]

Parameters
folder-path : Folder path of pipeline. Default is root level folder.
id : Required if name is not supplied. ID of the pipeline to show details for.
name : Required if ID is not supplied, but ignored if ID is supplied. Name of the pipeline to show details for.
open : Open the pipeline summary page in your web browser.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .

Example
The following command shows the details of the pipeline with the ID of 12 and returns the result in table format.

az pipelines show --id 12 --output table

ID    Name                        Status    Default Queue
----  --------------------------  --------  ------------------
12    updatedname.pipelines-java  enabled   Hosted Ubuntu 1604

Add a status badge to your repository


Many developers like to show that they're keeping their code quality high by displaying a status badge in their
repo.

To copy the status badge to your clipboard:


1. In Azure Pipelines, go to the Pipelines page to view the list of pipelines. Select the pipeline you created in
the previous section.
2. In the context menu for the pipeline, select Status badge .
3. Copy the sample Markdown from the status badge panel; an example of what this Markdown looks like is shown after these steps.
Now with the badge Markdown in your clipboard, take the following steps in GitHub:
1. Go to the list of files and select Readme.md . Select the pencil icon to edit.
2. Paste the status badge Markdown at the beginning of the file.
3. Commit the change to the master branch.
4. Notice that the status badge appears in the description of your repository.
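For reference, the sample Markdown you copied typically looks similar to the following; the organization, project, pipeline, and branch values here are placeholders, and the status badge panel gives you the exact URLs for your pipeline:

[![Build Status](https://dev.azure.com/fabrikam/FabrikamProject/_apis/build/status/myGithubname.pipelines-java?branchName=master)](https://dev.azure.com/fabrikam/FabrikamProject/_build/latest?definitionId=12&branchName=master)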
To configure anonymous access to badges:
1. Navigate to Project Settings
2. Open the Settings tab under Pipelines
3. Toggle the Disable anonymous access to badges slider under General

NOTE
Even in a private project, anonymous badge access is enabled by default. With anonymous badge access enabled, users
outside your organization might be able to query information such as project names, branch names, job names, and build
status through the badge status API.

Because you just changed the Readme.md file in this repository, Azure Pipelines automatically builds your code,
according to the configuration in the azure-pipelines.yml file at the root of your repository. Back in Azure
Pipelines, observe that a new run appears. Each time you make an edit, Azure Pipelines starts a new run.

Next steps
You've just learned how to create your first Azure Pipeline. Learn more about configuring pipelines in the language
of your choice:
.NET Core
Go
Java
Node.js
Python
Containers
Or, you can proceed to customize the pipeline you just created.
To run your pipeline in a container, see Container jobs.
For details about building GitHub repositories, see Build GitHub repositories.
To learn what else you can do in YAML pipelines, see YAML schema reference.
Clean up
If you created any test pipelines, they are easy to delete when you are done with them.
Browser
Azure DevOps CLI
To delete a pipeline, navigate to the summary page for that pipeline, and choose Delete from the ... menu at the
top-right of the page. Type the name of the pipeline to confirm, and choose Delete .
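From the Azure DevOps CLI, you can delete a pipeline by its ID instead (a sketch; look up the ID with az pipelines list first, and note that --yes skips the confirmation prompt):

az pipelines delete --id 12 --yes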
Clone or import a pipeline
11/2/2020 • 3 minutes to read • Edit Online

One approach to creating a pipeline is to copy an existing pipeline and use it as a starting point. For YAML
pipelines, the process is as easy as copying the YAML from one pipeline to another. For pipelines created in the
classic editor, the procedure depends on whether the pipeline to copy is in the same project as the new pipeline. If
the pipeline to copy is in the same project, you can clone it, and if it is in a different project you can export it from
that project and import it into your project.

Clone a pipeline
YAML
Classic
For YAML pipelines, the process for cloning is to copy the YAML from the source pipeline and use it as the basis for
the new pipeline.
1. Navigate to your pipeline, and choose Edit .

2. Copy the pipeline YAML from the editor, and paste it into the YAML editor for your new pipeline.
3. To customize your newly cloned pipeline, see Customize your pipeline.

Export and Import a pipeline


You can create a new classic pipeline by exporting an existing one and then importing it. This is useful in cases
where the new pipeline has to be created in a separate project.
YAML
Classic
In a YAML pipeline, exporting from one project and importing into another is the same process as cloning. You can
simply copy the pipeline YAML from the editor and paste it into the YAML editor for your new pipeline.
1. Navigate to your pipeline, and choose Edit .

2. Copy the pipeline YAML from the editor, and paste it into the YAML editor for your new pipeline.
3. To customize your newly cloned pipeline, see Customize your pipeline.
Next steps
Learn to customize the pipeline you just cloned or imported.
Customize your pipeline
11/2/2020 • 6 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019
This is a step-by-step guide on common ways to customize your pipeline.

Prerequisite
Follow instructions in Create your first pipeline to create a working pipeline.

Understand the azure-pipelines.yml file


A pipeline is defined using a YAML file in your repo. Usually, this file is named azure-pipelines.yml and is located
at the root of your repo.
Navigate to the Pipelines page in Azure Pipelines and select the pipeline you created.
Select Edit in the context menu of the pipeline to open the YAML editor for the pipeline. Examine the
contents of the YAML file.

trigger:
- master

pool:
  vmImage: 'Ubuntu-16.04'

steps:
- task: Maven@3
  inputs:
    mavenPomFile: 'pom.xml'
    mavenOptions: '-Xmx3072m'
    javaHomeOption: 'JDKVersion'
    jdkVersionOption: '1.8'
    jdkArchitectureOption: 'x64'
    publishJUnitResults: false
    testResultsFiles: '**/surefire-reports/TEST-*.xml'
    goals: 'package'

NOTE
The contents of your YAML file may be different depending on the sample repo you started with, or upgrades
made in Azure Pipelines.

This pipeline runs whenever your team pushes a change to the master branch of your repo. It runs on a
Microsoft-hosted Linux machine. The pipeline process has a single step, which is to run the Maven task.

Change the platform to build on


You can build your project on Microsoft-hosted agents that already include SDKs and tools for various
development languages. Or, you can use self-hosted agents with specific tools that you need.
Navigate to the editor for your pipeline by selecting Edit pipeline action on the build, or by selecting Edit
from the pipeline's main page.
Currently the pipeline runs on a Linux agent:

pool:
  vmImage: "ubuntu-16.04"

To choose a different platform like Windows or Mac, change the vmImage value:

pool:
  vmImage: "vs2017-win2016"

pool:
  vmImage: "macos-latest"

Select Save and then confirm the changes to see your pipeline run on a different platform.

Add steps
You can add additional scripts or tasks as steps to your pipeline. A task is a pre-packaged script. You can use
tasks for building, testing, publishing, or deploying your app. For Java, the Maven task we used handles testing
and publishing results, however, you can use a task to publish code coverage results too.
Open the YAML editor for your pipeline.
Add the following snippet to the end of your YAML file.

- task: PublishCodeCoverageResults@1
  inputs:
    codeCoverageTool: "JaCoCo"
    summaryFileLocation: "$(System.DefaultWorkingDirectory)/**/site/jacoco/jacoco.xml"
    reportDirectory: "$(System.DefaultWorkingDirectory)/**/site/jacoco"
    failIfCoverageEmpty: true

Select Save and then confirm the changes.


You can view your test and code coverage results by selecting your build and going to the Test and
Coverage tabs.

Build across multiple platforms


You can build and test your project on multiple platforms. One way to do it is with strategy and matrix . You can
use variables to conveniently put data into various parts of a pipeline. For this example, we'll use a variable to
pass in the name of the image we want to use.
In your azure-pipelines.yml file, replace this content:

pool:
  vmImage: "ubuntu-16.04"

with the following content:


strategy:
  matrix:
    linux:
      imageName: "ubuntu-16.04"
    mac:
      imageName: "macos-10.14"
    windows:
      imageName: "vs2017-win2016"
  maxParallel: 3

pool:
  vmImage: $(imageName)

Select Save and then confirm the changes to see your build run up to three jobs on three different
platforms.

Each agent can run only one job at a time. To run multiple jobs in parallel you must configure multiple agents.
You also need sufficient parallel jobs.

Build using multiple versions


To build a project using different versions of that language, you can use a matrix of versions and a variable. In
this step you can either build the Java project with two different versions of Java on a single platform or run
different versions of Java on different platforms.
If you want to build on a single platform and multiple versions, add the following matrix to your
azure-pipelines.yml file before the Maven task and after the vmImage.

strategy:
  matrix:
    jdk10:
      jdk_version: "1.10"
    jdk11:
      jdk_version: "1.11"
  maxParallel: 2

Then replace this line in your maven task:

jdkVersionOption: "1.11"

with this line:

jdkVersionOption: $(jdk_version)

Make sure to change the $(imageName) variable back to the platform of your choice.
If you want to build on multiple platforms and versions, replace the entire content in your
azure-pipelines.yml file before the publishing task with the following snippet:
trigger:
- master

strategy:
  matrix:
    jdk10_linux:
      imageName: "ubuntu-16.04"
      jdk_version: "1.10"
    jdk11_windows:
      imageName: "vs2017-win2016"
      jdk_version: "1.11"
  maxParallel: 2

pool:
  vmImage: $(imageName)

steps:
- task: Maven@3
  inputs:
    mavenPomFile: "pom.xml"
    mavenOptions: "-Xmx3072m"
    javaHomeOption: "JDKVersion"
    jdkVersionOption: $(jdk_version)
    jdkArchitectureOption: "x64"
    publishJUnitResults: true
    testResultsFiles: "**/TEST-*.xml"
    goals: "package"

Select Save and then confirm the changes to see your build run two jobs on two different platforms and SDKs.

Customize CI triggers
You can use a trigger: to specify the events when you want to run the pipeline. YAML pipelines are configured
by default with a CI trigger on your default branch (which is usually master). You can set up triggers for specific
branches or for pull request validation. For a pull request validation trigger just replace the trigger: step with
pr: as shown in the two examples below.

If you'd like to set up triggers, add either of the following snippets at the beginning of your
azure-pipelines.yml file.

trigger:
- master
- releases/*

pr:
- master
- releases/*

You can specify the full name of the branch (for example, master ) or a prefix-matching wildcard (for
example, releases/* ).

Customize settings
There are pipeline settings that you wouldn't want to manage in your YAML file. Follow these steps to view and
modify these settings:
1. From your web browser, open the project for your organization in Azure DevOps and choose Pipelines /
Pipelines from the navigation sidebar.
2. Select the pipeline you want to configure settings for from the list of pipelines.
3. Open the overflow menu by clicking the action button with the vertical ellipsis and select Settings.
Processing of new run requests
Sometimes you'll want to prevent new runs from starting on your pipeline.
By default, the processing of new run requests is Enabled . This setting allows standard processing of all
trigger types, including manual runs.
Paused pipelines allow run requests to be processed, but those requests are queued without actually starting.
When new request processing is enabled, run processing resumes starting with the first request in the queue.
Disabled pipelines prevent users from starting new runs. All triggers are also disabled while this setting is
applied.
Other settings
YAML file path. If you ever need to direct your pipeline to use a different YAML file, you can specify the path
to that file. This setting can also be useful if you need to move/rename your YAML file.
Automatically link work items included in this run. The changes associated with a given pipeline run
may have work items associated with them. Select this option to link those work items to the run. When this
option is selected, you'll need to specify a specific branch. Work items will only be associated with runs of that
branch.
To get notifications when your runs fail, see how to Manage notifications for a team
You've just learned the basics of customizing your pipeline. Next we recommend that you learn more about
customizing a pipeline for the language you use:
.NET Core
Containers
Go
Java
Node.js
Python
Or, to grow your CI pipeline to a CI/CD pipeline, include a deployment job with steps to deploy your app to an
environment.
To learn more about the topics in this guide see Jobs, Tasks, Catalog of Tasks, Variables, Triggers, or
Troubleshooting.
To learn what else you can do in YAML pipelines, see YAML schema reference.
Multi-stage pipelines user experience
11/2/2020 • 4 minutes to read • Edit Online

The multi-stage pipelines experience brings improvements and ease of use to the Pipelines portal UI. This article
shows you how to view and manage your pipelines using this new experience.

Navigating pipelines
You can view and manage your pipelines by choosing Pipelines from the left-hand menu.

You can drill down and view pipeline details, run details, pipeline analytics, job details, logs, and more.
At the top of each page is a breadcrumb navigation bar. Select the different areas of the bar to navigate to different
areas of the portal. The breadcrumb navigation is a convenient way to go back one or more steps.
1. This area of the breadcrumb navigation shows you what page you're currently viewing. In this example, the page
is the run summary for run number 20191209.3 .
2. One level up is a link to the pipeline details for that run.
3. The next level up is the pipelines landing page.
4. This link is to the FabrikamFiber project, which contains the pipeline for this run.
5. The root breadcrumb link is to the Azure DevOps fabrikam-tailspin organization, which contains the project
that contains the pipeline.
Many pages also contain a back button that takes you to the previous page.

Pipelines landing page


From the pipelines landing page you can view pipelines and pipeline runs, create and import pipelines, manage
security, and drill down into pipeline and run details.
Choose Recent to view recently run pipelines (the default view), or choose All to view all pipelines.
Select a pipeline to manage that pipeline and view its runs. Select the build number for the last run to view the
results of that build, select the branch name to view the branch for that run, or select the context menu to run the
pipeline and perform other management actions.

Select Runs to view all pipeline runs. You can optionally filter the displayed runs.
Select a pipeline run to view information about that run.

View pipeline details


The details page for a pipeline allows you to view and manage that pipeline.

Runs
Select Runs to view the runs for that pipeline. You can optionally filter the displayed runs.
You can choose to Retain or Delete a run from the context menu. For more information on run retention, see Build
and release retention policies.

Branches
Select Branches to view the history of runs for that branch. Hover over the History to view a summary for each
run, and select a run to navigate to the details page for that run.

Analytics
Select Analytics to view pipeline metrics such as pass rate and run duration. Choose View full report for more
information on each metric.
View pipeline run details
From the pipeline run summary you can view the status of your run, both while it is running and when it is
complete.
From the summary pane you can download artifacts, and navigate to linked commits, test results, and work items.
Cancel and re -run a pipeline
If the pipeline is running, you can cancel it by choosing Cancel . If the run has completed, you can re-run the
pipeline by choosing Run new .

Pipeline run context menu


From the context menu you can download logs, add tags, edit the pipeline, delete the run, and configure retention
for the run.

NOTE
You can't delete a run if the run is retained. If you don't see Delete , choose Stop retaining run , and then delete the run. If
you see both Delete and View retention releases , one or more configured retention policies still apply to your run.
Choose View retention releases , delete the policies (only the policies for the selected run are removed), and then delete
the run.

Jobs and stages


The jobs pane displays an overview of the status of your stages and jobs. This pane may have multiple tabs
depending on whether your pipeline has stages and jobs, or just jobs. In this example the pipeline has two stages
named Build and Deploy . You can drill down into the pipeline steps by choosing the job from either the Stages or
Jobs pane.

Choose a job to see the steps for that job.


From the steps view, you can review the status and details of each step. From the context menu you can toggle
timestamps or view a raw log of all steps in the pipeline.

Manage security
You can configure pipelines security on a project level from the context menu on the pipelines landing page, and on
a pipeline level on the pipeline details page.
To support security of your pipeline operations, you can add users to a built-in security group, set individual
permissions for a user or group, or add users to pre-defined roles. You can manage security for Azure Pipelines
in the web portal, either from the user or admin context. For more information on configuring pipelines security,
see Pipeline permissions and security roles.

Next steps
Learn more about configuring pipelines in the language of your choice:
.NET Core
Go
Java
Node.js
Python
Containers and Container jobs
Learn more about building Azure Repos and GitHub repositories.
To learn what else you can do in YAML pipelines, see Customize your pipeline, and for a complete reference see
YAML schema reference.
Key concepts for new Azure Pipelines users
11/2/2020 • 4 minutes to read • Edit Online

Learn about the key concepts and components that are used in Azure Pipelines. Understanding the basic terms
and parts of Azure Pipelines helps you further explore how it can help you deliver better code more efficiently
and reliably.
Key concepts overview

A trigger tells a Pipeline to run.


A pipeline is made up of one or more stages. A pipeline can deploy to one or more environments.
A stage is a way of organizing jobs in a pipeline and each stage can have one or more jobs.
Each job runs on one agent. A job can also be agentless.
Each agent runs a job that contains one or more steps.
A step can be a task or script and is the smallest building block of a pipeline.
A task is a pre-packaged script that performs an action, such as invoking a REST API or publishing a build
artifact.
An artifact is a collection of files or packages published by a run.
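To see how these pieces relate, here is a minimal, illustrative YAML sketch; the stage, job, and artifact names are placeholders:

trigger:
- master                              # a trigger tells the pipeline to run

stages:
- stage: Build                        # a stage organizes related jobs
  jobs:
  - job: BuildJob                     # a job runs on one agent
    pool:
      vmImage: 'ubuntu-16.04'
    steps:
    - script: echo Building the app   # a step can be a script...
    - task: PublishBuildArtifacts@1   # ...or a pre-packaged task
      inputs:
        pathToPublish: '$(Build.ArtifactStagingDirectory)'
        artifactName: drop            # the published files become an artifact of the run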
Azure Pipelines terms

Agent
When your build or deployment runs, the system begins one or more jobs. An agent is computing infrastructure
with installed agent software that runs one job at a time.
For more in-depth information about the different types of agents and how to use them, see Build and release
agents.

Approvals
Approvals define a set of validations required before a deployment can be performed. Manual approval is a
common check performed to control deployments to production environments. When checks are configured on
an environment, pipelines will stop before starting a stage that deploys to the environment until all the checks
are completed successfully.

Artifact
An artifact is a collection of files or packages published by a run. Artifacts are made available to subsequent tasks,
such as distribution or deployment. For more information, see Artifacts in Azure Pipelines.

Continuous delivery
Continuous delivery (CD) is a process by which code is built, tested, and deployed to one or more test and
production stages. Deploying and testing in multiple stages helps drive quality. Continuous integration systems
produce deployable artifacts, which includes infrastructure and apps. Automated release pipelines consume these
artifacts to release new versions and fixes to existing systems. Monitoring and alerting systems run constantly to
drive visibility into the entire CD process. This process ensures that errors are caught often and early.
Continuous integration
Continuous integration (CI) is the practice used by development teams to simplify the testing and building of
code. CI helps to catch bugs or problems early in the development cycle, which makes them easier and faster to
fix. Automated tests and builds are run as part of the CI process. The process can run on a set schedule, whenever
code is pushed, or both. Items known as artifacts are produced from CI systems. They're used by the continuous
delivery release pipelines to drive automatic deployments.

Deployment group
A deployment group is a set of deployment target machines that have agents installed. A deployment group is
just another grouping of agents, like an agent pool. You can set the deployment targets in a pipeline for a job
using a deployment group. Learn more about provisioning agents for deployment groups.

Environment
An environment is a collection of resources, where you deploy your application. It can contain one or more virtual
machines, containers, web apps, or any service that's used to host the application being developed. A pipeline
might deploy the app to one or more environments after build is completed and tests are run.

Job
A stage contains one or more jobs. Each job runs on an agent. A job represents an execution boundary of a set of
steps. All of the steps run together on the same agent. For example, you might build two configurations - x86 and
x64. In this case, you have one build stage and two jobs.

Pipeline
A pipeline defines the continuous integration and deployment process for your app. It's made up of one or more
stages. It can be thought of as a workflow that defines how your test, build, and deployment steps are run.

Run
A run represents one execution of a pipeline. It collects the logs associated with running the steps and the results
of running tests. During a run, Azure Pipelines will first process the pipeline and then hand off the run to one or
more agents. Each agent will run jobs. Learn more about the pipeline run sequence.

Script
A script runs code as a step in your pipeline using command line, PowerShell, or Bash. You can write cross-
platform scripts for macOS, Linux, and Windows. Unlike a task, a script is custom code that is specific to your
pipeline.

Stage
A stage is a logical boundary in the pipeline. It can be used to mark separation of concerns (e.g., Build, QA, and
production). Each stage contains one or more jobs.

Step
A step is the smallest building block of a pipeline. For example, a pipeline might consist of build and test steps. A
step can either be a script or a task. A task is simply a pre-created script offered as a convenience to you. To view
the available tasks, see the Build and release tasks reference. For information on creating custom tasks, see Create
a custom task.
Task
A task is the building block for defining automation in a pipeline. A task is packaged script or procedure that has
been abstracted with a set of inputs.

Trigger
A trigger is something that's set up to tell the pipeline when to run. You can configure a pipeline to run upon a
push to a repository, at scheduled times, or upon the completion of another build. All of these actions are known
as triggers. For more information, see build triggers and release triggers.
About the authors
Dave Jarvis contributed to the key concepts overview graphic.
Supported source repositories
11/2/2020 • 2 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

Azure Pipelines, Azure DevOps Server, and TFS integrate with a number of version control systems. When you
use any of these version control systems, you can configure a pipeline to build, test, and deploy your application.
YAML pipelines are a new form of pipelines that have been introduced in Azure DevOps Server 2019 and in
Azure Pipelines. YAML pipelines only work with certain version control systems. The following table shows all the
supported version control systems and the ones that support YAML pipelines.

REPOSITORY TYPE             AZURE PIPELINES (YAML)    AZURE PIPELINES (CLASSIC EDITOR)    AZURE DEVOPS SERVER 2019, TFS 2018, TFS 2017, TFS 2015.4    TFS 2015 RTM
Azure Repos Git             Yes                       Yes                                 Yes                                                         Yes
Azure Repos TFVC            No                        Yes                                 Yes                                                         Yes
GitHub                      Yes                       Yes                                 No                                                          No
GitHub Enterprise Server    Yes                       Yes                                 TFS 2018.2 and higher                                       No
Bitbucket Cloud             Yes                       Yes                                 No                                                          No
Bitbucket Server            No                        Yes                                 Yes                                                         Yes
Subversion                  No                        Yes                                 Yes                                                         No


Build Azure Repos Git or TFS Git repositories
11/2/2020 • 32 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Azure Pipelines can automatically build and validate every pull request and commit to your Azure Repos Git
repository.

Choose a repository to build


YAML
Classic
You create a new pipeline by first selecting a repository and then a YAML file in that repository. The repository in
which the YAML file is present is called self repository. By default, this is the repository that your pipeline builds.
You can later configure your pipeline to check out a different repository or multiple repositories. To learn how to
do this, see multi-repo checkout.
Azure Pipelines must be granted access to your repositories to trigger their builds and fetch their code during
builds. Normally, a pipeline has access to repositories in the same project. But, if you wish to access repositories in
a different project, then you need to update the permissions granted to job access tokens.

CI triggers
Continuous integration (CI) triggers cause a pipeline to run whenever you push an update to the specified
branches or you push specified tags.
YAML
Classic
YAML pipelines are configured by default with a CI trigger on all branches.
Branches
You can control which branches get CI triggers with a simple syntax:

trigger:
- master
- releases/*

You can specify the full name of the branch (for example, master ) or a wildcard (for example, releases/* ). See
Wildcards for information on the wildcard syntax.
NOTE
You cannot use variables in triggers, as variables are evaluated at runtime (after the trigger has fired).

NOTE
If you use templates to author YAML files, then you can only specify triggers in the main YAML file for the pipeline. You
cannot specify triggers in the template files.

For more complex triggers that use exclude or batch , you must use the full syntax as shown in the following
example.

# specific branch build
trigger:
  branches:
    include:
    - master
    - releases/*
    exclude:
    - releases/old*

In the above example, the pipeline will be triggered if a change is pushed to master or to any releases branch.
However, it won't be triggered if a change is made to a releases branch that starts with old .
If you specify an exclude clause without an include clause, then it is equivalent to specifying * in the include
clause.
In addition to specifying branch names in the branches lists, you can also configure triggers based on tags by
using the following format:

trigger:
  branches:
    include:
    - refs/tags/{tagname}
    exclude:
    - refs/tags/{othertagname}

If you don't specify any triggers, the default is as if you wrote:

trigger:
  branches:
    include:
    - '*' # must quote since "*" is a YAML reserved character; we want a string

IMPORTANT
When you specify a trigger, it replaces the default implicit trigger, and only pushes to branches that are explicitly configured
to be included will trigger a pipeline. Includes are processed first, and then excludes are removed from that list.

Batching CI runs
If you have many team members uploading changes often, you may want to reduce the number of runs you start.
If you set batch to true , when a pipeline is running, the system waits until the run is completed, then starts
another run with all changes that have not yet been built.
# specific branch build with batching
trigger:
  batch: true
  branches:
    include:
    - master

To clarify this example, let us say that a push A to master caused the above pipeline to run. While that pipeline is
running, additional pushes B and C occur into the repository. These updates do not start new independent runs
immediately. But after the first run is completed, all pushes until that point of time are batched together and a new
run is started.

NOTE
If the pipeline has multiple jobs and stages, then the first run should still reach a terminal state by completing or skipping all
its jobs and stages before the second run can start. For this reason, you must exercise caution when using this feature in a
pipeline with multiple stages or approvals. If you wish to batch your builds in such cases, it is recommended that you split
your CI/CD process into two pipelines - one for build (with batching) and one for deployments.

Paths
You can specify file paths to include or exclude. Note that the wildcard syntax is different between branches/tags
and file paths.

# specific path build
trigger:
  branches:
    include:
    - master
    - releases/*
  paths:
    include:
    - docs/*
    exclude:
    - docs/README.md

When you specify paths, you must explicitly specify branches to trigger on. You can't trigger a pipeline with only a
path filter; you must also have a branch filter, and the changed files that match the path filter must be from a
branch that matches the branch filter.

Tips:
Paths are always specified relative to the root of the repository.
If you don't set path filters, then the root folder of the repo is implicitly included by default.
If you exclude a path, you cannot also include it unless you qualify it to a deeper folder. For example if you
exclude /tools then you could include /tools/trigger-runs-on-these
The order of path filters doesn't matter.
Paths in Git are case-sensitive. Be sure to use the same case as the real folders.

NOTE
You cannot use variables in paths, as variables are evaluated at runtime (after the trigger has fired).

Tags
In addition to specifying tags in the branches lists as covered in the previous section, you can directly specify tags
to include or exclude:

# specific tag
trigger:
  tags:
    include:
    - v2.*
    exclude:
    - v2.0

If you don't specify any tag triggers, then by default, tags will not trigger pipelines.

IMPORTANT
If you specify tags in combination with branch filters, the trigger will fire if either the branch filter is satisfied or the tag filter
is satisfied. For example, if a pushed tag satisfies the branch filter, the pipeline triggers even if the tag is excluded by the tag
filter, because the push satisfied the branch filter.

Opting out of CI
Disabling the CI trigger
You can opt out of CI triggers entirely by specifying trigger: none .

# A pipeline with no CI trigger
trigger: none

IMPORTANT
When you push a change to a branch, the YAML file in that branch is evaluated to determine if a CI run should be started.

For more information, see Triggers in the YAML schema.


Skipping CI for individual commits
You can also tell Azure Pipelines to skip running a pipeline that a commit would normally trigger. Just include
[skip ci] in the commit message or description of the HEAD commit and Azure Pipelines will skip running CI.
You can also use any of the variations below.
[skip ci] or [ci skip]
skip-checks: true or skip-checks:true
[skip azurepipelines] or [azurepipelines skip]
[skip azpipelines] or [azpipelines skip]
[skip azp] or [azp skip]
***NO_CI***
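For example, a commit like the following (the message text is illustrative) would not start a CI run:

git commit -m "Update README [skip ci]"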

Using the trigger type in conditions


It is a common scenario to run different steps, jobs, or stages in your pipeline depending on the type of trigger that
started the run. You can do this using the system variable Build.Reason . For example, add the following condition
to your step, job, or stage to exclude it from PR validations.
condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))
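For example, a step like this sketch runs for CI and manual runs but is skipped during pull request validation (the script itself is just a placeholder):

steps:
- script: echo This step does not run for pull request validation
  condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))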

Behavior of triggers when new branches are created


It is common to configure multiple pipelines for the same repository. For instance, you may have one pipeline to
build the docs for your app and another to build the source code. You may configure CI triggers with appropriate
branch filters and path filters in each of these pipelines. For instance, you may want one pipeline to trigger when
you push an update to the docs folder, and another one to trigger when you push an update to your application
code. In these cases, you need to understand how the pipelines are triggered when a new branch is created.
Here is the behavior when you push a new branch (that matches the branch filters) to your repository:
If your pipeline has path filters, it will be triggered only if the new branch has changes to files that match that
path filter.
If your pipeline does not have path filters, it will be triggered even if there are no changes in the new branch.
Wildcards
When specifying a branch or tag, you may use an exact name or a wildcard. Wildcards patterns allow * to match
zero or more characters and ? to match a single character.
If you start your pattern with * in a YAML pipeline, you must wrap the pattern in quotes, like "*-releases" .
For branches and tags:
A wildcard may appear anywhere in the pattern.
For paths:
You may include * as the final character, but it doesn't do anything differently from specifying the
directory name by itself.
You may not include * in the middle of a path filter, and you may not use ? .

trigger:
  branches:
    include:
    - master
    - releases/*
    - feature/*
    exclude:
    - releases/old*
    - feature/*-working
  paths:
    include:
    - '*' # same as '/' for the repository root
    exclude:
    - 'docs/*' # same as 'docs/'

PR triggers
Pull request (PR) triggers cause a pipeline to run whenever a pull request is opened with one of the specified
target branches, or when changes are pushed to such a pull request. In Azure Repos Git, this functionality is
implemented using branch policies. To enable pull request validation in Azure Git Repos, navigate to the branch
policies for the desired branch, and configure the Build validation policy for that branch. For more information, see
Configure branch policies.

NOTE
To configure validation builds for an Azure Repos Git repository, you must be a project administrator of its project.
NOTE
Draft pull requests do not trigger a pipeline even if you configure a branch policy.

Validate contributions from forks


Building pull requests from Azure Repos forks is no different from building pull requests within the same
repository or project. You can create forks only within the same organization that your project is part of.

Limit job authorization scope


Azure Pipelines provides several security settings to configure the job authorization scope that your pipelines run
with.
Limit job authorization scope to current project
Limit job authorization scope to referenced Azure DevOps repositories
Limit job authorization scope to current project
Azure Pipelines provides two Limit job authorization scope to current project settings:
Limit job authorization scope to current project for non-release pipelines - This setting applies to
YAML pipelines and classic build pipelines. This setting does not apply to classic release pipelines.
Limit job authorization scope to current project for release pipelines - This setting applies to classic
release pipelines only.
Pipelines run with collection scoped access tokens unless the relevant setting for the pipeline type is enabled. The
Limit job authorization scope settings allow you to reduce the scope of access for all pipelines to the current
project. This can impact your pipeline if you are accessing an Azure Repos Git repository in a different project in
your organization.
If your Azure Repos Git repository is in a different project than your pipeline, and the Limit job authorization
scope setting for your pipeline type is enabled, you must grant permission to the build service identity for your
pipeline to the second project. For more information, see Manage build service account permissions.
Azure Pipelines provides a security setting to configure the job authorization scope that your pipelines run with.
Limit job authorization scope to current project - This setting applies to YAML pipelines and classic build
pipelines. This setting does not apply to classic release pipelines.
Pipelines run with collection scoped access tokens unless Limit job authorization scope to current project is
enabled. This setting allows you to reduce the scope of access for all pipelines to the current project. This can
impact your pipeline if you are accessing an Azure Repos Git repository in a different project in your organization.
If your Azure Repos Git repository is in a different project than your pipeline, and the Limit job authorization
scope setting is enabled, you must grant permission to the build service identity for your pipeline to the second
project. For more information, see Job authorization scope.
For more information on Limit job authorization scope , see Understand job access tokens.
Limit job authorization scope to referenced Azure DevOps repositories
Pipelines can access any Azure DevOps repositories in authorized projects, as described in the previous Limit job
authorization scope to current project section, unless Limit job authorization scope to referenced Azure
DevOps repositories is enabled. With this option enabled, you can reduce the scope of access for all pipelines to
only Azure DevOps repositories explicitly referenced by a checkout step in the pipeline job that uses that
repository.
To configure this setting, navigate to Pipelines , Settings at either Organization settings or Project settings . If
enabled at the organization level, the setting is grayed out and unavailable at the project settings level.

IMPORTANT
Limit job authorization scope to referenced Azure DevOps repositories is enabled by default for new
organizations and projects created after May 2020.

When Limit job authorization scope to referenced Azure DevOps repositories is enabled, your YAML
pipelines must explicitly reference any Azure Repos Git repositories you want to use in the pipeline as a checkout
step in the job that uses the repository. You won't be able to fetch code using scripting tasks and git commands for
an Azure Repos Git repository unless that repo is first explicitly referenced.
There are a few exceptions where you don't need to explicitly reference an Azure Repos Git repository before using
it in your pipeline when Limit job authorization scope to referenced Azure DevOps repositories is
enabled.
If you do not have an explicit checkout step in your pipeline, it is as if you have a checkout: self step, and the
self repository is checked out.
If you are using a script to perform read-only operations on a repository in a public project, you don't need to
reference the public project repository in a checkout step.
If you are using a script that provides its own authentication to the repo, such as a PAT, you don't need to
reference that repository in a checkout step.

For example, when Limit job authorization scope to referenced Azure DevOps repositories is enabled, if
your pipeline is in the FabrikamProject/Fabrikam repo in your organization, and you want to use a script to check
out the FabrikamProject/FabrikamTools repo, you must also reference this repository in a checkout step.
If you are already checking out the FabrikamTools repository in your pipeline using a checkout step, you may
subsequently use scripts to interact with that repository, such as checking out different branches.

steps:
- checkout: git://FabrikamFiber/FabrikamTools # Azure Repos Git repository in the same organization

NOTE
For many scenarios, multi-repo checkout can be leveraged, removing the need to use scripts to check out additional
repositories in your pipeline. For more information, see Check out multiple repositories in your pipeline.

Checkout
When a pipeline is triggered, Azure Pipelines pulls your source code from the Azure Repos Git repository. You can
control various aspects of how this happens.
Preferred version of Git
The Windows agent comes with its own copy of Git. If you prefer to supply your own Git rather than use the
included copy, set System.PreferGitFromPath to true . This setting is always true on non-Windows agents.
Checkout path
YAML
Classic
If you are checking out a single repository, by default, your source code will be checked out into a directory called
s . For YAML pipelines, you can change this by specifying checkout with a path . The specified path is relative to
$(Agent.BuildDirectory) . For example: if the checkout path value is mycustompath and $(Agent.BuildDirectory) is
C:\agent\_work\1 , then the source code will be checked out into C:\agent\_work\1\mycustompath .

If you are using multiple checkout steps and checking out multiple repositories, and not explicitly specifying the
folder using path , each repository is placed in a subfolder of s named after the repository. For example if you
check out two repositories named tools and code , the source code will be checked out into
C:\agent\_work\1\s\tools and C:\agent\_work\1\s\code .

Please note that the checkout path value cannot be set to go up any directory levels above
$(Agent.BuildDirectory) , so path\..\anotherpath will result in a valid checkout path (i.e.
C:\agent\_work\1\anotherpath ), but a value like ..\invalidpath will not (i.e. C:\agent\_work\invalidpath ).

You can configure the path setting in the Checkout step of your pipeline.

steps:
- checkout: self  # self represents the repo where the initial Pipelines YAML file was found
  clean: boolean  # whether to fetch clean each time
  fetchDepth: number  # the depth of commits to ask Git to fetch
  lfs: boolean  # whether to download Git-LFS files
  submodules: true | recursive  # set to 'true' for a single level of submodules or 'recursive' to get submodules of submodules
  path: string  # path to check out source code, relative to the agent's build directory (e.g. \_work\1)
  persistCredentials: boolean  # set to 'true' to leave the OAuth token in the Git config after the initial fetch
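For instance, a minimal sketch that checks the self repository out into the mycustompath folder used in the example above:

steps:
- checkout: self
  path: mycustompath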

Submodules
YAML
Classic
You can configure the submodules setting in the Checkout step of your pipeline if you want to download files from
submodules.

steps:
- checkout: self  # self represents the repo where the initial Pipelines YAML file was found
  clean: boolean  # whether to fetch clean each time
  fetchDepth: number  # the depth of commits to ask Git to fetch
  lfs: boolean  # whether to download Git-LFS files
  submodules: true | recursive  # set to 'true' for a single level of submodules or 'recursive' to get submodules of submodules
  path: string  # path to check out source code, relative to the agent's build directory (e.g. \_work\1)
  persistCredentials: boolean  # set to 'true' to leave the OAuth token in the Git config after the initial fetch

The build pipeline will check out your Git submodules as long as they are:
Unauthenticated: A public, unauthenticated repo with no credentials required to clone or fetch.
Authenticated:
Contained in the same project as the Azure Repos Git repo specified above. The same credentials that
are used by the agent to get the sources from the main repository are also used to get the sources
for submodules.
Added by using a URL relative to the main repository. For example
This one would be checked out:
git submodule add ../../../FabrikamFiberProject/_git/FabrikamFiber FabrikamFiber

In this example the submodule refers to a repo (FabrikamFiber) in the same Azure DevOps
organization, but in a different project (FabrikamFiberProject). The same credentials that are
used by the agent to get the sources from the main repository are also used to get the
sources for submodules. This requires that the job access token has access to the repository in
the second project. If you restricted the job access token as explained in the section above,
then you won't be able to do this.
This one would not be checked out:
git submodule add https://[email protected]/fabrikam-fiber/FabrikamFiberProject/_git/FabrikamFiber FabrikamFiber

Alternative to using the Checkout submodules option


In some cases you can't use the Checkout submodules option. You might have a scenario where a different set
of credentials are needed to access the submodules. This can happen, for example, if your main repository and
submodule repositories aren't stored in the same Azure DevOps organization, or if your job access token does not
have access to the repository in a different project.
If you can't use the Checkout submodules option, then you can instead use a custom script step to fetch
submodules. First, get a personal access token (PAT) and prefix it with pat: . Next, base64-encode this prefixed
string to create a basic auth token. Finally, add this script to your pipeline:

git -c http.https://<url of submodule repository>.extraheader="AUTHORIZATION: Basic <BASE64_ENCODED_STRING>" submodule update --init --recursive

Be sure to replace "<BASE64_ENCODED_STRING>" with your Base64-encoded "pat:token" string.


Use a secret variable in your project or build pipeline to store the basic auth token that you generated. Use that
variable to populate the secret in the above Git command.
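As an illustration only, a Bash step could build the basic auth token from a secret variable (assumed here to be named submodulePat) and run the command above; the organization URL and submodule location are placeholders:

steps:
- bash: |
    B64_PAT=$(printf "pat:%s" "$MY_PAT" | base64 | tr -d '\n')
    git -c http.https://dev.azure.com/fabrikam/.extraheader="AUTHORIZATION: Basic ${B64_PAT}" submodule update --init --recursive
  env:
    MY_PAT: $(submodulePat)   # map the secret variable into the script's environment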

NOTE
Q: Why can't I use a Git credential manager on the agent? A: Storing the submodule credentials in a Git credential
manager installed on your private build agent is usually not effective as the credential manager may prompt you to re-enter
the credentials whenever the submodule is updated. This isn't desirable during automated builds when user interaction isn't
possible.

Shallow fetch
You may want to limit how far back in history to download. Effectively this results in git fetch --depth=n . If your
repository is large, this option might make your build pipeline more efficient. Your repository might be large if it
has been in use for a long time and has sizeable history. It also might be large if you added and later deleted large
files.
YAML
Classic
You can configure the fetchDepth setting in the Checkout step of your pipeline.

steps:
- checkout: self  # self represents the repo where the initial Pipelines YAML file was found
  clean: boolean  # whether to fetch clean each time
  fetchDepth: number  # the depth of commits to ask Git to fetch
  lfs: boolean  # whether to download Git-LFS files
  submodules: true | recursive  # set to 'true' for a single level of submodules or 'recursive' to get submodules of submodules
  path: string  # path to check out source code, relative to the agent's build directory (e.g. \_work\1)
  persistCredentials: boolean  # set to 'true' to leave the OAuth token in the Git config after the initial fetch

In these cases this option can help you conserve network and storage resources. It might also save time. The
reason it doesn't always save time is because in some situations the server might need to spend time calculating
the commits to download for the depth you specify.
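For example, a minimal sketch that limits the fetch to only the most recent commit:

steps:
- checkout: self
  fetchDepth: 1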

NOTE
When the pipeline is started, the branch to build is resolved to a commit ID. Then, the agent fetches the branch and checks
out the desired commit. There is a small window between when a branch is resolved to a commit ID and when the agent
performs the checkout. If the branch updates rapidly and you set a very small value for shallow fetch, the commit may not
exist when the agent attempts to check it out. If that happens, increase the shallow fetch depth setting.

Don't sync sources


You may want to skip fetching new commits. This option can be useful in cases when you want to:
Git init, config, and fetch using your own custom options.
Use a build pipeline to just run automation (for example some scripts) that do not depend on code in
version control.
YAML
Classic
You can configure the Don't sync sources setting in the Checkout step of your pipeline, by setting
checkout: none .

steps:
- checkout: none # Don't sync sources

NOTE
When you use this option, the agent also skips running Git commands that clean the repo.

Clean build
You can perform different forms of cleaning the working directory of your self-hosted agent before a build runs.
In general, for faster performance of your self-hosted agents, don't clean the repo. In this case, to get the best
performance, make sure you're also building incrementally by disabling any Clean option of the task or tool you're
using to build.
If you do need to clean the repo (for example to avoid problems caused by residual files from a previous build),
your options are below.

NOTE
Cleaning is not effective if you're using a Microsoft-hosted agent because you'll get a new agent every time.

YAML
Classic
You can configure the clean setting in the Checkout step of your pipeline.
steps:
- checkout: self  # self represents the repo where the initial Pipelines YAML file was found
  clean: boolean  # whether to fetch clean each time
  fetchDepth: number  # the depth of commits to ask Git to fetch
  lfs: boolean  # whether to download Git-LFS files
  submodules: true | recursive  # set to 'true' for a single level of submodules or 'recursive' to get submodules of submodules
  path: string  # path to check out source code, relative to the agent's build directory (e.g. \_work\1)
  persistCredentials: boolean  # set to 'true' to leave the OAuth token in the Git config after the initial fetch

When clean is set to true the build pipeline performs an undo of any changes in $(Build.SourcesDirectory) .
More specifically, the following Git commands are executed prior to fetching the source.

git clean -ffdx
git reset --hard HEAD

For more options, you can configure the workspace setting of a Job.

jobs:
- job: string  # name of the job, A-Z, a-z, 0-9, and underscore
  ...
  workspace:
    clean: outputs | resources | all  # what to clean up before the job runs

This gives the following clean options.


outputs : Same operation as the clean setting described in the previous checkout step, plus: Deletes and
recreates $(Build.BinariesDirectory) . Note that the $(Build.ArtifactStagingDirectory) and
$(Common.TestResultsDirectory) are always deleted and recreated prior to every build regardless of any of
these settings.
resources : Deletes and recreates $(Build.SourcesDirectory) . This results in initializing a new, local Git
repository for every build.
all : Deletes and recreates $(Agent.BuildDirectory) . This results in initializing a new, local Git repository for
every build.
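Putting these options together, a job might look like the following sketch (the job name is a placeholder):

jobs:
- job: Build
  workspace:
    clean: all        # delete and recreate $(Agent.BuildDirectory) before the job runs
  steps:
  - checkout: self
    clean: true       # also undo any changes in $(Build.SourcesDirectory) during checkout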
Label sources
You may want to label your source code files to enable your team to easily identify which version of each file is
included in the completed build. You also have the option to specify whether the source code should be labeled for
all builds or only for successful builds.
YAML
Classic
You can't currently configure this setting in YAML but you can in the classic editor. When editing a YAML pipeline,
you can access the classic editor by choosing Triggers from the YAML editor menu.
From the classic editor, choose YAML , choose the Get sources task, and then configure the desired properties
there.

In the Tag format you can use user-defined and predefined variables that have a scope of "All." For example:

$(Build.DefinitionName)_$(Build.DefinitionVersion)_$(Build.BuildId)_$(Build.BuildNumber)_$(My.Variable)

The first four variables are predefined. My.Variable can be defined by you on the variables tab.
The build pipeline labels your sources with a Git tag.
Some build variables might yield a value that is not a valid label. For example, variables such as
$(Build.RequestedFor) and $(Build.DefinitionName) can contain white space. If the value contains white space,
the tag is not created.
After the sources are tagged by your build pipeline, an artifact with the Git ref refs/tags/{tag} is automatically
added to the completed build. This gives your team additional traceability and a more user-friendly way to
navigate from the build to the code that was built.
FAQ
Problems related to Azure Repos integration fall into three categories:
Failing triggers : My pipeline is not being triggered when I push an update to the repo.
Failing checkout : My pipeline is being triggered, but it fails in the checkout step.
Wrong version : My pipeline runs, but it is using an unexpected version of the source/YAML.
Failing triggers
I just created a new YAML pipeline with CI/PR triggers, but the pipeline is not being triggered.
Follow each of these steps to troubleshoot your failing triggers:
Are your YAML CI or PR triggers being overridden by pipeline settings in the UI? While editing your
pipeline, choose ... and then Triggers .

Check the Override the YAML trigger from here setting for the types of trigger (Continuous
integration or Pull request validation ) available for your repo.

Are you configuring the PR trigger in the YAML file or in branch policies for the repo? For an Azure Repos Git
repo, you cannot configure a PR trigger in the YAML file. You need to use branch policies.
Is your pipeline paused or disabled? Open the editor for the pipeline, and then select Settings to check. If
your pipeline is paused or disabled, then triggers do not work.
Have you updated the YAML file in the correct branch? If you push an update to a branch, then the YAML file
in that same branch governs the CI behavior. If you push an update to a source branch, then the YAML file
resulting from merging the source branch with the target branch governs the PR behavior. Make sure that
the YAML file in the correct branch has the necessary CI or PR configuration.
Have you configured the trigger correctly? When you define a YAML trigger, you can specify both include
and exclude clauses for branches, tags, and paths. Ensure that the include clause matches the details of your
commit and that the exclude clause doesn't exclude them. Check the syntax for the triggers and make sure
that it is accurate.
Have you used variables in defining the trigger or the paths? That is not supported.
Did you use templates for your YAML file? If so, make sure that your triggers are defined in the main YAML
file. Triggers defined inside template files are not supported.
Have you excluded the branches or paths to which you pushed your changes? Test by pushing a change to
an included path in an included branch. Note that paths in triggers are case-sensitive. Make sure that you
use the same case as those of real folders when specifying the paths in triggers.
Do you have wildcards in your path filters? Understand the limitations of wildcards in your paths as
described in this article.
Did you just push a new branch? If so, the new branch may not start a new run. See the section "Behavior of
triggers when new branches are created".
My CI or PR triggers have been working fine. But, they stopped working now.
First go through the troubleshooting steps in the previous question. Then, follow these additional steps:
Do you have merge conflicts in your PR? For a PR that did not trigger a pipeline, open it and check whether
it has a merge conflict. Resolve the merge conflict.
Are you experiencing a delay in the processing of push or PR events? You can usually verify this by seeing if
the issue is specific to a single pipeline or is common to all pipelines or repos in your project. If a push or a
PR update to any of the repos exhibits this symptom, we might be experiencing delays in processing the
update events. Check if we are experiencing a service outage on our status page. If the status page shows
an issue, then our team must have already started working on it. Check the page frequently for updates on
the issue.
I do not want users to override the list of branches for triggers when they update the YAML file. How can I do this?
Users with permissions to contribute code can update the YAML file and include/exclude additional branches. As a
result, users can include their own feature or user branch in their YAML file and push that update to a feature or
user branch. This may cause the pipeline to be triggered for all updates to that branch. If you want to prevent this
behavior, then you can:
1. Edit the pipeline in the Azure Pipelines UI.
2. Navigate to the Triggers menu.
3. Select Override the YAML continuous Integration trigger from here .
4. Specify the branches to include or exclude for the trigger.
When you follow these steps, any CI triggers specified in the YAML file are ignored.
I have multiple repositories in my YAML pipeline. How do I set up triggers for each repository?
See triggers in Using multiple repositories.
Failing checkout
I see the following error in the log file during checkout step. How do I fix it?

remote: TF401019: The Git repository with name or identifier XYZ does not exist or you do not have permissions
for the operation you are attempting.
fatal: repository 'XYZ' not found
##[error] Git fetch failed with exit code: 128

Follow each of these steps to troubleshoot your failing checkout:


Does the repository still exist? First, make sure it does by opening it in the Repos page.
Are you accessing the repository using a script? If so, check the Limit job authorization scope to referenced
Azure DevOps repositories setting. When Limit job authorization scope to referenced Azure DevOps
repositories is enabled, you won't be able to check out Azure Repos Git repositories using a script unless
they are explicitly referenced first in the pipeline.
What is the job authorization scope of the pipeline?
If the scope is collection :
This may be an intermittent error. Re-run the pipeline.
Someone may have removed access to the Project Collection Build Service account.
Go to the project settings of the project in which the repository exists. Select Repos ->
Repositories -> specific repository.
Check if Project Collection Build Service (your-collection-name) exists in the list of
users.
Check if that account has Create tag and Read access.
If the scope is project :
Is the repo in the same project as the pipeline?
Yes:
This may be an intermittent error. Re-run the pipeline.
Someone may have removed access to the Project Build Service account.
Go to the project settings of the project in which the repository exists.
Select Repos -> Repositories -> specific repository.
Check if your-project-name Build Service (your-collection-name)
exists in the list of users.
Check if that account has Create tag and Read access.
No:
Is your pipeline in a public project?
Yes: You cannot access resources outside of your public project. Make the
project private.
No: You need to take additional steps to grant access. Let us say that your
pipeline exists in project A and that your repository exists in project B .
Go to the project settings of the project in which the repository exists
(B). Select Repos -> Repositories -> specific repository.
Add your-project-name Build Service (your-collection-name)
to the list of users, where your-project-name is the name of the project
in which your pipeline exists (A).
Give Create tag and Read access to the account.
Wrong version
A wrong version of the YAML file is being used in the pipeline. Why is that?
For CI triggers, the YAML file that is in the branch you are pushing is evaluated to see if a CI build should be run.
For PR triggers, the YAML file resulting from merging the source and target branches of the PR is evaluated to
see if a PR build should be run.

Related articles
Scheduled triggers
Pipeline completion triggers
Build GitHub repositories

Azure Pipelines
Azure Pipelines can automatically build and validate every pull request and commit to your GitHub repository.
This article describes how to configure the integration between GitHub and Azure Pipelines.
If you're new to Azure Pipelines integration with GitHub, follow the steps in Create your first pipeline to get your
first pipeline working with a GitHub repository, and then come back to this article to learn more about
configuring and customizing the integration between GitHub and Azure Pipelines.

Organizations and users


GitHub and Azure Pipelines are two independent services that integrate well together. Each of them has its
own organization and user management. This section makes a recommendation on how to replicate the
organization and users from GitHub to Azure Pipelines.
Organizations
GitHub's structure consists of organizations and user accounts that contain repositories . See GitHub's
documentation.

Azure DevOps' structure consists of organizations that contain projects . See Plan your organizational structure.

Azure DevOps can reflect your GitHub structure with:


An Azure DevOps organization for your GitHub organization or user account
Azure DevOps Projects for your GitHub repositories

To set up an identical structure in Azure DevOps:


1. Create an Azure DevOps organization named after your GitHub organization or user account. It will have a
URL like https://dev.azure.com/your-organization .
2. In the Azure DevOps organization, create projects named after your repositories. They will have URLs like
https://dev.azure.com/your-organization/your-repository .
3. In the Azure DevOps Project, create pipelines named after the GitHub organization and repository they build,
such as your-organization.your-repository . Then, it's clear which repositories they're for.

Following this pattern, your GitHub repositories and Azure DevOps Projects will have matching URL paths. For
example:

SERVICE | URL
GitHub | https://github.com/python/cpython
Azure DevOps | https://dev.azure.com/python/cpython

Users
Your GitHub users do not automatically get access to Azure Pipelines. Azure Pipelines is unaware of GitHub
identities. For this reason, there is no way to configure Azure Pipelines to automatically notify users of a build
failure or a PR validation failure using their GitHub identity and email address. You must explicitly create new
users in Azure Pipelines to replicate GitHub users. Once you create new users, you can configure their
permissions in Azure DevOps to reflect their permissions in GitHub. You can also configure notifications in Azure
DevOps using their Azure DevOps identity.
GitHub organization roles
GitHub organization member roles are found at https://github.com/orgs/your-organization/people (replace
your-organization ).

Azure DevOps organization member permissions are found at


https://dev.azure.com/your-organization/_settings/security (replace your-organization ).
Roles in a GitHub organization and equivalent roles in an Azure DevOps organization are shown below.
GITHUB ORGANIZATION ROLE | AZURE DEVOPS ORGANIZATION EQUIVALENT
Owner | Member of Project Collection Administrators
Billing manager | Member of Project Collection Administrators
Member | Member of Project Collection Valid Users. By default, this group lacks permission to create new projects. To change this, set the group's Create new projects permission to Allow, or create a new group with permissions you need.

GitHub user account roles


A GitHub user account has one role, which is ownership of the account.
Azure DevOps organization member permissions are found at
https://dev.azure.com/your-organization/_settings/security (replace your-organization ).
The GitHub user account role maps to Azure DevOps organization permissions as follows.

GITHUB USER ACCOUNT ROLE | AZURE DEVOPS ORGANIZATION EQUIVALENT
Owner | Member of Project Collection Administrators

GitHub repository permissions


GitHub repository permissions are found at
https://github.com/your-organization/your-repository/settings/collaboration (replace your-organization and
your-repository ).
Azure DevOps project permissions are found at
https://dev.azure.com/your-organization/your-project/_settings/security (replace your-organization and
your-project ).
Equivalent permissions between GitHub repositories and Azure DevOps Projects are as follows.

GITHUB REPOSITORY PERMISSION | AZURE DEVOPS PROJECT EQUIVALENT
Admin | Member of Project Administrators
Write | Member of Contributors
Read | Member of Readers

If your GitHub repository grants permission to teams, you can create matching teams in the Teams section of
your Azure DevOps project settings. Then, add the teams to the security groups above, just like users.
Pipeline-specific permissions
To grant permissions to users or teams for specific pipelines in an Azure DevOps project, follow these steps:
1. Visit the project's Pipelines page (for example, https://dev.azure.com/your-organization/your-project/_build ).
2. Select the pipeline for which to set specific permissions.
3. From the '...' context menu, select Security .
4. Click Add... to add a specific user, team, or group and customize their permissions for the pipeline.

Access to GitHub repositories


YAML
Classic
You create a new pipeline by first selecting a GitHub repository and then a YAML file in that repository. The
repository in which the YAML file is present is called self repository. By default, this is the repository that your
pipeline builds.
You can later configure your pipeline to check out a different repository or multiple repositories. To learn how to
do this, see multi-repo checkout.
Azure Pipelines must be granted access to your repositories to trigger their builds, and fetch their code during
builds.
There are 3 authentication types for granting Azure Pipelines access to your GitHub repositories while creating a
pipeline.

AUTHENTICATION TYPE | PIPELINES RUN USING | WORKS WITH GITHUB CHECKS
1. GitHub App | The Azure Pipelines identity | Yes
2. OAuth | Your personal GitHub identity | No
3. Personal access token (PAT) | Your personal GitHub identity | No

GitHub app authentication


The Azure Pipelines GitHub App is the recommended authentication type for continuous integration pipelines.
By installing the GitHub App in your GitHub account or organization, your pipeline can run without using your
personal GitHub identity. Builds and GitHub status updates will be performed using the Azure Pipelines identity.
The app works with GitHub Checks to display build, test, and code coverage results in GitHub.
To use the GitHub App, install it in your GitHub organization or user account for some or all repositories. The
GitHub App can be installed and uninstalled from the app's homepage.
After installation, the GitHub App will become Azure Pipelines' default method of authentication to GitHub
(instead of OAuth) when pipelines are created for the repositories.
If you install the GitHub App for all repositories in a GitHub organization, you don't need to worry about Azure
Pipelines sending mass emails or automatically setting up pipelines on your behalf. As an alternative to installing
the app for all repositories, repository admins can install it one at a time for individual repositories. This requires
more work for admins, but has neither an advantage nor a disadvantage.
Permissions needed in GitHub
Installation of Azure Pipelines GitHub app requires you to be a GitHub organization owner or repository admin. In
addition, to create a pipeline for a GitHub repository with continuous integration and pull request triggers, you
must have the required GitHub permissions configured. Otherwise, the repository will not appear in the
repository list while creating a pipeline. Depending on the authentication type and ownership of the repository,
ensure that the appropriate access is configured.
If the repo is in your personal GitHub account, install the Azure Pipelines GitHub App in your personal
GitHub account. You will be able to list this repository when creating the pipeline in Azure Pipelines.
If the repo is in someone else's personal GitHub account, the other person must install the Azure Pipelines
GitHub App in their personal GitHub account. You must be added as a collaborator in the repository's
settings under "Collaborators". Accept the invitation to be a collaborator using the link that is emailed to
you. Once you have done so, you can create a pipeline for that repository.
If the repo is in a GitHub organization that you own, install the Azure Pipelines GitHub App in the GitHub
organization. You must also be added as a collaborator, or your team must be added, in the repository's
settings under "Collaborators and teams".
If the repo is in a GitHub organization that someone else owns, a GitHub organization owner or repository
admin must install the Azure Pipelines GitHub App in the organization. You must be added as a
collaborator, or your team must be added, in the repository's settings under "Collaborators and teams".
Accept the invitation to be a collaborator using the link that is emailed to you.
GitHub App permissions
The GitHub App requests the following permissions during installation:

PERMISSION | WHAT AZURE PIPELINES DOES WITH IT
Write access to code | Only upon your deliberate action, Azure Pipelines will simplify creating a pipeline by committing a YAML file to a selected branch of your GitHub repository.
Read access to metadata | Azure Pipelines will retrieve GitHub metadata for displaying the repository, branches, and issues associated with a build in the build's summary.
Read and write access to checks | Azure Pipelines will read and write its own build, test, and code coverage results to be displayed in GitHub.
Read and write access to pull requests | Only upon your deliberate action, Azure Pipelines will simplify creating a pipeline by creating a pull request for a YAML file that was committed to a selected branch of your GitHub repository. Azure Pipelines will retrieve pull request metadata to display in build summaries associated with pull requests.

Troubleshooting GitHub App installation


GitHub may display an error such as:
You do not have permission to modify this app on your-organization. Please contact an Organization Owner.

This means that the GitHub App is likely already installed for your organization. When you create a pipeline for a
repository in the organization, the GitHub App will automatically be used to connect to GitHub.
Create pipelines in multiple Azure DevOps organizations and projects
Once the GitHub App is installed, pipelines can be created for the organization's repositories in different Azure
DevOps organizations and projects. However, if you create pipelines for a single repository in multiple Azure
DevOps organizations, only the first organization's pipelines can be automatically triggered by GitHub commits or
pull requests. Manual or scheduled builds are still possible in secondary Azure DevOps organizations.
OAuth authentication
OAuth is the simplest authentication type to get started with for repositories in your personal GitHub account.
GitHub status updates will be performed on behalf of your personal GitHub identity. For pipelines to keep
working, your repository access must remain active. Some GitHub features, like Checks, are unavailable with
OAuth and require the GitHub App.
To use OAuth, click Choose a different connection below the list of repositories while creating a pipeline. Then,
click Authorize to sign into GitHub and authorize with OAuth. An OAuth connection will be saved in your Azure
DevOps project for later use, as well as used in the pipeline being created.
Permissions needed in GitHub
To create a pipeline for a GitHub repository with continuous integration and pull request triggers, you must have
the required GitHub permissions configured. Otherwise, the repository will not appear in the repository list
while creating a pipeline. Depending on the authentication type and ownership of the repository, ensure that the
appropriate access is configured.
If the repo is in your personal GitHub account, at least once, authenticate to GitHub with OAuth using your
personal GitHub account credentials. This can be done in Azure DevOps project settings under Pipelines >
Service connections > New service connection > GitHub > Authorize. Grant Azure Pipelines access to your
repositories under "Permissions" here.
If the repo is in someone else's personal GitHub account, at least once, the other person must authenticate
to GitHub with OAuth using their personal GitHub account credentials. This can be done in Azure DevOps
project settings under Pipelines > Service connections > New service connection > GitHub > Authorize.
The other person must grant Azure Pipelines access to their repositories under "Permissions" here. You
must be added as a collaborator in the repository's settings under "Collaborators". Accept the invitation to
be a collaborator using the link that is emailed to you.
If the repo is in a GitHub organization that you own, at least once, authenticate to GitHub with OAuth using
your personal GitHub account credentials. This can be done in Azure DevOps project settings under
Pipelines > Service connections > New service connection > GitHub > Authorize. Grant Azure Pipelines
access to your organization under "Organization access" here. You must be added as a collaborator, or your
team must be added, in the repository's settings under "Collaborators and teams".
If the repo is in a GitHub organization that someone else owns, at least once, a GitHub organization owner
must authenticate to GitHub with OAuth using their personal GitHub account credentials. This can be done
in Azure DevOps project settings under Pipelines > Service connections > New service connection >
GitHub > Authorize. The organization owner must grant Azure Pipelines access to the organization under
"Organization access" here. You must be added as a collaborator, or your team must be added, in the
repository's settings under "Collaborators and teams". Accept the invitation to be a collaborator using the
link that is emailed to you.
Revoke OAuth access
After authorizing Azure Pipelines to use OAuth, to later revoke it and prevent further use, visit OAuth Apps in your
GitHub settings. You can also delete it from the list of GitHub service connections in your Azure DevOps project
settings.
Personal access token (PAT ) authentication
PATs are effectively the same as OAuth, but allow you to control which permissions are granted to Azure Pipelines.
Builds and GitHub status updates will be performed on behalf of your personal GitHub identity. For builds to keep
working, your repository access must remain active.
To create a PAT, visit Personal access tokens in your GitHub settings. The required permissions are repo ,
admin:repo_hook , read:user , and user:email . These are the same permissions required when using OAuth
above. Copy the generated PAT to the clipboard and paste it into a new GitHub service connection in your Azure
DevOps project settings. For future recall, name the service connection after your GitHub username. It will be
available in your Azure DevOps project for later use when creating pipelines.
Permissions needed in GitHub
To create a pipeline for a GitHub repository with continuous integration and pull request triggers, you must have
the required GitHub permissions configured. Otherwise, the repository will not appear in the repository list
while creating a pipeline. Depending on the authentication type and ownership of the repository, ensure that the
following access is configured.
If the repo is in your personal GitHub account, the PAT must have the required access scopes under
Personal access tokens: repo , admin:repo_hook , read:user , and user:email .
If the repo is in someone else's personal GitHub account, the PAT must have the required access scopes
under Personal access tokens: repo , admin:repo_hook , read:user , and user:email . You must be added as
a collaborator in the repository's settings under "Collaborators". Accept the invitation to be a collaborator
using the link that is emailed to you.
If the repo is in a GitHub organization that you own, the PAT must have the required access scopes under
Personal access tokens: repo , admin:repo_hook , read:user , and user:email . You must be added as a
collaborator, or your team must be added, in the repository's settings under "Collaborators and teams".
If the repo is in a GitHub organization that someone else owns, the PAT must have the required access
scopes under Personal access tokens: repo , admin:repo_hook , read:user , and user:email . You must be
added as a collaborator, or your team must be added, in the repository's settings under "Collaborators and
teams". Accept the invitation to be a collaborator using the link that is emailed to you.
Revoke PAT access
After authorizing Azure Pipelines to use a PAT, to later delete it and prevent further use, visit Personal access
tokens in your GitHub settings. You can also delete it from the list of GitHub service connections in your Azure
DevOps project settings.

CI triggers
Continuous integration (CI) triggers cause a pipeline to run whenever you push an update to the specified
branches or you push specified tags.
YAML
Classic
YAML pipelines are configured by default with a CI trigger on all branches.
Branches
You can control which branches get CI triggers with a simple syntax:

trigger:
- master
- releases/*

You can specify the full name of the branch (for example, master ) or a wildcard (for example, releases/* ). See
Wildcards for information on the wildcard syntax.

NOTE
You cannot use variables in triggers, as variables are evaluated at runtime (after the trigger has fired).

NOTE
If you use templates to author YAML files, then you can only specify triggers in the main YAML file for the pipeline. You
cannot specify triggers in the template files.

For more complex triggers that use exclude or batch , you must use the full syntax as shown in the following
example.
# specific branch build
trigger:
  branches:
    include:
    - master
    - releases/*
    exclude:
    - releases/old*

In the above example, the pipeline will be triggered if a change is pushed to master or to any releases branch.
However, it won't be triggered if a change is made to a releases branch that starts with old .
If you specify an exclude clause without an include clause, then it is equivalent to specifying * in the include
clause.
In addition to specifying branch names in the branches lists, you can also configure triggers based on tags by
using the following format:

trigger:
  branches:
    include:
    - refs/tags/{tagname}
    exclude:
    - refs/tags/{othertagname}

If you don't specify any triggers, the default is as if you wrote:

trigger:
  branches:
    include:
    - '*' # must quote since "*" is a YAML reserved character; we want a string

IMPORTANT
When you specify a trigger, it replaces the default implicit trigger, and only pushes to branches that are explicitly configured
to be included will trigger a pipeline. Includes are processed first, and then excludes are removed from that list.

Batching CI runs
If you have many team members uploading changes often, you may want to reduce the number of runs you start.
If you set batch to true , when a pipeline is running, the system waits until the run is completed, then starts
another run with all changes that have not yet been built.

# specific branch build with batching
trigger:
  batch: true
  branches:
    include:
    - master

To clarify this example, let us say that a push A to master caused the above pipeline to run. While that pipeline is
running, additional pushes B and C occur into the repository. These updates do not start new independent runs
immediately. But after the first run is completed, all pushes until that point of time are batched together and a new
run is started.
NOTE
If the pipeline has multiple jobs and stages, then the first run should still reach a terminal state by completing or skipping all
its jobs and stages before the second run can start. For this reason, you must exercise caution when using this feature in a
pipeline with multiple stages or approvals. If you wish to batch your builds in such cases, it is recommended that you split
your CI/CD process into two pipelines - one for build (with batching) and one for deployments.

Paths
You can specify file paths to include or exclude. Note that the wildcard syntax is different between branches/tags
and file paths.

# specific path build
trigger:
  branches:
    include:
    - master
    - releases/*
  paths:
    include:
    - docs/*
    exclude:
    - docs/README.md

When you specify paths, you must explicitly specify branches to trigger on. You can't trigger a pipeline with only a
path filter; you must also have a branch filter, and the changed files that match the path filter must be from a
branch that matches the branch filter.

Tips:
Paths are always specified relative to the root of the repository.
If you don't set path filters, then the root folder of the repo is implicitly included by default.
If you exclude a path, you cannot also include it unless you qualify it to a deeper folder. For example, if you exclude /tools then you could include /tools/trigger-runs-on-these , as shown in the sketch after these tips.
The order of path filters doesn't matter.
Paths in Git are case-sensitive. Be sure to use the same case as the real folders.
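
To illustrate the include/exclude tip above, here is a minimal sketch (the branch name master is illustrative) that excludes the tools folder but still triggers on a deeper folder inside it:

trigger:
  branches:
    include:
    - master
  paths:
    include:
    - tools/trigger-runs-on-these
    exclude:
    - tools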

NOTE
You cannot use variables in paths, as variables are evaluated at runtime (after the trigger has fired).

Tags
In addition to specifying tags in the branches lists as covered in the previous section, you can directly specify tags
to include or exclude:

# specific tag
trigger:
  tags:
    include:
    - v2.*
    exclude:
    - v2.0

If you don't specify any tag triggers, then by default, tags will not trigger pipelines.
IMPORTANT
If you specify tags in combination with branch filters, the trigger will fire if either the branch filter is satisfied or the tag filter
is satisfied. For example, if a pushed tag satisfies the branch filter, the pipeline triggers even if the tag is excluded by the tag
filter, because the push satisfied the branch filter.
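
As a sketch of that behavior (the branch and tag names below are illustrative, not from this article), a trigger that combines both filters runs when a push matches either of them:

trigger:
  branches:
    include:
    - master
  tags:
    include:
    - v2.*

With this configuration, pushing a v2.* tag starts a run even if the tagged commit is not on master, and pushing to master starts a run even without a tag.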

Opting out of CI
Disabling the CI trigger
You can opt out of CI triggers entirely by specifying trigger: none .

# A pipeline with no CI trigger


trigger: none

IMPORTANT
When you push a change to a branch, the YAML file in that branch is evaluated to determine if a CI run should be started.

For more information, see Triggers in the YAML schema.


Skipping CI for individual commits
You can also tell Azure Pipelines to skip running a pipeline that a commit would normally trigger. Just include
[skip ci] in the commit message or description of the HEAD commit and Azure Pipelines will skip running CI.
You can also use any of the variations below.
[skip ci] or [ci skip]
skip-checks: true or skip-checks:true
[skip azurepipelines] or [azurepipelines skip]
[skip azpipelines] or [azpipelines skip]
[skip azp] or [azp skip]
***NO_CI***

Using the trigger type in conditions


It is a common scenario to run different steps, jobs, or stages in your pipeline depending on the type of trigger
that started the run. You can do this using the system variable Build.Reason . For example, add the following
condition to your step, job, or stage to exclude it from PR validations.
condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))

Behavior of triggers when new branches are created


It is common to configure multiple pipelines for the same repository. For instance, you may have one pipeline to
build the docs for your app and another to build the source code. You may configure CI triggers with appropriate
branch filters and path filters in each of these pipelines. For instance, you may want one pipeline to trigger when
you push an update to the docs folder, and another one to trigger when you push an update to your application
code. In these cases, you need to understand how the pipelines are triggered when a new branch is created.
Here is the behavior when you push a new branch (that matches the branch filters) to your repository:
If your pipeline has path filters, it will be triggered only if the new branch has changes to files that match that
path filter.
If your pipeline does not have path filters, it will be triggered even if there are no changes in the new branch.
Wildcards
When specifying a branch or tag, you may use an exact name or a wildcard. Wildcard patterns allow * to match
zero or more characters and ? to match a single character.
If you start your pattern with * in a YAML pipeline, you must wrap the pattern in quotes, like "*-releases" .
For branches and tags:
A wildcard may appear anywhere in the pattern.
For paths:
You may include * as the final character, but it doesn't do anything differently from specifying the
directory name by itself.
You may not include * in the middle of a path filter, and you may not use ? .

trigger:
  branches:
    include:
    - master
    - releases/*
    - feature/*
    exclude:
    - releases/old*
    - feature/*-working
  paths:
    include:
    - '*' # same as '/' for the repository root
    exclude:
    - 'docs/*' # same as 'docs/'

PR triggers
Pull request (PR) triggers cause a pipeline to run whenever a pull request is opened with one of the specified
target branches, or when updates are made to such a pull request.
YAML
Classic
Branches
You can specify the target branches when validating your pull requests. For example, to validate pull requests that
target master and releases/* , you can use the following pr trigger.

pr:
- master
- releases/*

This configuration starts a new run the first time a new pull request is created, and after every update made to the
pull request.
You can specify the full name of the branch (for example, master ) or a wildcard (for example, releases/* ).

NOTE
You cannot use variables in triggers, as variables are evaluated at runtime (after the trigger has fired).
NOTE
If you use templates to author YAML files, then you can only specify triggers in the main YAML file for the pipeline. You
cannot specify triggers in the template files.

GitHub creates a new ref when a pull request is created. The ref points to a merge commit, which is the merged
code between the source and target branches of the pull request. The PR validation pipeline builds the commit
this ref points to. This means that the YAML file that is used to run the pipeline is also a merge between the source
and the target branch. As a result, the changes you make to the YAML file in the source branch of the pull request can
override the behavior defined by the YAML file in the target branch.
If no pr triggers appear in your YAML file, pull request validations are automatically enabled for all branches, as
if you wrote the following pr trigger. This configuration triggers a build when any pull request is created, and
when commits come into the source branch of any active pull request.

pr:
  branches:
    include:
    - '*' # must quote since "*" is a YAML reserved character; we want a string

IMPORTANT
When you specify a pr trigger, it replaces the default implicit pr trigger, and only pushes to branches that are explicitly
configured to be included will trigger a pipeline.

For more complex triggers that need to exclude certain branches, you must use the full syntax as shown in the
following example.

# specific branch
pr:
  branches:
    include:
    - master
    - releases/*
    exclude:
    - releases/old*

Paths
You can specify file paths to include or exclude. For example:

# specific path
pr:
  branches:
    include:
    - master
    - releases/*
  paths:
    include:
    - docs/*
    exclude:
    - docs/README.md
NOTE
You cannot use variables in paths, as variables are evaluated at runtime (after the trigger has fired).

Multiple PR updates
You can specify whether additional updates to a PR should cancel in-progress validation runs for the same PR. The
default is true .

# auto cancel false
pr:
  autoCancel: false
  branches:
    include:
    - master

Draft PR validation
By default, pull request triggers fire on draft pull requests as well as pull requests that are ready for review. To
disable pull request triggers for draft pull requests, set the drafts property to false .

pr:
  autoCancel: boolean # indicates whether additional pushes to a PR should cancel in-progress runs for the same PR. Defaults to true
  branches:
    include: [ string ] # branch names which will trigger a build
    exclude: [ string ] # branch names which will not
  paths:
    include: [ string ] # file paths which must match to trigger a build
    exclude: [ string ] # file paths which will not trigger a build
  drafts: boolean # whether to build draft PRs, defaults to true
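
For example, a minimal sketch (master is an illustrative target branch) that validates pull requests into master but skips drafts:

pr:
  branches:
    include:
    - master
  drafts: false # do not run validation for draft pull requests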

Opting out of PR validation


You can opt out of pull request validation entirely by specifying pr: none .

# no PR triggers
pr: none

For more information, see PR trigger in the YAML schema.

NOTE
If your pr trigger isn't firing, follow the troubleshooting steps in the FAQ.

NOTE
Draft pull requests do not trigger a pipeline.

Protected branches
You can run a validation build with each commit or pull request that targets a branch, and even prevent pull
requests from merging until a validation build succeeds.
To configure mandatory validation builds for a GitHub repository, you must be its owner, a collaborator with the
Admin role, or a GitHub organization member with the Write role.
1. First, create a pipeline for the repository and build it at least once so that its status is posted to GitHub,
thereby making GitHub aware of the pipeline's name.
2. Next, follow GitHub's documentation for configuring protected branches in the repository's settings.
For the status check, select the name of your pipeline in the Status checks list.

IMPORTANT
If your pipeline doesn't show up in this list, please ensure the following:
You are using GitHub app authentication
Your pipeline has run at least once in the last week

Contributions from external sources


If your GitHub repository is open source, you can make your Azure DevOps project public so that anyone can
view your pipeline's build results, logs, and test results without signing in. When users outside your organization
fork your repository and submit pull requests, they can view the status of builds that automatically validate those
pull requests.
You should keep in mind the following considerations when using Azure Pipelines in a public project when
accepting contributions from external sources.
Access restrictions
Validate contributions from forks
Important security considerations
Access restrictions
Be aware of the following access restrictions when you're running pipelines in Azure DevOps public projects:
Secrets: By default, secrets associated with your pipeline are not made available to pull request validations of
forks. See Validate contributions from forks.
Cross-project access: All pipelines in an Azure DevOps public project run with an access token restricted to
the project. Pipelines in a public project can access resources such as build artifacts or test results only within
the project and not in other projects of the Azure DevOps organization.
Azure Ar tifacts packages: If your pipelines need access to packages from Azure Artifacts, you must
explicitly grant permission to the Project Build Ser vice account to access the package feeds.
Contributions from forks

IMPORTANT
These settings affect the security of your pipeline.
When you create a pipeline, it is automatically triggered for pull requests from forks of your repository. You can
change this behavior, carefully considering how it affects security. To enable or disable this behavior:
1. Go to your Azure DevOps project. Select Pipelines , locate your pipeline, and select Edit .
2. Select the Triggers tab. After enabling the Pull request trigger , enable or disable the Build pull requests
from forks of this repositor y check box.
By default with GitHub pipelines, secrets associated with your build pipeline are not made available to pull request
builds of forks. These secrets are enabled by default with GitHub Enterprise Server pipelines. Secrets include:
A security token with access to your GitHub repository.
These items, if your pipeline uses them:
Service connection credentials
Files from the secure files library
Build variables marked secret
To bypass this precaution on GitHub pipelines, enable the Make secrets available to builds of forks check
box. Be aware of this setting's effect on security.
Important security considerations
A GitHub user can fork your repository, change it, and create a pull request to propose changes to your
repository. This pull request could contain malicious code to run as part of your triggered build. For example, an
ill-intentioned script or unit test change might leak secrets or compromise the agent machine that's performing
the build. We recommend the following actions to address this risk:
Do not enable the Make secrets available to builds of forks check box if your repository is public or
untrusted users can submit pull requests that automatically trigger builds. Otherwise, secrets might leak
during a build.
Use a Microsoft-hosted agent pool to build pull requests from forks. Microsoft-hosted agent machines are
immediately deleted after they complete a build, so there is no lasting impact if they're compromised.
If you must use a self-hosted agent, do not store any secrets or perform other builds and releases that use
secrets on the same agent, unless your repository is private and you trust pull request creators. Otherwise,
secrets might leak, and the repository contents or secrets of other builds and releases might be revealed.

Comment triggers
Repository collaborators can comment on a pull request to manually run a pipeline. You might use this to run an
optional test suite or validation build. The following commands can be issued to Azure Pipelines in comments:

COMMAND | RESULT
/AzurePipelines help | Display help for all supported commands.
/AzurePipelines help <command-name> | Display help for the specified command.
/AzurePipelines run | Run all pipelines that are associated with this repository and whose triggers do not exclude this pull request.
/AzurePipelines run <pipeline-name> | Run the specified pipeline unless its triggers exclude this pull request.
NOTE
For brevity, you can comment using /azp instead of /AzurePipelines .

IMPORTANT
Responses to these commands will appear in the pull request discussion only if your pipeline uses the Azure Pipelines
GitHub App.

Run pull request validation only when authorized by your team


You may not want to automatically build pull requests from unknown users until their changes can be reviewed.
You can configure Azure Pipelines to build GitHub pull requests only when authorized by your team.
To enable this, in Azure Pipelines, select the Triggers tab in your pipeline's settings. Then, under Pull request
validation , enable Only trigger builds for collaborators' pull request comments and save the pipeline.
Now, the pull request validation build will not be triggered automatically. Only repository owners and
collaborators with 'Write' permission can trigger the build by commenting on the pull request with
/AzurePipelines run or /AzurePipelines run <pipeline-name> as described above.

Troubleshoot pull request comment triggers


If you have the necessary repository permissions, but pipelines aren't getting triggered by your comments, make
sure that your membership is public in the repository's organization, or directly add yourself as a repository
collaborator. Azure Pipelines cannot see private organization members unless they are direct collaborators or
belong to a team that is a direct collaborator. You can change your GitHub organization membership from private
to public here (replace Your-Organization with your organization name):
https://github.com/orgs/Your-Organization/people .

Checkout
When a pipeline is triggered, Azure Pipelines pulls your source code from the GitHub repository. You can
control various aspects of how this happens.
Preferred version of Git
The Windows agent comes with its own copy of Git. If you prefer to supply your own Git rather than use the
included copy, set System.PreferGitFromPath to true . This setting is always true on non-Windows agents.
Checkout path
YAML
Classic
If you are checking out a single repository, by default, your source code will be checked out into a directory called
s . For YAML pipelines, you can change this by specifying checkout with a path . The specified path is relative to
$(Agent.BuildDirectory) . For example: if the checkout path value is mycustompath and $(Agent.BuildDirectory) is
C:\agent\_work\1 , then the source code will be checked out into C:\agent\_work\1\mycustompath .

If you are using multiple checkout steps and checking out multiple repositories, and not explicitly specifying the
folder using path , each repository is placed in a subfolder of s named after the repository. For example if you
check out two repositories named tools and code , the source code will be checked out into
C:\agent\_work\1\s\tools and C:\agent\_work\1\s\code .

Please note that the checkout path value cannot be set to go up any directory levels above
$(Agent.BuildDirectory) , so path\..\anotherpath will result in a valid checkout path (i.e.
C:\agent\_work\1\anotherpath ), but a value like ..\invalidpath will not (i.e. C:\agent\_work\invalidpath ).
You can configure the path setting in the Checkout step of your pipeline.

steps:
- checkout: self # self represents the repo where the initial Pipelines YAML file was found
  clean: boolean # whether to fetch clean each time
  fetchDepth: number # the depth of commits to ask Git to fetch
  lfs: boolean # whether to download Git-LFS files
  submodules: true | recursive # set to 'true' for a single level of submodules or 'recursive' to get submodules of submodules
  path: string # path to check out source code, relative to the agent's build directory (e.g. \_work\1)
  persistCredentials: boolean # set to 'true' to leave the OAuth token in the Git config after the initial fetch
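
As a concrete sketch of the path setting (mycustompath reuses the example folder name above):

steps:
- checkout: self
  path: mycustompath # sources are placed under $(Agent.BuildDirectory)/mycustompath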

Submodules
YAML
Classic
You can configure the submodules setting in the Checkout step of your pipeline if you want to download files from
submodules.

steps:
- checkout: self # self represents the repo where the initial Pipelines YAML file was found
  clean: boolean # whether to fetch clean each time
  fetchDepth: number # the depth of commits to ask Git to fetch
  lfs: boolean # whether to download Git-LFS files
  submodules: true | recursive # set to 'true' for a single level of submodules or 'recursive' to get submodules of submodules
  path: string # path to check out source code, relative to the agent's build directory (e.g. \_work\1)
  persistCredentials: boolean # set to 'true' to leave the OAuth token in the Git config after the initial fetch
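
For instance, a minimal sketch that also fetches nested submodules:

steps:
- checkout: self
  submodules: recursive # fetch submodules, including submodules of submodules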

The build pipeline will check out your Git submodules as long as they are:
Unauthenticated: A public, unauthenticated repo with no credentials required to clone or fetch.
Authenticated:
Contained in the same project as the Azure Repos Git repo specified above. The same credentials
that are used by the agent to get the sources from the main repository are also used to get the
sources for submodules.
Added by using a URL relative to the main repository. For example
This one would be checked out:
git submodule add ../../../FabrikamFiberProject/_git/FabrikamFiber FabrikamFiber

In this example the submodule refers to a repo (FabrikamFiber) in the same Azure DevOps
organization, but in a different project (FabrikamFiberProject). The same credentials that are
used by the agent to get the sources from the main repository are also used to get the
sources for submodules. This requires that the job access token has access to the repository
in the second project. If you restricted the job access token as explained in the section above,
then you won't be able to do this.
This one would not be checked out:
git submodule add https://[email protected]/fabrikam-fiber/FabrikamFiberProject/_git/FabrikamFiber FabrikamFiber

Alternative to using the Checkout submodules option


In some cases you can't use the Checkout submodules option. You might have a scenario where a different set
of credentials are needed to access the submodules. This can happen, for example, if your main repository and
submodule repositories aren't stored in the same Azure DevOps organization, or if your job access token does not
have access to the repository in a different project.
If you can't use the Checkout submodules option, then you can instead use a custom script step to fetch
submodules. First, get a personal access token (PAT) and prefix it with pat: . Next, base64-encode this prefixed
string to create a basic auth token. Finally, add this script to your pipeline:

git -c http.https://<url of submodule repository>.extraheader="AUTHORIZATION: Bearer <BASE64_ENCODED_STRING>" submodule update --init --recursive

Be sure to replace "<BASE64_ENCODED_STRING>" with your Base64-encoded "pat:token" string.


Use a secret variable in your project or build pipeline to store the basic auth token that you generated. Use that
variable to populate the secret in the above Git command.
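
A hedged sketch of such a script step follows; the secret variable name SubmoduleAuthToken and the displayName are assumptions for illustration, and <url of submodule repository> must be replaced just as in the command above:

steps:
- script: |
    # SUBMODULE_AUTH is assumed to hold the base64-encoded "pat:token" string from the secret variable below
    git -c http.https://<url of submodule repository>.extraheader="AUTHORIZATION: Bearer $SUBMODULE_AUTH" submodule update --init --recursive
  displayName: Fetch submodules with an explicit auth header
  env:
    SUBMODULE_AUTH: $(SubmoduleAuthToken) # secret variables must be mapped into scripts explicitly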

NOTE
Q: Why can't I use a Git credential manager on the agent? A: Storing the submodule credentials in a Git credential
manager installed on your private build agent is usually not effective as the credential manager may prompt you to re-
enter the credentials whenever the submodule is updated. This isn't desirable during automated builds when user
interaction isn't possible.

Shallow fetch
You may want to limit how far back in history to download. Effectively this results in git fetch --depth=n . If your
repository is large, this option might make your build pipeline more efficient. Your repository might be large if it
has been in use for a long time and has sizeable history. It also might be large if you added and later deleted large
files.
YAML
Classic
You can configure the fetchDepth setting in the Checkout step of your pipeline.

steps:
- checkout: self # self represents the repo where the initial Pipelines YAML file was found
  clean: boolean # whether to fetch clean each time
  fetchDepth: number # the depth of commits to ask Git to fetch
  lfs: boolean # whether to download Git-LFS files
  submodules: true | recursive # set to 'true' for a single level of submodules or 'recursive' to get submodules of submodules
  path: string # path to check out source code, relative to the agent's build directory (e.g. \_work\1)
  persistCredentials: boolean # set to 'true' to leave the OAuth token in the Git config after the initial fetch

In these cases this option can help you conserve network and storage resources. It might also save time. The
reason it doesn't always save time is because in some situations the server might need to spend time calculating
the commits to download for the depth you specify.
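
A minimal sketch that keeps only the most recent commit (the depth value of 1 is an illustrative choice):

steps:
- checkout: self
  fetchDepth: 1 # effectively git fetch --depth=1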

NOTE
When the pipeline is started, the branch to build is resolved to a commit ID. Then, the agent fetches the branch and checks
out the desired commit. There is a small window between when a branch is resolved to a commit ID and when the agent
performs the checkout. If the branch updates rapidly and you set a very small value for shallow fetch, the commit may not
exist when the agent attempts to check it out. If that happens, increase the shallow fetch depth setting.
Don't sync sources
You may want to skip fetching new commits. This option can be useful in cases when you want to:
Git init, config, and fetch using your own custom options.
Use a build pipeline to just run automation (for example some scripts) that do not depend on code in
version control.
YAML
Classic
You can configure the Don't sync sources setting in the Checkout step of your pipeline, by setting
checkout: none .

steps:
- checkout: none # Don't sync sources

NOTE
When you use this option, the agent also skips running Git commands that clean the repo.

Clean build
You can perform different forms of cleaning the working directory of your self-hosted agent before a build runs.
In general, for faster performance of your self-hosted agents, don't clean the repo. In this case, to get the best
performance, make sure you're also building incrementally by disabling any Clean option of the task or tool
you're using to build.
If you do need to clean the repo (for example to avoid problems caused by residual files from a previous build),
your options are below.

NOTE
Cleaning is not effective if you're using a Microsoft-hosted agent because you'll get a new agent every time.

YAML
Classic
You can configure the clean setting in the Checkout step of your pipeline.

steps:
- checkout: self # self represents the repo where the initial Pipelines YAML file was found
  clean: boolean # whether to fetch clean each time
  fetchDepth: number # the depth of commits to ask Git to fetch
  lfs: boolean # whether to download Git-LFS files
  submodules: true | recursive # set to 'true' for a single level of submodules or 'recursive' to get submodules of submodules
  path: string # path to check out source code, relative to the agent's build directory (e.g. \_work\1)
  persistCredentials: boolean # set to 'true' to leave the OAuth token in the Git config after the initial fetch

When clean is set to true the build pipeline performs an undo of any changes in $(Build.SourcesDirectory) .
More specifically, the following Git commands are executed prior to fetching the source.
git clean -ffdx
git reset --hard HEAD

For more options, you can configure the workspace setting of a Job.

jobs:
- job: string # name of the job, A-Z, a-z, 0-9, and underscore
  ...
  workspace:
    clean: outputs | resources | all # what to clean up before the job runs

This gives the following clean options.


outputs : Same operation as the clean setting described in the previous checkout task, plus: Deletes and recreates $(Build.BinariesDirectory) . Note that the $(Build.ArtifactStagingDirectory) and $(Common.TestResultsDirectory) are always deleted and recreated prior to every build regardless of any of these settings.
resources : Deletes and recreates $(Build.SourcesDirectory) . This results in initializing a new, local Git repository for every build.
all : Deletes and recreates $(Agent.BuildDirectory) . This results in initializing a new, local Git repository for every build.
Label sources
You may want to label your source code files to enable your team to easily identify which version of each file is
included in the completed build. You also have the option to specify whether the source code should be labeled
for all builds or only for successful builds.
YAML
Classic
You can't currently configure this setting in YAML, but you can in the classic editor. When editing a YAML pipeline, you can access the classic editor by choosing Triggers from the YAML editor menu.

From the classic editor, choose YAML , choose the Get sources task, and then configure the desired properties
there.
In the Tag format you can use user-defined and predefined variables that have a scope of "All." For example:

$(Build.DefinitionName)_$(Build.DefinitionVersion)_$(Build.BuildId)_$(Build.BuildNumber)_$(My.Variable)

The first four variables are predefined. My.Variable can be defined by you on the variables tab.
The build pipeline labels your sources with a Git tag.
Some build variables might yield a value that is not a valid label. For example, variables such as
$(Build.RequestedFor) and $(Build.DefinitionName) can contain white space. If the value contains white space,
the tag is not created.
After the sources are tagged by your build pipeline, an artifact with the Git ref refs/tags/{tag} is automatically
added to the completed build. This gives your team additional traceability and a more user-friendly way to
navigate from the build to the code that was built.

Pre-defined variables
When you build a GitHub repository, most of the pre-defined variables are available to your jobs. However, since
Azure Pipelines does not recognize the identity of a user making an update in GitHub, the following variables are
set to the system identity instead of the user's identity:
Build.RequestedFor
Build.RequestedForId
Build.RequestedForEmail
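
If you want to confirm what these variables resolve to for a particular run, a simple sketch like the following (the step name is illustrative) prints them to the log:

steps:
- script: |
    echo "Build.RequestedFor: $(Build.RequestedFor)"
    echo "Build.RequestedForEmail: $(Build.RequestedForEmail)"
  displayName: Show requester identity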

Status updates
There are two types of statuses that Azure Pipelines posts back to GitHub - basic statuses and GitHub Check Runs.
GitHub Checks functionality is only available with GitHub Apps.
Pipeline statuses show up in various places in the GitHub UI.
For PRs, they are displayed on the PR conversations tab.
For individual commits, they are displayed when hovering over the status mark after the commit time on the
repo's commits tab.
PAT or OAuth GitHub connections
For pipelines using PAT or OAuth GitHub connections, statuses are posted back to the commit/PR that triggered
the run. The GitHub status API is used to post such updates. These statuses contain limited information: pipeline
status (failed, success), URL to link back to the build pipeline, and a brief description of the status.
Statuses for PAT or OAuth GitHub connections are only sent at the run level. In other words, you can have a single
status updated for an entire run. If you have multiple jobs in a run, you cannot post a separate status for each job.
However, multiple pipelines can post separate statuses to the same commit.
GitHub Checks
For pipelines set up using the Azure Pipelines GitHub App, the status is posted back in the form of GitHub Checks.
GitHub Checks allow for sending detailed information about the pipeline status as well as test, code coverage, and
errors. The GitHub Checks API can be found here.
For every pipeline using the GitHub App, Checks are posted back for the overall run as well as each job in that
run.
GitHub allows three options when one or more Check Runs fail for a PR/commit. You can choose to "re-run" the
individual Check, re-run all the failing Checks on that PR/commit, or re-run all the Checks, whether they
succeeded initially or not.

Clicking on the "Re-run" link next to the Check Run name will result in Azure Pipelines retrying the run that
generated the Check Run. The resultant run will have the same run number and will use the same version of the
source code, configuration, and YAML file as the initial build. Only those jobs that failed in the initial run and any
dependent downstream jobs will be run again. Clicking on the "Re-run all failing checks" link will have the same
effect. This is the same behavior as clicking "Re-try run" in the Azure Pipelines UI. Clicking on "Re-run all checks"
will result in a new run, with a new run number and will pick up changes in the configuration or YAML file.

FAQ
Problems related to GitHub integration fall into the following categories:
Connection types : I am not sure what connection type I am using to connect my pipeline to GitHub.
Failing triggers : My pipeline is not being triggered when I push an update to the repo.
Failing checkout : My pipeline is being triggered, but it fails in the checkout step.
Wrong version : My pipeline runs, but it is using an unexpected version of the source/YAML.
Missing status updates : My GitHub PRs are blocked because Azure Pipelines did not report a status update.
Connection types
To troubleshoot triggers, how do I know the type of GitHub connection I'm using for my pipeline?
Troubleshooting problems with triggers very much depends on the type of GitHub connection you use in your
pipeline. There are two ways to determine the type of connection - from GitHub and from Azure Pipelines.
From GitHub: If a repo is set up to use the GitHub app, then the statuses on PRs and commits will be Check
Runs. If the repo has Azure Pipelines set up with OAuth or PAT connections, the statuses will be the "old"
style of statuses. A quick way to determine if the statuses are Check Runs or simple statuses is to look at
the "conversation" tab on a GitHub PR.
If the "Details" link redirects to the Checks tab, it is a Check Run and the repo is using the app.
If the "Details" link redirects to the Azure DevOps pipeline, then the status is an "old style" status and the
repo is not using the app.
From Azure Pipelines: You can also determine the type of connection by inspecting the pipeline in Azure
Pipelines UI. Open the editor for the pipeline. Select Triggers to open the classic editor for the pipeline.
Then, select YAML tab and then the Get sources step. You'll notice a banner Authorized using
connection: indicating the service connection that was used to integrate the pipeline with GitHub. The
name of the service connection is a hyperlink. Select it to navigate to the service connection properties.
The properties of the service connection will indicate the type of connection being used:
Azure Pipelines app indicates GitHub app connection
oauth indicates OAuth connection
personalaccesstoken indicates PAT authentication
How do I switch my pipeline to use GitHub app instead of OAuth?
Using a GitHub app instead of OAuth or PAT connection is the recommended integration between GitHub and
Azure Pipelines. To switch to GitHub app, follow these steps:
1. Navigate here and install the app in the GitHub organization of your repository.
2. During installation, you'll be redirected to Azure DevOps to choose an Azure DevOps organization and project.
Choose the organization and project that contain the classic build pipeline you want to use the app for. This
choice associates the GitHub App installation with your Azure DevOps organization. If you choose incorrectly,
you can visit this page to uninstall the GitHub app from your GitHub org and start over.
3. In the next page that appears, you do not need to proceed with creating a new pipeline.
4. Edit your pipeline by visiting the Pipelines page (e.g., https://dev.azure.com/YOUR_ORG_NAME/YOUR_PROJECT_NAME/_build), selecting your pipeline, and clicking
Edit.
5. If this is a YAML pipeline, select the Triggers menu to open the classic editor.
6. Select the "Get sources" step in the pipeline.
7. On the green bar with text "Authorized using connection", click "Change" and select the GitHub App connection
with the same name as the GitHub organization in which you installed the app.
8. On the toolbar, select "Save and queue" and then "Save and queue". Click the link to the pipeline run that was
queued to make sure it succeeds.
9. Create (or close and reopen) a pull request in your GitHub repository to verify that a build is successfully
queued in its "Checks" section.
Why isn't a GitHub repository displayed for me to choose in Azure Pipelines?
Depending on the authentication type and ownership of the repository, specific permissions are required.
If you're using the GitHub App, see GitHub App authentication.
If you're using OAuth, see OAuth authentication.
If you're using PATs, see Personal access token (PAT) authentication.
When I select a repository during pipeline creation, I get an error "The repository {repo-name} is in use with the Azure Pipelines
GitHub App in another Azure DevOps organization."
This means that your repository is already associated with a pipeline in a different organization. CI and PR events
from this repository won't work as they will be delivered to the other organization. Here are the steps you should
take to remove the mapping to the other organization before proceeding to create a pipeline.
1. Open a pull request in your GitHub repository, and make the comment /azp where . This reports back the
Azure DevOps organization that the repository is mapped to.
2. To change the mapping, uninstall the app from the GitHub organization, and re-install it. As you re-install it,
make sure to select the correct organization when you are redirected to Azure DevOps.
Failing triggers
I just created a new YAML pipeline with CI/PR triggers, but the pipeline is not being triggered.
Follow each of these steps to troubleshoot your failing triggers:
Are your YAML CI or PR triggers being overridden by pipeline settings in the UI? While editing your
pipeline, choose ... and then Triggers .

Check the Override the YAML trigger from here setting for the types of trigger (Continuous
integration or Pull request validation ) available for your repo.

Are you using the GitHub app connection to connect the pipeline to GitHub? See Connection types to
determine the type of connection you have. If you are using a GitHub app connection, follow these steps:
Is the mapping set up properly between GitHub and Azure DevOps? Open a pull request in your
GitHub repository, and make the comment /azp where . This reports back the Azure DevOps
organization that the repository is mapped to.
If no organizations are set up to build this repository using the app, go to
https://github.com/<org_name>/<repo_name>/settings/installations and complete the
configuration of the app.
If a different Azure DevOps organization is reported, then someone has already established a
pipeline for this repo in a different organization. We currently have the limitation that we can
only map a GitHub repo to a single DevOps org. Only the pipelines in the first Azure DevOps
org can be automatically triggered. To change the mapping, uninstall the app from the GitHub
organization, and re-install it. As you re-install it, make sure to select the correct organization
when you are redirected to Azure DevOps.
Are you using OAuth or PAT to connect the pipeline to GitHub? See Connection types to determine the type
of connection you have. If you are using a GitHub connection, follow these steps:
1. OAuth and PAT connections rely on webhooks to communicate updates to Azure Pipelines. In
GitHub, navigate to the settings for your repository, then to Webhooks. Verify that the webhooks
exist. Usually you should see three webhooks - push, pull_request, and issue_comment. If you don't,
then you must re-create the service connection and update the pipeline to use the new service
connection.
2. Select each of the webhooks in GitHub and verify that the payload that corresponds to the user's
commit exists and was sent successfully to Azure DevOps. You may see an error here if the event
could not be communicated to Azure DevOps.
The traffic from Azure DevOps could be throttled by GitHub. When Azure Pipelines receives a notification
from GitHub, it tries to contact GitHub and fetch more information about the repo and YAML file. If you
have a repo with a large number of updates and pull requests, this call may fail due to such throttling. In
this case, see if you can reduce the frequency of builds by using batching or stricter path/branch filters.
Is your pipeline paused or disabled? Open the editor for the pipeline, and then select Settings to check. If
your pipeline is paused or disabled, then triggers do not work.
Have you updated the YAML file in the correct branch? If you push an update to a branch, then the YAML
file in that same branch governs the CI behavior. If you push an update to a source branch, then the YAML
file resulting from merging the source branch with the target branch governs the PR behavior. Make sure
that the YAML file in the correct branch has the necessary CI or PR configuration.
Have you configured the trigger correctly? When you define a YAML trigger, you can specify both include and exclude clauses for branches, tags, and paths. Ensure that the include clause matches the details of your commit and that the exclude clause doesn't exclude them. Check the syntax for the triggers and make sure that it is accurate. A minimal example appears after this list.
Have you used variables in defining the trigger or the paths? That is not supported.
Did you use templates for your YAML file? If so, make sure that your triggers are defined in the main YAML
file. Triggers defined inside template files are not supported.
Have you excluded the branches or paths to which you pushed your changes? Test by pushing a change to
an included path in an included branch. Note that paths in triggers are case-sensitive. Make sure that you
use the same case as those of real folders when specifying the paths in triggers.
Do you have wildcards in your path filters? Understand the limitations of wildcards in your paths as
described in this article.
Did you just push a new branch? If so, the new branch may not start a new run. See the section "Behavior
of triggers when new branches are created".
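
For reference, here is a minimal trigger sketch showing how include and exclude clauses are declared for branches and paths (the branch and path names are illustrative):

trigger:
  branches:
    include:
    - master
    - releases/*
    exclude:
    - experimental/*
  paths:
    include:
    - src/*

Includes are processed first, and then excludes are removed from that list, so a commit only triggers a run if it matches an include clause and is not removed by an exclude clause.
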
My CI or PR triggers have been working fine. But, they stopped working now.
First go through the troubleshooting steps in the previous question. Then, follow these additional steps:
Do you have merge conflicts in your PR? For a PR that did not trigger a pipeline, open it and check whether
it has a merge conflict. Resolve the merge conflict.
Are you experiencing a delay in the processing of push or PR events? You can usually verify this by seeing
if the issue is specific to a single pipeline or is common to all pipelines or repos in your project. If a push or
a PR update to any of the repos exhibits this symptom, we might be experiencing delays in processing the
update events. Check if we are experiencing a service outage on our status page. If the status page shows
an issue, then our team must have already started working on it. Check the page frequently for updates on
the issue.
I do not want users to override the list of branches for triggers when they update the YAML file. How can I do this?
Users with permissions to contribute code can update the YAML file and include/exclude additional branches. As a
result, users can include their own feature or user branch in their YAML file and push that update to a feature or
user branch. This may cause the pipeline to be triggered for all updates to that branch. If you want to prevent this
behavior, then you can:
1. Edit the pipeline in the Azure Pipelines UI.
2. Navigate to the Triggers menu.
3. Select Override the YAML continuous Integration trigger from here .
4. Specify the branches to include or exclude for the trigger.
When you follow these steps, any CI triggers specified in the YAML file are ignored.
Failing checkout
I see the following error in the log file during checkout step. How do I fix it?

remote: Repository not found.
fatal: repository <repo> not found

This could be caused by an outage of GitHub. Try to access the repository in GitHub and make sure that you are
able to.
Wrong version
A wrong version of the YAML file is being used in the pipeline. Why is that?
For CI triggers, the YAML file that is in the branch you are pushing is evaluated to see if a CI build should be
run.
For PR triggers, the YAML file resulting from merging the source and target branches of the PR is evaluated to
see if a PR build should be run.
Missing status updates
My PR in GitHub is blocked since Azure Pipelines did not update the status.
This could be a transient error that resulted in Azure DevOps not being able to communicate with GitHub. Retry
the check in GitHub if you use the GitHub app. Or, make a trivial update to the PR to see if the problem can be
resolved.

Related articles
Scheduled triggers
Pipeline completion triggers
Build GitHub Enterprise Server repositories
11/2/2020 • 11 minutes to read • Edit Online

You can integrate your on-premises GitHub Enterprise Server with Azure Pipelines. Your on-premises server may
be exposed to the Internet or it may not be.
If your GitHub Enterprise Server is reachable from the servers that run Azure Pipelines service, then:
you can set up classic build and YAML pipelines
you can configure CI, PR, and scheduled triggers
If your GitHub Enterprise Server is not reachable from the servers that run Azure Pipelines service, then:
you can only set up classic build pipelines
you can only start manual or scheduled builds
you cannot set up YAML pipelines
you cannot configure CI or PR triggers for your classic build pipelines
If your on-premises server is reachable from Microsoft-hosted agents, then you can use them to run your
pipelines. Otherwise, you must set up self-hosted agents that can access your on-premises server and fetch the
code.

Reachable from Azure Pipelines


The first thing to check is whether your GitHub Enterprise Server is reachable from Azure Pipelines service.
1. In your Azure DevOps UI, navigate to your project settings, and select Service Connections under Pipelines.
2. Select New service connection and choose GitHub Enterprise Server as the connection type.
3. Enter the required information to create a connection to your GitHub Enterprise Server.
4. Select Verify in the service connection panel.
If the verification passes, then the servers that run Azure Pipelines service are able to reach your on-premises
GitHub Enterprise Server. You can proceed and set up the connection. Then, you can use this service connection
when creating a classic build or YAML pipeline. You can also configure CI and PR triggers for the pipeline. A
majority of features in Azure Pipelines that work with GitHub also work with GitHub Enterprise Server. Review the
documentation for GitHub to understand these features. Here are some differences:
The integration between GitHub and Azure Pipelines is made easier through an Azure Pipelines app in GitHub
marketplace. This app allows you to set up an integration without having to rely on a particular user's OAuth
token. We do not have a similar app that works with GitHub Enterprise Server. So, you must use a PAT,
username and password, or OAuth to set up the connection between Azure Pipelines and GitHub Enterprise Server.
When working with GitHub, Azure Pipelines supports a number of security features to validate contributions from external forks. For instance, secrets stored in a pipeline are not made available to a running job. These protections are not available when working with GitHub Enterprise Server.
Comment triggers are not available with GitHub Enterprise Server. You cannot use comments in a GitHub Enterprise Server repo pull request to trigger a pipeline.
GitHub Checks are not available in GitHub Enterprise Server. All status updates are through basic statuses.
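
For YAML pipelines, one way to reference a repository on your GitHub Enterprise Server is to declare it as a repository resource that points at the service connection. The following is a minimal sketch; the service connection name and the organization/repository are placeholders:

resources:
  repositories:
  - repository: ghes_repo          # alias used by the checkout step below
    type: githubenterprise
    endpoint: MyGHESConnection     # placeholder service connection name
    name: my-org/my-repo           # placeholder organization/repository on your server

steps:
- checkout: ghes_repo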

Not reachable from Azure Pipelines


When the verification of a GitHub Enterprise Server connection as explained in the above section fails, then Azure
Pipelines cannot communicate with your server. This is likely caused by how your enterprise network is set up. For
instance, a firewall in your network may prevent external traffic from reaching your servers. You have two options
in this case:
Work with your IT department to open a network path between Azure Pipelines and GitHub Enterprise
Server. For example, you can add exceptions to your firewall rules to allow traffic from Azure Pipelines to
flow through. See the section on Azure DevOps IPs to see which IP addresses you need to allow.
Furthermore, you need to have a public DNS entry for the GitHub Enterprise Server so that Azure Pipelines
can resolve the FQDN of your server to an IP address. With all of these changes, attempt to create and verify
a GitHub Enterprise Server connection in Azure Pipelines.
Instead of using a GitHub Enterprise Server connection, you can use an Other Git connection. Make sure to uncheck the option to Attempt accessing this Git server from Azure Pipelines. With this connection type, you can only configure a classic build pipeline. CI and PR triggers will not work in this configuration. You can only start manual or scheduled pipeline runs.

Reachable from Microsoft-hosted agents


Another decision you possibly have to make is whether to use Microsoft-hosted agents or self-hosted agents to run
your pipelines. This often comes down to whether Microsoft-hosted agents can reach your server. To check whether
they can, create a simple pipeline to use Microsoft-hosted agents and make sure to add a step to checkout source
code from your server. If this passes, then you can continue using Microsoft-hosted agents.

Not reachable from Microsoft-hosted agents


If the simple test pipeline mentioned in the above section fails with the error TF401019: The Git repository with name or identifier <your repo name> does not exist or you do not have permissions for the operation you are attempting, then the GitHub Enterprise Server is not reachable from Microsoft-hosted agents. This is again probably caused by a firewall blocking traffic from these servers. You have two options in this case:
Work with your IT department to open a network path between Microsoft-hosted agents and GitHub
Enterprise Server. See the section on networking in Microsoft-hosted agents.
Switch to using self-hosted agents or scale-set agents. These agents can be set up within your network and
hence will have access to the GitHub Enterprise Server. These agents only require outbound connections to
Azure Pipelines. There is no need to open a firewall for inbound connections. Make sure that the name of the
server you specified when creating the GitHub Enterprise Server connection is resolvable from the self-
hosted agents.

Azure DevOps IP addresses


Azure Pipelines sends requests to GitHub Enterprise Server to:
Query for a list of repositories during pipeline creation (classic and YAML pipelines)
Look for existing YAML files during pipeline creation (YAML pipelines)
Check-in YAML files (YAML pipelines)
Register a webhook during pipeline creation (classic and YAML pipelines)
Present an editor for YAML files (YAML pipelines)
Resolve templates and expand YAML files prior to execution (YAML pipelines)
Check if there are any changes since the last scheduled run (classic and YAML pipelines)
Fetch details about latest commit and display that in the user interface (classic and YAML pipelines)
You can observe that YAML pipelines fundamentally require communication between Azure Pipelines and GitHub
Enterprise Server. Hence, it is not possible to set up a YAML pipeline if the GitHub Enterprise Server is not visible to
Azure Pipelines.
When you use an Other Git connection to set up a classic pipeline, disable communication between the Azure Pipelines service and GitHub Enterprise Server, and use self-hosted agents to build code, you will get a degraded experience:
You will have to type in the name of the repository manually during pipeline creation
You cannot use CI or PR triggers as Azure Pipelines cannot register a webhook in GitHub Enterprise Server
You cannot use scheduled triggers with the option to build only when there are changes
You cannot view information about the latest commit in the user interface
If you want to set up YAML pipelines or if you want to enhance the experience with classic pipelines, it is important
that you enable communication from Azure Pipelines to GitHub Enterprise Server.
Determine the region your Azure DevOps organization is hosted in. Go to the Organization settings in your Azure DevOps UI. The region is listed under Region in the Overview page.
Use the list below to find the appropriate range of IP addresses for your region.
Central Canada
  shprodcca1ip1       40.82.185.225
  tfsprodcca1ip1      40.82.190.38

Central US
  tfsprodcus1ip1      13.86.38.60
  tfsprodcus2ip1      13.86.33.223
  shprodcus1ip1       13.86.39.243
  tfsprodcus4ip1      52.158.209.56
  tfsprodcus5ip1      13.89.136.165
  tfsprodcus3ip1      13.86.36.181

East Asia
  shprodea1ip1        20.189.72.51
  tfsprodea1ip1       40.81.25.218

East Australia
  tfsprodeausu7ip1    40.82.217.103
  shprodeausu7ip1     40.82.220.184

East US
  tfsprodeus2su5ip1   20.41.47.137
  tfsprodeus2su3ip1   20.44.80.98
  shprodeus2su1ip1    20.36.242.132
  tfsprodeus2su1ip1   20.44.80.197

South Brazil
  shprodsbr1ip1       20.40.112.11
  tfsprodsbr1ip1      20.40.114.3

South India
  tfsprodsin1ip1      40.81.75.130
  shprodsin1ip1       40.81.76.87

South UK
  tfsproduks1ip1      40.81.159.67
  shproduks1ip1       40.81.156.105

West Central US
  shprodwcus0ip1      52.159.49.185

Western Europe
  tfsprodweu2ip1      52.236.147.103
  shprodweusu4ip1     52.142.238.243
  tfsprodweu5ip1      51.144.61.32
  tfsprodweu3ip1      52.236.147.236
  tfsprodweu6ip1      40.74.28.0
  tfsprodweusu4ip1    52.142.235.223

Western US 2
  tfsprodwus22ip1     40.91.93.92
  tfsprodwus23ip1     40.91.93.56
  tfsprodwus24ip1     40.91.88.106
  tfsprodwus25ip1     51.143.58.182
  tfsprodwus2su6ip1   40.91.75.130
Add the corresponding range of IP addresses to your firewall exception rules.

FAQ
Problems related to GitHub Enterprise integration fall into the following categories:
Failing triggers : My pipeline is not being triggered when I push an update to the repo.
Failing checkout : My pipeline is being triggered, but it fails in the checkout step.
Wrong version : My pipeline runs, but it is using an unexpected version of the source/YAML.
Failing triggers
I just created a new YAML pipeline with CI/PR triggers, but the pipeline is not being triggered.
Follow each of these steps to troubleshoot your failing triggers:
Are your YAML CI or PR triggers being overridden by pipeline settings in the UI? While editing your pipeline,
choose ... and then Triggers .

Check the Override the YAML trigger from here setting for the types of trigger (Continuous
integration or Pull request validation ) available for your repo.

Webhooks are used to communicate updates from GitHub Enterprise to Azure Pipelines. In GitHub
Enterprise, navigate to the settings for your repository, then to Webhooks. Verify that the webhooks exist.
Usually you should see two webhooks - push, pull_request. If you don't, then you must re-create the service
connection and update the pipeline to use the new service connection.
Select each of the webhooks in GitHub Enterprise and verify that the payload that corresponds to the user's
commit exists and was sent successfully to Azure DevOps. You may see an error here if the event could not
be communicated to Azure DevOps.
When Azure Pipelines receives a notification from GitHub, it tries to contact GitHub and fetch more
information about the repo and YAML file. If the GitHub Enterprise Server is behind a firewall, this traffic
may not reach your server. See Azure DevOps IP Addresses and verify that you have granted exceptions to
all the required IP addresses. These IP addresses may have changed since you have originally set up the
exception rules.
Is your pipeline paused or disabled? Open the editor for the pipeline, and then select Settings to check. If
your pipeline is paused or disabled, then triggers do not work.
Have you updated the YAML file in the correct branch? If you push an update to a branch, then the YAML file
in that same branch governs the CI behavior. If you push an update to a source branch, then the YAML file
resulting from merging the source branch with the target branch governs the PR behavior. Make sure that
the YAML file in the correct branch has the necessary CI or PR configuration.
Have you configured the trigger correctly? When you define a YAML trigger, you can specify both include
and exclude clauses for branches, tags, and paths. Ensure that the include clause matches the details of your
commit and that the exclude clause doesn't exclude them. Check the syntax for the triggers and make sure
that it is accurate.
Have you used variables in defining the trigger or the paths? That is not supported.
Did you use templates for your YAML file? If so, make sure that your triggers are defined in the main YAML
file. Triggers defined inside template files are not supported.
Have you excluded the branches or paths to which you pushed your changes? Test by pushing a change to
an included path in an included branch. Note that paths in triggers are case-sensitive. Make sure that you
use the same case as those of real folders when specifying the paths in triggers.
Do you have wildcards in your path filters? Understand the limitations of wildcards in your paths as
described in this article.
Did you just push a new branch? If so, the new branch may not start a new run. See the section "Behavior of
triggers when new branches are created".
My CI or PR triggers have been working fine. But, they stopped working now.
First go through the troubleshooting steps in the previous question. Then, follow these additional steps:
Do you have merge conflicts in your PR? For a PR that did not trigger a pipeline, open it and check whether
it has a merge conflict. Resolve the merge conflict.
Are you experiencing a delay in the processing of push or PR events? You can usually verify this by seeing if
the issue is specific to a single pipeline or is common to all pipelines or repos in your project. If a push or a
PR update to any of the repos exhibits this symptom, we might be experiencing delays in processing the
update events. Check if we are experiencing a service outage on our status page. If the status page shows an
issue, then our team must have already started working on it. Check the page frequently for updates on the
issue.
Failing checkout
Do you use Microsoft-hosted agents? If so, these agents may not be able to reach your GitHub Enterprise Server.
See Not reachable from Microsoft-hosted agents for more information.
Wrong version
A wrong version of the YAML file is being used in the pipeline. Why is that?
For CI triggers, the YAML file that is in the branch you are pushing is evaluated to see if a CI build should be run.
For PR triggers, the YAML file resulting from merging the source and target branches of the PR is evaluated to
see if a PR build should be run.
Build Bitbucket Cloud repositories
11/2/2020 • 16 minutes to read • Edit Online

Azure Pipelines
Azure Pipelines can automatically build and validate every pull request and commit to your Bitbucket Cloud
repository. This article describes how to configure the integration between Bitbucket Cloud and Azure Pipelines.
Bitbucket and Azure Pipelines are two independent services that integrate well together. Your Bitbucket Cloud users
do not automatically get access to Azure Pipelines. You must add them explicitly to Azure Pipelines.

Access to Bitbucket repositories


YAML
Classic
You create a new pipeline by first selecting a Bitbucket Cloud repository and then a YAML file in that repository. The
repository in which the YAML file is present is called self repository. By default, this is the repository that your
pipeline builds.
You can later configure your pipeline to check out a different repository or multiple repositories. To learn how to do
this, see multi-repo checkout.
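
As a rough illustration of that setup, an additional Bitbucket Cloud repository can be declared as a repository resource and checked out alongside the self repository. The service connection name and repository slug below are placeholders:

resources:
  repositories:
  - repository: tools                  # alias used by the checkout step below
    type: bitbucket
    endpoint: MyBitbucketConnection    # placeholder service connection name
    name: my-workspace/tools           # placeholder workspace/repository slug

steps:
- checkout: self      # the repository that contains this YAML file
- checkout: tools     # the additional repository declared above
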
Azure Pipelines must be granted access to your repositories to trigger their builds, and fetch their code during
builds.
There are two authentication types for granting Azure Pipelines access to your Bitbucket Cloud repositories while creating a pipeline.

Authentication type         Pipelines run using
1. OAuth                    Your personal Bitbucket identity
2. Username and password    Your personal Bitbucket identity

OAuth authentication
OAuth is the simplest authentication type to get started with for repositories in your Bitbucket account. Bitbucket
status updates will be performed on behalf of your personal Bitbucket identity. For pipelines to keep working, your
repository access must remain active.
To use OAuth, log in to Bitbucket when prompted during pipeline creation. Then, click Authorize to authorize with
OAuth. An OAuth connection will be saved in your Azure DevOps project for later use, as well as used in the
pipeline being created.
Password authentication
Builds and Bitbucket status updates will be performed on behalf of your personal identity. For builds to keep
working, your repository access must remain active.
To create a password connection, visit Service connections in your Azure DevOps project settings. Create a new
Bitbucket service connection and provide the user name and password to connect to your Bitbucket Cloud
repository.
CI triggers
Continuous integration (CI) triggers cause a pipeline to run whenever you push an update to the specified
branches or you push specified tags.
YAML
Classic
YAML pipelines are configured by default with a CI trigger on all branches.
Branches
You can control which branches get CI triggers with a simple syntax:

trigger:
- master
- releases/*

You can specify the full name of the branch (for example, master ) or a wildcard (for example, releases/* ). See
Wildcards for information on the wildcard syntax.

NOTE
You cannot use variables in triggers, as variables are evaluated at runtime (after the trigger has fired).

NOTE
If you use templates to author YAML files, then you can only specify triggers in the main YAML file for the pipeline. You
cannot specify triggers in the template files.

For more complex triggers that use exclude or batch , you must use the full syntax as shown in the following
example.

# specific branch build


trigger:
branches:
include:
- master
- releases/*
exclude:
- releases/old*

In the above example, the pipeline will be triggered if a change is pushed to master or to any releases branch.
However, it won't be triggered if a change is made to a releases branch that starts with old .
If you specify an exclude clause without an include clause, then it is equivalent to specifying * in the include
clause.
In addition to specifying branch names in the branches lists, you can also configure triggers based on tags by
using the following format:
trigger:
branches:
include:
- refs/tags/{tagname}
exclude:
- refs/tags/{othertagname}

If you don't specify any triggers, the default is as if you wrote:

trigger:
branches:
include:
- '*' # must quote since "*" is a YAML reserved character; we want a string

IMPORTANT
When you specify a trigger, it replaces the default implicit trigger, and only pushes to branches that are explicitly configured
to be included will trigger a pipeline. Includes are processed first, and then excludes are removed from that list.

Batching CI runs
If you have many team members uploading changes often, you may want to reduce the number of runs you start.
If you set batch to true , when a pipeline is running, the system waits until the run is completed, then starts
another run with all changes that have not yet been built.

# specific branch build with batching


trigger:
batch: true
branches:
include:
- master

To clarify this example, let us say that a push A to master caused the above pipeline to run. While that pipeline is
running, additional pushes B and C occur into the repository. These updates do not start new independent runs
immediately. But after the first run is completed, all pushes until that point of time are batched together and a new
run is started.

NOTE
If the pipeline has multiple jobs and stages, then the first run should still reach a terminal state by completing or skipping all
its jobs and stages before the second run can start. For this reason, you must exercise caution when using this feature in a
pipeline with multiple stages or approvals. If you wish to batch your builds in such cases, it is recommended that you split
your CI/CD process into two pipelines - one for build (with batching) and one for deployments.

Paths
You can specify file paths to include or exclude. Note that the wildcard syntax is different between branches/tags
and file paths.
# specific path build
trigger:
branches:
include:
- master
- releases/*
paths:
include:
- docs/*
exclude:
- docs/README.md

When you specify paths, you must explicitly specify branches to trigger on. You can't trigger a pipeline with only a
path filter; you must also have a branch filter, and the changed files that match the path filter must be from a
branch that matches the branch filter.

Tips:
Paths are always specified relative to the root of the repository.
If you don't set path filters, then the root folder of the repo is implicitly included by default.
If you exclude a path, you cannot also include it unless you qualify it to a deeper folder. For example if you
exclude /tools then you could include /tools/trigger-runs-on-these
The order of path filters doesn't matter.
Paths in Git are case-sensitive. Be sure to use the same case as the real folders.

NOTE
You cannot use variables in paths, as variables are evaluated at runtime (after the trigger has fired).

NOTE
For Bitbucket Cloud repos, using branches syntax is the only way to specify tag triggers. The tags: syntax is not
supported for Bitbucket.

Opting out of CI
Disabling the CI trigger
You can opt out of CI triggers entirely by specifying trigger: none .

# A pipeline with no CI trigger


trigger: none

IMPORTANT
When you push a change to a branch, the YAML file in that branch is evaluated to determine if a CI run should be started.

For more information, see Triggers in the YAML schema.


Skipping CI for individual commits
You can also tell Azure Pipelines to skip running a pipeline that a commit would normally trigger. Just include
[skip ci] in the commit message or description of the HEAD commit and Azure Pipelines will skip running CI.
You can also use any of the variations below.
[skip ci] or [ci skip]
skip-checks: true or skip-checks:true
[skip azurepipelines] or [azurepipelines skip]
[skip azpipelines] or [azpipelines skip]
[skip azp] or [azp skip]
***NO_CI***

Using the trigger type in conditions


It is a common scenario to run different steps, jobs, or stages in your pipeline depending on the type of trigger that
started the run. You can do this using the system variable Build.Reason . For example, add the following condition
to your step, job, or stage to exclude it from PR validations.
condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))
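
For instance, here is a minimal sketch showing the condition attached to a single step (the echo command is only a placeholder for your real work):

steps:
- script: echo "This step is skipped for pull request validation runs"
  displayName: Non-PR step
  condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))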

Behavior of triggers when new branches are created


It is common to configure multiple pipelines for the same repository. For instance, you may have one pipeline to
build the docs for your app and another to build the source code. You may configure CI triggers with appropriate
branch filters and path filters in each of these pipelines. For instance, you may want one pipeline to trigger when
you push an update to the docs folder, and another one to trigger when you push an update to your application
code. In these cases, you need to understand how the pipelines are triggered when a new branch is created.
Here is the behavior when you push a new branch (that matches the branch filters) to your repository:
If your pipeline has path filters, it will be triggered only if the new branch has changes to files that match that
path filter.
If your pipeline does not have path filters, it will be triggered even if there are no changes in the new branch.
Wildcards
When specifying a branch or tag, you may use an exact name or a wildcard. Wildcards patterns allow * to match
zero or more characters and ? to match a single character.
If you start your pattern with * in a YAML pipeline, you must wrap the pattern in quotes, like "*-releases" .
For branches and tags:
A wildcard may appear anywhere in the pattern.
For paths:
You may include * as the final character, but it doesn't do anything differently from specifying the
directory name by itself.
You may not include * in the middle of a path filter, and you may not use ? .

trigger:
branches:
include:
- master
- releases/*
- feature/*
exclude:
- releases/old*
- feature/*-working
paths:
include:
- '*' # same as '/' for the repository root
exclude:
- 'docs/*' # same as 'docs/'
PR triggers
Pull request (PR) triggers cause a pipeline to run whenever a pull request is opened with one of the specified target
branches, or when updates are made to such a pull request.
YAML
Classic
Branches
You can specify the target branches when validating your pull requests. For example, to validate pull requests that
target master and releases/* , you can use the following pr trigger.

pr:
- master
- releases/*

This configuration starts a new run the first time a new pull request is created, and after every update made to the
pull request.
You can specify the full name of the branch (for example, master ) or a wildcard (for example, releases/* ).

NOTE
You cannot use variables in triggers, as variables are evaluated at runtime (after the trigger has fired).

NOTE
If you use templates to author YAML files, then you can only specify triggers in the main YAML file for the pipeline. You
cannot specify triggers in the template files.

Each new run builds the latest commit from the source branch of the pull request. This is different from how Azure
Pipelines builds pull requests in other repositories (e.g., Azure Repos or GitHub), where it builds the merge commit.
Unfortunately, Bitbucket does not expose information about the merge commit, which contains the merged code
between the source and target branches of the pull request.
If no pr triggers appear in your YAML file, pull request validations are automatically enabled for all branches, as if
you wrote the following pr trigger. This configuration triggers a build when any pull request is created, and when
commits come into the source branch of any active pull request.

pr:
branches:
include:
- '*' # must quote since "*" is a YAML reserved character; we want a string

IMPORTANT
When you specify a pr trigger, it replaces the default implicit pr trigger, and only pushes to branches that are explicitly
configured to be included will trigger a pipeline.

For more complex triggers that need to exclude certain branches, you must use the full syntax as shown in the
following example.
# specific branch
pr:
branches:
include:
- master
- releases/*
exclude:
- releases/old*

Paths
You can specify file paths to include or exclude. For example:

# specific path
pr:
branches:
include:
- master
- releases/*
paths:
include:
- docs/*
exclude:
- docs/README.md

NOTE
You cannot use variables in paths, as variables are evaluated at runtime (after the trigger has fired).

Multiple PR updates
You can specify whether additional updates to a PR should cancel in-progress validation runs for the same PR. The
default is true .

# auto cancel false


pr:
autoCancel: false
branches:
include:
- master

Opting out of PR validation


You can opt out of pull request validation entirely by specifying pr: none .

# no PR triggers
pr: none

For more information, see PR trigger in the YAML schema.

NOTE
If your pr trigger isn't firing, ensure that you have not overridden YAML PR triggers in the UI.

FAQ
Problems related to Bitbucket integration fall into the following categories:
Failing triggers : My pipeline is not being triggered when I push an update to the repo.
Wrong version : My pipeline runs, but it is using an unexpected version of the source/YAML.
Failing triggers
I just created a new YAML pipeline with CI/PR triggers, but the pipeline is not being triggered.
Follow each of these steps to troubleshoot your failing triggers:
Are your YAML CI or PR triggers being overridden by pipeline settings in the UI? While editing your pipeline,
choose ... and then Triggers .

Check the Override the YAML trigger from here setting for the types of trigger (Continuous
integration or Pull request validation ) available for your repo.

Webhooks are used to communicate updates from Bitbucket to Azure Pipelines. In Bitbucket, navigate to the
settings for your repository, then to Webhooks. Verify that the webhooks exist.
Is your pipeline paused or disabled? Open the editor for the pipeline, and then select Settings to check. If
your pipeline is paused or disabled, then triggers do not work.
Have you updated the YAML file in the correct branch? If you push an update to a branch, then the YAML file
in that same branch governs the CI behavior. If you push an update to a source branch, then the YAML file
resulting from merging the source branch with the target branch governs the PR behavior. Make sure that
the YAML file in the correct branch has the necessary CI or PR configuration.
Have you configured the trigger correctly? When you define a YAML trigger, you can specify both include
and exclude clauses for branches, tags, and paths. Ensure that the include clause matches the details of your
commit and that the exclude clause doesn't exclude them. Check the syntax for the triggers and make sure
that it is accurate.
Have you used variables in defining the trigger or the paths? That is not supported.
Did you use templates for your YAML file? If so, make sure that your triggers are defined in the main YAML
file. Triggers defined inside template files are not supported.
Have you excluded the branches or paths to which you pushed your changes? Test by pushing a change to
an included path in an included branch. Note that paths in triggers are case-sensitive. Make sure that you
use the same case as those of real folders when specifying the paths in triggers.
Do you have wildcards in your path filters? Understand the limitations of wildcards in your paths as
described in this article.
Did you just push a new branch? If so, the new branch may not start a new run. See the section "Behavior of
triggers when new branches are created".
My CI or PR triggers have been working fine. But, they stopped working now.
First go through the troubleshooting steps in the previous question. Then, follow these additional steps:
Do you have merge conflicts in your PR? For a PR that did not trigger a pipeline, open it and check whether
it has a merge conflict. Resolve the merge conflict.
Are you experiencing a delay in the processing of push or PR events? You can usually verify this by seeing if
the issue is specific to a single pipeline or is common to all pipelines or repos in your project. If a push or a
PR update to any of the repos exhibits this symptom, we might be experiencing delays in processing the
update events. Check if we are experiencing a service outage on our status page. If the status page shows an
issue, then our team must have already started working on it. Check the page frequently for updates on the
issue.
I do not want users to override the list of branches for triggers when they update the YAML file. How can I do this?
Users with permissions to contribute code can update the YAML file and include/exclude additional branches. As a
result, users can include their own feature or user branch in their YAML file and push that update to a feature or
user branch. This may cause the pipeline to be triggered for all updates to that branch. If you want to prevent this
behavior, then you can:
1. Edit the pipeline in the Azure Pipelines UI.
2. Navigate to the Triggers menu.
3. Select Override the YAML continuous Integration trigger from here .
4. Specify the branches to include or exclude for the trigger.
When you follow these steps, any CI triggers specified in the YAML file are ignored.
Wrong version
A wrong version of the YAML file is being used in the pipeline. Why is that?
For CI triggers, the YAML file that is in the branch you are pushing is evaluated to see if a CI build should be run.
For PR triggers, the YAML file resulting from merging the source and target branches of the PR is evaluated to
see if a PR build should be run.
Build on-premises Bitbucket repositories
11/2/2020 • 6 minutes to read • Edit Online

NOTE
To integrate Bitbucket Cloud with Azure Pipelines, see Bitbucket Cloud.

You can integrate your on-premises Bitbucket server or another Git server with Azure Pipelines. Your on-premises
server may be exposed to the Internet or it may not be.
If your on-premises server is reachable from the servers that run Azure Pipelines service, then:
you can set up classic build pipelines and configure CI triggers
If your on-premises server is not reachable from the servers that run Azure Pipelines service, then:
you can set up classic build pipelines and start manual builds
you cannot configure CI triggers

NOTE
YAML pipelines do not work with on-premises Bitbucket repositories.

NOTE
PR triggers are not available with on-premises Bitbucket repositories.

If your on-premises server is reachable from the hosted agents, then you can use the hosted agents to run manual,
scheduled, or CI builds. Otherwise, you must set up self-hosted agents that can access your on-premises server and
fetch the code.

Reachable from Azure Pipelines


If your on-premises Bitbucket server is reachable from the Azure Pipelines service, create an Other Git service connection and use that to create a pipeline. Check the option to Attempt accessing this Git server from Azure Pipelines.
CI triggers work through polling and not through webhooks. In other words, Azure Pipelines periodically checks
the Bitbucket server if there are any updates to code. If there are, then Azure Pipelines will start a new run.
Not reachable from Azure Pipelines
If the Bitbucket server cannot be reached from Azure Pipelines, you have two options:
Work with your IT department to open a network path between Azure Pipelines and on-premises Git server.
For example, you can add exceptions to your firewall rules to allow traffic from Azure Pipelines to flow
through. See the section on Azure DevOps IPs to see which IP addresses you need to allow. Furthermore,
you need to have a public DNS entry for the Bitbucket server so that Azure Pipelines can resolve the FQDN
of your server to an IP address.
You can use an Other Git connection but tell Azure Pipelines not to attempt accessing this Git server from Azure Pipelines. CI and PR triggers will not work in this configuration. You can only start manual or scheduled pipeline runs.
Reachable from Microsoft-hosted agents
Another decision you possibly have to make is whether to use Microsoft-hosted agents or self-hosted agents to run
your pipelines. This often comes down to whether Microsoft-hosted agents can reach your server. To check whether
they can, create a simple pipeline to use Microsoft-hosted agents and make sure to add a step to checkout source
code from your server. If this passes, then you can continue using Microsoft-hosted agents.
Not reachable from Microsoft-hosted agents
If the simple test pipeline mentioned in the above section fails with the error TF401019: The Git repository with name or identifier <your repo name> does not exist or you do not have permissions for the operation you are attempting, then the Bitbucket server is not reachable from Microsoft-hosted agents. This is again probably caused by a firewall blocking traffic from these servers. You have two options in this case:
Work with your IT department to open a network path between Microsoft-hosted agents and Bitbucket
server. See the section on networking in Microsoft-hosted agents.
Switch to using self-hosted agents or scale-set agents. These agents can be set up within your network and
hence will have access to the Bitbucket server. These agents only require outbound connections to Azure
Pipelines. There is no need to open a firewall for inbound connections. Make sure that the name of the
server you specified when creating the service connection is resolvable from the self-hosted agents.

Azure DevOps IP addresses


When you use an Other Git connection to set up a classic pipeline, disable communication between the Azure Pipelines service and the Bitbucket server, and use self-hosted agents to build code, you will get a degraded experience:
You will have to type in the name of the repository manually during pipeline creation
You cannot use CI triggers as Azure Pipelines won't be able to poll for changes to the code
You cannot use scheduled triggers with the option to build only when there are changes
You cannot view information about the latest commit in the user interface
If you want to enhance this experience, it is important that you enable communication from Azure Pipelines to
Bitbucket Server.
Determine the region your Azure DevOps organization is hosted in. Go to the Organization settings in your Azure DevOps UI. The region is listed under Region in the Overview page.
Use the list below to find the appropriate range of IP addresses for your region.
Central Canada
  shprodcca1ip1       40.82.185.225
  tfsprodcca1ip1      40.82.190.38

Central US
  tfsprodcus1ip1      13.86.38.60
  tfsprodcus2ip1      13.86.33.223
  shprodcus1ip1       13.86.39.243
  tfsprodcus4ip1      52.158.209.56
  tfsprodcus5ip1      13.89.136.165
  tfsprodcus3ip1      13.86.36.181

East Asia
  shprodea1ip1        20.189.72.51
  tfsprodea1ip1       40.81.25.218

East Australia
  tfsprodeausu7ip1    40.82.217.103
  shprodeausu7ip1     40.82.220.184

East US
  tfsprodeus2su5ip1   20.41.47.137
  tfsprodeus2su3ip1   20.44.80.98
  shprodeus2su1ip1    20.36.242.132
  tfsprodeus2su1ip1   20.44.80.197

South Brazil
  shprodsbr1ip1       20.40.112.11
  tfsprodsbr1ip1      20.40.114.3

South India
  tfsprodsin1ip1      40.81.75.130
  shprodsin1ip1       40.81.76.87

South UK
  tfsproduks1ip1      40.81.159.67
  shproduks1ip1       40.81.156.105

West Central US
  shprodwcus0ip1      52.159.49.185

Western Europe
  tfsprodweu2ip1      52.236.147.103
  shprodweusu4ip1     52.142.238.243
  tfsprodweu5ip1      51.144.61.32
  tfsprodweu3ip1      52.236.147.236
  tfsprodweu6ip1      40.74.28.0
  tfsprodweusu4ip1    52.142.235.223

Western US 2
  tfsprodwus22ip1     40.91.93.92
  tfsprodwus23ip1     40.91.93.56
  tfsprodwus24ip1     40.91.88.106
  tfsprodwus25ip1     51.143.58.182
  tfsprodwus2su6ip1   40.91.75.130

Add the corresponding range of IP addresses to your firewall exception rules.


Allow Azure Pipelines to attempt accessing the Git server in the Other Git service connection.

FAQ
Problems related to Bitbucket Server integration fall into the following categories:
Failing triggers : My pipeline is not being triggered when I push an update to the repo.
Failing checkout : My pipeline is being triggered, but it fails in the checkout step.
Failing triggers
I pushed a change to my server, but the pipeline is not being triggered.
Follow each of these steps to troubleshoot your failing triggers:
Is your Bitbucket server accessible from Azure Pipelines? Azure Pipelines periodically polls Bitbucket server
for changes. If the Bitbucket server is behind a firewall, this traffic may not reach your server. See Azure
DevOps IP Addresses and verify that you have granted exceptions to all the required IP addresses. These IP
addresses may have changed since you have originally set up the exception rules. You can only start manual
runs if you used an external Git connection and if your server is not accessible from Azure Pipelines.
Is your pipeline paused or disabled? Open the editor for the pipeline, and then select Settings to check. If
your pipeline is paused or disabled, then triggers do not work.
Have you excluded the branches or paths to which you pushed your changes? Test by pushing a change to
an included path in an included branch. Note that paths in triggers are case-sensitive. Make sure that you
use the same case as those of real folders when specifying the paths in triggers.
I did not push any updates to my code, however the pipeline is still being triggered.
The continuous integration trigger for Bitbucket works through polling. After each polling interval, Azure
Pipelines attempts to contact the Bitbucket server to check if there have been any updates to the code. If Azure
Pipelines is unable to reach the Bitbucket server (possibly due to a network issue), then we start a new run
anyway assuming that there might have been code changes. In a few cases, Azure Pipelines may also create a
dummy failed build with an error message to indicate that it was unable to reach the server.
Failing checkout
When I attempt to start a new run manually, there is a delay of 4-8 minutes before it starts.
Your Bitbucket server is not reachable from Azure Pipelines. Make sure that you have not selected the option to attempt accessing this Git server from Azure Pipelines in the Bitbucket service connection. If that option is selected, Azure Pipelines attempts to contact your server and, since your server is unreachable, it eventually times out and starts the run anyway. Unchecking that option speeds up your manual runs.
The checkout step fails with the error that the server cannot be resolved.
Do you use Microsoft-hosted agents? If so, these agents may not be able to reach your Bitbucket server. See Not
reachable from Microsoft-hosted agents for more information.
Build TFVC repositories
11/2/2020 • 6 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Choose the repository to build


While editing a pipeline that uses a TFVC repo, you have the following options.

Feature              Azure Pipelines, TFS 2018, TFS 2017, TFS 2015.4    TFS 2015 RTM
Clean                Yes                                                Yes
Specify local path   Yes                                                No
Label sources        Yes                                                No

NOTE
Azure Pipelines, TFS 2017.2 and newer : Click Advanced settings to see some of the following options.

Repository name
Ignore this text box (TFS 2017 RTM or older).
Mappings (workspace )
Include with a type value of Map only the folders that your build pipeline requires. If a subfolder of a mapped
folder contains files that the build pipeline does not require, map it with a type value of Cloak .
Make sure that you Map all folders that contain files that your build pipeline requires. For example, if you add
another project, you might have to add another mapping to the workspace.
Cloak folders you don't need. By default, the root folder of the project is mapped in the workspace. This configuration
results in the build agent downloading all the files in the version control folder of your project. If this folder
contains lots of data, your build could waste build system resources and slow down your build pipeline by
downloading large amounts of data that it does not require.
When you remove projects, look for mappings that you can remove from the workspace.
If this is a CI build, in most cases you should make sure that these mappings match the filter settings of your CI
trigger on the Triggers tab.
For more information on how to optimize a TFVC workspace, see Optimize your workspace.
Clean the local repo on the agent
You can perform different forms of cleaning the working directory of your self-hosted agent before a build runs.
In general, for faster performance of your self-hosted agents, don't clean the repo. In this case, to get the best
performance, make sure you're also building incrementally by disabling any Clean option of the task or tool you're
using to build.
If you do need to clean the repo (for example to avoid problems caused by residual files from a previous build),
your options are below.

NOTE
Cleaning is not relevant if you are using a Microsoft-hosted agent because you get a new agent every time in that case.

Azure Pipelines, TFS 2018, TFS 2017.2


If you want to clean the repo, then select true, and then select one of the following options:

Sources: The build pipeline performs an undo of any changes and scorches the current workspace under
$(Build.SourcesDirectory).

Sources and output directory: Same operation as the Sources option above, plus: deletes and recreates
$(Build.BinariesDirectory).

Sources directory: Deletes and recreates $(Build.SourcesDirectory).

All build directories: Deletes and recreates $(Agent.BuildDirectory).
TFS 2017 RTM, TFS 2015.4
If you select True, then the build pipeline performs an undo of any changes and scorches the workspace.
If you want the Clean switch described above to work differently, then on the Variables tab, define the
Build.Clean variable and set its value to:

all if you want to delete $(Agent.BuildDirectory), which is the entire working folder that contains the
sources folder, binaries folder, artifacts folder, and so on.
source if you want to delete $(Build.SourcesDirectory).
binary if you want to delete $(Build.BinariesDirectory).
TFS 2015 RTM
Select true to delete the repository folder.
Label sources
You may want to label your source code files to enable your team to easily identify which version of each file is
included in the completed build. You also have the option to specify whether the source code should be labeled for
all builds or only for successful builds.

NOTE
You can only use this feature when the source repository in your build is a GitHub repository, or a Git or TFVC repository
from your project.

In the Label format you can use user-defined and predefined variables that have a scope of "All." For example:

$(Build.DefinitionName)_$(Build.DefinitionVersion)_$(Build.BuildId)_$(Build.BuildNumber)_$(My.Variable)

The first four variables are predefined. My.Variable can be defined by you on the variables tab.
The build pipeline labels your sources with a TFVC label.
CI triggers
Select Enable continuous integration on the Triggers tab to enable this trigger if you want the build to run
whenever someone checks in code.

Batch changes
Select this check box if you have many team members uploading changes often and you want to reduce the
number of builds you are running. If you select this option, when a build is running, the system waits until the
build is completed and then queues another build of all changes that have not yet been built.

You can batch changes and build them together.

Path filters
Select the version control paths you want to include and exclude. In most cases, you should make sure that these
filters are consistent with your TFVC mappings. You can use path filters to reduce the set of files that you want to
trigger a build.

Tips:
Paths are always specified relative to the root of the workspace.
If you don't set path filters, then the root folder of the workspace is implicitly included by default.
If you exclude a path, you cannot also include it unless you qualify it to a deeper folder. For example if you
exclude /tools then you could include /tools/trigger-runs-on-these
The order of path filters doesn't matter.

Gated check-in
You can use gated check-in to protect against breaking changes.
By default Use workspace mappings for filters is selected. Builds are triggered whenever a change is checked
in under a path specified in your source mappings.
Otherwise, you can clear this check box and specify the paths in the trigger.
How it affects your developers
When developers try to check in, they are prompted to build their changes.

The system then creates a shelveset and builds it.


For details on the gated check-in experience, see Check in to a folder that is controlled by a gated check-in build
pipeline.
Option to run CI builds
By default, CI builds are not run after the gated check-in process is complete and the changes are checked in.
However, if you do want CI builds to run after a gated check-in, select the Run CI triggers for committed
changes check box. When you do this, the build pipeline does not add ***NO_CI*** to the changeset description.
As a result, CI builds that are affected by the check-in are run.
A few other things to know
Make sure the folders you include in your trigger are also included in your workspace mappings.
You can run gated builds on either a Microsoft-hosted agent or a self-hosted agent.

FAQ
I get the following error when running a pipeline:
The shelveset <xyz> could not be found for check-in

Is your job authorization scope set to collection? TFVC repositories are usually spread across the projects in
your collection. You may be reading or writing to a folder that can only be accessed when the scope is the entire
collection. You can set this in organization settings or in project settings under the Pipelines tab.
I get the following error when running a pipeline:
The underlying connection was closed: An unexpected error occurred on a receive. ##[error]Exit code 100
returned from process: file name 'tf', arguments 'vc workspace /new /location:local /permission:Public

This is usually an intermittent error caused when the service is experiencing technical issues. Please re-run the
pipeline.
What is scorch?
Scorch is a TFVC power tool that ensures source control on the server and the local disk are identical. See
Microsoft Visual Studio Team Foundation Server 2015 Power Tools.
Build Subversion repositories

You can integrate your on-premises Subversion server with Azure Pipelines. The Subversion server must be
accessible to Azure Pipelines.

NOTE
YAML pipelines do not work with Subversion repositories.

If your server is reachable from the hosted agents, then you can use the hosted agents to run manual, scheduled, or
CI builds. Otherwise, you must set up self-hosted agents that can access your on-premises server and fetch the
code.
To integrate with Subversion, create a Subversion service connection and use that to create a pipeline. CI triggers
work through polling. In other words, Azure Pipelines periodically checks the Subversion server for any
updates to the code. If there are, then Azure Pipelines starts a new run.
If the Subversion server cannot be reached from Azure Pipelines, work with your IT department to open a network
path between Azure Pipelines and your server. For example, you can add exceptions to your firewall rules to allow
traffic from Azure Pipelines to flow through. See the section on Azure DevOps IPs to see which IP addresses you
need to allow. Furthermore, you need to have a public DNS entry for the Subversion server so that Azure Pipelines
can resolve the FQDN of your server to an IP address.
Reachable from Microsoft-hosted agents
A decision you have to make is whether to use Microsoft-hosted agents or self-hosted agents to run your pipelines.
This often comes down to whether Microsoft-hosted agents can reach your server. To check whether they can,
create a simple pipeline to use Microsoft-hosted agents and make sure to add a step to checkout source code from
your server. If this passes, then you can continue using Microsoft-hosted agents.
Not reachable from Microsoft-hosted agents
If the simple test pipeline mentioned in the above section fails with an error, then the Subversion server is probably
not reachable from Microsoft-hosted agents. This is probably caused by a firewall blocking traffic from these
servers. You have two options in this case:
Work with your IT department to open a network path between Microsoft-hosted agents and Subversion
server. See the section on networking in Microsoft-hosted agents.
Switch to using self-hosted agents or scale-set agents. These agents can be set up within your network and
hence will have access to the Subversion server. These agents only require outbound connections to Azure
Pipelines. There is no need to open a firewall for inbound connections. Make sure that the name of the
server you specified when creating the service connection is resolvable from the self-hosted agents.

Azure DevOps IP addresses


To enable communication from Azure Pipelines to your Subversion server, first determine the region your Azure DevOps
organization is hosted in. Go to the Organization settings in your Azure DevOps UI. The region is listed under
Region on the Overview page.
Use the list below to find the appropriate range of IP addresses for your region.
Central Canada
shprodcca1ip1 40.82.185.225

tfsprodcca1ip1 40.82.190.38

Central US

tfsprodcus1ip1 13.86.38.60

tfsprodcus2ip1 13.86.33.223

shprodcus1ip1 13.86.39.243

tfsprodcus4ip1 52.158.209.56

tfsprodcus5ip1 13.89.136.165

tfsprodcus3ip1 13.86.36.181

East Asia

shprodea1ip1 20.189.72.51

tfsprodea1ip1 40.81.25.218

East Australia

tfsprodeausu7ip1 40.82.217.103

shprodeausu7ip1 40.82.220.184

East US

tfsprodeus2su5ip1 20.41.47.137

tfsprodeus2su3ip1 20.44.80.98

shprodeus2su1ip1 20.36.242.132

tfsprodeus2su1ip1 20.44.80.197

South Brazil

shprodsbr1ip1 20.40.112.11

tfsprodsbr1ip1 20.40.114.3

South India

tfsprodsin1ip1 40.81.75.130

shprodsin1ip1 40.81.76.87

South UK
tfsproduks1ip1 40.81.159.67

shproduks1ip1 40.81.156.105

West Central US

shprodwcus0ip1 52.159.49.185

Western Europe

tfsprodweu2ip1 52.236.147.103

shprodweusu4ip1 52.142.238.243

tfsprodweu5ip1 51.144.61.32

tfsprodweu3ip1 52.236.147.236

tfsprodweu6ip1 40.74.28.0

tfsprodweusu4ip1 52.142.235.223

Western US 2

tfsprodwus22ip1 40.91.93.92

tfsprodwus23ip1 40.91.93.56

tfsprodwus24ip1 40.91.88.106

tfsprodwus25ip1 51.143.58.182

tfsprodwus2su6ip1 40.91.75.130

Add the corresponding range of IP addresses to your firewall exception rules.

FAQ
Problems related to Subversion server integration fall into the following categories:
Failing triggers : My pipeline is not being triggered when I push an update to the repo.
Failing checkout : My pipeline is being triggered, but it fails in the checkout step.
Failing triggers
I pushed a change to my server, but the pipeline is not being triggered.
Follow each of these steps to troubleshoot your failing triggers:
Is your Subversion server accessible from Azure Pipelines? Azure Pipelines periodically polls the Subversion
server for changes. If the Subversion server is behind a firewall, this traffic may not reach your server. See
Azure DevOps IP Addresses and verify that you have granted exceptions to all the required IP addresses.
These IP addresses may have changed since you have originally set up the exception rules.
Is your pipeline paused or disabled? Open the editor for the pipeline, and then select Settings to check. If
your pipeline is paused or disabled, then triggers do not work.
I did not push any updates to my code, however the pipeline is still being triggered.
The continuous integration trigger for Subversion works through polling. After each polling interval, Azure
Pipelines attempts to contact the Subversion server to check if there have been any updates to the code. If Azure
Pipelines is unable to reach the server (possibly due to a network issue), then we start a new run anyway
assuming that there might have been code changes. In a few cases, Azure Pipelines may also create a dummy
failed build with an error message to indicate that it was unable to reach the server.
Failing checkout
The checkout step fails with the error that the server cannot be resolved.
Do you use Microsoft-hosted agents? If so, these agents may not be able to reach your Subversion server. See Not
reachable from Microsoft-hosted agents for more information.
Check out multiple repositories in your pipeline

Azure Pipelines | Azure DevOps Ser ver 2020


Pipelines often rely on multiple repositories that contain source, tools, scripts, or other items that you need to
build your code. By using multiple checkout steps in your pipeline, you can fetch and check out other repositories
in addition to the one you use to store your YAML pipeline.

Specify multiple repositories


Repositories can be specified as a repository resource, or inline with the checkout step.
Supported repositories are Azure Repos Git ( git ), GitHub ( github ), GitHubEnterprise ( githubenterprise ), and
Bitbucket Cloud ( bitbucket ).
The following combinations of checkout steps are supported.
No checkout steps
The default behavior is as if checkout: self were the first step, and the current repository is checked out.

A single checkout: none step


No repositories are synced or checked out.

A single checkout: self step


The current repository is checked out.

A single checkout step that isn't self or none

The designated repository is checked out instead of self .

Multiple checkout steps


Each designated repository is checked out to a folder named after the repository, unless a different path is
specified in the checkout step. To check out self as one of the repositories, use checkout: self as one of the
checkout steps.

NOTE
When you check out Azure Repos Git repositories other than the one containing the pipeline, you may be prompted to
authorize access to that resource before the pipeline runs for the first time. For more information, see Why am I
prompted to authorize resources the first time I try to check out a different repository? in the FAQ section.

Repository resource definition


You must use a repository resource if your repository type requires a service connection or other extended
resources field. The following repository types require a service connection.
Bitbucket cloud repositories
GitHub and GitHub Enterprise Server repositories
Azure Repos Git repositories in a different organization than your pipeline
You may use a repository resource even if your repository type doesn't require a service connection, for example
if you have a repository resource defined already for templates in a different repository.
In the following example, three repositories are declared as repository resources. The Azure Repos Git repository
in another organization, GitHub, and Bitbucket Cloud repository resources require service connections, which are
specified as the endpoint for those repository resources. This example has four checkout steps, which checks out
the three repositories declared as repository resources along with the current self repository that contains the
pipeline YAML.

resources:
  repositories:
  - repository: MyGitHubRepo # The name used to reference this repository in the checkout step
    type: github
    endpoint: MyGitHubServiceConnection
    name: MyGitHubOrgOrUser/MyGitHubRepo
  - repository: MyBitbucketRepo
    type: bitbucket
    endpoint: MyBitbucketServiceConnection
    name: MyBitbucketOrgOrUser/MyBitbucketRepo
  - repository: MyAzureReposGitRepository # In a different organization
    endpoint: MyAzureReposGitServiceConnection
    type: git
    name: OtherProject/MyAzureReposGitRepo

trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

steps:
- checkout: self
- checkout: MyGitHubRepo
- checkout: MyBitbucketRepo
- checkout: MyAzureReposGitRepository

- script: dir $(Build.SourcesDirectory)

If the self repository is named CurrentRepo, the script command produces the following output:
CurrentRepo MyAzureReposGitRepo MyBitbucketRepo MyGitHubRepo. In this example, the names of the repositories are
used for the folders, because no path is specified in the checkout step. For more information on repository folder
names and locations, see the following Checkout path section.

Inline syntax checkout


If your repository doesn't require a service connection, you can declare it inline with your checkout step.

NOTE
Only Azure Repos Git repositories in the same organization can use the inline syntax. Azure Repos Git repositories in a
different organization, and other supported repository types require a service connection and must be declared as a
repository resource.
steps:
- checkout: git://MyProject/MyRepo # Azure Repos Git repository in the same organization

NOTE
In the previous example, the self repository is not checked out. If you specify any checkout steps, you must include
checkout: self in order for self to be checked out.
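For example, the following minimal sketch (repository names taken from the inline example above) checks out both the current repository and the other Azure Repos Git repository:

steps:
- checkout: self
- checkout: git://MyProject/MyRepo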

Checkout path
Unless a path is specified in the checkout step, source code is placed in a default directory. This directory is
different depending on whether you are checking out a single repository or multiple repositories.
Single repositor y : If you have a single checkout step in your job, or you have no checkout step which is
equivalent to checkout: self , your source code is checked out into a directory called s located as a
subfolder of (Agent.BuildDirectory) . If (Agent.BuildDirectory) is C:\agent\_work\1 , your code is checked
out to C:\agent\_work\1\s .
Multiple repositories : If you have multiple checkout steps in your job, your source code is checked out
into directories named after the repositories as a subfolder of s in (Agent.BuildDirectory) . If
(Agent.BuildDirectory) is C:\agent\_work\1 and your repositories are named tools and code , your
code is checked out to C:\agent\_work\1\s\tools and C:\agent\_work\1\s\code .

NOTE
If no path is specified in the checkout step, the name of the repository is used for the folder, not the
repository value which is used to reference the repository in the checkout step.

If a path is specified for a checkout step, that path is used, relative to (Agent.BuildDirectory) .
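For example, the following sketch (folder and repository names are illustrative) places each repository in a custom folder under $(Agent.BuildDirectory):

steps:
- checkout: self
  path: PutMyCodeHere      # checked out to $(Agent.BuildDirectory)/PutMyCodeHere
- checkout: git://MyProject/MyRepo
  path: tools/MyRepo       # checked out to $(Agent.BuildDirectory)/tools/MyRepo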

NOTE
If you are using default paths, adding a second repository checkout step changes the default path of the code for the first
repository. For example, the code for a repository named tools would be checked out to C:\agent\_work\1\s when
tools is the only repository, but if a second repository is added, tools would then be checked out to
C:\agent\_work\1\s\tools . If you have any steps that depend on the source code being in the original location, those
steps must be updated.

Checking out a specific ref


The default branch is checked out unless you designate a specific ref.
If you are using inline syntax, designate the ref by appending @<ref> . For example:

- checkout: git://MyProject/MyRepo@features/tools # checks out the features/tools branch


- checkout: git://MyProject/MyRepo@refs/heads/features/tools # also checks out the features/tools branch
- checkout: git://MyProject/MyRepo@refs/tags/MyTag # checks out the commit referenced by MyTag.

When using a repository resource, specify the ref using the ref property. The following example checks out the
features/tools branch of the designated repository.
resources:
  repositories:
  - repository: MyGitHubRepo
    type: github
    endpoint: MyGitHubServiceConnection
    name: MyGitHubOrgOrUser/MyGitHubRepo
    ref: features/tools

steps:
- checkout: MyGitHubRepo

Triggers
You can trigger a pipeline when an update is pushed to the self repository or to any of the repositories declared
as resources. This is useful, for instance, in the following scenarios:
You consume a tool or a library from a different repository. You want to run tests for your application
whenever the tool or library is updated.
You keep your YAML file in a separate repository from the application code. You want to trigger the pipeline
every time an update is pushed to the application repository.

IMPORTANT
Repository resource triggers only work for Azure Repos Git repositories in the same organization at present. They do not
work for GitHub or Bitbucket repository resources.

If you do not specify a trigger section in a repository resource, then the pipeline won't be triggered by changes
to that repository. If you specify a trigger section, then the behavior for triggering is similar to how CI triggers
work for the self repository.
If you specify a trigger section for multiple repository resources, then a change to any of them will start a new
run.
The trigger for self repository can be defined in a trigger section at the root of the YAML file, or in a
repository resource for self . For example, the following two are equivalent.

trigger:
- main

steps:
...

resources:
  repositories:
  - repository: self
    type: git
    name: MyProject/MyGitRepo
    trigger:
    - main

steps:
...
NOTE
It is an error to define the trigger for self repository twice. Do not define it both at the root of the YAML file and in the
resources section.

When a pipeline is triggered, Azure Pipelines has to determine the version of the YAML file that should be used
and a version for each repository that should be checked out. If a change to the self repository triggers a
pipeline, then the commit that triggered the pipeline is used to determine the version of the YAML file. If a change
to any other repository resource triggers the pipeline, then the latest version of YAML from the default branch
of self repository is used.
When an update to one of the repositories triggers a pipeline, then the following variables are set based on
triggering repository:
Build.Repository.ID
Build.Repository.Name
Build.Repository.Provider
Build.Repository.Uri
Build.SourceBranch
Build.SourceBranchName
Build.SourceVersion
Build.SourceVersionMessage
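For example, a script step like the following minimal sketch can surface which repository, branch, and commit triggered the run:

steps:
- script: |
    echo "Triggering repository: $(Build.Repository.Name)"
    echo "Triggering branch: $(Build.SourceBranch)"
    echo "Triggering commit: $(Build.SourceVersion)"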

For the triggering repository, the commit that triggered the pipeline determines the version of the code that is
checked out. For other repositories, the ref defined in the YAML for that repository resource determines the
default version that is checked out.
Consider the following example, where the self repository contains the YAML file and repositories A and B
contain additional source code.

trigger:
- main
- feature

resources:
  repositories:
  - repository: A
    type: git
    name: MyProject/A
    ref: main
    trigger:
    - main

  - repository: B
    type: git
    name: MyProject/B
    ref: release
    trigger:
    - main
    - release

The following table shows which versions are checked out for each repository by a pipeline using the above YAML
file, unless you explicitly override the behavior during checkout.

CHANGE MADE TO     PIPELINE     VERSION OF YAML           VERSION OF SELF           VERSION OF A              VERSION OF B
                   TRIGGERED

main in self       Yes          commit from main that     commit from main that     latest from main          latest from release
                                triggered the pipeline    triggered the pipeline

feature in self    Yes          commit from feature       commit from feature       latest from main          latest from release
                                that triggered the        that triggered the
                                pipeline                  pipeline

main in A          Yes          latest from main          latest from main          commit from main that     latest from release
                                                                                    triggered the pipeline

main in B          Yes          latest from main          latest from main          latest from main          commit from main that
                                                                                                              triggered the pipeline

release in B       Yes          latest from main          latest from main          latest from main          commit from release
                                                                                                              that triggered the
                                                                                                              pipeline

You can also trigger the pipeline when you create or update a pull request in any of the repositories. To do this,
declare the repository resources in the YAML files as in the examples above, and configure a branch policy in the
repository (Azure Repos only).

Repository details
When you check out multiple repositories, some details about the self repository are available as variables.
When you use multi-repo triggers, some of those variables have information about the triggering repository
instead. Details about all of the repositories consumed by the job are available as a template context object called
resources.repositories .

For example, to get the ref of a non- self repository, you could write a pipeline like this:

resources:
  repositories:
  - repository: other
    type: git
    name: MyProject/OtherTools

variables:
  tools.ref: $[ resources.repositories['other'].ref ]

steps:
- checkout: self
- checkout: other
- bash: echo "Tools version: $TOOLS_REF"

FAQ
Why can't I check out a repository from another project? It used to work.
Why am I prompted to authorize resources the first time I try to check out a different repository?
Why can't I check out a repository from another project? It used to work.
Azure Pipelines provides a Limit job authorization scope to current project setting, that when enabled,
doesn't permit the pipeline to access resources outside of the project that contains the pipeline. This setting can be
set at either the organization or project level. If this setting is enabled, you won't be able to check out a repository
in another project unless you explicitly grant access. For more information, see Job authorization scope.
Why am I prompted to authorize resources the first time I try to check out a different repository?
When you check out Azure Repos Git repositories other than the one containing the pipeline, you may be
prompted to authorize access to that resource before the pipeline runs for the first time. These prompts are
displayed on the pipeline run summary page.

Choose View or Authorize resources , and follow the prompts to authorize the resources.

For more information, see Troubleshooting authorization for a YAML pipeline.


Specify events that trigger pipelines

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are
called phases.

Use triggers to run a pipeline automatically. Azure Pipelines supports many types of triggers. Based on your
pipeline's type, select the appropriate trigger from the list below:

Classic build pipelines and YAML pipelines


Continuous integration (CI) triggers vary based on the type of repository you build in your pipeline.
CI triggers in Azure Repos Git
CI triggers in GitHub
CI triggers in Bitbucket Cloud
CI triggers in TFVC
Pull request validation (PR) triggers also vary based on the type of repository.
PR triggers in Azure Repos Git
PR triggers in GitHub
PR triggers in Bitbucket Cloud
Gated check-in is supported for TFVC repositories.
Comment triggers are supported only for GitHub repositories.
Scheduled triggers are independent of the repository and allow you to run a pipeline according to a schedule.
Pipeline triggers in YAML pipelines and build completion triggers in classic build pipelines allow you to trigger
one pipeline upon the completion of another.
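As a quick orientation, several of these triggers are declared directly in a YAML pipeline. The following minimal sketch (branch names are illustrative) combines a CI trigger and a pull request validation trigger:

trigger:
  branches:
    include:
    - main

pr:
  branches:
    include:
    - main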

Classic release pipelines


Continuous deployment triggers help you start classic releases after a classic build or YAML pipeline
completes.
Scheduled release triggers allow you to run a release pipeline according to a schedule.
Pull request release triggers are used to deploy a pull request directly using classic releases.
Stage triggers in classic release are used to configure how each stage in a classic release is triggered.
Configure schedules for pipelines

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

You can configure a pipeline to run on a schedule.


YAML
Classic

IMPORTANT
Scheduled triggers defined using the pipeline settings UI take precedence over YAML scheduled triggers.
If your YAML pipeline has both YAML scheduled triggers and UI defined scheduled triggers, only the UI defined scheduled
triggers are run. To run the YAML defined scheduled triggers in your YAML pipeline, you must remove the scheduled
triggers defined in the pipeline setting UI. Once all UI scheduled triggers are removed, a push must be made in order for the
YAML scheduled triggers to start running.

Scheduled triggers cause a pipeline to run on a schedule defined using cron syntax.

NOTE
If you want to run your pipeline by only using scheduled triggers, you must disable PR and continuous integration triggers
by specifying pr: none and trigger: none in your YAML file. If you're using Azure Repos Git, PR builds are configured
using branch policy and must be disabled there.
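For example, a pipeline that should run only on its schedule might disable the other triggers like this minimal sketch (the branch name is illustrative):

trigger: none
pr: none

schedules:
- cron: "0 0 * * *"
  displayName: Daily midnight build
  branches:
    include:
    - master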

schedules:
- cron: string # cron syntax defining a schedule
  displayName: string # friendly name given to a specific schedule
  branches:
    include: [ string ] # which branches the schedule applies to
    exclude: [ string ] # which branches to exclude from the schedule
  always: boolean # whether to always run the pipeline or only if there have been source code changes since the last successful scheduled run. The default is false.

In the following example, two schedules are defined.


schedules:
- cron: "0 0 * * *"
  displayName: Daily midnight build
  branches:
    include:
    - master
    - releases/*
    exclude:
    - releases/ancient/*
- cron: "0 12 * * 0"
  displayName: Weekly Sunday build
  branches:
    include:
    - releases/*
  always: true

The first schedule, Daily midnight build , runs a pipeline at midnight every day, but only if the code has changed
since the last successful scheduled run, for master and all releases/* branches, except those under
releases/ancient/* .

The second schedule, Weekly Sunday build , runs a pipeline at noon on Sundays, whether the code has changed
or not since the last run, for all releases/* branches.

NOTE
The time zone for cron schedules is UTC, so in these examples, the midnight build and the noon build are at midnight and
noon in UTC.

NOTE
If you specify an exclude clause without an include clause for branches , it is equivalent to specifying * in the
include clause.

NOTE
You cannot use pipeline variables when specifying schedules.

NOTE
If you use templates in your YAML file, then the schedules must be specified in the main YAML file and not in the template
files.

Scheduled runs view


You can view a preview of upcoming scheduled builds by choosing Scheduled runs from the context menu on
the pipeline details page for your pipeline.
After you create or update your scheduled triggers, you can verify them using this view.

In this example, the scheduled runs for the following schedule are displayed.

schedules:
- cron: "0 0 * * *"
  displayName: Daily midnight build
  branches:
    include:
    - master

The Scheduled runs window displays the times converted to the local time zone set on the computer used to
browse to the Azure DevOps portal. In this example, the screenshot was taken in the EST time zone.

Scheduled triggers evaluation


Scheduled triggers are evaluated for a branch when the following events occur.
A pipeline is created.
A pipeline's YAML file is updated, either from a push, or by editing it in the pipeline editor.
A pipeline's YAML file path is updated to reference a different YAML file. This change only updates the default
branch, and therefore will only pick up schedules in the updated YAML file for the default branch. If any other
branches subsequently merge the default branch, for example git pull origin master , the scheduled triggers
from the newly referenced YAML file are evaluated for that branch.
A new branch is created.
After one of these events occurs in a branch, any scheduled runs for that branch are added, if that branch matches
the branch filters for the scheduled triggers contained in the YAML file in that branch.

IMPORTANT
Scheduled runs for a branch are added only if the branch matches the branch filters for the scheduled triggers in the YAML
file in that par ticular branch .

Example of scheduled triggers for multiple branches


For example, a pipeline is created with the following schedule, and this version of the YAML file is checked into the
master branch. This schedule builds the master branch on a daily basis.

# YAML file in the master branch


schedules:
- cron: "0 0 * * *"
  displayName: Daily midnight build
  branches:
    include:
    - master

Next, a new branch is created based off of master , named new-feature . The scheduled triggers from the YAML
file in the new branch are read, and since there is no match for the new-feature branch, no changes are made to
the scheduled builds, and the new-feature branch is not built using a scheduled trigger.
If new-feature is added to the branches list and this change is pushed to the new-feature branch, the YAML file is
read, and since new-feature is now in the branches list, a scheduled build is added for the new-feature branch.

# YAML file in the new-feature-branch


schedules:
- cron: "0 0 * * *"
  displayName: Daily midnight build
  branches:
    include:
    - master
    - new-feature

Now consider that a branch named release is created based off master , and then release is added to the
branch filters in the YAML file in the master branch, but not in the newly created release branch.
# YAML file in the release branch
schedules:
- cron: "0 0 * * *"
  displayName: Daily midnight build
  branches:
    include:
    - master

# YAML file in the master branch with release added to the branches list
schedules:
- cron: "0 0 * * *"
  displayName: Daily midnight build
  branches:
    include:
    - master
    - release

Because release was added to the branch filters in the master branch, but not to the branch filters in the
release branch, the release branch won't be built on that schedule. Only when release is added to the branch
filters in the YAML file in the release branch will the scheduled build be added to the scheduler.

Supported cron syntax


Each cron expression is a space-delimited expression with five entries in the following order.

mm HH DD MM DW
\ \ \ \ \__ Days of week
\ \ \ \____ Months
\ \ \______ Days
\ \________ Hours
\__________ Minutes

FIELD            ACCEPTED VALUES

Minutes          0 through 59

Hours            0 through 23

Days             1 through 31

Months           1 through 12, full English names, first three letters of English names

Days of week     0 through 6 (starting with Sunday), full English names, first three letters of English names

Values can be in the following formats.

FORMAT             EXAMPLE         DESCRIPTION

Wildcard           *               Matches all values for this field

Single value       5               Specifies a single value for this field

Comma delimited    3,5,6           Specifies multiple values for this field. Multiple formats can be
                                   combined, like 1,3-6

Ranges             1-3             The inclusive range of values for this field

Intervals          */4 or 1-5/2    Intervals to match for this field, such as every 4th value or the
                                   range 1-5 with a step interval of 2

EXAMPLE                                                  CRON EXPRESSION

Build every Monday, Wednesday, and Friday at 6:00 PM     0 18 * * Mon,Wed,Fri , 0 18 * * 1,3,5 , or 0 18 * * 1-5/2

Build every 6 hours                                      0 0,6,12,18 * * * , 0 */6 * * * , or 0 0-18/6 * * *

Build every 6 hours starting at 9:00 AM                  0 9,15,21 * * * or 0 9-21/6 * * *

For more information on supported formats, see Crontab Expression.
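As an illustration, one of the expressions from the table above could be used in a schedule like the following sketch (the display name and branch are illustrative):

schedules:
- cron: "0 18 * * Mon,Wed,Fri"
  displayName: Mon/Wed/Fri 6:00 PM (UTC) build
  branches:
    include:
    - master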

Running even when there are no code changes


By default, your pipeline does not run as scheduled if there have been no code changes since the last successful
scheduled run. For instance, consider that you have scheduled a pipeline to run every night at 9:00pm. During the
weekdays, you push various changes to your code. The pipeline runs as per schedule. During the weekends, you
do not make any changes to your code. If there have been no code changes since the scheduled run on Friday,
then the pipeline does not run as scheduled during the weekend. To force a pipeline to run even when there are no
code changes, you can use the always keyword.

schedules:
- cron: ...
  ...
  always: true

Limits on the number of scheduled runs


There are certain limits on how often you can schedule a pipeline to run. These limits have been put in place to
prevent misuse of Azure Pipelines resources - particularly the Microsoft-hosted agents. This limit is around 1000
runs per pipeline per week.

Migrating from the classic editor


The following examples show you how to migrate your schedules from the classic editor to YAML.
Example: Nightly build of Git repo in multiple time zones
Example: Nightly build with different frequencies
Example: Nightly build of Git repo in multiple time zones
In this example, the classic editor scheduled trigger has two entries, producing the following builds.
Every Monday - Friday at 3:00 AM (UTC + 5:30 time zone), build branches that meet the features/india/*
branch filter criteria

Every Monday - Friday at 3:00 AM (UTC - 5:00 time zone), build branches that meet the features/nc/*
branch filter criteria

The equivalent YAML scheduled trigger is:

schedules:
- cron: "30 21 * * Sun-Thu"
  displayName: M-F 3:00 AM (UTC + 5:30) India daily build
  branches:
    include:
    - /features/india/*
- cron: "0 8 * * Mon-Fri"
  displayName: M-F 3:00 AM (UTC - 5) NC daily build
  branches:
    include:
    - /features/nc/*

In the first schedule, M-F 3:00 AM (UTC + 5:30) India daily build , the cron syntax ( mm HH DD MM DW ) is
30 21 * * Sun-Thu .

Minutes and Hours - 30 21 - This maps to 21:30 UTC ( 9:30 PM UTC ). Since the specified time zone in the
classic editor is UTC + 5:30 , we need to subtract 5 hours and 30 minutes from the desired build time of 3:00
AM to arrive at the desired UTC time to specify for the YAML trigger.
Days and Months are specified as wildcards since this schedule doesn't specify to run only on certain days of
the month or on a specific month.
Days of the week - Sun-Thu - because of the timezone conversion, for our builds to run at 3:00 AM in the UTC
+ 5:30 India time zone, we need to specify starting them the previous day in UTC time. We could also specify
the days of the week as 0-4 or 0,1,2,3,4 .

In the second schedule, M-F 3:00 AM (UTC - 5) NC daily build , the cron syntax is 0 8 * * Mon-Fri .
Minutes and Hours - 0 8 - This maps to 8:00 AM UTC . Since the specified time zone in the classic editor is
UTC - 5:00 , we need to add 5 hours to the desired build time of 3:00 AM to arrive at the desired UTC time
to specify for the YAML trigger.
Days and Months are specified as wildcards since this schedule doesn't specify to run only on certain days of
the month or on a specific month.
Days of the week - Mon-Fri - Because our timezone conversions don't span multiple days of the week for our
desired schedule, we don't need to do any conversion here. We could also specify the days of the week as 1-5
or 1,2,3,4,5 .

IMPORTANT
The UTC time zones in YAML scheduled triggers don't account for daylight savings time.

Example: Nightly build with different frequencies


In this example, the classic editor scheduled trigger has two entries, producing the following builds.
Every Monday - Friday at 3:00 AM UTC, build branches that meet the master and releases/* branch filter
criteria

Every Sunday at 3:00 AM UTC, build the releases/lastversion branch, even if the source or pipeline hasn't
changed
The equivalent YAML scheduled trigger is:

schedules:
- cron: "0 3 * * Mon-Fri"
  displayName: M-F 3:00 AM (UTC) daily build
  branches:
    include:
    - master
    - /releases/*
- cron: "0 3 * * Sun"
  displayName: Sunday 3:00 AM (UTC) weekly latest version build
  branches:
    include:
    - /releases/lastversion
  always: true

In the first schedule, M-F 3:00 AM (UTC) daily build , the cron syntax is 0 3 * * Mon-Fri .
Minutes and Hours - 0 3 - This maps to 3:00 AM UTC . Since the specified time zone in the classic editor is
UTC , we don't need to do any time zone conversions.
Days and Months are specified as wildcards since this schedule doesn't specify to run only on certain days of
the month or on a specific month.
Days of the week - Mon-Fri - because there is no timezone conversion, the days of the week map directly from
the classic editor schedule. We could also specify the days of the week as 1,2,3,4,5 .
In the second schedule, Sunday 3:00 AM (UTC) weekly latest version build , the cron syntax is 0 3 * * Sun .
Minutes and Hours - 0 3 - This maps to 3:00 AM UTC . Since the specified time zone in the classic editor is
UTC , we don't need to do any time zone conversions.
Days and Months are specified as wildcards since this schedule doesn't specify to run only on certain days of
the month or on a specific month.
Days of the week - Sun - Because our timezone conversions don't span multiple days of the week for our
desired schedule, we don't need to do any conversion here. We could also specify the days of the week as 0 .
We also specify always: true since this build is scheduled to run whether or not the source code has been
updated.
Scheduled builds are not yet supported in YAML syntax. After you create your YAML build pipeline, you can use
pipeline settings to specify a scheduled trigger.
YAML pipelines are not yet available on TFS.
FAQ
I defined a schedule in the YAML file. But it didn't run. What happened?
My YAML schedules were working fine. But, they stopped working now. How do I debug this?
My code hasn't changed, yet a scheduled build is triggered. Why?
I see the planned run in the Scheduled runs panel. However, it does not run at that time. Why?
Schedules defined in YAML pipeline work for one branch but not the other. How do I fix this?
I defined a schedule in the YAML file. But it didn't run. What happened?
Check the next few runs that Azure Pipelines has scheduled for your pipeline. You can find these by
selecting the Scheduled runs action in your pipeline. The list is filtered down to only show you the
upcoming few runs over the next few days. If this does not meet your expectation, it is probably the case
that you have mistyped your cron schedule, or you do not have the schedule defined in the correct branch.
Read the topic above to understand how to configure schedules. Reevaluate your cron syntax. All the times
for cron schedules are in UTC.
Make a small trivial change to your YAML file and push that update into your repository. If there was any
problem in reading the schedules from the YAML file earlier, it should be fixed now.
If you have any schedules defined in the UI, then your YAML schedules are not honored. Ensure that you do
not have any UI schedules by navigating to the editor for your pipeline and then selecting Triggers .
There is a limit on the number of runs you can schedule for a pipeline. Read more about limits.
If there are no changes to your code, then Azure Pipelines may not start new runs. Learn how to override
this behavior.
My YAML schedules were working fine. But, they stopped working now. How do I debug this?
If you did not specify always:true , your pipeline won't be scheduled unless there are any updates made to
your code. Check whether there have been any code changes and how you configured the schedules.
There is a limit on how many times you can schedule your pipeline. Check if you have exceeded those
limits.
Check if someone enabled additional schedules in the UI. Open the editor for your pipeline, and select
Triggers . If they defined schedules in the UI, then your YAML schedules won't be honored.
Check if your pipeline is paused or disabled. Select Settings for your pipeline.
Check the next few runs that Azure Pipelines has scheduled for your pipeline. You can find these by
selecting the Scheduled runs action in your pipeline. If you do not see the schedules that you expected,
make a small trivial change to your YAML file, and push the update to your repository. This should re-sync
the schedules.
If you use GitHub for storing your code, it is possible that Azure Pipelines may have been throttled by
GitHub when it tried to start a new run. Check if you can start a new run manually.
My code hasn't changed, yet a scheduled build is triggered. Why?
You might have enabled an option to always run a scheduled build even if there are no code changes. If
you use a YAML file, verify the syntax for the schedule in the YAML file. If you use classic pipelines, verify if
you checked this option in the scheduled triggers.
You might have updated the build pipeline or some property of the pipeline. This will cause a new run to be
scheduled even if you have not updated your source code. Verify the History of changes in the pipeline
using the classic editor.
You might have updated the service connection used to connect to the repository. This will cause a new run
to be scheduled even if you have not updated your source code.
Azure Pipelines first checks if there are any updates to your code. If Azure Pipelines is unable to reach your
repository or get this information, it will either start a scheduled run anyway or it will create a failed run to
indicate that it is unable to reach the repository. If you notice that a run was created and that failed
immediately, this is likely the reason. It is a dummy build to let you know that Azure Pipelines is unable to
reach your repository.
I see the planned run in the Scheduled runs panel. However, it does not run at that time. Why?
The Scheduled runs panel shows all potential schedules. However, it may not actually run unless you have
made real updates to the code. To force a schedule to always run, ensure that you have set the always
property in the YAML pipeline, or checked the option to always run in a classic pipeline.
Schedules defined in YAML pipeline work for one branch but not the other. How do I fix this?
Schedules are defined in YAML files, and these files are associated with branches. If you want a pipeline to be
scheduled for a particular branch, say features/X, then make sure that the YAML file in that branch has the cron
schedule defined in it, and that it has the correct branch inclusions for the schedule. The YAML file in features/X
branch should have the following in this example:

schedules:
- cron: "0 12 * * 0" # replace with your schedule
  branches:
    include:
    - features/X
Trigger one pipeline after another

Large products have several components that are dependent on each other. These components are often
independently built. When an upstream component (a library, for example) changes, the downstream
dependencies have to be rebuilt and revalidated.
In situations like these, add a pipeline trigger to run your pipeline upon the successful completion of the
triggering pipeline .
YAML
Classic
To trigger a pipeline upon the completion of another, specify the triggering pipeline as a pipeline resource.

NOTE
Previously, you may have navigated to the classic editor for your YAML pipeline and configured build completion
triggers in the UI. While that model still works, it is no longer recommended. The recommended approach is to specify
pipeline triggers directly within the YAML file. Build completion triggers as defined in the classic editor have various
drawbacks, which have now been addressed in pipeline triggers. For instance, there is no way to trigger a pipeline on the
same branch as that of the triggering pipeline using build completion triggers.

In the following example, we have two pipelines - app-ci (the pipeline defined by the YAML snippet) and
security-lib-ci (the pipeline referenced by the pipeline resource). We want the app-ci pipeline to run
automatically every time a new version of the security library is built in the master branch or any releases branch.

# this is being defined in app-ci pipeline


resources:
  pipelines:
  - pipeline: securitylib # Name of the pipeline resource
    source: security-lib-ci # Name of the pipeline referenced by the pipeline resource
    trigger:
      branches:
      - releases/*
      - master

pipeline: securitylib specifies the name of the pipeline resource, and is used when referring to the
pipeline resource from other parts of the pipeline, such as pipeline resource variables.
source: security-lib-ci specifies the name of the pipeline referenced by this pipeline resource. You can
retrieve a pipeline's name from the Azure DevOps portal in several places, such as the Pipelines landing
page. To configure the pipeline name setting, edit the YAML pipeline, choose Triggers from the settings
menu, and navigate to the YAML pane.
NOTE
If the triggering pipeline is in another Azure DevOps project, you must specify the project name using
project: OtherProjectName . For more information, see pipeline resource.
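For instance, if security-lib-ci lived in another project, the resource from the snippet above might look like the following sketch (the project name is illustrative):

resources:
  pipelines:
  - pipeline: securitylib
    source: security-lib-ci
    project: OtherProjectName   # project that contains the triggering pipeline
    trigger:
      branches:
      - releases/*
      - master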

Similar to CI triggers, you can specify the branches to include or exclude:

# this is being defined in app-ci pipeline


resources:
  pipelines:
  - pipeline: securitylib
    source: security-lib-ci
    trigger:
      branches:
        include:
        - releases/*
        exclude:
        - releases/old*

NOTE
If your filters aren't working, try using the prefix refs/heads/ . For example, use refs/heads/releases/old* instead of
releases/old* .

If the triggering pipeline and the triggered pipeline use the same repository, then both the pipelines will run using
the same commit when one triggers the other. This is helpful if your first pipeline builds the code, and the second
pipeline tests it. However, if the two pipelines use different repositories, then the triggered pipeline will use the
version of the code in the branch specified by the Default branch for manual and scheduled builds setting,
as described in the following section.
Branch considerations for pipeline completion triggers
Pipeline completion triggers use the Default branch for manual and scheduled builds setting to determine
which branch's version of a YAML pipeline's branch filters to evaluate when determining whether to run a pipeline
as the result of another pipeline completing. By default this setting points to the default branch of the repository.
When a pipeline completes, the Azure DevOps runtime evaluates the pipeline resource trigger branch filters of
any pipelines with pipeline completion triggers that reference the completed pipeline. A pipeline can have
multiple versions in different branches, so the runtime evaluates the branch filters in the pipeline version in the
branch specified by the Default branch for manual and scheduled builds setting. If there is a match, the
pipeline runs, but the version of the pipeline that runs may be in a different branch depending on whether the
triggered pipeline is in the same repository as the completed pipeline.
If the two pipelines are in different repositories, the triggered pipeline version in the branch specified by
Default branch for manual and scheduled builds is run.
If the two pipelines are in the same repository, the triggered pipeline version in the same branch as the
triggering pipeline is run, even if that branch is different than the Default branch for manual and
scheduled builds , and even if that version does not have branch filters that match the completed pipeline's
branch. This is because the branch filters from the Default branch for manual and scheduled builds
branch are used to determine if the pipeline should run, and not the branch filters in the version that is in the
completed pipeline branch.
If your pipeline completion triggers don't seem to be firing, check the value of the Default branch for manual
and scheduled builds setting for the triggered pipeline. The branch filters in that branch's version of the
pipeline are used to determine whether the pipeline completion trigger initiates a run of the pipeline. By default,
Default branch for manual and scheduled builds is set to the default branch of the repository, but you can
change it after the pipeline is created.
A typical scenario in which the pipeline completion trigger doesn't fire is when a new branch is created, the
pipeline completion trigger branch filters are modified to include this new branch, but when the first pipeline
completes on a branch that matches the new branch filters, the second pipeline doesn't trigger. This happens if the
branch filters in the pipeline version in the Default branch for manual and scheduled builds branch don't
match the new branch. To resolve this trigger issue you have the following two options.
Update the branch filters in the pipeline in the Default branch for manual and scheduled builds branch
so that they match the new branch.
Update the Default branch for manual and scheduled builds setting to a branch that has a version of the
pipeline with the branch filters that match the new branch.
To view and update the Default branch for manual and scheduled builds setting:
1. Navigate to the pipeline details for your pipeline, and choose Edit .

2. Choose ... and select Triggers .


3. Select YAML , Get sources , and view the Default branch for manual and scheduled builds setting. If
you change it, choose Save or Save & queue to save the change.

Behavior when pipeline completion triggers and CI triggers are present


When you specify both CI triggers and pipeline triggers, you can expect new runs to be started every time (a) an
update is made to the repository and (b) a run of the upstream pipeline is completed. Consider an example of a
pipeline B that depends on A . Let us also assume that both of these pipelines use the same repository for the
source code, and that both of them also have CI triggers configured. When you push an update to the repository,
then:
A new run of A is started.
At the same time, a new run of B is started. This run will consume the previously produced artifacts from A .
As A completes, it will trigger another run of B .

To prevent triggering two runs of B in this example, you must remove its CI trigger or pipeline trigger.
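For example, if you decide that B should run only when A completes, a minimal sketch of B's pipeline (the resource name is illustrative) could disable its own CI trigger like this:

# Pipeline B
trigger: none                 # no CI trigger of its own

resources:
  pipelines:
  - pipeline: upstream        # illustrative resource name
    source: A                 # name of the triggering pipeline
    trigger: true             # run B whenever A completes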
Triggers in pipeline resources are not in Azure DevOps Server 2019. Choose the Classic tab in the documentation
for information on build completion triggers.
Release triggers

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

NOTE
This topic covers classic release pipelines. To understand triggers in YAML pipelines, see pipeline triggers.

Release triggers are an automation tool to deploy your application. When the trigger conditions are met, the
pipeline will deploy your artifacts to the environment/stages you already specified.

Continuous deployment triggers


Continuous deployment triggers allow you to create a release every time a new build artifact is available. This
feature is currently available only for builds from Azure DevOps, TFS, and Git-based repositories.

Build branch filters allow you to trigger a release only for a build that is from one of the branches selected here.
You also have the option to specify branch tags. If you do so, a release will be triggered only if a new build
tagged with the keywords specified here is available.

NOTE
Automatically creating a release does not mean it will be automatically deployed to a stage. You must set up stage
triggers to deploy your app to the various stages.
Scheduled release triggers
Scheduled release triggers allow you to create new releases at specific times.
Select the schedule icon under the Artifacts section. Toggle the Enabled/Disabled button and specify your
release schedule. You can set up multiple schedules to trigger a release.

Pull request triggers


If you choose to enable pull request triggers, a release will be created every time a selected artifact is
available as part of a pull request workflow.

To use a pull request trigger, you must also enable it for specific stages. We will go through stage triggers in the
next section. You may also want to set up branch policies for your branches.

Stage triggers
Stage triggers allow you to set up specific conditions to trigger deployment to a specific stage.
Select trigger : Set the trigger that will start the deployment to this stage automatically. Select "Release"
to deploy to the stage every time a new release is created. Use the "Stage" option to deploy after
deployments to selected stages are successful. To allow only manual deployments, select "Manual".
Ar tifacts filter : Select artifact condition(s) to trigger a new deployment. A release will be deployed to
this stage only if all artifact conditions are met.

Schedule : Trigger a new deployment to this stage at a specific time.


Pull-request deployment : Enabling this will allow pull request based releases to be deployed to this
stage. Keep it disabled if this is a critical or production stage.
Pre-deployment approvals : Select the users who can approve or reject deployments to this stage. By
default, all users must approve the deployment. If a group is added, one user in the group must approve
the deployment. You can also specify the timeout (the maximum time that an approval is allowed to be
pending before it is automatically rejected) and approval policies.

Gates : Allow you to set up specific gates to evaluate before the deployment.

Deployment queue settings : Allow you to configure actions when multiple releases are queued for
deployment.
NOTE
TFS 2015 : The following features are not available in TFS 2015 - continuous deployment triggers for multiple artifact
sources, multiple scheduled triggers, combining scheduled and continuous deployment triggers in the same pipeline,
and continuous deployment based on the branch or tag of a build.

Help and support


See our troubleshooting page.
Get advice on Stack Overflow, and feel free to post your questions, search for answers, or suggest a feature
on our Azure DevOps Developer Community. Visit our Support page.
Task types & usage

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

A task is the building block for defining automation in a pipeline. A task is simply a packaged script or procedure
that has been abstracted with a set of inputs.
When you add a task to your pipeline, it may also add a set of demands to the pipeline. The demands define the
prerequisites that must be installed on the agent for the task to run. When you run the build or deployment, an
agent that meets these demands will be chosen.
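You can also declare demands explicitly on a self-hosted pool in YAML. A minimal sketch (the pool name and capability names below are illustrative, not from this article):

pool:
  name: MyPrivatePool          # hypothetical self-hosted agent pool
  demands:
  - npm                        # the chosen agent must report an "npm" capability
  - Agent.OS -equals Linux     # demand on a named agent capability

steps:
- script: npm --version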
When you run a job, all the tasks are run in sequence, one after the other. To run the same set of tasks in parallel
on multiple agents, or to run some tasks without using an agent, see jobs.
By default, all tasks run in the same context, whether that's on the host or in a job container. You may optionally
use step targets to control context for an individual task.
Learn more about how to specify properties for a task with the YAML schema.

Custom tasks
We provide some built-in tasks to enable fundamental build and deployment scenarios. We have also provided
guidance for creating your own custom task.
In addition, Visual Studio Marketplace offers a number of extensions, each of which, when installed to your
subscription or collection, extends the task catalog with one or more tasks. Furthermore, you can write your own
custom extensions to add tasks to Azure Pipelines or TFS.
In YAML pipelines, you refer to tasks by name. If a name matches both an in-box task and a custom task, the in-
box task will take precedence. You can use the task GUID or a fully-qualified name for the custom task to avoid
this risk:

steps:
- task: myPublisherId.myExtensionId.myContributionId.myTaskName@1 #format example
- task: qetza.replacetokens.replacetokens-task.replacetokens@3 #working example

To find myPublisherId and myExtensionId , select Get on a task in the marketplace. The values after the itemName
in your URL string are myPublisherId and myExtensionId . You can also find the fully-qualified name by adding
the task to a Release pipeline and selecting View YAML when editing the task.

Task versions
Tasks are versioned, and you must specify the major version of the task used in your pipeline. This can help to
prevent issues when new versions of a task are released. Tasks are typically backwards compatible, but in some
scenarios you may encounter unpredictable errors when a task is automatically updated.
When a new minor version is released (for example, 1.2 to 1.3), your build or release will automatically use the
new version. However, if a new major version is released (for example 2.0), your build or release will continue to
use the major version you specified until you edit the pipeline and manually change to the new major version.
The build or release log will include an alert that a new major version is available.
You can set which minor version gets used by specifying the full version number of a task after the @ sign
(example: GoTool@0.3.2). You can only use task versions that exist for your organization.
YAML
Classic
In YAML, you specify the major version using @ in the task name. For example, to pin to version 2 of the
PublishTestResults task:

steps:
- task: PublishTestResults@2
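A minimal sketch of pinning to an exact version instead (the task and version shown are illustrative; only versions that exist for your organization can be used):

steps:
- task: GoTool@0.3.2     # pinned to an exact version rather than just the major version
  inputs:
    version: '1.15'      # illustrative input: the Go version for this installer task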

YAML pipelines aren't available in TFS.

Task control options


Each task offers you some Control Options .
YAML
Classic
Control options are available as keys on the task section.

- task: string # reference to a task and version, e.g. "VSBuild@1"
  condition: expression # see below
  continueOnError: boolean # 'true' if future steps should run even if this step fails; defaults to 'false'
  enabled: boolean # whether or not to run this step; defaults to 'true'
  timeoutInMinutes: number # how long to wait before timing out the task
  target: string # 'host' or the name of a container resource to target

The timeout period begins when the task starts running. It does not include the time the task is queued or is
waiting for an agent.
In this YAML, PublishTestResults@2 will run even if the previous step fails because of the succeededOrFailed()
condition.

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.7'
    architecture: 'x64'
- task: PublishTestResults@2
  inputs:
    testResultsFiles: "**/TEST-*.xml"
  condition: succeededOrFailed()

NOTE
For the full schema, see YAML schema for task .

Conditions
Only when all previous dependencies have succeeded. This is the default if there is not a condition set in
the YAML.
Even if a previous dependency has failed, unless the run was canceled. Use succeededOrFailed() in the
YAML for this condition.
Even if a previous dependency has failed, even if the run was canceled. Use always() in the YAML for this
condition.
Only when a previous dependency has failed. Use failed() in the YAML for this condition.

Custom conditions which are composed of expressions
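For example, a minimal sketch combining these built-in functions with a custom expression (the script contents are placeholders):

steps:
- script: ./build.sh
  displayName: Build
- script: echo Collecting diagnostics
  condition: failed()   # runs only if a previous step failed
- script: echo Publishing from main only
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))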


Step target
Tasks run in an execution context, which is either the agent host or a container. An individual step may override
its context by specifying a target . Available options are the word host to target the agent host plus any
containers defined in the pipeline. For example:

resources:
  containers:
  - container: pycontainer
    image: python:3.8

steps:
- task: SampleTask@1
  target: host
- task: AnotherTask@1
  target: pycontainer

Here, the SampleTask runs on the host and AnotherTask runs in a container.
YAML pipelines aren't available in TFS.

Build tool installers (Azure Pipelines)


Tool installers enable your build pipeline to install and control your dependencies. Specifically, you can:
Install a tool or runtime on the fly (even on Microsoft-hosted agents) just in time for your CI build.
Validate your app or library against multiple versions of a dependency such as Node.js.
For example, you can set up your build pipeline to run and validate your app for multiple versions of Node.js.
Example: Test and validate your app on multiple versions of Node.js

TIP
Want a visual walkthrough? See our April 19 news release.

YAML
Classic
Create an azure-pipelines.yml file in your project's base directory with the following contents.

pool:
  vmImage: 'Ubuntu 16.04'

steps:
# Node install
- task: NodeTool@0
  displayName: Node install
  inputs:
    versionSpec: '6.x' # The version we're installing
# Write the installed version to the command line
- script: which node

Create a new build pipeline and run it. Observe how the build is run. The Node.js Tool Installer downloads the
Node.js version if it is not already on the agent. The Command Line script logs the location of the Node.js version
on disk.
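To validate against more than one Node.js version, one hedged approach (not shown in this article) is to combine the installer with a job matrix; the versions listed below are only examples:

pool:
  vmImage: 'Ubuntu 16.04'

strategy:
  matrix:
    node_10:
      nodeVersion: '10.x'
    node_12:
      nodeVersion: '12.x'

steps:
- task: NodeTool@0
  inputs:
    versionSpec: $(nodeVersion)   # install the version for this matrix leg
- script: node --version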
YAML pipelines aren't available in TFS.
Tool installer tasks
For a list of our tool installer tasks, see Tool installer tasks.
Disabling in-box and Marketplace tasks
On the organization settings page, you can disable Marketplace tasks, in-box tasks, or both. Disabling
Marketplace tasks can help increase security of your pipelines. If you disable both in-box and Marketplace tasks,
only tasks you install using tfx will be available.

Related articles
Jobs
Task groups
Built-in task catalog

Help and support


See our troubleshooting page.
Get advice on Stack Overflow, and feel free to post your questions, search for answers, or suggest a feature on
our Azure DevOps Developer Community. You can also get support on our Support page.
Task groups for builds and releases
2/26/2020 • 8 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

NOTE
Task groups are not supported in YAML pipelines. Instead, in that case you can use templates. See YAML schema reference.
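As a rough illustration of that YAML alternative (file names are placeholders), shared steps live in a template that each pipeline references:

# File: templates/shared-steps.yml
steps:
- script: echo shared step

# File: azure-pipelines.yml
steps:
- template: templates/shared-steps.yml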

A task group allows you to encapsulate a sequence of tasks, already defined in a build or a release pipeline, into a
single reusable task that can be added to a build or release pipeline, just like any other task. You can choose to
extract the parameters from the encapsulated tasks as configuration variables, and abstract the rest of the task
information.
The new task group is automatically added to the task catalogue, ready to be added to other release and build
pipelines. Task groups are stored at the project level, and are not accessible outside the project scope.
Task groups are a way to standardize and centrally manage deployment steps for all your applications. When you
include a task group in your definitions, and then make a change centrally to the task group, the change is
automatically reflected in all the definitions that use the task group. There is no need to change each one
individually.

Before you create a task group...


Ensure that all of the tasks you want to include in a task group have their parameters defined as variables,
such as $(MyVariable) , where you want to be able to configure these parameters when you use the task
group. Variables used in the tasks are automatically extracted and converted into parameters for the task
group. Values of these configuration variables will be converted into default values for the task group.
If you specify a value (instead of a variable) for a parameter, that value becomes a fixed parameter value
and cannot be exposed as a parameter to the task group.
Parameters of the encapsulated tasks for which you specified a value (instead of a variable), or you didn't
provide a value for, are not configurable in the task group when added to a build or release pipeline.
Task conditions (such as "Run this task only when a previous task has failed" for a PowerShell Script task)
can be configured in a task group and these settings are persisted with the task group.
When you save the task group, you can provide a name and a description for the new task group, and
select a category where you want it to appear in the Task catalog dialog. You can also change the default
values for each of the parameters.
When you queue a build or a release, the encapsulated tasks are extracted and the values you entered for
the task group parameters are applied to the tasks.
Changes you make to a task group are reflected in every instance of the task group.
Create a task group
1. Ensure that all the tasks you intend to include do not contain any linked parameters. The easy way to do
this is to choose Unlink all in the settings panel for the entire process.

2. Select a sequence of tasks in a build or release pipeline (when using a mouse, click on the checkmarks of
each one). Then open the shortcut menu and choose Create task group .

3. Specify a name and description for the new task group, and the category (tab in the Add tasks panel) you
want to add it to.
4. After you choose Create , the new task group is created and replaces the selected tasks in your pipeline.
5. All the '$(vars)' from the underlying tasks, excluding the predefined variables, will surface as the mandatory
parameters for the newly created task group.
For example, let's say you have a task input $(foobar), which you don't intend to parameterize. However,
when you create a task group, the task input is converted into task group parameter 'foobar'. Now, you can
provide the default value for the task group parameter 'foobar' as $(foobar). This ensures that at runtime,
the expanded task gets the same input it's intended to.
6. Save your updated pipeline.

Manage task groups


All the task groups you create in the current project are listed in the Task Groups page of Azure Pipelines .

Use the Export shortcut command to save a copy of the task group as a JSON pipeline, and the Import icon to
import previously saved task group definitions. Use this feature to transfer task groups between projects and
enterprises, or replicate and save copies of your task groups.
Select a task group name to open the details page.

In the Tasks page you can edit the tasks that make up the task group. For each encapsulated task you can
change the parameter values for the non-variable parameters, edit the existing parameter variables, or convert
parameter values to and from variables. When you save the changes, all definitions that use this task group will
pick up the changes.
All the variable parameters of the task group will show up as mandatory parameters in the pipeline definition. You
can also set the default value for the task group parameters.
In the History tab you can see the history of changes to the group.
In the References tab you can expand lists of all the build and release pipelines, and other task groups,
that use (reference) this task group. This is useful to ensure changes do not have unexpected effects on
other processes.

Create previews and updated versions of task groups


All of the built-in tasks in Azure Pipelines and TFS are versioned. This allows build and release pipelines to
continue to use the existing version of a task while new versions are developed, tested, and released. In Azure
Pipelines, you can version your own custom task groups so that they behave in the same way and provide the
same advantages.
1. After you finish editing a task group, choose Save as draft instead of Save .

2. The string -test is appended to the task group version number. When you are happy with the changes,
choose Publish draft . You can choose whether to publish it as a preview or as a production-ready version.
3. You can now use the updated task group in your build and release processes; either by changing the
version number of the task group in an existing pipeline or by adding it from the Add tasks panel.

As with the built-in tasks, the default when you add a task group is the highest non-preview version.

4. After you have finished testing the updated task group, choose Publish preview . The Preview string is
removed from the version number string. It will now appear in definitions as a "production-ready" version.
5. In a build or release pipeline that already contains this task group, you can now select the new "production-
ready" version. When you add the task group from the Add tasks panel, it automatically selects the new
"production-ready" version.

Working with task group versions


Any task group update can be a minor or major version update.
Minor version
Action: You directly save the task group after editing it, instead of saving it as a draft.
Effect: The version number doesn't change. Let's say you have a task group at version 1.0. You can make any
number of minor version updates, that is, 1.1, 1.2, 1.3, and so on. In your pipeline, the task group version shows as 1.*,
and the latest changes show up in the pipeline definition automatically.
Reason: This is meant for a small change in the task group, where you expect the pipelines to use this new
change without editing the version in the pipeline definition.
Major version
Action: You save the task group as a draft, create a preview, validate the task group, and then publish the
preview as a major version.
Effect: The task group bumps up to a new version. Let's say you have a task group at version 1.*. A new version
gets published as 2.*, 3.*, 4.*, and so on. A notification about the availability of the new version shows up in all the
pipeline definitions where this task group is used. Users have to explicitly update to the new version of the task group in
their pipelines.
Reason: When you have a substantial change that might break existing pipelines, you want to test it
out and roll it out as a new version. Users can choose to upgrade to the new version or stay on the current
one. This functionality is the same as a normal task version update.
However, if your task group update is not a breaking change but you would like to validate it first and then make
pipelines consume the latest changes, you can follow the steps below.
1. Update the task group with your desired changes and save it as a draft. A new draft task group (with '-Draft' appended to its name) is
created, containing the changes you made. This draft task group is available for you to consume
in your pipelines.
2. Instead of publishing it as a preview, consume this draft task group directly in your test pipeline.
3. Validate the new draft task group in your test pipeline. Once you are confident, go back to your main task
group, make the same changes, and save it directly. This is treated as a minor version update.
4. The new changes now show up in all the pipelines where this task group is used.
5. Now you can delete your draft task group.

Related topics
Tasks
Task jobs

Help and support


See our troubleshooting page.
Get advice on Stack Overflow, and feel free to post your questions, search for answers, or suggest a feature on
our Azure DevOps Developer Community. You can also get support on our Support page.
Template types & usage
11/2/2020 • 24 minutes to read • Edit Online

Templates let you define reusable content, logic, and parameters. Templates function in two ways. You can insert
reusable content with a template or you can use a template to control what is allowed in a pipeline.
If a template is used to include content, it functions like an include directive in many programming languages.
Content from one file is inserted into another file. When a template controls what is allowed in a pipeline, the
template defines logic that another file must follow.
Use templates to define your logic once and then reuse it several times. Templates combine the content of
multiple YAML files into a single pipeline. You can pass parameters into a template from your parent pipeline.

Parameters
You can specify parameters and their data types in a template and pass those parameters to a pipeline. You can
also use parameters outside of templates.
Passing parameters
Parameters must contain a name and data type. In azure-pipelines.yml , when the parameter yesNo is set to a
boolean value, the build succeeds. When yesNo is set to a string such as apples , the build fails.

# File: simple-param.yml
parameters:
- name: yesNo # name of the parameter; required
  type: boolean # data type of the parameter; required
  default: false

steps:
- script: echo ${{ parameters.yesNo }}

# File: azure-pipelines.yml
trigger:
- master

extends:
  template: simple-param.yml
  parameters:
    yesNo: false # set to a non-boolean value to have the build fail

Parameters to select a template at runtime


You can call different templates from a pipeline YAML depending on a condition. In this example, the
experimental.yml YAML will run when the parameter experimentalTemplate is true.
#azure-pipeline.yml
parameters:
- name: experimentalTemplate
displayName: 'Use experimental build process?'
type: boolean
default: false

steps:
- ${{ if eq(parameters.experimentalTemplate, true) }}:
- template: experimental.yml
- ${{ if not(eq(parameters.experimentalTemplate, true)) }}:
- template: stable.yml

Parameter data types


DATA TYPE         NOTES

string            string

number            may be restricted to values:, otherwise any number-like string is accepted

boolean           true or false

object            any YAML structure

step              a single step

stepList          sequence of steps

job               a single job

jobList           sequence of jobs

deployment        a single deployment job

deploymentList    sequence of deployment jobs

stage             a single stage

stageList         sequence of stages

The step, stepList, job, jobList, deployment, deploymentList, stage, and stageList data types all use standard
YAML schema format. This example includes string, number, boolean, object, step, and stepList.
parameters:
- name: myString
type: string
default: a string
- name: myMultiString
type: string
default: default
values:
- default
- ubuntu
- name: myNumber
type: number
default: 2
values:
- 1
- 2
- 4
- 8
- 16
- name: myBoolean
type: boolean
default: true
- name: myObject
type: object
default:
foo: FOO
bar: BAR
things:
- one
- two
- three
nested:
one: apple
two: pear
count: 3
- name: myStep
type: step
default:
script: echo my step
- name: mySteplist
type: stepList
default:
- script: echo step one
- script: echo step two

trigger: none

jobs:
- job: stepList
steps: ${{ parameters.mySteplist }}
- job: myStep
steps:
- ${{ parameters.myStep }}

You can iterate through an object and print out each string in the object.
parameters:
- name: listOfStrings
type: object
default:
- one
- two

steps:
- ${{ each value in parameters.listOfStrings }}:
- script: echo ${{ value }}

Extend from a template


To increase security, you can enforce that a pipeline extends from a particular template. The file start.yml
defines the parameter buildSteps , which is then used in the pipeline azure-pipelines.yml . In start.yml , if a
buildStep gets passed with a script step, then it is rejected and the pipeline build fails. When extending from a
template, you can increase security by adding a required template approval.

# File: start.yml
parameters:
- name: buildSteps # the name of the parameter is buildSteps
  type: stepList # data type is StepList
  default: [] # default value of buildSteps
stages:
- stage: secure_buildstage
  pool: Hosted VS2017
  jobs:
  - job: secure_buildjob
    steps:
    - script: echo This happens before code
      displayName: 'Base: Pre-build'
    - script: echo Building
      displayName: 'Base: Build'

    - ${{ each step in parameters.buildSteps }}:
      - ${{ each pair in step }}:
          ${{ if ne(pair.value, 'CmdLine@2') }}:
            ${{ pair.key }}: ${{ pair.value }}
          ${{ if eq(pair.value, 'CmdLine@2') }}:
            '${{ pair.value }}': error

    - script: echo This happens after code
      displayName: 'Base: Signing'

# File: azure-pipelines.yml
trigger:
- master

extends:
template: start.yml
parameters:
buildSteps:
- bash: echo Test #Passes
displayName: succeed
- bash: echo "Test"
displayName: succeed
- task: CmdLine@2
displayName: Test 3 - Will Fail
inputs:
script: echo "Script Test"
Extend from a template with resources
You can also use extends to extend from a template in your Azure pipeline that contains resources.

# File: azure-pipelines.yml
trigger:
- none

extends:
template: resource-template.yml

# File: resource-template.yml
resources:
pipelines:
- pipeline: my-pipeline
source: sourcePipeline

steps:
- script: echo "Testing resource template"

Insert a template
You can copy content from one YAML and reuse it in a different YAML. This saves you from having to manually
include the same logic in multiple places. The include-npm-steps.yml file template contains steps that are
reused in azure-pipelines.yml .

# File: templates/include-npm-steps.yml

steps:
- script: npm install
- script: yarn install
- script: npm run compile

# File: azure-pipelines.yml

jobs:
- job: Linux
pool:
vmImage: 'ubuntu-latest'
steps:
- template: templates/include-npm-steps.yml # Template reference
- job: Windows
pool:
vmImage: 'windows-latest'
steps:
- template: templates/include-npm-steps.yml # Template reference

Step reuse
You can insert a template to reuse one or more steps across several jobs. In addition to the steps from the
template, each job can define additional steps.

# File: templates/npm-steps.yml
steps:
- script: npm install
- script: npm test
# File: azure-pipelines.yml

jobs:
- job: Linux
pool:
vmImage: 'ubuntu-16.04'
steps:
- template: templates/npm-steps.yml # Template reference

- job: macOS
pool:
vmImage: 'macOS-10.14'
steps:
- template: templates/npm-steps.yml # Template reference

- job: Windows
pool:
vmImage: 'vs2017-win2016'
steps:
- script: echo This script runs before the template's steps, only on Windows.
- template: templates/npm-steps.yml # Template reference
- script: echo This step runs after the template's steps.

Job reuse
Much like steps, jobs can be reused with templates.

# File: templates/jobs.yml
jobs:
- job: Ubuntu
pool:
vmImage: 'ubuntu-latest'
steps:
- bash: echo "Hello Ubuntu"

- job: Windows
pool:
vmImage: 'windows-latest'
steps:
- bash: echo "Hello Windows"

# File: azure-pipelines.yml

jobs:
- template: templates/jobs.yml # Template reference

Stage reuse
Stages can also be reused with templates.

# File: templates/stages1.yml
stages:
- stage: Angular
jobs:
- job: angularinstall
steps:
- script: npm install angular
# File: templates/stages2.yml
stages:
- stage: Build
jobs:
- job: build
steps:
- script: npm run build

# File: azure-pipelines.yml
trigger:
- master

pool:
vmImage: 'ubuntu-latest'

stages:
- stage: Install
jobs:
- job: npminstall
steps:
- task: Npm@1
inputs:
command: 'install'
- template: templates/stages1.yml
- template: templates/stages2.yml

Job, stage, and step templates with parameters

# File: templates/npm-with-params.yml

parameters:
- name: name # defaults for any parameters that aren't specified
default: ''
- name: vmImage
default: ''

jobs:
- job: ${{ parameters.name }}
pool:
vmImage: ${{ parameters.vmImage }}
steps:
- script: npm install
- script: npm test

When you consume the template in your pipeline, specify values for the template parameters.
# File: azure-pipelines.yml

jobs:
- template: templates/npm-with-params.yml # Template reference
parameters:
name: Linux
vmImage: 'ubuntu-16.04'

- template: templates/npm-with-params.yml # Template reference


parameters:
name: macOS
vmImage: 'macOS-10.14'

- template: templates/npm-with-params.yml # Template reference


parameters:
name: Windows
vmImage: 'vs2017-win2016'

You can also use parameters with step or stage templates. For example, steps with parameters:

# File: templates/steps-with-params.yml

parameters:
- name: 'runExtendedTests' # defaults for any parameters that aren't specified
type: boolean
default: false

steps:
- script: npm test
- ${{ if eq(parameters.runExtendedTests, true) }}:
- script: npm test --extended

When you consume the template in your pipeline, specify values for the template parameters.

# File: azure-pipelines.yml

steps:
- script: npm install

- template: templates/steps-with-params.yml # Template reference


parameters:
runExtendedTests: 'true'

NOTE
Scalar parameters without a specified type are treated as strings. For example, eq(true, parameters['myparam']) will
return true , even if the myparam parameter is the word false , if myparam is not explicitly made boolean . Non-
empty strings are cast to true in a Boolean context. That expression could be rewritten to explicitly compare strings:
eq(parameters['myparam'], 'true') .
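For instance, a minimal sketch of the safer string comparison inside a conditional insertion (parameter name and values are illustrative):

parameters:
- name: myparam
  type: string
  default: 'false'

steps:
- ${{ if eq(parameters.myparam, 'true') }}:   # explicit string comparison
  - script: echo Runs only when myparam is the string 'true'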

Parameters are not limited to scalar strings. See the list of data types. For example, using the object type:
# azure-pipelines.yml
jobs:
- template: process.yml
parameters:
pool: # this parameter is called `pool`
vmImage: ubuntu-latest # and it's a mapping rather than a string

# process.yml
parameters:
- name: 'pool'
type: object
default: {}

jobs:
- job: build
pool: ${{ parameters.pool }}

Variable reuse
Variables can be defined in one YAML and included in another template. This could be useful if you want to
store all of your variables in one file. If you are using a template to include variables in a pipeline, the included
template can only be used to define variables. You can use steps and more complex logic when you are
extending from a template. Use parameters instead of variables when you want to restrict type.
In this example, the variable favoriteVeggie is included in azure-pipelines.yml .

# File: vars.yml
variables:
favoriteVeggie: 'brussels sprouts'

# File: azure-pipelines.yml

variables:
- template: vars.yml # Template reference

steps:
- script: echo My favorite vegetable is ${{ variables.favoriteVeggie }}.

Use other repositories


You can keep your templates in other repositories. For example, suppose you have a core pipeline that you want
all of your app pipelines to use. You can put the template in a core repo and then refer to it from each of your
app repos:
# Repo: Contoso/BuildTemplates
# File: common.yml
parameters:
- name: 'vmImage'
default: 'ubuntu 16.04'
type: string

jobs:
- job: Build
pool:
vmImage: ${{ parameters.vmImage }}
steps:
- script: npm install
- script: npm test

Now you can reuse this template in multiple pipelines. Use the resources specification to provide the location
of the core repo. When you refer to the core repo, use @ and the name you gave it in resources .

# Repo: Contoso/LinuxProduct
# File: azure-pipelines.yml
resources:
repositories:
- repository: templates
type: github
name: Contoso/BuildTemplates

jobs:
- template: common.yml@templates # Template reference

# Repo: Contoso/WindowsProduct
# File: azure-pipelines.yml
resources:
repositories:
- repository: templates
type: github
name: Contoso/BuildTemplates
ref: refs/tags/v1.0 # optional ref to pin to

jobs:
- template: common.yml@templates # Template reference
parameters:
vmImage: 'vs2017-win2016'

For type: github, name is <identity>/<repo> as in the examples above. For type: git (Azure Repos), name is
<project>/<repo> . If that project is in a separate Azure DevOps organization, you'll need to configure a service
connection with access to the project and include that in YAML:

resources:
repositories:
- repository: templates
name: Contoso/BuildTemplates
endpoint: myServiceConnection # Azure DevOps service connection
jobs:
- template: common.yml@templates

Repositories are resolved only once, when the pipeline starts up. After that, the same resource is used for the
duration of the pipeline. Only the template files are used. Once the templates are fully expanded, the final
pipeline runs as if it were defined entirely in the source repo. This means that you can't use scripts from the
template repo in your pipeline.
If you want to use a particular, fixed version of the template, be sure to pin to a ref . The refs are either
branches ( refs/heads/<name> ) or tags ( refs/tags/<name> ). If you want to pin a specific commit, first create a tag
pointing to that commit, then pin to that tag.
You may also use @self to refer to the repository where the main pipeline was found. This is convenient for
use in extends templates if you want to refer back to contents in the extending pipeline's repository. For
example:

# Repo: Contoso/Central
# File: template.yml
jobs:
- job: PreBuild
steps: []

# Template reference to the repo where this template was


# included from - consumers of the template are expected
# to provide a "BuildJobs.yml"
- template: BuildJobs.yml@self

- job: PostBuild
steps: []

# Repo: Contoso/MyProduct
# File: azure-pipelines.yml
resources:
repositories:
- repository: templates
type: git
name: Contoso/Central

extends:
template: template.yml@templates

# Repo: Contoso/MyProduct
# File: BuildJobs.yml
jobs:
- job: Build
steps: []

Template expressions
Use template expressions to specify how values are dynamically resolved during pipeline initialization. Wrap
your template expression inside this syntax: ${{ }} .
Template expressions can expand template parameters, and also variables. You can use parameters to influence
how a template is expanded. The parameters object works like the variables object in an expression. Only
predefined variables can be used in template expressions.

NOTE
Expressions are only expanded for stages , jobs , steps , and containers (inside resources ). You cannot, for
example, use an expression inside trigger or a resource like repositories . Additionally, on Azure DevOps 2020 RTW,
you can't use template expressions inside containers .

For example, you define a template:


# File: steps/msbuild.yml

parameters:
- name: 'solution'
default: '**/*.sln'
type: string

steps:
- task: msbuild@1
inputs:
solution: ${{ parameters['solution'] }} # index syntax
- task: vstest@2
inputs:
solution: ${{ parameters.solution }} # property dereference syntax

Then you reference the template and pass it the optional solution parameter:

# File: azure-pipelines.yml

steps:
- template: steps/msbuild.yml
parameters:
solution: my.sln

Context
Within a template expression, you have access to the parameters context that contains the values of parameters
passed in. Additionally, you have access to the variables context that contains all the variables specified in the
YAML file plus many of the predefined variables (noted on each variable in that topic). Importantly, it doesn't
have runtime variables such as those stored on the pipeline or given when you start a run. Template expansion
happens very early in the run, so those variables aren't available.
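As a small illustration of that timing, a statically defined variable can be read by a template expression, while the same variable read with runtime syntax resolves only when the step executes:

variables:
  configuration: release   # statically defined in the YAML, visible to template expressions

steps:
- script: echo ${{ variables.configuration }}   # resolved during template expansion
- script: echo $(configuration)                 # resolved at runtime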
Required parameters
You can add a validation step at the beginning of your template to check for the parameters you require.
Here's an example that checks for the solution parameter using Bash (which enables it to work on any
platform):

# File: steps/msbuild.yml

parameters:
- name: 'solution'
default: ''
type: string

steps:
- bash: |
if [ -z "$SOLUTION" ]; then
echo "##vso[task.logissue type=error;]Missing template parameter \"solution\""
echo "##vso[task.complete result=Failed;]"
fi
env:
SOLUTION: ${{ parameters.solution }}
displayName: Check for required parameters
- task: msbuild@1
inputs:
solution: ${{ parameters.solution }}
- task: vstest@2
inputs:
solution: ${{ parameters.solution }}
To show that the template fails if it's missing the required parameter:

# File: azure-pipelines.yml

# This will fail since it doesn't set the "solution" parameter to anything,
# so the template will use its default of an empty string
steps:
- template: steps/msbuild.yml

Template expression functions


You can use general functions in your templates. You can also use a few template expression functions.
format
Simple string token replacement
Min parameters: 2. Max parameters: N
Example: ${{ format('{0} Build', parameters.os) }} → 'Windows Build'
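A minimal sketch of format used in context (the parameter name and value are illustrative):

parameters:
- name: os
  type: string
  default: Windows

steps:
- script: echo build
  displayName: ${{ format('{0} Build', parameters.os) }}   # renders as "Windows Build"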

coalesce
Evaluates to the first non-empty, non-null string argument
Min parameters: 2. Max parameters: N
Example:

parameters:
- name: 'restoreProjects'
  default: ''
  type: string
- name: 'buildProjects'
  default: ''
  type: string

steps:
- script: echo ${{ coalesce(parameters.restoreProjects, parameters.buildProjects, 'Nothing to see') }}

Insertion
You can use template expressions to alter the structure of a YAML pipeline. For instance, to insert into a
sequence:
# File: jobs/build.yml

parameters:
- name: 'preBuild'
type: stepList
default: []
- name: 'preTest'
type: stepList
default: []
- name: 'preSign'
type: stepList
default: []

jobs:
- job: Build
pool:
vmImage: 'vs2017-win2016'
steps:
- script: cred-scan
- ${{ parameters.preBuild }}
- task: msbuild@1
- ${{ parameters.preTest }}
- task: vstest@2
- ${{ parameters.preSign }}
- script: sign

# File: .vsts.ci.yml

jobs:
- template: jobs/build.yml
parameters:
preBuild:
- script: echo hello from pre-build
preTest:
- script: echo hello from pre-test

When an array is inserted into an array, the nested array is flattened.


To insert into a mapping, use the special property ${{ insert }} .

# Default values
parameters:
- name: 'additionalVariables'
type: object
default: {}

jobs:
- job: build
variables:
configuration: debug
arch: x86
${{ insert }}: ${{ parameters.additionalVariables }}
steps:
- task: msbuild@1
- task: vstest@2

jobs:
- template: jobs/build.yml
parameters:
additionalVariables:
TEST_SUITE: L0,L1
Conditional insertion
If you want to conditionally insert into a sequence or a mapping in a template, use insertions and expression
evaluation. You can also use if statements outside of templates as long as you use template syntax.
For example, to insert into a sequence in a template:

# File: steps/build.yml

parameters:
- name: 'toolset'
default: msbuild
type: string
values:
- msbuild
- dotnet

steps:
# msbuild
- ${{ if eq(parameters.toolset, 'msbuild') }}:
- task: msbuild@1
- task: vstest@2

# dotnet
- ${{ if eq(parameters.toolset, 'dotnet') }}:
- task: dotnet@1
inputs:
command: build
- task: dotnet@1
inputs:
command: test

# File: azure-pipelines.yml

steps:
- template: steps/build.yml
parameters:
toolset: dotnet

For example, to insert into a mapping in a template:

# File: steps/build.yml

parameters:
- name: 'debug'
type: boolean
default: false

steps:
- script: tool
env:
${{ if eq(parameters.debug, true) }}:
TOOL_DEBUG: true
TOOL_DEBUG_DIR: _dbg

steps:
- template: steps/build.yml
parameters:
debug: true

You can also use conditional insertion for variables:


variables:
- name: foo
value: test

pool:
vmImage: 'ubuntu-latest'

steps:
- script: echo "start"
- ${{ if eq(variables.foo, 'test') }}:
- script: echo "this is a test"

Iterative insertion
The each directive allows iterative insertion based on a YAML sequence (array) or mapping (key-value pairs).
For example, you can wrap the steps of each job with additional pre- and post-steps:

# job.yml
parameters:
- name: 'jobs'
type: jobList
default: []

jobs:
- ${{ each job in parameters.jobs }}: # Each job
- ${{ each pair in job }}: # Insert all properties other than "steps"
${{ if ne(pair.key, 'steps') }}:
${{ pair.key }}: ${{ pair.value }}
steps: # Wrap the steps
- task: SetupMyBuildTools@1 # Pre steps
- ${{ job.steps }} # Users steps
- task: PublishMyTelemetry@1 # Post steps
condition: always()

# azure-pipelines.yml
jobs:
- template: job.yml
parameters:
jobs:
- job: A
steps:
- script: echo This will get sandwiched between SetupMyBuildTools and PublishMyTelemetry.
- job: B
steps:
- script: echo So will this!

You can also manipulate the properties of whatever you're iterating over. For example, to add additional
dependencies:
# job.yml
- name: 'jobs'
type: jobList
default: []

jobs:
- job: SomeSpecialTool # Run your special tool in its own job first
steps:
- task: RunSpecialTool@1
- ${{ each job in parameters.jobs }}: # Then do each job
- ${{ each pair in job }}: # Insert all properties other than "dependsOn"
${{ if ne(pair.key, 'dependsOn') }}:
${{ pair.key }}: ${{ pair.value }}
dependsOn: # Inject dependency
- SomeSpecialTool
- ${{ if job.dependsOn }}:
- ${{ job.dependsOn }}

# azure-pipelines.yml
jobs:
- template: job.yml
parameters:
jobs:
- job: A
steps:
- script: echo This job depends on SomeSpecialTool, even though it's not explicitly shown here.
- job: B
dependsOn:
- A
steps:
- script: echo This job depends on both Job A and on SomeSpecialTool.

Escape a value
If you need to escape a value that literally contains ${{ , then wrap the value in an expression string. For
example, ${{ 'my${{value' }} or ${{ 'my${{value with a '' single quote too' }}
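A minimal sketch of that escaping inside a step:

steps:
- script: echo ${{ 'This prints a literal ${{ sequence' }}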

Imposed limits
Templates and template expressions can cause explosive growth to the size and complexity of a pipeline. To help
prevent runaway growth, Azure Pipelines imposes the following limits:
No more than 100 separate YAML files may be included (directly or indirectly)
No more than 20 levels of template nesting (templates including other templates)
No more than 10 megabytes of memory consumed while parsing the YAML (in practice, this is typically
between 600KB - 2MB of on-disk YAML, depending on the specific features used)

Template parameters
You can pass parameters to templates. The parameters section defines what parameters are available in the
template and their default values. Templates are expanded just before the pipeline runs so that values
surrounded by ${{ }} are replaced by the parameters it receives from the enclosing pipeline. As a result, only
predefined variables can be used in parameters.
To use parameters across multiple pipelines, see how to create a variable group.
Job, stage, and step templates with parameters
# File: templates/npm-with-params.yml

parameters:
name: '' # defaults for any parameters that aren't specified
vmImage: ''

jobs:
- job: ${{ parameters.name }}
pool:
vmImage: ${{ parameters.vmImage }}
steps:
- script: npm install
- script: npm test

When you consume the template in your pipeline, specify values for the template parameters.

# File: azure-pipelines.yml

jobs:
- template: templates/npm-with-params.yml # Template reference
parameters:
name: Linux
vmImage: 'ubuntu-16.04'

- template: templates/npm-with-params.yml # Template reference


parameters:
name: macOS
vmImage: 'macOS-10.13'

- template: templates/npm-with-params.yml # Template reference


parameters:
name: Windows
vmImage: 'vs2017-win2016'

You can also use parameters with step or stage templates. For example, steps with parameters:

# File: templates/steps-with-params.yml

parameters:
runExtendedTests: 'false' # defaults for any parameters that aren't specified

steps:
- script: npm test
- ${{ if eq(parameters.runExtendedTests, 'true') }}:
- script: npm test --extended

When you consume the template in your pipeline, specify values for the template parameters.

# File: azure-pipelines.yml

steps:
- script: npm install

- template: templates/steps-with-params.yml # Template reference


parameters:
runExtendedTests: 'true'
NOTE
Scalar parameters are always treated as strings. For example, eq(parameters['myparam'], true) will almost always
return true , even if the myparam parameter is the word false . Non-empty strings are cast to true in a Boolean
context. That expression could be rewritten to explicitly compare strings: eq(parameters['myparam'], 'true') .

Parameters are not limited to scalar strings. As long as the place where the parameter expands expects a
mapping, the parameter can be a mapping. Likewise, sequences can be passed where sequences are expected.
For example:

# azure-pipelines.yml
jobs:
- template: process.yml
parameters:
pool: # this parameter is called `pool`
vmImage: ubuntu-latest # and it's a mapping rather than a string

# process.yml
parameters:
pool: {}

jobs:
- job: build
pool: ${{ parameters.pool }}

Using other repositories


You can keep your templates in other repositories. For example, suppose you have a core pipeline that you want
all of your app pipelines to use. You can put the template in a core repo and then refer to it from each of your
app repos:

# Repo: Contoso/BuildTemplates
# File: common.yml
parameters:
vmImage: 'ubuntu 16.04'

jobs:
- job: Build
pool:
vmImage: ${{ parameters.vmImage }}
steps:
- script: npm install
- script: npm test

Now you can reuse this template in multiple pipelines. Use the resources specification to provide the location
of the core repo. When you refer to the core repo, use @ and the name you gave it in resources .
# Repo: Contoso/LinuxProduct
# File: azure-pipelines.yml
resources:
repositories:
- repository: templates
type: github
name: Contoso/BuildTemplates

jobs:
- template: common.yml@templates # Template reference

# Repo: Contoso/WindowsProduct
# File: azure-pipelines.yml
resources:
repositories:
- repository: templates
type: github
name: Contoso/BuildTemplates
ref: refs/tags/v1.0 # optional ref to pin to

jobs:
- template: common.yml@templates # Template reference
parameters:
vmImage: 'vs2017-win2016'

For type: github, name is <identity>/<repo> as in the examples above. For type: git (Azure Repos), name is
<project>/<repo> . The project must be in the same organization; cross-organization references are not
supported.
Repositories are resolved only once, when the pipeline starts up. After that, the same resource is used for the
duration of the pipeline. Only the template files are used. Once the templates are fully expanded, the final
pipeline runs as if it were defined entirely in the source repo. This means that you can't use scripts from the
template repo in your pipeline.
If you want to use a particular, fixed version of the template, be sure to pin to a ref. Refs are either branches (
refs/heads/<name> ) or tags ( refs/tags/<name> ). If you want to pin a specific commit, first create a tag pointing
to that commit, then pin to that tag.

Expressions
Use template expressions to specify how values are dynamically resolved during pipeline initialization. Wrap
your template expression inside this syntax: ${{ }} .
Template expressions can expand template parameters, and also variables. You can use parameters to influence
how a template is expanded. The parameters object works like the variables object in an expression.
For example you define a template:
# File: steps/msbuild.yml

parameters:
solution: '**/*.sln'

steps:
- task: msbuild@1
inputs:
solution: ${{ parameters['solution'] }} # index syntax
- task: vstest@2
inputs:
solution: ${{ parameters.solution }} # property dereference syntax

Then you reference the template and pass it the optional solution parameter:

# File: azure-pipelines.yml

steps:
- template: steps/msbuild.yml
parameters:
solution: my.sln

Context
Within a template expression, you have access to the parameters context which contains the values of
parameters passed in. Additionally, you have access to the variables context which contains all the variables
specified in the YAML file plus the system variables. Importantly, it doesn't have runtime variables such as those
stored on the pipeline or given when you start a run. Template expansion happens very early in the run, so
those variables aren't available.
Required parameters
You can add a validation step at the beginning of your template to check for the parameters you require.
Here's an example that checks for the solution parameter using Bash (which enables it to work on any
platform):

# File: steps/msbuild.yml

parameters:
solution: ''

steps:
- bash: |
if [ -z "$SOLUTION" ]; then
echo "##vso[task.logissue type=error;]Missing template parameter \"solution\""
echo "##vso[task.complete result=Failed;]"
fi
env:
SOLUTION: ${{ parameters.solution }}
displayName: Check for required parameters
- task: msbuild@1
inputs:
solution: ${{ parameters.solution }}
- task: vstest@2
inputs:
solution: ${{ parameters.solution }}

To show that the template fails if it's missing the required parameter:
# File: azure-pipelines.yml

# This will fail since it doesn't set the "solution" parameter to anything,
# so the template will use its default of an empty string
steps:
- template: steps/msbuild.yml

Template expression functions


You can use general functions in your templates. You can also use a few template expression functions.
format
Simple string token replacement
Min parameters: 2. Max parameters: N
Example: ${{ format('{0} Build', parameters.os) }} → 'Windows Build'

coalesce
Evaluates to the first non-empty, non-null string argument
Min parameters: 2. Max parameters: N
Example:

parameters:
  restoreProjects: ''
  buildProjects: ''

steps:
- script: echo ${{ coalesce(parameters.restoreProjects, parameters.buildProjects, 'Nothing to see') }}

Insertion
You can use template expressions to alter the structure of a YAML pipeline. For instance, to insert into a
sequence:

# File: jobs/build.yml

parameters:
preBuild: []
preTest: []
preSign: []

jobs:
- job: Build
pool:
vmImage: 'vs2017-win2016'
steps:
- script: cred-scan
- ${{ parameters.preBuild }}
- task: msbuild@1
- ${{ parameters.preTest }}
- task: vstest@2
- ${{ parameters.preSign }}
- script: sign
# File: .vsts.ci.yml

jobs:
- template: jobs/build.yml
parameters:
preBuild:
- script: echo hello from pre-build
preTest:
- script: echo hello from pre-test

When an array is inserted into an array, the nested array is flattened.


To insert into a mapping, use the special property ${{ insert }} .

# Default values
parameters:
additionalVariables: {}

jobs:
- job: build
variables:
configuration: debug
arch: x86
${{ insert }}: ${{ parameters.additionalVariables }}
steps:
- task: msbuild@1
- task: vstest@2

jobs:
- template: jobs/build.yml
parameters:
additionalVariables:
TEST_SUITE: L0,L1

Conditional insertion
If you want to conditionally insert into a sequence or a mapping, then use insertions and expression evaluation.
For example, to insert into a sequence:

# File: steps/build.yml

parameters:
toolset: msbuild

steps:
# msbuild
- ${{ if eq(parameters.toolset, 'msbuild') }}:
- task: msbuild@1
- task: vstest@2

# dotnet
- ${{ if eq(parameters.toolset, 'dotnet') }}:
- task: dotnet@1
inputs:
command: build
- task: dotnet@1
inputs:
command: test
# File: azure-pipelines.yml

steps:
- template: steps/build.yml
parameters:
toolset: dotnet

For example, to insert into a mapping:

# File: steps/build.yml

parameters:
debug: false

steps:
- script: tool
env:
${{ if eq(parameters.debug, 'true') }}:
TOOL_DEBUG: true
TOOL_DEBUG_DIR: _dbg

steps:
- template: steps/build.yml
parameters:
debug: true

Iterative insertion
The each directive allows iterative insertion based on a YAML sequence (array) or mapping (key-value pairs).
For example, you can wrap the steps of each job with additional pre- and post-steps:

# job.yml
parameters:
jobs: []

jobs:
- ${{ each job in parameters.jobs }}: # Each job
- ${{ each pair in job }}: # Insert all properties other than "steps"
${{ if ne(pair.key, 'steps') }}:
${{ pair.key }}: ${{ pair.value }}
steps: # Wrap the steps
- task: SetupMyBuildTools@1 # Pre steps
- ${{ job.steps }} # Users steps
- task: PublishMyTelemetry@1 # Post steps
condition: always()

# azure-pipelines.yml
jobs:
- template: job.yml
parameters:
jobs:
- job: A
steps:
- script: echo This will get sandwiched between SetupMyBuildTools and PublishMyTelemetry.
- job: B
steps:
- script: echo So will this!

You can also manipulate the properties of whatever you're iterating over. For example, to add additional
dependencies:

# job.yml
parameters:
jobs: []

jobs:
- job: SomeSpecialTool # Run your special tool in its own job first
steps:
- task: RunSpecialTool@1
- ${{ each job in parameters.jobs }}: # Then do each job
- ${{ each pair in job }}: # Insert all properties other than "dependsOn"
${{ if ne(pair.key, 'dependsOn') }}:
${{ pair.key }}: ${{ pair.value }}
dependsOn: # Inject dependency
- SomeSpecialTool
- ${{ if job.dependsOn }}:
- ${{ job.dependsOn }}

# azure-pipelines.yml
jobs:
- template: job.yml
parameters:
jobs:
- job: A
steps:
- script: echo This job depends on SomeSpecialTool, even though it's not explicitly shown here.
- job: B
dependsOn:
- A
steps:
- script: echo This job depends on both Job A and on SomeSpecialTool.

Escaping
If you need to escape a value that literally contains ${{ , then wrap the value in an expression string. For
example ${{ 'my${{value' }} or ${{ 'my${{value with a '' single quote too' }}

Limits
Templates and template expressions can cause explosive growth to the size and complexity of a pipeline. To help
prevent runaway growth, Azure Pipelines imposes the following limits:
No more than 50 separate YAML files may be included (directly or indirectly)
No more than 10 megabytes of memory consumed while parsing the YAML (in practice, this is typically
between 600KB - 2MB of on-disk YAML, depending on the specific features used)
No more than 2000 characters per template expression are allowed
Add a custom pipelines task extension
11/7/2020 • 19 minutes to read • Edit Online

Azure DevOps Services | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2017
Learn how to install extensions to your organization for custom build or release tasks in Azure DevOps. These
tasks appear next to Microsoft-provided tasks in the Add Step wizard.

To learn more about the new cross-platform build/release system, see What is Azure Pipelines?.

NOTE
This article covers agent tasks in agent-based extensions. For information on server tasks/server-based extensions, check
out the Server Task GitHub Documentation.

Prerequisites
To create extensions for Azure DevOps, you need the following software and tools:
An organization in Azure DevOps. For more information, see Create an organization.
A text editor. For many of the tutorials, we use Visual Studio Code , which provides intellisense and
debugging support. Go to code.visualstudio.com to download the latest version.
The latest version of Node.js. The production environment uses Node14, Node10, or Node6 (Node6 is selected
by using "Node" in the "execution" object instead of "Node14" or "Node10").
TypeScript Compiler 2.2.0 or greater, although we recommend version 4.0.2 or newer for tasks that use
Node14. Go to npmjs.com to download the compiler.
The cross-platform CLI for Azure DevOps (tfx-cli) to package your extensions. You can install tfx-cli by using
npm, a component of Node.js, by running npm i -g tfx-cli.

A home directory for your project. The home directory of a build or release task extension should look like
the following example after you complete the steps in this tutorial:

|--- README.md
|--- images
|--- extension-icon.png
|--- buildAndReleaseTask // where your task scripts are placed
|--- vss-extension.json // extension's manifest

Develop in Unix vs. Windows


We did this walk-through on Windows with PowerShell. We attempted to make it generic for all platforms, but the
syntax for getting environment variables is different.
If you're using a Mac or Linux, replace any instances of $env:<var>=<val> with export <var>=<val> .

Step 1: Create a custom task


Set up your task. Do every part of Step 1 within the buildAndReleaseTask folder.
Create task scaffolding
Create the folder structure for the task and install the required libraries and dependencies.
Create a directory and package.json file
From within your buildAndReleaseTask folder, run the following command:

npm init

npm init creates the package.json file. You can accept all of the default npm init options.

TIP
The agent doesn't automatically install the required modules because it's expecting your task folder to include the node
modules. To mitigate this, copy the node_modules to buildAndReleaseTask . As your task gets bigger, it's easy to exceed
the size limit (50MB) of a VSIX file. Before you copy the node folder, you may want to run npm install --production or
npm prune --production , or you can write a script to build and pack everything.

Add azure-pipelines-task-lib
We provide a library, azure-pipelines-task-lib, that should be used to create tasks. Add it to your library.

npm install azure-pipelines-task-lib --save

Add typings for external dependencies


Ensure that TypeScript typings are installed for external dependencies.
npm install @types/node --save-dev
npm install @types/q --save-dev

Create a .gitignore file and add node_modules to it. Your build process should do an npm install and a
typings install so that node_modules are built each time and don't need to be checked in.

echo node_modules > .gitignore

Choose typescript version


Tasks can use typescript versions 2.3.4 or 4.0.2. You can install the chosen typescript version using this command:

npm install typescript@4.0.2 --save-dev

If you skip this step, typescript version 2.3.4 will be used by default.
Create tsconfig.json compiler options
This file ensures that your TypeScript files are compiled to JavaScript files.

tsc --init

For example, to compile to the ES6 standard instead of ES5, open the newly generated tsconfig.json and
update the target field to "es6".

NOTE
To have the command run successfully, make sure that TypeScript is installed globally with npm on your local machine.

Task implementation
Now that the scaffolding is complete, we can create our custom task.
task.json
Next, we create a task.json file in the buildAndReleaseTask folder. The task.json file describes the build or
release task and is what the build/release system uses to render configuration options to the user and to know
which scripts to execute at build/release time.
Copy the following code and replace the {{placeholders}} with your task's information. The most important
placeholder is the taskguid , and it must be unique. You can generate the taskguid by using Microsoft's online
GuidGen tool.
{
  "$schema": "https://raw.githubusercontent.com/Microsoft/azure-pipelines-task-lib/master/tasks.schema.json",
  "id": "{{taskguid}}",
  "name": "{{taskname}}",
  "friendlyName": "{{taskfriendlyname}}",
  "description": "{{taskdescription}}",
  "helpMarkDown": "",
  "category": "Utility",
  "author": "{{taskauthor}}",
  "version": {
    "Major": 0,
    "Minor": 1,
    "Patch": 0
  },
  "instanceNameFormat": "Echo $(samplestring)",
  "inputs": [
    {
      "name": "samplestring",
      "type": "string",
      "label": "Sample String",
      "defaultValue": "",
      "required": true,
      "helpMarkDown": "A sample string"
    }
  ],
  "execution": {
    "Node10": {
      "target": "index.js"
    }
  }
}

task.json components


Following are descriptions of some of the components of the task.json file:

PROPERTY              DESCRIPTION

id                    A unique GUID for your task.

name                  Name with no spaces.

friendlyName          Descriptive name (spaces allowed).

description           Detailed description of what your task does.

author                Short string describing the entity developing the build or release task, for example: "Microsoft Corporation."

instanceNameFormat    How the task is displayed within the build or release step list. You can use variable values by using $(variablename).

groups                Describes groups that task properties may be logically grouped by in the UI.

inputs                Inputs to be used when your build or release task runs. This task expects an input with the name samplestring.

execution             Execution options for this task, including scripts.


NOTE
For a more in-depth look into the task.json file, or to learn how to bundle multiple versions in your extension, check out the
build/release task reference .

index.ts
Create an index.ts file by using the following code as a reference. This code runs when the task is called.

import tl = require('azure-pipelines-task-lib/task');

async function run() {
    try {
        const inputString: string | undefined = tl.getInput('samplestring', true);
        if (inputString == 'bad') {
            tl.setResult(tl.TaskResult.Failed, 'Bad input was given');
            return;
        }
        console.log('Hello', inputString);
    }
    catch (err) {
        tl.setResult(tl.TaskResult.Failed, err.message);
    }
}

run();

Compile
Enter "tsc" from the buildAndReleaseTask folder to compile an index.js file from index.ts .
Run the task
An agent can run the task with node index.js from PowerShell.
In the following example, the task fails because inputs weren't supplied ( samplestring is a required input).

node index.js
##vso[task.debug]agent.workFolder=undefined
##vso[task.debug]loading inputs and endpoints
##vso[task.debug]loaded 0
##vso[task.debug]task result: Failed
##vso[task.issue type=error;]Input required: samplestring
##vso[task.complete result=Failed;]Input required: samplestring

As a fix, we can set the samplestring input and run the task again.

$env:INPUT_SAMPLESTRING="Human"
node index.js
##vso[task.debug]agent.workFolder=undefined
##vso[task.debug]loading inputs and endpoints
##vso[task.debug]loading INPUT_SAMPLESTRING
##vso[task.debug]loaded 1
##vso[task.debug]Agent.ProxyUrl=undefined
##vso[task.debug]Agent.CAInfo=undefined
##vso[task.debug]Agent.ClientCert=undefined
##vso[task.debug]Agent.SkipCertValidation=undefined
##vso[task.debug]samplestring=Human
Hello Human

This time, the task succeeded because samplestring was supplied, and it correctly output "Hello Human"!
Step 2: Unit test your task scripts
The goal of unit testing is to quickly test the task script, not the external tools it's calling. We want to test all aspects
of both success and failure paths.
Install test tools
We use Mocha as the test driver in this walkthrough.

npm install mocha --save-dev -g


npm install sync-request --save-dev
npm install @types/mocha --save-dev

Create test suite


Create a tests folder containing a _suite.ts file with the following contents:

import * as path from 'path';
import * as assert from 'assert';
import * as ttm from 'azure-pipelines-task-lib/mock-test';

describe('Sample task tests', function () {

    before( function() {

    });

    after(() => {

    });

    it('should succeed with simple inputs', function(done: Mocha.Done) {
        // Add success test here
    });

    it('it should fail if tool returns 1', function(done: Mocha.Done) {
        // Add failure test here
    });
});

TIP
Your test folder should be located in the buildAndReleaseTask folder. If you get a sync-request error, you can work around it
by adding sync-request to the buildAndReleaseTask folder with the command npm i --save-dev sync-request .

Create success test


The success test validates that when the tool has the appropriate inputs, it succeeds with no errors or warnings
and returns the correct output.
Create a file containing your task mock runner. This file creation simulates running the task and mocks all calls to
outside methods.
Create a success.ts file in your test directory with the following contents:
import ma = require('azure-pipelines-task-lib/mock-answer');
import tmrm = require('azure-pipelines-task-lib/mock-run');
import path = require('path');

let taskPath = path.join(__dirname, '..', 'index.js');
let tmr: tmrm.TaskMockRunner = new tmrm.TaskMockRunner(taskPath);

tmr.setInput('samplestring', 'human');

tmr.run();

Next, add the following example success test to your _suite.ts file to run the task mock runner:

it('should succeed with simple inputs', function(done: Mocha.Done) {
    this.timeout(1000);

    let tp = path.join(__dirname, 'success.js');
    let tr: ttm.MockTestRunner = new ttm.MockTestRunner(tp);

    tr.run();
    console.log(tr.succeeded);
    assert.equal(tr.succeeded, true, 'should have succeeded');
    assert.equal(tr.warningIssues.length, 0, "should have no warnings");
    assert.equal(tr.errorIssues.length, 0, "should have no errors");
    console.log(tr.stdout);
    assert.equal(tr.stdout.indexOf('Hello human') >= 0, true, "should display Hello human");
    done();
});

Create failure test


The failure test validates that when the tool gets bad or incomplete input, it fails in the expected way with helpful
output.
First, we create our task mock runner. To do so, create a failure.ts file in your test directory with the following
contents:

import ma = require('azure-pipelines-task-lib/mock-answer');
import tmrm = require('azure-pipelines-task-lib/mock-run');
import path = require('path');

let taskPath = path.join(__dirname, '..', 'index.js');
let tmr: tmrm.TaskMockRunner = new tmrm.TaskMockRunner(taskPath);

tmr.setInput('samplestring', 'bad');

tmr.run();

Next, add the following to your _suite.ts file to run the task mock runner:
it('it should fail if tool returns 1', function(done: Mocha.Done) {
    this.timeout(1000);

    let tp = path.join(__dirname, 'failure.js');
    let tr: ttm.MockTestRunner = new ttm.MockTestRunner(tp);

    tr.run();
    console.log(tr.succeeded);
    assert.equal(tr.succeeded, false, 'should have failed');
    assert.equal(tr.warningIssues.length, 0, "should have no warnings");
    assert.equal(tr.errorIssues.length, 1, "should have 1 error issue");
    assert.equal(tr.errorIssues[0], 'Bad input was given', 'error issue output');
    assert.equal(tr.stdout.indexOf('Hello bad'), -1, "Should not display Hello bad");

    done();
});

Run the tests


To run the tests, run the following commands:

tsc
mocha tests/_suite.js

Both tests should pass. If you want to run the tests with more verbose output (what you'd see in the build console),
set the environment variable: TASK_TEST_TRACE=1 .

$env:TASK_TEST_TRACE=1

Step 3: Create the extension manifest file


The extension manifest contains all of the information about your extension. It includes links to your files, including
your task folders and images folders. Ensure you've created an images folder with extension-icon.png. The
following example is an extension manifest that contains the build or release task.
Copy the following .json code and save it as your vss-extension.json file in your home directory. Don't create this
file in the buildAndReleaseTask folder.
{
"manifestVersion": 1,
"id": "build-release-task",
"name": "Fabrikam Build and Release Tools",
"version": "0.0.1",
"publisher": "fabrikam",
"targets": [
{
"id": "Microsoft.VisualStudio.Services"
}
],
"description": "Tools for building/releasing with Fabrikam. Includes one build/release task.",
"categories": [
"Azure Pipelines"
],
"icons": {
"default": "images/extension-icon.png"
},
"files": [
{
"path": "buildAndReleaseTask"
}
],
"contributions": [
{
"id": "custom-build-release-task",
"type": "ms.vss-distributed-task.task",
"targets": [
"ms.vss-distributed-task.tasks"
],
"properties": {
"name": "buildAndReleaseTask"
}
}
]
}

NOTE
The publisher here must be changed to your publisher name. If you want to create a publisher now, go to create your
publisher for instructions.

Contributions
PROPERTY          DESCRIPTION

id                Identifier of the contribution. Must be unique within the extension. Doesn't need to match the name of the build or release task. Typically the build or release task name is in the ID of the contribution.

type              Type of the contribution. Should be ms.vss-distributed-task.task.

targets           Contributions "targeted" by this contribution. Should be ms.vss-distributed-task.tasks.

properties.name   Name of the task. This name must match the folder name of the corresponding self-contained build or release task pipeline.

Files
PROPERTY          DESCRIPTION

path              Path of the file or folder relative to the home directory.

NOTE
For more information about the extension manifest file , such as its properties and what they do, check out the extension
manifest reference.

Step 4: Package your extension


After you've written your extension, the next step toward getting it into the Visual Studio Marketplace is to package
all of your files together. All extensions are packaged as VSIX 2.0-compatible .vsix files. Microsoft provides a cross-
platform command-line interface (CLI) to package your extension.
Packaging your extension into a .vsix file is straightforward. After you have the tfx-cli installed (npm install -g tfx-cli), go to your extension's home directory and run the following command:

tfx extension create --manifest-globs vss-extension.json

NOTE
An extension or integration's version must be incremented on every update.
When you're updating an existing extension, either update the version in the manifest or pass the --rev-version
command line switch. This increments the patch version number of your extension and saves the new version to your
manifest. You must rev both the task version and extension version for an update to occur.
tfx extension create --manifest-globs vss-extension.json --rev-version only updates the extension version and
not the task version. For more information, see Build Task in GitHub.

After you have your packaged extension in a .vsix file, you're ready to publish your extension to the Marketplace.

Step 5: Publish your extension


Create your publisher
All extensions, including extensions from Microsoft, are identified as being provided by a publisher. If you aren't
already a member of an existing publisher, you'll create one.
1. Sign in to the Visual Studio Marketplace Publishing Portal.
2. If you aren't already a member of an existing publisher, you're prompted to create a publisher. If you're not
prompted to create a publisher, scroll down to the bottom of the page and select Publish extensions under
Related Sites .
Specify an identifier for your publisher, for example: mycompany-myteam .
This identifier is used as the value for the publisher attribute in your extensions' manifest file.
Specify a display name for your publisher, for example: My Team .
3. Review the Marketplace Publisher Agreement and select Create .
Your publisher is defined. In a future release, you can grant permissions to view and manage your publisher's
extensions. It's easier and more secure to publish extensions under a common publisher, without the need to share
a set of credentials across users.
Upload your extension
After creating a publisher, you can upload your extension to the Marketplace.
Find the Upload new extension button, go to your packaged .vsix file, and select Upload .
You can also upload your extension via the command line by using the tfx extension publish command instead
of tfx extension create to package and publish your extension in one step. You can optionally use --share-with
to share your extension with one or more accounts after publishing. You'll need a personal access token, too. For
more information, see Acquire a personal access token.

tfx extension publish --manifest-globs your-manifest.json --share-with yourOrganization

Share your extension


Now that you've uploaded your extension, it's in the Marketplace, but no one can see it. Share it with your
organization so that you can install and test it.
Right-click your extension and select Share , and enter your organization information. You can share it with
other accounts that you want to have access to your extension, too.

IMPORTANT
Publishers must be verified to share extensions publicly. To learn more, see Package/Publish/Install.

Now that your extension is in the Marketplace and shared, anyone who wants to use it must install it.

Step 6: Create a build and release pipeline to publish the extension to Marketplace
Create a build and release pipeline on Azure DevOps to help maintain the custom task on the Marketplace.
Prerequisites
A project in your organization. If you need to create one, see Create a project.
An Azure DevOps Extension Tasks extension installed in your organization.
You must first create a pipeline library variable group to hold the variables used by the pipeline. For more
information about creating a variable group, see Add and use variable groups. Keep in mind that you can make
variable groups from the Azure DevOps Library tab or through the CLI. After a variable group is made, use any
variables within that group in your pipeline. Read more about how to use a variable group.
Declare the following variables in the variable group:
publisherId : ID of your marketplace publisher.
extensionId : ID of your extension, as declared in the vss-extension.json file.
extensionName : Name of your extension, as declared in the vss-extension.json file.
artifactName : Name of the artifact being created for the VSIX file.

Create a new Visual Studio Marketplace service connection and grant access permissions for all pipelines. For
more information about creating a service connection, see Service connections.
Use the following example to create a new pipeline with YAML. Learn more about how to Create your first pipeline
and YAML schema.
trigger:
- master

pool:
vmImage: "ubuntu-latest"

variables:
- group: variable-group # Rename to whatever you named your variable group in the prerequisite stage of step 6

stages:
- stage: Run_and_publish_unit_tests
jobs:
- job:
steps:
- task: TfxInstaller@3
inputs:
version: "v0.7.x"
- task: Npm@1
inputs:
command: 'install'
workingDir: '/TaskDirectory' # Update to the name of the directory of your task
- task: Bash@3
displayName: Compile Javascript
inputs:
targetType: "inline"
script: |
cd TaskDirectory # Update to the name of the directory of your task
tsc
- task: Npm@1
inputs:
command: 'custom'
workingDir: '/TestsDirectory' # Update to the name of the directory of your task's tests
customCommand: 'testScript' # See the definition in the explanation section below - it may be called test
- task: PublishTestResults@2
inputs:
testResultsFormat: 'JUnit'
testResultsFiles: '**/ResultsFile.xml'
- stage: Package_extension_and_publish_build_artifacts
jobs:
- job:
steps:
- task: TfxInstaller@3
inputs:
version: "v0.7.x"
- task: Npm@1
inputs:
command: 'install'
workingDir: '/TaskDirectory' # Update to the name of the directory of your task
- task: Bash@3
displayName: Compile Javascript
inputs:
targetType: "inline"
script: |
cd TaskDirectory # Update to the name of the directory of your task
tsc
- task: QueryAzureDevOpsExtensionVersion@3
inputs:
connectTo: 'VsTeam'
connectedServiceName: 'ServiceConnection' # Change to whatever you named the service connection
publisherId: '$(PublisherID)'
extensionId: '$(ExtensionID)'
versionAction: 'Patch'
outputVariable: 'Task.Extension.Version'
- task: PackageAzureDevOpsExtension@3
inputs:
rootFolder: '$(System.DefaultWorkingDirectory)'
publisherId: '$(PublisherID)'
extensionId: '$(ExtensionID)'
extensionName: '$(ExtensionName)'
extensionVersion: '$(Task.Extension.Version)'
updateTasksVersion: true
updateTasksVersionType: 'patch'
extensionVisibility: 'private' # Change to public if you're publishing to the marketplace
extensionPricing: 'free'
- task: CopyFiles@2
displayName: "Copy Files to: $(Build.ArtifactStagingDirectory)"
inputs:
Contents: "**/*.vsix"
TargetFolder: "$(Build.ArtifactStagingDirectory)"
- task: PublishBuildArtifacts@1
inputs:
PathtoPublish: '$(Build.ArtifactStagingDirectory)'
ArtifactName: '$(ArtifactName)'
publishLocation: 'Container'
- stage: Download_build_artifacts_and_publish_the_extension
jobs:
- job:
steps:
- task: TfxInstaller@3
inputs:
version: "v0.7.x"
- task: DownloadBuildArtifacts@0
inputs:
buildType: "current"
downloadType: "single"
artifactName: "$(ArtifactName)"
downloadPath: "$(System.DefaultWorkingDirectory)"
- task: PublishAzureDevOpsExtension@3
inputs:
connectTo: 'VsTeam'
connectedServiceName: 'ServiceConnection' # Change to whatever you named the service connection
fileType: 'vsix'
vsixFile: '/Publisher.*.vsix'
publisherId: '$(PublisherID)'
extensionId: '$(ExtensionID)'
extensionName: '$(ExtensionName)'
updateTasksVersion: false
extensionVisibility: 'private' # Change to public if you're publishing to the marketplace
extensionPricing: 'free'

For more help with triggers, such as CI and PR triggers, see Specify events that trigger pipelines.
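For example, a minimal sketch of adding both a CI trigger and a PR trigger to the YAML above might look like the following (branch names are placeholders; note that for Azure Repos Git, pull request validation is configured through branch policies rather than the pr keyword):

trigger:
  branches:
    include:
    - master

pr:
  branches:
    include:
    - master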

NOTE
Each job uses a new user agent and requires dependencies to be installed.

Pipeline stages
This section will help you understand how the pipeline stages work.
Stage: Run and publish unit tests
This stage runs unit tests and publishes test results to Azure DevOps.
To run unit tests, add a custom script to the package.json file. For example:

"scripts": {
"testScript": "mocha ./TestFile --reporter xunit --reporter-option output=ResultsFile.xml"
},

1. Add "Use Node CLI for Azure DevOps (tfx-cli)" to install the tfx-cli onto your build agent.
2. Add the "npm" task with the "install" command and target the folder with the package.json file.
3. Add the "Bash" task to compile the TypeScript into JavaScript.
4. Add the "npm" task with the "custom" command, target the folder that contains the unit tests, and input
testScript as the command. Use the following inputs:

Command: custom
Working folder that contains package.json: /TestsDirectory
Command and arguments: testScript
5. Add the "Publish Test Results" task. If you're using the Mocha XUnit reporter, ensure that the result format is
"JUnit" and not "XUnit." Set the search folder to the root directory. Use the following inputs:
Test result format: JUnit
Test results files: **/ResultsFile.xml
Search folder: $(System.DefaultWorkingDirectory)
After the test results have been published, the results appear under the Tests tab of the pipeline run.

Stage: Package the extension and publish build artifacts


1. Add "Use Node CLI for Azure DevOps (tfx-cli)" to install the tfx-cli onto your build agent.
2. Add the "npm" task with the "install" command and target the folder with the package.json file.
3. Add the "Bash" task to compile the TypeScript into JavaScript.
4. Add the "Query Extension Version" task to query the existing extension version. Use the following inputs:
Connect to: Visual Studio Marketplace
Visual Studio Marketplace (Service connection): Service Connection
Publisher ID: ID of your Visual Studio Marketplace publisher
Extension ID: ID of your extension in the vss-extension.json file
Increase version: Patch
Output Variable: Task.Extension.Version
5. Add the "Package Extension" task to package the extensions based on manifest Json. Use the following
inputs:
Root manifests folder: Points to root directory that contains manifest file. For example,
$(System.DefualtWorkingDirectory) is the root directory.
Manifest file(s): vss-extension.json.
Publisher ID: ID of your Visual Studio Marketplace publisher.
Extension ID: ID of your extension in the vss-extension.json file.
Extension Name: Name of your extension in the vss-extension.json file.
Extension Version: $(Task.Extension.Version).
Override tasks version: checked (true).
Override Type: Replace Only Patch (1.0.r).
Extension Visibility: If the extension is still in development, set the value to private. To release the
extension to the public, set the value to public.
6. Add the "Copy files" task to copy published files. Use the following inputs:
Contents: All of the files that need to be copied for publishing them as an artifact
Target folder: The folder that the files will all be copied to
For example: $(Build.ArtifactStagingDirectory)
7. Add "Publish build artifacts" to publish the artifacts for use in other jobs or pipelines. Use the following
inputs:
Path to publish: The path to the folder that contains the files that are being published.
For example: $(Build.ArtifactStagingDirectory).
Artifact name: The name given to the artifact.
Artifact publish location: Choose "Azure Pipelines" to use the artifact in future jobs.
Stage: Download build artifacts and publish the extension
1. Add "Use Node CLI for Azure DevOps (tfx-cli)" to install the tfx-cli onto your build agent.
2. Add the "Download build artifacts" task to download the artifacts onto a new job. Use the following inputs:
Download artifacts produced by: If you're downloading the artifact on a new job from the same pipeline,
select "Current build." If you're downloading on a new pipeline, select "Specific build."
Download type: Choose "Specific artifact" to download all files that were published.
Artifact name: The published artifact's name.
Destination directory: The folder where the files should be downloaded.
3. The last task that you need is the "Publish Extension" task. Use the following inputs:
Connect to: Visual Studio Marketplace
Visual Studio Marketplace connection: ServiceConnection
Input file type: VSIX file
VSIX file: /Publisher.*.vsix
Publisher ID: ID of your Visual Studio Marketplace publisher
Extension ID: ID of your extension in the vss-extension.json file
Extension Name: Name of your extension in the vss-extension.json file
Extension visibility: Either private or public

Optional: Install and test your extension


Install an extension that is shared with you in just a few steps:
1. From your organization control panel ( https://dev.azure.com/{organization}/_admin ), go to the project
collection administration page.
2. In the Extensions tab, find your extension in the "Extensions Shared With Me" group and select the extension
link.
3. Install the extension.
If you can't see the Extensions tab, make sure you're in the control panel (the administration page at the project
collection level, https://dev.azure.com/{organization}/_admin ) and not the administration page for a project.
If you don't see the Extensions tab on the control panel, then extensions aren't enabled for your organization. You
can get early access to the extensions feature by joining the Visual Studio Partner Program.
For build and release tasks to package and publish Azure DevOps Extensions to the Visual Studio Marketplace, you
can download Azure DevOps Extension Tasks.

Helpful links
Extension Manifest Reference
Build/Release Task JSON Schema
Build/Release Task Examples
Specify jobs in your pipeline

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called
definitions, runs are called builds, service connections are called service endpoints, stages are called environments,
and jobs are called phases.

You can organize your pipeline into jobs. Every pipeline has at least one job. A job is a series of steps that
run sequentially as a unit. In other words, a job is the smallest unit of work that can be scheduled to run.
You can organize your build or release pipeline into jobs. Every pipeline has at least one job. A job is a
series of steps that run sequentially as a unit. In other words, a job is the smallest unit of work that can be
scheduled to run.

NOTE
You must install TFS 2018.2 to use jobs in build processes. In TFS 2018 RTM you can use jobs in release
deployment processes.

You can organize your release pipeline into jobs. Every release pipeline has at least one job. Jobs are not
supported in a build pipeline in this version of TFS.

NOTE
You must install Update 2 to use jobs in a release pipeline in TFS 2017. Jobs in build pipelines are available in Azure
Pipelines, TFS 2018.2, and newer versions.

Define a single job


YAML
Classic
In the simplest case, a pipeline has a single job. In that case, you do not have to explicitly use the job
keyword unless you are using a template. You can directly specify the steps in your YAML file.
This YAML file has a job that runs on a Microsoft-hosted agent and outputs Hello world .

pool:
vmImage: 'ubuntu-16.04'
steps:
- bash: echo "Hello world"

You may want to specify additional properties on that job. In that case, you can use the job keyword.
jobs:
- job: myJob
timeoutInMinutes: 10
pool:
vmImage: 'ubuntu-16.04'
steps:
- bash: echo "Hello world"

Your pipeline may have multiple jobs. In that case, use the jobs keyword.

jobs:
- job: A
steps:
- bash: echo "A"

- job: B
steps:
- bash: echo "B"

Your pipeline may have multiple stages, each with multiple jobs. In that case, use the stages keyword.

stages:
- stage: A
jobs:
- job: A1
- job: A2

- stage: B
jobs:
- job: B1
- job: B2

The full syntax to specify a job is:

- job: string # name of the job, A-Z, a-z, 0-9, and underscore
displayName: string # friendly name to display in the UI
dependsOn: string | [ string ]
condition: string
strategy:
parallel: # parallel strategy
matrix: # matrix strategy
maxParallel: number # maximum number simultaneous matrix legs to run
# note: `parallel` and `matrix` are mutually exclusive
# you may specify one or the other; including both is an error
# `maxParallel` is only valid with `matrix`
continueOnError: boolean # 'true' if future jobs should run even if this job fails; defaults to 'false'
pool: pool # agent pool
workspace:
clean: outputs | resources | all # what to clean up before the job runs
container: containerReference # container to run this job inside
timeoutInMinutes: number # how long to run the job before automatically cancelling
cancelTimeoutInMinutes: number # how much time to give 'run always even if cancelled tasks' before killing them
variables: { string: string } | [ variable | variableReference ]
steps: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
services: { string: string | container } # container resources to run as a service container

If the primary intent of your job is to deploy your app (as opposed to build or test your app), then you can
use a special type of job called deployment job .
The syntax for a deployment job is:

- deployment: string # instead of job keyword, use deployment keyword
  pool:
    name: string
    demands: string | [ string ]
  environment: string
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo Hi!

Although you can add steps for deployment tasks in a job , we recommend that you instead use a
deployment job. A deployment job has a few benefits. For example, you can deploy to an environment,
which includes benefits such as being able to see the history of what you've deployed.
YAML is not supported in this version of TFS.

Types of jobs
Jobs can be of different types, depending on where they run.
YAML
Classic
Agent pool jobs run on an agent in an agent pool.
Server jobs run on the Azure DevOps Server.
Container jobs run in a container on an agent in an agent pool. For more information about
choosing containers, see Define container jobs.
YAML
Classic
Agent pool jobs run on an agent in an agent pool.
Server jobs run on the Azure DevOps Server.
Agent pool jobs run on an agent in the agent pool. These jobs are available in build and release
pipelines.
Server jobs run on TFS. These jobs are available in build and release pipelines.
Deployment group jobs run on machines in a deployment group. These jobs are available only in
release pipelines.
Agent pool jobs run on an agent in the agent pool. These jobs are only available in release pipelines.
Agent pool jobs
These are the most common type of jobs and they run on an agent in an agent pool. Use demands with
self-hosted agents to specify what capabilities an agent must have to run your job.

NOTE
Demands and capabilities are designed for use with self-hosted agents so that jobs can be matched with an agent
that meets the requirements of the job. When using Microsoft-hosted agents, you select an image for the agent
that matches the requirements of the job, so although it is possible to add capabilities to a Microsoft-hosted
agent, you don't need to use capabilities with Microsoft-hosted agents.
YAML
Classic

pool:
name: myPrivateAgents # your job runs on an agent in this pool
demands: agent.os -equals Windows_NT # the agent must have this capability to run the job
steps:
- script: echo hello world

Or multiple demands:

pool:
name: myPrivateAgents
demands:
- agent.os -equals Darwin
- anotherCapability -equals somethingElse
steps:
- script: echo hello world

YAML is not yet supported in TFS.


Learn more about agent capabilities.
Server jobs
Tasks in a server job are orchestrated by and executed on the server (Azure Pipelines or TFS). A server job
does not require an agent or any target computers. Only a few tasks are supported in a server job at
present.
YAML
Classic
The full syntax to specify a server job is:

jobs:
- job: string
timeoutInMinutes: number
cancelTimeoutInMinutes: number
strategy:
maxParallel: number
matrix: { string: { string: string } }

pool: server

You can also use the simplified syntax:

jobs:
- job: string
pool: server

YAML is not yet supported in TFS.

Dependencies
When you define multiple jobs in a single stage, you can specify dependencies between them. Pipelines
must contain at least one job with no dependencies.
NOTE
Each agent can run only one job at a time. To run multiple jobs in parallel you must configure multiple agents. You
also need sufficient parallel jobs.

YAML
Classic
The syntax for defining multiple jobs and their dependencies is:

jobs:
- job: string
dependsOn: string
condition: string

Example jobs that build sequentially:

jobs:
- job: Debug
steps:
- script: echo hello from the Debug build
- job: Release
dependsOn: Debug
steps:
- script: echo hello from the Release build

Example jobs that build in parallel (no dependencies):

jobs:
- job: Windows
pool:
vmImage: 'vs2017-win2016'
steps:
- script: echo hello from Windows
- job: macOS
pool:
vmImage: 'macOS-10.14'
steps:
- script: echo hello from macOS
- job: Linux
pool:
vmImage: 'ubuntu-16.04'
steps:
- script: echo hello from Linux

Example of fan-out:
jobs:
- job: InitialJob
steps:
- script: echo hello from initial job
- job: SubsequentA
dependsOn: InitialJob
steps:
- script: echo hello from subsequent A
- job: SubsequentB
dependsOn: InitialJob
steps:
- script: echo hello from subsequent B

Example of fan-in:

jobs:
- job: InitialA
steps:
- script: echo hello from initial A
- job: InitialB
steps:
- script: echo hello from initial B
- job: Subsequent
dependsOn:
- InitialA
- InitialB
steps:
- script: echo hello from subsequent

YAML builds are not yet available on TFS.

Conditions
You can specify the conditions under which each job runs. By default, a job runs if it does not depend on
any other job, or if all of the jobs that it depends on have completed and succeeded. You can customize
this behavior by forcing a job to run even if a previous job fails or by specifying a custom condition.
YAML
Classic
Example to run a job based upon the status of running a previous job:

jobs:
- job: A
steps:
- script: exit 1

- job: B
dependsOn: A
condition: failed()
steps:
- script: echo this will run when A fails

- job: C
dependsOn:
- A
- B
condition: succeeded('B')
steps:
- script: echo this will run when B runs and succeeds
Example of using a custom condition:

jobs:
- job: A
steps:
- script: echo hello

- job: B
dependsOn: A
condition: and(succeeded(), eq(variables['build.sourceBranch'], 'refs/heads/master'))
steps:
- script: echo this only runs for master

You can specify that a job run based on the value of an output variable set in a previous job. In this case,
you can only use variables set in directly dependent jobs:

jobs:
- job: A
steps:
- script: "echo ##vso[task.setvariable variable=skipsubsequent;isOutput=true]false"
name: printvar

- job: B
condition: and(succeeded(), ne(dependencies.A.outputs['printvar.skipsubsequent'], 'true'))
dependsOn: A
steps:
- script: echo hello from B

YAML builds are not yet available on TFS.

Timeouts
To avoid taking up resources when your job is unresponsive or waiting too long, it's a good idea to set a
limit on how long your job is allowed to run. Use the job timeout setting to specify the limit in minutes for
running the job. Setting the value to zero means that the job can run:
Forever on self-hosted agents
For 360 minutes (6 hours) on Microsoft-hosted agents with a public project and public repository
For 60 minutes on Microsoft-hosted agents with a private project or private repository (unless
additional capacity is paid for)
The timeout period begins when the job starts running. It does not include the time the job is queued or is
waiting for an agent.
YAML
Classic
The timeoutInMinutes allows a limit to be set for the job execution time. When not specified, the default is
60 minutes. When 0 is specified, the maximum limit is used (described above).
The cancelTimeoutInMinutes allows a limit to be set for the job cancel time when the deployment task is
set to keep running if a previous task has failed. When not specified, the default is 5 minutes.
jobs:
- job: Test
timeoutInMinutes: 10 # how long to run the job before automatically cancelling
cancelTimeoutInMinutes: 2 # how much time to give 'run always even if cancelled tasks' before
stopping them

YAML is not yet supported in TFS.

Jobs targeting Microsoft-hosted agents have additional restrictions on how long they may run.

You can also set the timeout for each task individually - see task control options.
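For illustration, a sketch of a step-level timeout; the script name is a placeholder:

steps:
- script: ./build.sh       # placeholder for a long-running step
  timeoutInMinutes: 10     # fail this step if it runs longer than 10 minutes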

Multi-job configuration
From a single job you author, you can run multiple jobs on multiple agents in parallel. Some examples
include:
Multi-configuration builds: You can build multiple configurations in parallel. For example, you
could build a Visual C++ app for both debug and release configurations on both x86 and x64
platforms. To learn more, see Visual Studio Build - multiple configurations for multiple platforms.
Multi-configuration deployments: You can run multiple deployments in parallel, for example,
to different geographic regions.
Multi-configuration testing: You can run tests for multiple configurations in parallel.
YAML
Classic
The matrix strategy enables a job to be dispatched multiple times, with different variable sets. The
maxParallel tag restricts the amount of parallelism. The following job will be dispatched three times with
the values of Location and Browser set as specified. However, only two jobs will run at the same time.

jobs:
- job: Test
strategy:
maxParallel: 2
matrix:
US_IE:
Location: US
Browser: IE
US_Chrome:
Location: US
Browser: Chrome
Europe_Chrome:
Location: Europe
Browser: Chrome

NOTE
Matrix configuration names (like US_IE above) must contain only basic Latin alphabet letters (A-Z, a-z), numbers,
and underscores ( _ ). They must start with a letter. Also, they must be 100 characters or less.

It's also possible to use output variables to generate a matrix. This can be handy if you need to generate
the matrix using a script.
matrix will accept a runtime expression containing a stringified JSON object. That JSON object, when
expanded, must match the matrixing syntax. In the example below, we've hard-coded the JSON string, but
it could be generated by a scripting language or command-line program.

jobs:
- job: generator
steps:
- bash: echo "##vso[task.setVariable variable=legs;isOutput=true]{'a':{'myvar':'A'}, 'b':
{'myvar':'B'}}"
name: mtrx
# This expands to the matrix
# a:
# myvar: A
# b:
# myvar: B
- job: runner
dependsOn: generator
strategy:
matrix: $[ dependencies.generator.outputs['mtrx.legs'] ]
steps:
- script: echo $(myvar) # echos A or B depending on which leg is running

YAML is not supported in TFS.

Slicing
An agent job can be used to run a suite of tests in parallel. For example, you can run a large suite of 1000
tests on a single agent. Or, you can use two agents and run 500 tests on each one in parallel.
To leverage slicing, the tasks in the job should be smart enough to understand the slice they belong to.
The Visual Studio Test task is one such task that supports test slicing. If you have installed multiple agents,
you can specify how the Visual Studio Test task will run in parallel on these agents.
YAML
Classic
The parallel strategy enables a job to be duplicated many times. Variables System.JobPositionInPhase
and System.TotalJobsInPhase are added to each job. The variables can then be used within your scripts to
divide work among the jobs. See Parallel and multiple execution using agent jobs.
The following job will be dispatched five times with the values of System.JobPositionInPhase and
System.TotalJobsInPhase set appropriately.

jobs:
- job: Test
strategy:
parallel: 5

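As an illustrative sketch (the echoed commands are placeholders, not part of the walkthrough above), a script in each duplicated job can read these variables to decide which slice of work to run:

jobs:
- job: Test
  strategy:
    parallel: 5
  steps:
  - script: |
      echo "This is slice $(System.JobPositionInPhase) of $(System.TotalJobsInPhase)"
      # A custom test runner could use these two values to select, for example,
      # every Nth test file so each job runs a distinct subset of the suite.
    displayName: Run my slice of tests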
YAML is not yet supported in TFS.

Job variables
If you are using YAML, variables can be specified on the job. The variables can be passed to task inputs
using the macro syntax $(variableName), or accessed within a script using the environment variable.
YAML
Classic
Here's an example of defining variables in a job and using them within tasks.

variables:
mySimpleVar: simple var value
"my.dotted.var": dotted var value
"my var with spaces": var with spaces value

steps:
- script: echo Input macro = $(mySimpleVar). Env var = %MYSIMPLEVAR%
condition: eq(variables['agent.os'], 'Windows_NT')
- script: echo Input macro = $(mySimpleVar). Env var = $MYSIMPLEVAR
condition: in(variables['agent.os'], 'Darwin', 'Linux')
- bash: echo Input macro = $(my.dotted.var). Env var = $MY_DOTTED_VAR
- powershell: Write-Host "Input macro = $(my var with spaces). Env var = $env:MY_VAR_WITH_SPACES"

YAML is not yet supported in TFS.


For information about using a condition , see Specify conditions.

Workspace
When you run an agent pool job, it creates a workspace on the agent. The workspace is a directory in
which it downloads the source, runs steps, and produces outputs. The workspace directory can be
referenced in your job using Pipeline.Workspace variable. Under this, various subdirectories are created:
When you run an agent pool job, it creates a workspace on the agent. The workspace is a directory in
which it downloads the source, runs steps, and produces outputs. The workspace directory can be
referenced in your job using Agent.BuildDirectory variable. Under this, various subdirectories are
created:
Build.SourcesDirectory is where tasks download the application's source code.
Build.ArtifactStagingDirectory is where tasks download artifacts needed for the pipeline or upload
artifacts before they are published.
Build.BinariesDirectory is where tasks write their outputs.
Common.TestResultsDirectory is where tasks upload their test results.

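For example, a quick way to see these locations on a given agent is to print them from a script step (the exact paths vary by agent):

steps:
- script: |
    echo "Workspace:        $(Pipeline.Workspace)"
    echo "Sources:          $(Build.SourcesDirectory)"
    echo "Artifact staging: $(Build.ArtifactStagingDirectory)"
    echo "Binaries:         $(Build.BinariesDirectory)"
    echo "Test results:     $(Common.TestResultsDirectory)"
  displayName: Show workspace directories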
YAML
Classic
When you run a pipeline on a self-hosted agent , by default, none of the subdirectories are cleaned in
between two consecutive runs. As a result, you can do incremental builds and deployments, provided that
tasks are implemented to make use of that. You can override this behavior using the workspace setting on
the job.

IMPORTANT
The workspace clean options are applicable only for self-hosted agents. When using Microsoft-hosted agents, jobs are always run on a new agent.

- job: myJob
workspace:
clean: outputs | resources | all # what to clean up before the job runs

When you specify one of the clean options, they are interpreted as follows:
outputs : Delete Build.BinariesDirectory before running a new job.
resources : Delete Build.SourcesDirectory before running a new job.
all : Delete the entire Pipeline.Workspace directory before running a new job.

$(Build.ArtifactStagingDirectory) and $(Common.TestResultsDirectory) are always deleted and recreated prior to every build regardless of any of these settings.

NOTE
Depending on your agent capabilities and pipeline demands, each job may be routed to a different agent in your
self-hosted pool. As a result, you may get a new agent for subsequent pipeline runs (or stages or jobs in the same
pipeline), so not cleaning is not a guarantee that subsequent runs, jobs, or stages will be able to access outputs
from previous runs, jobs, or stages. You can configure agent capabilities and pipeline demands to specify which
agents are used to run a pipeline job, but unless there is only a single agent in the pool that meets the demands,
there is no guarantee that subsequent jobs will use the same agent as previous jobs. For more information, see
Specify demands.

In addition to workspace clean, you can also configure cleaning by configuring the Clean setting in the
pipeline settings UI. When the Clean setting is true it is equivalent to specifying clean: true for every
checkout step in your pipeline. To configure the Clean setting:
1. Edit your pipeline, choose ..., and select Triggers .

2. Select YAML , Get sources , and configure your desired Clean setting. The default is false .
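For reference, the equivalent in YAML is an explicit checkout step with clean enabled:

steps:
- checkout: self
  clean: true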

YAML is not yet supported in TFS.

Artifact download
This example YAML file publishes the artifact WebSite and then downloads the artifact to
$(Pipeline.Workspace) . The Deploy job only runs if the Build job is successful.
YAML
Classic

# test and upload my code as an artifact named WebSite


jobs:
- job: Build
pool:
vmImage: 'ubuntu-16.04'
steps:
- script: npm test
- task: PublishBuildArtifacts@1
inputs:
pathtoPublish: '$(System.DefaultWorkingDirectory)'
artifactName: WebSite

# download the artifact and deploy it only if the build job succeeded
- job: Deploy
pool:
vmImage: 'ubuntu-16.04'
steps:
- checkout: none #skip checking out the default repository resource
- task: DownloadBuildArtifacts@0
displayName: 'Download Build Artifacts'
inputs:
artifactName: WebSite
downloadPath: $(System.DefaultWorkingDirectory)

dependsOn: Build
condition: succeeded()

YAML is not yet supported in TFS.


For information about using dependsOn and condition , see Specify conditions.

Access to OAuth token


You can allow scripts running in a job to access the current Azure Pipelines or TFS OAuth security token.
The token can be use to authenticate to the Azure Pipelines REST API.
YAML
Classic
The OAuth token is always available to YAML pipelines. It must be explicitly mapped into the task or step
using env . Here's an example:

steps:
- powershell: |
$url = "$($env:SYSTEM_TEAMFOUNDATIONCOLLECTIONURI)$env:SYSTEM_TEAMPROJECTID/_apis/build/definitions/$($env:SYSTEM_DEFINITIONID)?api-version=4.1-preview"
Write-Host "URL: $url"
$pipeline = Invoke-RestMethod -Uri $url -Headers @{
Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN"
}
Write-Host "Pipeline = $($pipeline | ConvertTo-Json -Depth 100)"
env:
SYSTEM_ACCESSTOKEN: $(system.accesstoken)

YAML is not yet supported in TFS.


Related articles
Deployment group jobs
Conditions
Define container jobs (YAML)

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019
By default, jobs run on the host machine where the agent is installed. This is convenient and typically well-suited
for projects that are just beginning to adopt Azure Pipelines. Over time, you may find that you want more
control over the context where your tasks run. YAML pipelines offer container jobs for this level of control.
On Linux and Windows agents, jobs may be run on the host or in a container. (On macOS and Red Hat
Enterprise Linux 6, container jobs are not available.) Containers provide isolation from the host and allow you to
pin specific versions of tools and dependencies. Host jobs require less initial setup and infrastructure to
maintain.
Containers offer a lightweight abstraction over the host operating system. You can select the exact versions of
operating systems, tools, and dependencies that your build requires. When you specify a container in your
pipeline, the agent will first fetch and start the container. Then, each step of the job will run inside the container.
You cannot have nested containers. Containers are not supported when an agent is already running inside a
container.
If you need fine-grained control at the individual step level, step targets allow you to choose container or host
for each step.

Requirements
Linux-based containers
The Azure Pipelines system requires a few things in Linux-based containers:
Bash
glibc-based
Can run Node.js (which the agent provides)
Does not define an ENTRYPOINT
USER has access to groupadd and other privileged commands without sudo

And on your agent host:


Ensure Docker is installed
The agent must have permission to access the Docker daemon
Be sure your container has each of these tools available. Some of the extremely stripped-down containers
available on Docker Hub, especially those based on Alpine Linux, don't satisfy these minimum requirements.
Containers with an ENTRYPOINT might not work, since Azure Pipelines will docker create an awaiting container
and docker exec a series of commands that expect the container to always be up and running.

NOTE
For Windows-based Linux containers, Node.js must be pre-installed.

Windows Containers
Azure Pipelines can also run Windows Containers. Windows Server version 1803 or higher is required. Docker
must be installed. Be sure your pipelines agent has permission to access the Docker daemon.
The Windows container must support running Node.js. A base Windows Nano Server container is missing
dependencies required to run Node. See this post for more information about what it takes to run Node on
Windows Nano Server.
Hosted agents
Only windows-2019 and ubuntu-* images support running containers. The macOS image does not support
running containers.

Single job
A simple example:

pool:
vmImage: 'ubuntu-18.04'

container: ubuntu:18.04

steps:
- script: printenv

This tells the system to fetch the ubuntu image tagged 18.04 from Docker Hub and then start the container.
When the printenv command runs, it will happen inside the ubuntu:18.04 container.
A Windows example:

pool:
vmImage: 'windows-2019'

container: mcr.microsoft.com/windows/servercore:ltsc2019

steps:
- script: set

NOTE
Windows requires that the kernel version of the host and container match. Since this example uses the Windows 2019
image, we will use the 2019 tag for the container.

Multiple jobs
Containers are also useful for running the same steps in multiple jobs. In the following example, the same steps
run in multiple versions of Ubuntu Linux. (And we don't have to mention the jobs keyword, since there's only a
single job defined.)
pool:
vmImage: 'ubuntu-18.04'

strategy:
matrix:
ubuntu14:
containerImage: ubuntu:14.04
ubuntu16:
containerImage: ubuntu:16.04
ubuntu18:
containerImage: ubuntu:18.04

container: $[ variables['containerImage'] ]

steps:
- script: printenv

Endpoints
Containers can be hosted on registries other than Docker Hub. To host an image on Azure Container Registry or
another private container registry, add a service connection to the private registry. Then you can reference it in a
container spec:

container:
image: myprivate/registry:ubuntu1604
endpoint: private_dockerhub_connection

steps:
- script: echo hello

or

container:
image: myprivate.azurecr.io/windowsservercore:1803
endpoint: my_acr_connection

steps:
- script: echo hello

Other container registries may also work. Amazon ECR doesn't currently work, as there are additional client
tools required to convert AWS credentials into something Docker can use to authenticate.

NOTE
The Red Hat Enterprise Linux 6 build of the agent won't run container jobs. Choose another Linux flavor, such as Red Hat
Enterprise Linux 7 or above.

Options
If you need to control container startup, you can specify options .
container:
image: ubuntu:18.04
options: --hostname container-test --ip 192.168.0.1

steps:
- script: echo hello

Running docker create --help will give you the list of supported options.

Reusable container definition


In the following example, the containers are defined in the resources section. Each container is then referenced
later, by referring to its assigned alias. (Here, we explicitly list the jobs keyword for clarity.)

resources:
containers:
- container: u14
image: ubuntu:14.04

- container: u16
image: ubuntu:16.04

- container: u18
image: ubuntu:18.04

jobs:
- job: RunInContainer
pool:
vmImage: 'ubuntu-18.04'

strategy:
matrix:
ubuntu14:
containerResource: u14
ubuntu16:
containerResource: u16
ubuntu18:
containerResource: u18

container: $[ variables['containerResource'] ]

steps:
- script: printenv

Non glibc-based containers


The Azure Pipelines agent supplies a copy of Node.js, which is required to run tasks and scripts. The version of
Node.js is compiled against the C runtime we use in our hosted cloud, typically glibc. Some variants of Linux use
other C runtimes. For instance, Alpine Linux uses musl.
If you want to use a non-glibc-based container as a job container, you will need to arrange a few things on your
own. First, you must supply your own copy of Node.js. Second, you must add a label to your image telling the
agent where to find the Node.js binary. Finally, stock Alpine doesn't come with other dependencies that Azure
Pipelines depends on: bash, sudo, which, and groupadd.
Bring your own Node.js
You are responsible for adding a Node binary to your container. Node 6 is a safe choice. You can start from the
node:6-alpine image.
Tell the agent about Node.js
The agent will read a container label "com.azure.dev.pipelines.agent.handler.node.path". If this label exists, it must be
the path to the Node.js binary. For example, in an image based on node:10-alpine , add this line to your
Dockerfile:

LABEL "com.azure.dev.pipelines.agent.handler.node.path"="/usr/local/bin/node"

Add requirements
Azure Pipelines assumes a Bash-based system with common administration packages installed. Alpine Linux in
particular doesn't come with several of the packages needed. Installing bash , sudo , and shadow will cover the
basic needs.

RUN apk add bash sudo shadow

If you depend on any in-box or Marketplace tasks, you'll also need to supply the binaries they require.
Full example of a Dockerfile

FROM node:10-alpine

RUN apk add --no-cache --virtual .pipeline-deps readline linux-pam \
    && apk add bash sudo shadow \
    && apk del .pipeline-deps

LABEL "com.azure.dev.pipelines.agent.handler.node.path"="/usr/local/bin/node"

CMD [ "node" ]

Multiple jobs with agent pools on a single hosted agent


The container job uses the underlying host agent Docker config.json for image registry authorization, which logs
out at the end of the Docker registry container initialization. Authorization for subsequent registry image pulls might be denied with an "unauthorized authentication" error, because the Docker config.json file registered in the system for authentication has already been logged out by one of the other container jobs that are running in parallel.
The solution is to set the Docker environment variable DOCKER_CONFIG that is specific to each agent pool service
running on the hosted agent. Export the DOCKER_CONFIG in each agent pool’s runsvc.sh script:

#insert anything to set up env when running as a service
export DOCKER_CONFIG=./.docker
Add stages, dependencies, & conditions

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

The concept of stages varies depending on whether you use YAML pipelines or classic release pipelines.
YAML
Classic
You can organize the jobs in your pipeline into stages. Stages are the major divisions in a pipeline: "build this
app", "run these tests", and "deploy to pre-production" are good examples of stages. They are a logical boundary
in your pipeline at which you can pause the pipeline and perform various checks.
Every pipeline has at least one stage even if you do not explicitly define it. Stages may be arranged into a
dependency graph: "run this stage before that one".

NOTE
Support for stages was added in Azure DevOps Server 2019.1.

YAML is not supported in this version of TFS.

Specify stages
YAML
Classic

NOTE
Support for stages was added in Azure DevOps Server 2019.1.

In the simplest case, you do not need any logical boundaries in your pipeline. In that case, you do not have to
explicitly use the stage keyword. You can directly specify the jobs in your YAML file.

# this has one implicit stage and one implicit job


pool:
vmImage: 'ubuntu-16.04'
steps:
- bash: echo "Hello world"
# this pipeline has one implicit stage
jobs:
- job: A
steps:
- bash: echo "A"

- job: B
steps:
- bash: echo "B"

If you organize your pipeline into multiple stages, you use the stages keyword.

stages:
- stage: A
jobs:
- job: A1
- job: A2

- stage: B
jobs:
- job: B1
- job: B2

If you choose to specify a pool at the stage level, then all jobs defined in that stage will use that pool unless
otherwise specified at the job-level.

NOTE
In Azure DevOps Server 2019, pools can only be specified at job level.

stages:
- stage: A
pool: StageAPool
jobs:
- job: A1 # will run on "StageAPool" pool based on the pool defined on the stage
- job: A2 # will run on "JobPool" pool
pool: JobPool

The full syntax to specify a stage is:

stages:
- stage: string # name of the stage, A-Z, a-z, 0-9, and underscore
displayName: string # friendly name to display in the UI
dependsOn: string | [ string ]
condition: string
pool: string | pool
variables: { string: string } | [ variable | variableReference ]
jobs: [ job | templateReference]

YAML is not supported in this version of TFS.

Specify dependencies
YAML
Classic
NOTE
Support for stages was added in Azure DevOps Server 2019.1.

When you define multiple stages in a pipeline, by default, they run one after the other in the order in which you
define them in the YAML file. Pipelines must contain at least one stage with no dependencies.
The syntax for defining multiple stages and their dependencies is:

stages:
- stage: string
dependsOn: string
condition: string

Example stages that run sequentially:

# if you do not use a dependsOn keyword, stages run in the order they are defined
stages:
- stage: QA
jobs:
- job:
...

- stage: Prod
jobs:
- job:
...

Example stages that run in parallel:

stages:
- stage: FunctionalTest
jobs:
- job:
...

- stage: AcceptanceTest
dependsOn: [] # this removes the implicit dependency on previous stage and causes this to run in parallel
jobs:
- job:
...

Example of fan-out and fan-in:

stages:
- stage: Test

- stage: DeployUS1
dependsOn: Test # this stage runs after Test

- stage: DeployUS2
dependsOn: Test # this stage runs in parallel with DeployUS1, after Test

- stage: DeployEurope
dependsOn: # this stage runs after DeployUS1 and DeployUS2
- DeployUS1
- DeployUS2
YAML is not supported in this version of TFS.

Conditions
You can specify the conditions under which each stage runs. By default, a stage runs if it does not depend on any
other stage, or if all of the stages that it depends on have completed and succeeded. You can customize this
behavior by forcing a stage to run even if a previous stage fails or by specifying a custom condition.

NOTE
Conditions for failed ('JOBNAME/STAGENAME') and succeeded ('JOBNAME/STAGENAME'), as shown in the following example, work only for YAML pipelines.

YAML
Classic

NOTE
Support for stages was added in Azure DevOps Server 2019.1.

Example to run a stage based upon the status of running a previous stage:

stages:
- stage: A

# stage B runs if A fails


- stage: B
condition: failed()

# stage C runs if B succeeds


- stage: C
dependsOn:
- A
- B
condition: succeeded('B')

Example of using a custom condition:

stages:
- stage: A

- stage: B
condition: and(succeeded(), eq(variables['build.sourceBranch'], 'refs/heads/master'))

You cannot currently specify that a stage run based on the value of an output variable set in a previous stage.
YAML is not supported in this version of TFS.

Specify queuing policies


YAML
Classic
Queuing policies are not yet supported in YAML pipelines. At present, each run of a pipeline is independent from
and unaware of other runs. In other words, your two successive commits may trigger two pipelines, and both of
them will execute the same sequence of stages without waiting for each other. While we work to bring queuing
policies to YAML pipelines, we recommend that you use manual approvals to manually sequence and control the order of execution, if this is important to you.
YAML is not supported in this version of TFS.

Specify approvals
YAML
Classic
You can manually control when a stage should run using approval checks. This is commonly used to control
deployments to production environments. Checks are a mechanism available to the resource owner to control if
and when a stage in a pipeline can consume a resource. As an owner of a resource, such as an environment, you
can define checks that must be satisfied before a stage consuming that resource can start.
Currently, manual approval checks are supported on environments. For more information, see Approvals.
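As a sketch (the environment name production is a placeholder), a deployment job that targets an environment picks up whatever approvals and checks are configured on that environment, and the stage waits for them before the deploy steps run:

stages:
- stage: DeployProd
  jobs:
  - deployment: DeployWeb
    environment: production # approvals/checks defined on this environment gate the stage
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo deploying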
Approvals are not yet supported in YAML pipelines in this version of Azure DevOps Server.
YAML is not supported in this version of TFS.
Deployment jobs

Azure Pipelines | Azure DevOps Server 2020

IMPORTANT
Job and stage names cannot contain keywords (example: deployment ).
Each job in a stage must have a unique name.

In YAML pipelines, we recommend that you put your deployment steps in a special type of job called a
deployment job. A deployment job is a collection of steps that are run sequentially against the environment. A
deployment job and a traditional job can exist in the same stage.
Deployment jobs provide the following benefits:
Deployment history: You get the deployment history across pipelines, down to a specific resource and
status of the deployments, for auditing.
Apply deployment strategy: You define how your application is rolled out.

NOTE
We currently support only the runOnce, rolling, and the canary strategies.

Schema
Here is the full syntax to specify a deployment job:

jobs:
- deployment: string   # Name of the deployment job, A-Z, a-z, 0-9, and underscore. The word "deploy" is a keyword and is unsupported as the deployment name.
  displayName: string  # Friendly name to display in the UI.
  pool:                # See pool schema.
    name: string       # Use only global level variables for defining a pool name. Stage/job level variables are not supported to define pool name.
    demands: string | [ string ]
  dependsOn: string
  condition: string
  continueOnError: boolean                  # 'true' if future jobs should run even if this job fails; defaults to 'false'
  container: containerReference             # Container to run the job inside.
  services: { string: string | container }  # Container resources to run as a service container.
  timeoutInMinutes: nonEmptyString          # How long to run the job before automatically cancelling.
  cancelTimeoutInMinutes: nonEmptyString    # How much time to give 'run always even if cancelled tasks' before killing them.
  variables: { string: string } | [ variable | variableReference ]
  environment: string  # Target environment name and optionally a resource-name to record the deployment history; format: <environment-name>.<resource-name>
  strategy: [ deployment strategy ]         # See deployment strategy schema.

Deployment strategies
When you're deploying application updates, it's important that the technique you use to deliver the update will:
Enable initialization.
Deploy the update.
Route traffic to the updated version.
Test the updated version after routing traffic.
In case of failure, run steps to restore to the last known good version.
We achieve this by using lifecycle hooks that can run steps during deployment. Each of the lifecycle hooks
resolves into an agent job or a server job (or a container or validation job in the future), depending on the pool
attribute. By default, the lifecycle hooks will inherit the pool specified by the deployment job.
Deployment jobs use the $(Pipeline.Workspace) system variable.
Descriptions of lifecycle hooks
preDeploy : Used to run steps that initialize resources before application deployment starts.

deploy : Used to run steps that deploy your application. The Download artifact task is auto-injected only into the
deploy hook for deployment jobs. To stop downloading artifacts, use - download: none or choose specific
artifacts to download by specifying the Download Pipeline Artifact task (see the sketch after this list).
routeTraffic : Used to run steps that serve the traffic to the updated version.
postRouteTraffic : Used to run the steps after the traffic is routed. Typically, these tasks monitor the health of the
updated version for defined interval.
on: failure or on: success : Used to run steps for rollback actions or clean-up.
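
For example, here is a minimal sketch of a deploy hook that suppresses the auto-injected artifact download; the artifact name WebApp is illustrative and not part of the schema above.

strategy:
  runOnce:
    deploy:
      steps:
      - download: none                 # suppress the auto-injected artifact download for this hook
      # Or, download only one named artifact instead of everything:
      # - download: current
      #   artifact: WebApp
      - script: echo deploying without the default artifact download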
RunOnce deployment strategy
runOnce is the simplest deployment strategy wherein all the lifecycle hooks, namely preDeploy, deploy,
routeTraffic, and postRouteTraffic, are executed once. Then, either on: success or on: failure is
executed.

strategy:
runOnce:
preDeploy:
pool: [ server | pool ] # See pool schema.
steps:
- script: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
deploy:
pool: [ server | pool ] # See pool schema.
steps:
...
routeTraffic:
pool: [ server | pool ]
steps:
...
postRouteTraffic:
pool: [ server | pool ]
steps:
...
on:
failure:
pool: [ server | pool ]
steps:
...
success:
pool: [ server | pool ]
steps:
...
Rolling deployment strategy
A rolling deployment replaces instances of the previous version of an application with instances of the new
version of the application on a fixed set of virtual machines (rolling set) in each iteration.
We currently only support the rolling strategy to VM resources.
For example, a rolling deployment typically waits for deployments on each set of virtual machines to complete
before proceeding to the next set of deployments. You could do a health check after each iteration and if a
significant issue occurs, the rolling deployment can be stopped.
Rolling deployments can be configured by specifying the keyword rolling: under strategy: node. The
strategy.name variable is available in this strategy block, which takes the name of the strategy. In this case,
rolling.

strategy:
rolling:
maxParallel: [ number or percentage as x% ]
preDeploy:
steps:
- script: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
deploy:
steps:
...
routeTraffic:
steps:
...
postRouteTraffic:
steps:
...
on:
failure:
steps:
...
success:
steps:
...

All the lifecycle hooks are supported and lifecycle hook jobs are created to run on each VM.
preDeploy , deploy , routeTraffic , and postRouteTraffic are executed once per batch size defined by
maxParallel . Then, either on: success or on: failure is executed.
With maxParallel: <# or % of VMs> , you can control the number/percentage of virtual machine targets to deploy
to in parallel. This ensures that the app is running on these machines and is capable of handling requests while
the deployment is taking place on the rest of the machines, which reduces overall downtime.

NOTE
There are a few known gaps in this feature. For example, when you retry a stage, it will re-run the deployment on all VMs
not just failed targets.

Canary deployment strategy


Canary deployment strategy is an advanced deployment strategy that helps mitigate the risk involved in rolling
out new versions of applications. By using this strategy, you can roll out the changes to a small subset of servers
first. As you gain more confidence in the new version, you can release it to more servers in your infrastructure
and route more traffic to it.
You can only use the canary deployment strategy for Kubernetes resources.
strategy:
canary:
increments: [ number ]
preDeploy:
pool: [ server | pool ] # See pool schema.
steps:
- script: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
deploy:
pool: [ server | pool ] # See pool schema.
steps:
...
routeTraffic:
pool: [ server | pool ]
steps:
...
postRouteTraffic:
pool: [ server | pool ]
steps:
...
on:
failure:
pool: [ server | pool ]
steps:
...
success:
pool: [ server | pool ]
steps:
...

Canary deployment strategy supports the preDeploy lifecycle hook (executed once) and iterates with the deploy
, routeTraffic , and postRouteTraffic lifecycle hooks. It then exits with either the success or failure hook.
The following variables are available in this strategy:
strategy.name : Name of the strategy. For example, canary.
strategy.action : The action to be performed on the Kubernetes cluster. For example, deploy, promote, or reject.
strategy.increment : The increment value used in the current iteration. This variable is available only in the
deploy, routeTraffic, and postRouteTraffic lifecycle hooks.

Examples
RunOnce deployment strategy
The following example YAML snippet showcases a simple use of a deploy job by using the runOnce deployment
strategy.

jobs:
# Track deployments on the environment.
- deployment: DeployWeb
displayName: deploy Web App
pool:
vmImage: 'Ubuntu-16.04'
# Creates an environment if it doesn't exist.
environment: 'smarthotel-dev'
strategy:
# Default deployment strategy, more coming...
runOnce:
deploy:
steps:
- script: echo my first deployment

With each run of this job, deployment history is recorded against the smarthotel-dev environment.
NOTE
It's also possible to create an environment with empty resources and use that as an abstract shell to record deployment
history, as shown in the previous example.

The next example demonstrates how a pipeline can refer both an environment and a resource to be used as the
target for a deployment job.

jobs:
- deployment: DeployWeb
displayName: deploy Web App
pool:
vmImage: 'Ubuntu-16.04'
# Records deployment against bookings resource - Kubernetes namespace.
environment: 'smarthotel-dev.bookings'
strategy:
runOnce:
deploy:
steps:
# No need to explicitly pass the connection details.
- task: KubernetesManifest@0
displayName: Deploy to Kubernetes cluster
inputs:
action: deploy
namespace: $(k8sNamespace)
manifests: |
$(System.ArtifactsDirectory)/manifests/*
imagePullSecrets: |
$(imagePullSecret)
containers: |
$(containerRegistry)/$(imageRepository):$(tag)

This approach has the following benefits:


Records deployment history on a specific resource within the environment, as opposed to recording the
history on all resources within the environment.
Steps in the deployment job automatically inherit the connection details of the resource (in this case, a
Kubernetes namespace, smarthotel-dev.bookings ), because the deployment job is linked to the environment.
This is useful in the cases where the same connection detail is set for multiple steps of the job.
Rolling deployment strategy
The rolling strategy for VMs updates up to five targets in each iteration. maxParallel will determine the number
of targets that can be deployed to, in parallel. The selection accounts for absolute number or percentage of
targets that must remain available at any time excluding the targets that are being deployed to. It is also used to
determine the success and failure conditions during deployment.
jobs:
- deployment: VMDeploy
displayName: web
environment:
name: smarthotel-dev
resourceType: VirtualMachine
strategy:
rolling:
maxParallel: 5 #for percentages, mention as x%
preDeploy:
steps:
- download: current
artifact: drop
- script: echo initialize, cleanup, backup, install certs
deploy:
steps:
- task: IISWebAppDeploymentOnMachineGroup@0
displayName: 'Deploy application to Website'
inputs:
WebSiteName: 'Default Web Site'
Package: '$(Pipeline.Workspace)/drop/**/*.zip'
routeTraffic:
steps:
- script: echo routing traffic
postRouteTraffic:
steps:
- script: echo health check post-route traffic
on:
failure:
steps:
- script: echo Restore from backup! This is on failure
success:
steps:
- script: echo Notify! This is on success

Canary deployment strategy


In the next example, the canary strategy for AKS will first deploy the changes with 10-percent pods, followed by
20 percent, while monitoring the health during postRouteTraffic . If all goes well, it will promote to 100 percent.
jobs:
- deployment:
  environment: smarthotel-dev.bookings
  pool:
    name: smarthotel-devPool
  strategy:
    canary:
      increments: [10,20]
      preDeploy:
        steps:
        - script: initialize, cleanup....
      deploy:
        steps:
        - script: echo deploy updates...
        - task: KubernetesManifest@0
          inputs:
            action: $(strategy.action)
            namespace: 'default'
            strategy: $(strategy.name)
            percentage: $(strategy.increment)
            manifests: 'manifest.yml'
      postRouteTraffic:
        pool: server
        steps:
        - script: echo monitor application health...
      on:
        failure:
          steps:
          - script: echo clean-up, rollback...
        success:
          steps:
          - script: echo checks passed, notify...

Use pipeline decorators to inject steps automatically


Pipeline decorators can be used in deployment jobs to auto-inject any custom step (for example, vulnerability
scanner) to every lifecycle hook execution of every deployment job. Since pipeline decorators can be applied to
all pipelines in an organization, this can be leveraged as part of enforcing safe deployment practices.
In addition, deployment jobs can be run as a container job along with services side-car if defined.
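
As a rough sketch of that combination (the container image names, environment, and pool here are illustrative assumptions, not prescribed values), a deployment job can declare a job container plus a service side-car:

resources:
  containers:
  - container: redis                   # side-car service container resource (illustrative)
    image: redis

jobs:
- deployment: DeployInContainer
  environment: 'smarthotel-dev'
  pool:
    vmImage: 'ubuntu-16.04'
  container: ubuntu:18.04              # the job's steps run inside this container (illustrative image)
  services:
    redis: redis                       # map a service alias to the container resource above
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo deploying from inside the job container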

Support for output variables


Define output variables in a deployment job's lifecycle hooks and consume them in other downstream steps and
jobs within the same stage.
To share variables between stages, output an artifact in one stage and then consume it in a subsequent stage, or
use the stageDependencies syntax described in variables.
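
For example, a sketch of the stageDependencies form for a regular job reading another stage's output variable; the stage, job, and step names here are illustrative:

stages:
- stage: BuildStage
  jobs:
  - job: BuildJob
    steps:
    - bash: echo "##vso[task.setvariable variable=myStageOutputVar;isOutput=true]this value crosses stages"
      name: setOutput

- stage: ReleaseStage
  dependsOn: BuildStage
  jobs:
  - job: ReadOutput
    variables:
      # stageDependencies.<stage-name>.<job-name>.outputs['<step-name>.<variable-name>']
      myVarFromPreviousStage: $[ stageDependencies.BuildStage.BuildJob.outputs['setOutput.myStageOutputVar'] ]
    steps:
    - script: echo $(myVarFromPreviousStage)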
While executing deployment strategies, you can access output variables across jobs using the following syntax.
For the runOnce strategy: $[dependencies.<job-name>.outputs['<job-name>.<step-name>.<variable-name>']]
For deployment jobs using the runOnce strategy and targeting a resource:
$[dependencies.<job-name>.outputs['<lifecycle-hookname>_<resource-name>.<step-name>.<variable-name>']]
For example, if you have a deployment job to a virtual machine named Vm1, the output variable
would be $[dependencies.<job-name>.outputs['Deploy_vm1.<step-name>.<variable-name>']]
For the canary strategy:
$[dependencies.<job-name>.outputs['<lifecycle-hookname>_<increment-value>.<step-name>.<variable-name>']]
For the rolling strategy:
$[dependencies.<job-name>.outputs['<lifecycle-hookname>_<resource-name>.<step-name>.<variable-name>']]

# Set an output variable in a lifecycle hook of a deployment job executing canary strategy.
- deployment: A
pool:
vmImage: 'ubuntu-16.04'
environment: staging
strategy:
canary:
increments: [10,20] # Creates multiple jobs, one for each increment. Output variable can be referenced with this.
deploy:
steps:
- bash: echo "##vso[task.setvariable variable=myOutputVar;isOutput=true]this is the deployment variable value"
name: setvarStep
- bash: echo $(setvarStep.myOutputVar)
name: echovar

# Map the variable from the job.


- job: B
dependsOn: A
pool:
vmImage: 'ubuntu-16.04'
variables:
myVarFromDeploymentJob: $[ dependencies.A.outputs['deploy_10.setvarStep.myOutputVar'] ]
steps:
- script: "echo $(myVarFromDeploymentJob)"
name: echovar

For a runOnce job, specify the name of the job instead of the lifecycle hook:

# Set an output variable in a lifecycle hook of a deployment job executing runOnce strategy.
- deployment: A
pool:
vmImage: 'ubuntu-16.04'
environment: staging
strategy:
runOnce:
deploy:
steps:
- bash: echo "##vso[task.setvariable variable=myOutputVar;isOutput=true]this is the deployment variable value"
name: setvarStep
- bash: echo $(setvarStep.myOutputVar)
name: echovar

# Map the variable from the job.


- job: B
dependsOn: A
pool:
vmImage: 'ubuntu-16.04'
variables:
myVarFromDeploymentJob: $[ dependencies.A.outputs['A.setvarStep.myOutputVar'] ]
steps:
- script: "echo $(myVarFromDeploymentJob)"
name: echovar

When you define an environment in a deployment job, the syntax of the output variable varies depending on
how the environment gets defined. In this example, env1 uses shorthand notation and env2 includes the full
syntax with a defined resource type.
stages:
- stage: MyStage
jobs:
- deployment: A1
pool:
vmImage: 'ubuntu-16.04'
environment: env1
strategy:
runOnce:
deploy:
steps:
- bash: echo "##vso[task.setvariable variable=myOutputVar;isOutput=true]this is the deployment variable value"
name: setvarStep
- bash: echo $(System.JobName)
- deployment: A2
pool:
vmImage: 'ubuntu-16.04'
environment:
name: env1
resourceType: virtualmachine
strategy:
runOnce:
deploy:
steps:
- script: echo "##vso[task.setvariable variable=myOutputVarTwo;isOutput=true]this is the second deployment variable value"
name: setvarStepTwo

- job: B1
dependsOn: A1
pool:
vmImage: 'ubuntu-16.04'
variables:
myVarFromDeploymentJob: $[ dependencies.A1.outputs['A1.setvarStep.myOutputVar'] ]

steps:
- script: "echo $(myVarFromDeploymentJob)"
name: echovar

- job: B2
dependsOn: A2
pool:
vmImage: 'ubuntu-16.04'
variables:
myVarFromDeploymentJob: $[ dependencies.A1.outputs['A1.setvarStepTwo.myOutputVar'] ]
myOutputVarTwo: $[ dependencies.A2.outputs['Deploy_vmsfortesting.setvarStepTwo.myOutputVarTwo'] ]

steps:
- script: "echo $(myOutputVarTwo)"
name: echovartwo

Learn more about how to set a multi-job output variable

FAQ
My pipeline is stuck with the message "Job is pending...". How can I fix this?
This can happen when there is a name conflict between two jobs. Verify that any deployment jobs in the same
stage have a unique name and that job and stage names do not contain keywords. If renaming does not fix the
problem, review troubleshooting pipeline runs.
Use a decorator to inject steps into a pipeline

Azure DevOps Services | Azure DevOps Server 2020

TIP
Check out our newest documentation on extension development using the Azure DevOps Extension SDK.

Pipeline decorators let you add steps to the beginning and end of every job. This process is different than adding
steps to a single definition because it applies to all pipelines in an organization.
Suppose our organization requires running a virus scanner on all build outputs that could be released. Pipeline
authors don't need to remember to add that step. We create a decorator that automatically injects the step. Our
pipeline decorator injects a custom task that does virus scanning at the end of every pipeline job.

Author a pipeline decorator


This example assumes you're familiar with the contribution models.
Start by creating an extension. After you follow the tutorial, you'll have a vss-extension.json file. In this file, add
contribution for our new pipeline decorator.
vss-extension.json

{
"manifestVersion": 1,
"contributions": [
{
"id": "my-required-task",
"type": "ms.azure-pipelines.pipeline-decorator",
"targets": [
"ms.azure-pipelines-agent-job.post-job-tasks"
],
"properties": {
"template": "my-decorator.yml"
}
}
],
"files": [
{
"path": "my-decorator.yml",
"addressable": true,
"contentType": "text/plain"
}
]
}

Contribution options
Let's take a look at the properties and what they're used for:

PROPERTY      DESCRIPTION

id            Contribution identifier. Must be unique among contributions in this extension.

type          Specifies that this contribution is a pipeline decorator. Must be the string ms.azure-pipelines.pipeline-decorator .

targets       Decorators can run before your job, after, or both. See the table below for available options.

properties    The only property required is a template . The template is a YAML file included in your extension, which defines the steps for your pipeline decorator. It's a relative path from the root of your extension folder.

Targets
TARGET                                                 DESCRIPTION

ms.azure-pipelines-agent-job.pre-job-tasks             Run before other tasks in a classic build or YAML pipeline. Due to differences in how source code checkout happens, this target will run before checkout in a YAML pipeline but after checkout in a classic build pipeline.

ms.azure-pipelines-agent-job.post-checkout-tasks       Run after the last checkout task in a classic build or YAML pipeline.

ms.azure-pipelines-agent-job.post-job-tasks            Run after other tasks in a classic build or YAML pipeline.

ms.azure-release-pipelines-agent-job.pre-job-tasks     Run before other tasks in a classic RM pipeline.

ms.azure-release-pipelines-agent-job.post-job-tasks    Run after other tasks in a classic RM pipeline.

In this example, we use ms.azure-pipelines-agent-job.post-job-tasks only because we want to run at the end of all
build jobs.
This extension contributes a pipeline decorator. Next, we'll create a template YAML file to define the decorator's
behavior.

Decorator YAML
In the extension's properties, we chose the name "my-decorator.yml". Create that file in the root of your
contribution. It holds the set of steps to run after each job. We'll start with a basic example and work up to the full
task.
my-decorator.yml (initial version )

steps:
- task: CmdLine@2
displayName: 'Run my script (injected from decorator)'
inputs:
script: dir

Installing the decorator


To add a pipeline decorator to your organization, you must install an extension. Only private extensions can
contribute pipeline decorators. The extension must be authored and shared with your organization before it
can be used.
Once the extension has been shared with your organization, search for the extension and install it.
Save the file, then build and install the extension. Create and run a basic pipeline. The decorator automatically
injects our dir script at the end of every job. A pipeline run looks similar to:

NOTE
The decorator runs on every job in every pipeline in the organization. In later steps, we'll add logic to control when and how
the decorator runs.

Conditional injection
In our example, we only need to run the virus scanner if the build outputs might be released to the public. Let's say
that only builds from the default branch (typically master ) are ever released. We should limit the decorator to jobs
running against the default branch.
The updated file looks like this:
my-decorator.yml (revised version )

steps:
- ${{ if eq(resources.repositories['self'].ref, resources.repositories['self'].defaultBranch) }}:
- script: dir
displayName: 'Run my script (injected from decorator)'

You can start to see the power of this extensibility point. Use the context of the current job to conditionally inject
steps at runtime. Use YAML expressions to make decisions about what steps to inject and when. See pipeline
decorator expression context for a full list of available data.
There's another condition we need to consider: what if the user already included the virus scanning step? We
shouldn't waste time running it again. In this simple example, we'll pretend that any script task found in the job is
running the virus scanner. (In a real implementation, you'd have a custom task to check for that instead.)
The script task's ID is d9bafed4-0b18-4f58-968d-86655b4d2ce9 . If we see another script task, we shouldn't inject ours.
my-decorator.yml (final version )

steps:
- ${{ if and(eq(resources.repositories['self'].ref, resources.repositories['self'].defaultBranch),
not(containsValue(job.steps.*.task.id, 'd9bafed4-0b18-4f58-968d-86655b4d2ce9'))) }}:
- script: dir
displayName: 'Run my script (injected from decorator)'

Debugging
While authoring your pipeline decorator, you'll likely need to debug. You also may want to see what data you have
available in the context.
You can set the system.debugContext variable to true when you queue a pipeline. Then, look at the pipeline
summary page.
You see something similar to the following image:

Select the task to see the logs, which report the available context and runtime values.

Helpful Links
Learn more about YAML expression syntax.
Pipeline decorator expression context

Azure DevOps Services


Pipeline decorators have access to context about the pipeline in which they run. As a pipeline decorator author, you
can use this context to make decisions about the decorator's behavior. The information available in context is
different for pipelines and for release. Also, decorators run after task names are resolved to task GUIDs. When your
decorator wants to reference a task, it should use the GUID rather than the name or keyword.

TIP
Check out our newest documentation on extension development using the Azure DevOps Extension SDK.

Resources
Pipeline resources are available on the resources object.
Repositories
Currently, there's only one key: repositories . repositories is a map from repo ID to information about the
repository.
In a designer build, the primary repo alias is __designer_repo . In a YAML pipeline, the primary repo is called self .
In a release pipeline, repositories aren't available. Release artifact variables are available.
For example, to print the name of the self repo in a YAML pipeline:

steps:
- script: echo ${{ resources.repositories['self'].name }}

Repositories contain these properties:

resources['repositories']['self'] =
{
"alias": "self",
"id": "<repo guid>",
"type": "Git",
"version": "<commit hash>",
"name": "<repo name>",
"project": "<project guid>",
"defaultBranch": "<default ref of repo, like 'refs/heads/master'>",
"ref": "<current pipeline ref, like 'refs/heads/topic'>",
"versionInfo": {
"author": "<author of tip commit>",
"message": "<commit message of tip commit>"
},
"checkoutOptions": {}
}

Job
Job details are available on the job object.
The data looks similar to:
job =
{
"steps": [
{
"environment": null,
"inputs": {
"script": "echo hi"
},
"type": "Task",
"task": {
"id": "d9bafed4-0b18-4f58-968d-86655b4d2ce9",
"name": "CmdLine",
"version": "2.146.1"
},
"condition": null,
"continueOnError": false,
"timeoutInMinutes": 0,
"id": "5c09f0b5-9bc3-401f-8cfb-09c716403f48",
"name": "CmdLine",
"displayName": "CmdLine",
"enabled": true
}
]
}

For instance, to conditionally add a task only if it doesn't already exist:

- ${{ if not(containsValue(job.steps.*.task.id, 'f3ab91e7-bed6-436a-b651-399a66fe6c2a')) }}:
  - script: echo conditionally inserted

Variables
Pipeline variables are also available.
For instance, if the pipeline had a variable called myVar , its value would be available to the decorator as
variables['myVar'] .

For example, to give a decorator an opt-out, we could look for a variable. Pipeline authors who wish to opt out of
the decorator can set this variable, and the decorator won't be injected. If the variable isn't present, then the
decorator is injected as usual.
my-decorator.yml

- ${{ if ne(variables['skipInjecting'], 'true') }}:
  - script: echo Injected the decorator

Then, in a pipeline in the organization, the author can request the decorator not to inject itself.
pipeline-with-opt-out.yml

variables:
skipInjecting: true
steps:
- script: echo This is the only step. No decorator is added.

Task names and GUIDs


Decorators run after tasks have already been turned into GUIDs. Consider the following YAML:
steps:
- checkout: self
- bash: echo This is the Bash task
- task: PowerShell@2
inputs:
targetType: inline
script: Write-Host This is the PowerShell task

Each of those steps maps to a task. Each task has a unique GUID. Task names and keywords map to task GUIDs
before decorators run. If a decorator wants to check for the existence of another task, it must search by task GUID
rather than by name or keyword.
For normal tasks (which you specify with the task keyword), you can look at the task's task.json to determine its
GUID. For special keywords like checkout and bash in the example above, you can use the following GUIDs:

KEYWORD      GUID                                    TASK NAME

checkout     6D15AF64-176C-496D-B583-FD2AE21D4DF4   n/a, see note below

bash         6C731C3C-3C68-459A-A5C9-BDE6E6595B5B   Bash

script       D9BAFED4-0B18-4F58-968D-86655B4D2CE9   CmdLine

powershell   E213FF0F-5D5C-4791-802D-52EA3E7BE1F1   PowerShell

pwsh         E213FF0F-5D5C-4791-802D-52EA3E7BE1F1   PowerShell

publish      ECDC45F6-832D-4AD9-B52B-EE49E94659BE   PublishPipelineArtifact

download     61F2A582-95AE-4948-B34D-A1B3C4F6A737   DownloadPipelineArtifact

After resolving task names and keywords, the above YAML becomes:

steps:
- task: 6D15AF64-176C-496D-B583-FD2AE21D4DF4@1
inputs:
repository: self
- task: 6C731C3C-3C68-459A-A5C9-BDE6E6595B5B@3
inputs:
targetType: inline
script: echo This is the Bash task
- task: E213FF0F-5D5C-4791-802D-52EA3E7BE1F1@2
inputs:
targetType: inline
script: Write-Host This is the PowerShell task

TIP
Each of these GUIDs can be found in the task.json for the corresponding in-box task. The only exception is checkout ,
which is a native capability of the agent. Its GUID is built into the Azure Pipelines service and agent.
Specify conditions

Azure Pipelines | TFS 2018 | TFS 2017.3


You can specify the conditions under which each stage, job, or step runs. By default, a job or stage runs if it
does not depend on any other job or stage, or if all of the jobs or stages that it depends on have completed
and succeeded. By default, a step runs if nothing in its job has failed yet and the step immediately preceding it
has finished. You can customize this behavior by forcing a stage, job, or step to run even if a previous
dependency fails or by specifying a custom condition.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called
definitions, runs are called builds, service connections are called service endpoints, stages are called environments, and
jobs are called phases.

You can specify conditions under which a step, job, or stage will run.
Only when all previous dependencies have succeeded. This is the default if there is not a condition set
in the YAML.
Even if a previous dependency has failed, unless the run was canceled. Use succeededOrFailed() in the
YAML for this condition.
Even if a previous dependency has failed, even if the run was canceled. Use always() in the YAML for
this condition.
Only when a previous dependency has failed. Use failed() in the YAML for this condition.
Custom conditions
By default, steps, jobs, and stages run if all previous steps/jobs have succeeded. It's as if you specified
"condition: succeeded()" (see Job status functions).

jobs:
- job: Foo

steps:
- script: echo Hello!
condition: always() # this step will always run, even if the pipeline is canceled

- job: Bar
dependsOn: Foo
condition: failed() # this job will only run if Foo fails
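
Building on the list of built-in checks above, the following sketch (the steps themselves are illustrative) uses succeededOrFailed() so a step runs after a failure but not after the run is canceled:

steps:
- script: ./build.sh
  displayName: Build
- script: echo publish test results even if the build step failed
  condition: succeededOrFailed()   # runs on success or failure, but not if the run was canceled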

You can also use variables in conditions.


variables:
isMain: $[eq(variables['Build.SourceBranch'], 'refs/heads/master')]

stages:
- stage: A
jobs:
- job: A1
steps:
- script: echo Hello Stage A!

- stage: B
condition: and(succeeded(), eq(variables.isMain, true))
jobs:
- job: B1
steps:
- script: echo Hello Stage B!
- script: echo $(isMain)

Conditions are evaluated to decide whether to start a stage, job, or step. This means that nothing computed at
runtime inside that unit of work will be available. For example, if you have a job which sets a variable using a
runtime expression using $[ ] syntax, you can't use that variable in your custom condition.

Enable a custom condition


If the built-in conditions don't meet your needs, then you can specify custom conditions .

In TFS 2017.3, custom task conditions are available in the user interface only for Build pipelines. You can
use the Release REST APIs to establish custom conditions for Release pipelines.

Conditions are written as expressions. The agent evaluates the expression beginning with the innermost
function and works its way out. The final result is a boolean value that determines if the task, job, or stage
should run or not. See the expressions topic for a full guide to the syntax.
Do any of your conditions make it possible for the task to run even after the build is canceled by a user? If so,
then specify a reasonable value for cancel timeout so that these kinds of tasks have enough time to complete
after the user cancels a run.
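
For instance, a minimal sketch (the timeout value and scripts are illustrative) that pairs an always-run cleanup step with a job-level cancel timeout:

jobs:
- job: BuildAndCleanUp
  cancelTimeoutInMinutes: 5          # give 'run always' steps up to 5 minutes after a cancel request
  steps:
  - script: ./long-running-build.sh
  - script: ./cleanup.sh
    displayName: Clean up
    condition: always()              # runs even if the run is canceled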

Examples
Run for the master branch, if succeeding

and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))

Run if the branch is not master, if succeeding

and(succeeded(), ne(variables['Build.SourceBranch'], 'refs/heads/master'))

Run for user topic branches, if succeeding

and(succeeded(), startsWith(variables['Build.SourceBranch'], 'refs/heads/users/'))

Run for continuous integration (CI ) builds if succeeding


and(succeeded(), in(variables['Build.Reason'], 'IndividualCI', 'BatchedCI'))

Run if the build is run by a branch policy for a pull request, if failing

and(failed(), eq(variables['Build.Reason'], 'PullRequest'))

Run if the build is scheduled, even if failing, even if canceled

and(always(), eq(variables['Build.Reason'], 'Schedule'))

Release.Artifacts.{artifact-alias}.SourceBranch is equivalent to Build.SourceBranch .

Run if a variable is null

variables:
- name: testNull
value: ''

jobs:
- job: A
steps:
- script: echo testNull is blank
condition: eq('${{ variables.testNull }}', '')

Use a template parameter as part of a condition


When you declare a parameter in the same pipeline that you have a condition, parameter expansion happens
before conditions are considered. In this case, you can embed parameters inside conditions. The script in this
YAML file will run because parameters.doThing is true.

parameters:
- name: doThing
default: true
type: boolean

steps:
- script: echo I did a thing
condition: and(succeeded(), eq('${{ parameters.doThing }}', true))

However, when you pass a parameter to a template, the parameter will not have a value when the condition
gets evaluated. As a result, if you set the parameter value in both the template and the pipeline YAML files, the
pipeline value from the template will get used in your condition.

# parameters.yml
parameters:
- name: doThing
default: false # value passed to the condition
type: boolean

jobs:
- job: B
steps:
- script: echo I did a thing
condition: and(succeeded(), eq('${{ parameters.doThing }}', true))
# azure-pipeline.yml
parameters:
- name: doThing
default: true # will not be evaluated in time
type: boolean

trigger:
- none

extends:
template: parameters.yml

Use the output variable from a job in a condition in a subsequent job


You can make a variable available to future jobs and specify it in a condition. Variables available to future jobs
must be marked as multi-job output variables using isOutput=true .

jobs:
- job: Foo
steps:
- bash: |
echo "This is job Foo."
echo "##vso[task.setvariable variable=doThing;isOutput=true]Yes" #set variable doThing to Yes
name: DetermineResult
- job: Bar
dependsOn: Foo
condition: eq(dependencies.Foo.outputs['DetermineResult.doThing'], 'Yes') # map doThing and check the value
steps:
- script: echo "Job Foo ran and doThing is Yes."

FAQ
I've got a conditional step that runs even when a job is canceled. Does my conditional step affect a job that
I canceled in the queue?
No. If you cancel a job while it's in the queue, then the entire job is canceled, including conditional steps.
I've got a conditional step that should run even when the deployment is canceled. How do I specify this?
If you defined the pipelines using a YAML file, then this is supported. This scenario is not yet supported for
release pipelines.
How can I trigger a job if a previous job succeeded with issues?
You can use the result of the previous job. For example, in this YAML file, the condition
eq(dependencies.A.result,'SucceededWithIssues') allows the job to run because Job A succeeded with issues.

jobs:
- job: A
displayName: Job A
continueOnError: true # next job starts even if this one fails
steps:
- script: echo Job A ran
- script: exit 1

- job: B
dependsOn: A
condition: eq(dependencies.A.result,'SucceededWithIssues') # targets the result of the previous job
displayName: Job B
steps:
- script: echo Job B ran
I've got a conditional step that runs even when a job is canceled. How do I manage to cancel all jobs at
once?
You'll experience this issue if the condition that's configured in the stage doesn't include a job status check
function. To resolve the issue, add a job status check function to the condition. If you cancel a job while it's in
the queue, the entire job is canceled, including all the other stages, with this function configured. For more
information, see Job status functions.

stages:
- stage: Stage1
displayName: Stage 1
dependsOn: []
condition: and(contains(variables['build.sourceBranch'], 'refs/heads/master'), succeeded())
jobs:
- job: ShowVariables
displayName: Show variables
steps:
- task: CmdLine@2
displayName: Show variables
inputs:
script: 'printenv'

- stage: Stage2
displayName: stage 2
dependsOn: Stage1
condition: contains(variables['build.sourceBranch'], 'refs/heads/master')
jobs:
- job: ShowVariables
displayName: Show variables 2
steps:
- task: CmdLine@2
displayName: Show variables 2
inputs:
script: 'printenv'

- stage: Stage3
displayName: stage 3
dependsOn: Stage2
condition: and(contains(variables['build.sourceBranch'], 'refs/heads/master'), succeeded())
jobs:
- job: ShowVariables
displayName: Show variables 3
steps:
- task: CmdLine@2
displayName: Show variables 3
inputs:
script: 'printenv'

Related articles
Specify jobs in your pipeline
Add stages, dependencies, & conditions
Specify demands

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

Use demands to make sure that the capabilities your pipeline needs are present on the agents that run it.
Demands are asserted automatically by tasks or manually by you.

NOTE
Demands and capabilities are designed for use with self-hosted agents so that jobs can be matched with an agent that
meets the requirements of the job. When using Microsoft-hosted agents, you select an image for the agent that matches
the requirements of the job, so although it is possible to add capabilities to a Microsoft-hosted agent, you don't need to
use capabilities with Microsoft-hosted agents.

Task demands
Some tasks won't run unless one or more demands are met by the agent. For example, the Visual Studio Build
task demands that msbuild and visualstudio are installed on the agent.

Manually entered demands


You might need to use self-hosted agents with special capabilities. For example, your pipeline may require
SpecialSoftware on agents in the Default pool. Or, if you have multiple agents with different operating
systems in the same pool, you may have a pipeline that requires a Linux agent.
To add a single demand to your YAML build pipeline, add the demands: line to the pool section.

pool:
name: Default
demands: SpecialSoftware # Check if SpecialSoftware capability exists

Or if you need to add multiple demands, add one per line.

pool:
name: Default
demands:
- SpecialSoftware # Check if SpecialSoftware capability exists
- Agent.OS -equals Linux # Check if Agent.OS == Linux

For multiple demands:


pool:
name: MyPool
demands:
- myCustomCapability # check for existence of capability
- agent.os -equals Darwin # check for specific string in capability

For more information and examples, see YAML schema - Demands.


Register each agent that has the capability.
1. In your web browser, navigate to Agent pools :
   - Choose Azure DevOps , then Organization settings (or Collection settings on Azure DevOps Server), and then choose Agent pools .
   - In TFS, navigate to your project and choose Settings (gear icon) > Agent Queues , and then choose Manage pools (in older versions, choose Manage project (gear icon) > Control panel > Agent pools ).

2. Navigate to the capabilities tab for the agent: from the Agent pools tab, select the desired agent pool, select Agents , choose the desired agent, and then choose the Capabilities tab.

   NOTE
   Microsoft-hosted agents don't display system capabilities. For a list of software installed on Microsoft-hosted agents, see Use a Microsoft-hosted agent.

3. Add something like the following entry:

   FIRST BOX                 SECOND BOX

   SpecialSoftware           C:\Program Files (x86)\SpecialSoftware

TIP
When you manually queue a build you can change the demands for that run.
Library

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Library is a collection of build and release assets for a project. Assets defined in a library can be used in
multiple build and release pipelines of the project. The Library tab can be accessed directly in Azure Pipelines and
Team Foundation Server (TFS).
At present, the library contains two types of assets: variable groups and secure files.

Variable groups are available to only release pipelines in TFS 2017 and earlier. They are available to build and
release pipelines in TFS 2018 and in Azure Pipelines. Task groups and service connections are available to build
and release pipelines in TFS 2015 and newer, and in Azure Pipelines.

Library Security
All assets defined in the Library tab share a common security model. You can control who can define new items in
a library, and who can use an existing item. Roles are defined for library items, and membership of these roles
governs the operations you can perform on those items.

ROLE ON A LIBRARY ITEM    PURPOSE

Reader                    Members of this role can view the item.

User                      Members of this role can use the item when authoring build or release pipelines. For example, you must be a 'User' for a variable group to be able to use it in a release pipeline.

Administrator             In addition to all the above operations, members of this role can manage membership of all other roles for the item. The user that created an item is automatically added to the Administrator role for that item.

The security settings for the Library tab control access for all items in the library. Role memberships for individual
items are automatically inherited from those of the Library node. In addition to the three roles listed above, the
Creator role on the library defines who can create new items in the library, but it does not include Reader and
User permissions and cannot be used to manage permissions for other users. By default, the following groups are
added to the Administrator role of the library: Build Administrators , Release Administrators , and Project
Administrators .

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and feel free to post your questions, search for answers, or suggest a feature on
our Azure DevOps Developer Community. Support page.
Define variables

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called
definitions, runs are called builds, service connections are called service endpoints, stages are called environments,
and jobs are called phases.

Variables give you a convenient way to get key bits of data into various parts of the pipeline. The most
common use of variables is to define a value that you can then use in your pipeline. All variables are stored
as strings and are mutable. The value of a variable can change from run to run or job to job of your pipeline.
When you define the same variable in multiple places with the same name, the most locally scoped variable
wins. So, a variable defined at the job level can override a variable set at the stage level. A variable defined
at the stage level will override a variable set at the pipeline root level. A variable set in the pipeline root level
will override a variable set in the Pipeline settings UI.
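
A minimal sketch of that precedence (the variable name and values are illustrative): the most locally scoped definition wins, so the job-level value is printed.

variables:
  favoriteVeggie: 'carrot'        # pipeline (root) level

stages:
- stage: Demo
  variables:
    favoriteVeggie: 'broccoli'    # overrides the root value within this stage
  jobs:
  - job: ShowValue
    variables:
      favoriteVeggie: 'kale'      # most local scope wins within this job
    steps:
    - script: echo $(favoriteVeggie)   # prints kale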
Variables are different from runtime parameters, which are typed and available during template parsing.

User-defined variables
When you define a variable, you can use different syntaxes (macro, template expression, or runtime) and
what syntax you use will determine where in the pipeline your variable will render.
In YAML pipelines, you can set variables at the root, stage, and job level. You can also specify variables
outside of a YAML pipeline in the UI. When you set a variable in the UI, that variable can be encrypted and
set as secret. Secret variables are not automatically decrypted in YAML pipelines and need to be passed to
your YAML file with env: or a variable at the root level.
User-defined variables can be set as read-only.
You can use a variable group to make variables available across multiple pipelines.
You can use templates to define variables that are used in multiple pipelines in one file.

System variables
In addition to user-defined variables, Azure Pipelines has system variables with predefined values. If you are
using YAML or classic build pipelines, see predefined variables for a comprehensive list of system variables.
If you are using classic release pipelines, see release variables.
System variables are set with their current value when you run the pipeline. Some variables are set
automatically. As a pipeline author or end user, you change the value of a system variable before the
pipeline is run.
System variables are read-only.

Environment variables
Environment variables are specific to the operating system you are using. They are injected into a pipeline in
platform-specific ways. The format corresponds to how environment variables get formatted for your
specific scripting platform.
On UNIX systems (macOS and Linux), environment variables have the format $NAME . On Windows, the
format is %NAME% for batch and $env:NAME in PowerShell.
System and user-defined variables also get injected as environment variables for your platform. When
variables are turned into environment variables, variable names become uppercase, and periods turn into
underscores. For example, the variable name any.variable becomes the variable name $ANY_VARIABLE .

Variable characters
User-defined variables can consist of letters, numbers, . , and _ characters. Don't use variable prefixes
that are reserved by the system. These are: endpoint , input , secret , and securefile . Any variable that
begins with one of these strings (regardless of capitalization) will not be available to your tasks and scripts.

Understand variable syntax


Azure Pipelines supports three different ways to reference variables: macro, template expression, and
runtime expression. Each syntax can be used for a different purpose and has some limitations.
In a pipeline, template expression variables ( ${{ variables.var }} ) get processed at compile time, before
runtime starts. Macro syntax variables ( $(var) ) get processed during runtime before a task runs. Runtime
expressions ( $[variables.var] ) also get processed during runtime but were designed for use with
conditions and expressions. When you use a runtime expression, it must take up the entire right side of a
definition.
In this example, you can see that the template expression still has the initial value of the variable after the
variable is updated. The value of the macro syntax variable updates. The template expression value does not
change because all template expression variables get processed at compile time before tasks run. In
contrast, macro syntax variables are evaluated before each task runs.

variables:
- name: one
value: initialValue

steps:
- script: |
echo ${{ variables.one }} # outputs initialValue
echo $(one)
displayName: First variable pass
- bash: echo '##vso[task.setvariable variable=one]secondValue'
displayName: Set new variable value
- script: |
echo ${{ variables.one }} # outputs initialValue
echo $(one) # outputs secondValue
displayName: Second variable pass

Macro syntax variables


Most documentation examples use macro syntax ( $(var) ). Macro syntax is designed to interpolate variable
values into task inputs and into other variables.
Variables with macro syntax get processed before a task executes during runtime. Runtime happens after
template expansion. When the system encounters a macro expression, it replaces the expression with the
contents of the variable. If there's no variable by that name, then the macro expression is left unchanged. For
example, if $(var) can't be replaced, $(var) won't be replaced by anything.
Macro syntax variables remain unchanged with no value because an empty value like $() might mean
something to the task you are running and the agent should not assume you want that value replaced. For
example, if you use $(foo) to reference variable foo in a Bash task, replacing all $() expressions in the
input to the task could break your Bash scripts.
Macro variables are only expanded when they are used for a value, not as a keyword. Values appear on the
right side of a pipeline definition. The following is valid: key: $(value) . The following isn't valid:
$(key): value . Macro variables are not expanded when used to display a job name inline. Instead, you
must use the displayName property.
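
A small sketch of that point (the variable and job names are illustrative): the macro is expanded in displayName because it appears as a value, not as a keyword.

variables:
  environmentName: 'staging'

jobs:
- job: DeployJob
  displayName: Deploy to $(environmentName)   # expands, because it is the value of the displayName key
  steps:
  - script: echo deploying to $(environmentName)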

NOTE
Variables are only expanded for stages , jobs , and steps . You cannot, for example, use macro syntax inside a
resource or trigger .

Template expression syntax


You can use template expression syntax to expand both template parameters and variables (
${{ variables.var }} ). Template variables are processed at compile time, and are replaced before runtime
starts. Template expressions are designed for reusing parts of YAML as templates.
Template variables silently coalesce to empty strings when a replacement value isn't found. Template
expressions, unlike macro and runtime expressions, can appear as either keys (left side) or values (right
side). The following is valid: ${{ variables.key }} : ${{ variables.value }} .
Runtime expression syntax
You can use runtime expression syntax for variables that are expanded at runtime ( $[variables.var] ).
Runtime expression variables silently coalesce to empty strings when a replacement value isn't found.
Runtime expressions are designed to be used in the conditions of jobs, to support conditional execution of
jobs, or whole stages.
Runtime expression variables are only expanded when they are used for a value, not as a keyword. Values
appear on the right side of a pipeline definition. The following is valid: key: $[variables.value] . The
following isn't valid: $[variables.key]: value . The runtime expression must take up the entire right side of
a key-value pair. For example, key: $[variables.value] is valid but key: $[variables.value] foo is not.

SYNTAX                EXAMPLE                WHEN IS IT PROCESSED?             WHERE DOES IT EXPAND IN A PIPELINE DEFINITION?   HOW DOES IT RENDER WHEN NOT FOUND?

macro                 $(var)                 runtime, before a task executes   value (right side)                               prints $(var)

template expression   ${{ variables.var }}   compile time                      key or value (left or right side)                empty string

runtime expression    $[variables.var]       runtime                           value (right side)                               empty string
What syntax should I use?


Use macro syntax if you are providing input for a task.
Choose a runtime expression if you are working with conditions and expressions. The exception to this is if
you have a pipeline where it will cause a problem for your empty variable to print out. For example, if you
have conditional logic that relies on a variable having a specific value or no value. In that case, you should
use a runtime expression.
If you are defining a variable in a template, use a template expression.

Set variables in pipeline


In the most common case, you set the variables and use them within the YAML file. This allows you to track
changes to the variable in your version control system. You can also define variables in the pipeline settings
UI (see the Classic tab) and reference them in your YAML.
Here's an example that shows how to set two variables, configuration and platform , and use them later in
steps. To use a variable in a YAML statement, wrap it in $() . Variables can't be used to define a repository
in a YAML statement.

# Set variables once


variables:
configuration: debug
platform: x64

steps:

# Use them once


- task: MSBuild@1
inputs:
solution: solution1.sln
configuration: $(configuration) # Use the variable
platform: $(platform)

# Use them again


- task: MSBuild@1
inputs:
solution: solution2.sln
configuration: $(configuration) # Use the variable
platform: $(platform)

Variable scopes
In the YAML file, you can set a variable at various scopes:
At the root level, to make it available to all jobs in the pipeline.
At the stage level, to make it available only to a specific stage.
At the job level, to make it available only to a specific job.
When a variable is defined at the top of a YAML, it will be available to all jobs and stages in the pipeline and
is a global variable. Global variables defined in a YAML are not visible in the pipeline settings UI.
Variables at the job level override variables at the root and stage level. Variables at the stage level override
variables at the root level.
variables:
global_variable: value # this is available to all jobs

jobs:
- job: job1
pool:
vmImage: 'ubuntu-16.04'
variables:
job_variable1: value1 # this is only available in job1
steps:
- bash: echo $(global_variable)
- bash: echo $(job_variable1)
- bash: echo $JOB_VARIABLE1 # variables are available in the script environment too

- job: job2
pool:
vmImage: 'ubuntu-16.04'
variables:
job_variable2: value2 # this is only available in job2
steps:
- bash: echo $(global_variable)
- bash: echo $(job_variable2)
- bash: echo $GLOBAL_VARIABLE

Specify variables
In the preceding examples, the variables keyword is followed by a list of key-value pairs. The keys are the
variable names and the values are the variable values.
There is another syntax, useful when you want to use variable templates or variable groups. This syntax
should be used at the root level of a pipeline.
In this alternate syntax, the variables keyword takes a list of variable specifiers. The variable specifiers are
name for a regular variable, group for a variable group, and template to include a variable template. The
following example demonstrates all three.

variables:
# a regular variable
- name: myvariable
value: myvalue
# a variable group
- group: myvariablegroup
# a reference to a variable template
- template: myvariabletemplate.yml

Learn more about variable reuse with templates.


Access variables through the environment
Notice that variables are also made available to scripts through environment variables. The syntax for using
these environment variables depends on the scripting language.
The name is upper-cased, and the . is replaced with the _ . This is automatically inserted into the process
environment. Here are some examples:
Batch script: %VARIABLE_NAME%
PowerShell script: $env:VARIABLE_NAME
Bash script: $VARIABLE_NAME
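
For example, a minimal sketch (the variable name is illustrative) showing the same pipeline variable read through the environment from Bash and PowerShell; note the upper-casing and the period becoming an underscore:

variables:
  build.configuration: 'Release'

steps:
- bash: echo "$BUILD_CONFIGURATION"
- powershell: Write-Host "$env:BUILD_CONFIGURATION"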
IMPORTANT
Predefined variables that contain file paths are translated to the appropriate styling (Windows style C:\foo\ versus
Unix style /foo/) based on agent host type and shell type. If you are running bash script tasks on Windows, you
should use the environment variable method for accessing these variables rather than the pipeline variable method
to ensure you have the correct file path styling.


Set secret variables


Don't set secret variables in your YAML file. Operating systems often log commands for the processes that
they run, and you wouldn't want the log to include a secret that you passed in as an input. Use the script's
environment or map the variable within the variables block to pass secrets to your pipeline.
You need to set secret variables in the pipeline settings UI for your pipeline. These variables are scoped to
the pipeline in which you set them. You can also set secret variables in variable groups.
To set secrets in the web interface, follow these steps:
1. Go to the Pipelines page, select the appropriate pipeline, and then select Edit .
2. Locate the Variables for this pipeline.
3. Add or update the variable.
4. Select the lock icon to store the variable in an encrypted manner.
5. Save the pipeline.
Secret variables are encrypted at rest with a 2048-bit RSA key. Secrets are available on the agent for tasks
and scripts to use. Be careful about who has access to alter your pipeline.

IMPORTANT
We make an effort to mask secrets from appearing in Azure Pipelines output, but you still need to take precautions.
Never echo secrets as output. Some operating systems log command line arguments. Never pass secrets on the
command line. Instead, we suggest that you map your secrets into environment variables.
We never mask substrings of secrets. If, for example, "abc123" is set as a secret, "abc" isn't masked from the logs. This
is to avoid masking secrets at too granular of a level, making the logs unreadable. For this reason, secrets should not
contain structured data. If, for example, "{ "foo": "bar" }" is set as a secret, "bar" isn't masked from the logs.

Unlike a normal variable, they are not automatically decrypted into environment variables for scripts. You
need to explicitly map secret variables.
The following example shows how to use a secret variable called mySecret in PowerShell and Bash scripts.
Unlike a normal pipeline variable, there's no environment variable called MYSECRET .
variables:
  GLOBAL_MYSECRET: $(mySecret) # this will not work because the secret variable needs to be mapped as env
  GLOBAL_MY_MAPPED_ENV_VAR: $(nonSecretVariable) # this works because it's not a secret.

steps:

- powershell: |
    Write-Host "Using an input-macro works: $(mySecret)"
    Write-Host "Using the env var directly does not work: $env:MYSECRET"
    Write-Host "Using a global secret var mapped in the pipeline does not work either: $env:GLOBAL_MYSECRET"
    Write-Host "Using a global non-secret var mapped in the pipeline works: $env:GLOBAL_MY_MAPPED_ENV_VAR"
    Write-Host "Using the mapped env var for this task works and is recommended: $env:MY_MAPPED_ENV_VAR"
  env:
    MY_MAPPED_ENV_VAR: $(mySecret) # the recommended way to map to an env variable

- bash: |
    echo "Using an input-macro works: $(mySecret)"
    echo "Using the env var directly does not work: $MYSECRET"
    echo "Using a global secret var mapped in the pipeline does not work either: $GLOBAL_MYSECRET"
    echo "Using a global non-secret var mapped in the pipeline works: $GLOBAL_MY_MAPPED_ENV_VAR"
    echo "Using the mapped env var for this task works and is recommended: $MY_MAPPED_ENV_VAR"
  env:
    MY_MAPPED_ENV_VAR: $(mySecret) # the recommended way to map to an env variable

The output from both tasks in the preceding script would look like this:

Using an input-macro works: ***
Using the env var directly does not work:
Using a global secret var mapped in the pipeline does not work either:
Using a global non-secret var mapped in the pipeline works: foo
Using the mapped env var for this task works and is recommended: ***

You can also map secret variables using the variables definition. This example shows how to use secret
variables $(vmsUser) and $(vmsAdminPass) in an Azure file copy task.

variables:
  VMS_USER: $(vmsUser)
  VMS_PASS: $(vmsAdminPass)

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: AzureFileCopy@4
  inputs:
    SourcePath: 'my/path'
    azureSubscription: 'my-subscription'
    Destination: 'AzureVMs'
    storage: 'my-storage'
    resourceGroup: 'my-rg'
    vmsAdminUserName: $(VMS_USER)
    vmsAdminPassword: $(VMS_PASS)

Reference secret variables in variable groups


This example shows how to reference a variable group in your YAML file, and also add variables within the
YAML. There are two variables used from the variable group: user and token . The token variable is
secret, and is mapped to the environment variable $env:MY_MAPPED_TOKEN so that it can be referenced in the
YAML.
This YAML makes a REST call to retrieve a list of releases, and outputs the result.

variables:
- group: 'my-var-group' # variable group
- name: 'devopsAccount' # new variable defined in YAML
  value: 'contoso'
- name: 'projectName' # new variable defined in YAML
  value: 'contosoads'

steps:
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      # Encode the Personal Access Token (PAT)
      # $env:USER is a normal variable in the variable group
      # $env:MY_MAPPED_TOKEN is a mapped secret variable
      $base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $env:USER,$env:MY_MAPPED_TOKEN)))

      # Get a list of releases
      $uri = "https://vsrm.dev.azure.com/$(devopsAccount)/$(projectName)/_apis/release/releases?api-version=5.1"

      # Invoke the REST call
      $result = Invoke-RestMethod -Uri $uri -Method Get -ContentType "application/json" -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)}

      # Output releases in JSON
      Write-Host $result.value
  env:
    MY_MAPPED_TOKEN: $(token) # Maps the secret variable $(token) from my-var-group

IMPORTANT
By default with GitHub repositories, secret variables associated with your pipeline aren't made available to pull
request builds of forks. For more information, see Contributions from forks.

YAML is not supported in TFS.

Share variables across pipelines


To share variables across multiple pipelines in your project, use the web interface. Under Library, use
variable groups.
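
For example, once a variable group has been created under Library, any pipeline in the project that is authorized to use it can pull in its values by referencing the group name. A minimal sketch follows; my-shared-variables and someSharedVariable are placeholder names, not values from this article:

variables:
- group: my-shared-variables

steps:
- script: echo $(someSharedVariable) # assumed to be defined in the variable group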

Use output variables from tasks


Some tasks define output variables, which you can consume in downstream steps, jobs, and stages. In
YAML, you can access variables across jobs and stages by using dependencies.
Some tasks define output variables, which you can consume in downstream steps and jobs within the same
stage. In YAML, you can access variables across jobs by using dependencies.
Some tasks define output variables, which you can consume in downstream steps within the same job.
YAML
Classic
Azure DevOps CLI
For these examples, assume we have a task called MyTask , which sets an output variable called MyVar .
Learn more about the syntax in Expressions - Dependencies.
Use outputs in the same job

steps:
- task: MyTask@1 # this step generates the output variable
  name: ProduceVar # because we're going to depend on it, we need to name the step
- script: echo $(ProduceVar.MyVar) # this step uses the output variable

Use outputs in a different job

jobs:
- job: A
  steps:
  - task: MyTask@1 # this step generates the output variable
    name: ProduceVar # because we're going to depend on it, we need to name the step
- job: B
  dependsOn: A
  variables:
    # map the output variable from A into this job
    varFromA: $[ dependencies.A.outputs['ProduceVar.MyVar'] ]
  steps:
  - script: echo $(varFromA) # this step uses the mapped-in variable

Use outputs in a different stage


To use the output from a different stage at the job level, you use the stageDependencies syntax.

stages:
- stage: One
  jobs:
  - job: A
    steps:
    - task: MyTask@1 # this step generates the output variable
      name: ProduceVar # because we're going to depend on it, we need to name the step
- stage: Two
  jobs:
  - job: B
    variables:
      # map the output variable from A into this job
      varFromA: $[ stageDependencies.One.A.outputs['ProduceVar.MyVar'] ]
    steps:
    - script: echo $(varFromA) # this step uses the mapped-in variable

List variables
You can list all of the variables in your pipeline with the az pipelines variable list command. To get started,
see Get started with Azure DevOps CLI.

az pipelines variable list [--org]
                           [--pipeline-id]
                           [--pipeline-name]
                           [--project]
Parameters
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config. Example: --org https://dev.azure.com/MyOrganizationName/ .
pipeline-id : Required if pipeline-name is not supplied. ID of the pipeline.
pipeline-name : Required if pipeline-id is not supplied, but ignored if pipeline-id is supplied. Name
of the pipeline.
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up by using
git config .

Example
The following command lists all of the variables in the pipeline with ID 12 and shows the result in table
format.

az pipelines variable list --pipeline-id 12 --output table

Name           Allow Override    Is Secret    Value
-------------  ----------------  -----------  ------------
MyVariable     False             False        platform
NextVariable   False             True         platform
Configuration  False             False        config.debug

Set variables in scripts


A script in your pipeline can define a variable so that it can be consumed by one of the subsequent steps in
the pipeline. All variables set by this method are treated as strings. To set a variable from a script, you use a
command syntax and print to stdout.
YAML
Classic
Azure DevOps CLI
Set a job-scoped variable from a script
To set a variable from a script, you use the task.setvariable logging command. This doesn't update the
environment variables, but it does make the new variable available to downstream steps within the same
job.

steps:

# Create a variable
- bash: |
    echo "##vso[task.setvariable variable=sauce]crushed tomatoes"

# Use the variable
# "$(sauce)" is replaced by the contents of the `sauce` variable by Azure Pipelines
# before handing the body of the script to the shell.
- bash: |
    echo my pipeline variable is $(sauce)

Subsequent steps will also have the pipeline variable added to their environment.
steps:

# Create a variable
# Note that this does not update the environment of the current script.
- bash: |
    echo "##vso[task.setvariable variable=sauce]crushed tomatoes"

# An environment variable called `SAUCE` has been added to all downstream steps
- bash: |
    echo "my environment variable is $SAUCE"
- pwsh: |
    Write-Host "my environment variable is $env:SAUCE"

Set a multi-job output variable


If you want to make a variable available to future jobs, you must mark it as an output variable by using
isOutput=true . Then you can map it into future jobs by using the $[] syntax and including the step name
that set the variable. Multi-job output variables only work for jobs in the same stage.
To pass variables to jobs in different stages, use the stage dependencies syntax.
When you create a multi-job output variable, you should assign the expression to a variable. In this YAML,
$[ dependencies.A.outputs['setvarStep.myOutputVar'] ] is assigned to the variable $(myVarFromJobA) .

jobs:

# Set an output variable from job A
- job: A
  pool:
    vmImage: 'vs2017-win2016'
  steps:
  - powershell: echo "##vso[task.setvariable variable=myOutputVar;isOutput=true]this is the value"
    name: setvarStep
  - script: echo $(setvarStep.myOutputVar)
    name: echovar

# Map the variable into job B
- job: B
  dependsOn: A
  pool:
    vmImage: 'ubuntu-18.04'
  variables:
    myVarFromJobA: $[ dependencies.A.outputs['setvarStep.myOutputVar'] ] # map in the variable; remember, expressions require single quotes
  steps:
  - script: echo $(myVarFromJobA)
    name: echovar

If you're setting a variable from one stage to another, use stageDependencies .


stages:
- stage: A
  jobs:
  - job: A1
    steps:
    - bash: echo "##vso[task.setvariable variable=myStageOutputVar;isOutput=true]this is a stage output var"
      name: printvar

- stage: B
  dependsOn: A
  variables:
    myVarfromStageA: $[ stageDependencies.A.A1.outputs['printvar.myStageOutputVar'] ]
  jobs:
  - job: B1
    steps:
    - script: echo $(myVarfromStageA)

If you're setting a variable from a matrix or slice, then to reference that variable from a
downstream job, you must include both:
The name of the job.
The name of the step.

jobs:

# Set an output variable from a job with a matrix
- job: A
  pool:
    vmImage: 'ubuntu-18.04'
  strategy:
    maxParallel: 2
    matrix:
      debugJob:
        configuration: debug
        platform: x64
      releaseJob:
        configuration: release
        platform: x64
  steps:
  - bash: echo "##vso[task.setvariable variable=myOutputVar;isOutput=true]this is the $(configuration) value"
    name: setvarStep
  - bash: echo $(setvarStep.myOutputVar)
    name: echovar

# Map the variable from the debug job
- job: B
  dependsOn: A
  pool:
    vmImage: 'ubuntu-18.04'
  variables:
    myVarFromJobADebug: $[ dependencies.A.outputs['debugJob.setvarStep.myOutputVar'] ]
  steps:
  - script: echo $(myVarFromJobADebug)
    name: echovar
jobs:

# Set an output variable from a job with slicing
- job: A
  pool:
    vmImage: 'ubuntu-18.04'
  parallel: 2 # Two slices
  steps:
  - bash: echo "##vso[task.setvariable variable=myOutputVar;isOutput=true]this is the slice $(system.jobPositionInPhase) value"
    name: setvarStep
  - script: echo $(setvarStep.myOutputVar)
    name: echovar

# Map the variable from the job for the first slice
- job: B
  dependsOn: A
  pool:
    vmImage: 'ubuntu-18.04'
  variables:
    myVarFromJobsA1: $[ dependencies.A.outputs['job1.setvarStep.myOutputVar'] ]
  steps:
  - script: "echo $(myVarFromJobsA1)"
    name: echovar

Be sure to prefix the job name to the output variables of a deployment job. In this case, the job name is A :

jobs:

# Set an output variable from a deployment
- deployment: A
  pool:
    vmImage: 'ubuntu-18.04'
  environment: staging
  strategy:
    runOnce:
      deploy:
        steps:
        - bash: echo "##vso[task.setvariable variable=myOutputVar;isOutput=true]this is the deployment variable value"
          name: setvarStep
        - bash: echo $(setvarStep.myOutputVar)
          name: echovar

# Map the variable from the deployment job
- job: B
  dependsOn: A
  pool:
    vmImage: 'ubuntu-18.04'
  variables:
    myVarFromDeploymentJob: $[ dependencies.A.outputs['A.setvarStep.myOutputVar'] ]
  steps:
  - bash: "echo $(myVarFromDeploymentJob)"
    name: echovar

YAML is not supported in TFS.

Set variables by using expressions


YAML
Classic
Azure DevOps CLI
You can set a variable by using an expression. We already encountered one case of this above, where a variable
is set to the output of a previous job.

- job: B
  dependsOn: A
  variables:
    myVarFromJobsA1: $[ dependencies.A.outputs['job1.setvarStep.myOutputVar'] ] # remember to use single quotes

You can use any of the supported expressions for setting a variable. Here's an example of setting a variable
to act as a counter that starts at 100, gets incremented by 1 for every run, and gets reset to 100 every day.

jobs:
- job:
  variables:
    a: $[counter(format('{0:yyyyMMdd}', pipeline.startTime), 100)]
  steps:
  - bash: echo $(a)
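
Expressions aren't limited to counters. As another minimal sketch, a variable can be computed from other variables at run time; here isMain is an assumed variable name, and refs/heads/main is an assumption about your default branch:

variables:
  isMain: $[ eq(variables['Build.SourceBranch'], 'refs/heads/main') ]

steps:
- script: echo "Is this the main branch? $(isMain)"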

For more information about counters, dependencies, and other expressions, see expressions.
YAML is not supported in TFS.

Allow at queue time


YAML
Classic
Azure DevOps CLI
You can choose which variables are allowed to be set at queue time, and which are fixed by the pipeline
author. If a variable appears in the variables block of a YAML file, it's fixed and can't be overridden at queue
time.
To allow a variable to be set at queue time, make sure it doesn't appear in the variables block of a pipeline
or job.
You can also set a default value in the editor, and that value can be overridden by the person queuing the
pipeline. To do this, select the variable in the Variables tab of the pipeline, and check Let users override
this value when running this pipeline .
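As a minimal sketch, assume a variable named configuration is defined only in the pipeline settings UI (with Let users override this value when running this pipeline checked) and is intentionally not declared in the YAML. The pipeline can still reference it, and the person queuing the run can change its value:

# Note: no 'configuration' entry in a variables block, so it stays settable at queue time.
steps:
- script: echo "Building the $(configuration) configuration"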
YAML is not supported in TFS.

Expansion of variables
YAML
Classic
Azure DevOps CLI
When you set a variable with the same name in multiple scopes, the following precedence applies (highest
precedence first).
1. Job level variable set in the YAML file
2. Stage level variable set in the YAML file
3. Pipeline level variable set in the YAML file
4. Variable set at queue time
5. Pipeline variable set in Pipeline settings UI
In the following example, the same variable a is set at the pipeline level, stage level, and job level in the YAML file. It's also
set in a variable group G , and as a variable in the Pipeline settings UI.

variables:
  a: 'pipeline yaml'

stages:
- stage: one
  displayName: one
  variables:
  - name: a
    value: 'stage yaml'

  jobs:
  - job: A
    variables:
    - name: a
      value: 'job yaml'
    steps:
    - bash: echo $(a) # This will be 'job yaml'

When you set a variable with the same name in the same scope, the last set value will take precedence.

stages:
- stage: one
  displayName: Stage One
  variables:
  - name: a
    value: alpha
  - name: a
    value: beta
  jobs:
  - job: I
    displayName: Job I
    variables:
    - name: b
      value: uno
    - name: b
      value: dos
    steps:
    - script: echo $(a) # outputs beta
    - script: echo $(b) # outputs dos

NOTE
When you set a variable in the YAML file, don't define it in the web editor as settable at queue time. You can't
currently change variables that are set in the YAML file at queue time. If you need a variable to be settable at queue
time, don't set it in the YAML file.

Variables are expanded once when the run is started, and again at the beginning of each step. For example:
jobs:
- job: A
  variables:
    a: 10
  steps:
  - bash: |
      echo $(a) # This will be 10
      echo '##vso[task.setvariable variable=a]20'
      echo $(a) # This will also be 10, since the expansion of $(a) happens before the step
  - bash: echo $(a) # This will be 20, since the variables are expanded just before the step

There are two steps in the preceding example. The expansion of $(a) happens once at the beginning of the
job, and once at the beginning of each of the two steps.
Because variables are expanded at the beginning of a job, you can't use them in a strategy. In the following
example, you can't use the variable a to expand the job matrix, because the variable is only available at the
beginning of each expanded job.

jobs:
- job: A
  variables:
    a: 10
  strategy:
    matrix:
      x:
        some_variable: $(a) # This does not work

If the variable a is an output variable from a previous job, then you can use it in a future job.

- job: A
  steps:
  - powershell: echo "##vso[task.setvariable variable=a;isOutput=true]10"
    name: a_step

# Map the variable into job B
- job: B
  dependsOn: A
  variables:
    some_variable: $[ dependencies.A.outputs['a_step.a'] ]

Recursive expansion
On the agent, variables referenced using $( ) syntax are recursively expanded. However, for service-side
operations such as setting display names, variables aren't expanded recursively. For example:

variables:
  myInner: someValue
  myOuter: $(myInner)

steps:
- script: echo $(myOuter) # prints "someValue"
  displayName: Variable is $(myOuter) # display name is "Variable is $(myInner)"

YAML is not supported in TFS.


Use predefined variables

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Variables give you a convenient way to get key bits of data into various parts of your pipeline. This is the
comprehensive list of predefined variables.
These variables are automatically set by the system and read-only. (The exceptions are Build.Clean and
System.Debug.) Learn more about working with variables.

NOTE
You can use release variables in your deploy tasks to share common information (for example, environment name and resource
group).

Build.Clean
This is a deprecated variable that modifies how the build agent cleans up source. To learn how to clean up source,
see Clean the local repo on the agent.
This variable modifies how the build agent cleans up source. To learn more, see Clean the local repo on the agent.

System.AccessToken
System.AccessToken is a special variable that carries the security token used by the running build.
YAML
Classic
In YAML, you must explicitly map System.AccessToken into the pipeline using a variable. You can do this at the step
or task level:

steps:
- bash: echo This script could use $SYSTEM_ACCESSTOKEN
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
- powershell: Write-Host "This is a script that could use $env:SYSTEM_ACCESSTOKEN"
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)

You can configure the default scope for System.AccessToken using build job authorization scope.

System.Debug
For more detailed logs to debug pipeline problems, define System.Debug and set it to true .
1. Edit your pipeline.
2. Select Variables .
3. Add a new variable with the name System.Debug and value true .
4. Save the new variable.
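
If you'd rather keep the setting in source control than in the UI, System.Debug can also be defined in the YAML variables block. A minimal sketch follows; note that verbose logs then apply to every run while this change is committed:

variables:
  system.debug: 'true'

steps:
- script: echo "This run will produce verbose diagnostic logs"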

Agent variables (DevOps Services)


NOTE
You can use agent variables as environment variables in your scripts and as parameters in your build tasks. You cannot use
them to customize the build number or to apply a version control label or tag.
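
For example, the same agent variable can be consumed either through the macro syntax as a task input or through the corresponding environment variable inside a script. A minimal sketch:

steps:
- bash: |
    echo "Running on $AGENT_OS in $AGENT_BUILDDIRECTORY"
  workingDirectory: $(Agent.BuildDirectory)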

VARIABLE                                  DESCRIPTION

Agent.BuildDirectory The local path on the agent where all folders for a given
build pipeline are created. This variable has the same value
as Pipeline.Workspace .
For example: /home/vsts/work/1

Agent.ContainerMapping A mapping from container resource names in YAML to


their Docker IDs at runtime.
For example:

{
  "one_container": {
    "id": "bdbb357d73a0bd3550a1a5b778b62a4c88ed2051c7802a0659f1ff6e76910190"
  },
  "another_container": {
    "id": "82652975109ec494876a8ccbb875459c945982952e0a72ad74c91216707162bb"
  }
}

Agent.HomeDirectory The directory the agent is installed into. This contains the
agent software. For example: c:\agent .

Agent.Id The ID of the agent.

Agent.JobName The name of the running job. This will usually be "Job" or
"__default", but in multi-config scenarios, will be the
configuration.
Agent.JobStatus The status of the build.
Canceled
Failed
Succeeded
SucceededWithIssues (partially successful)
The environment variable should be referenced as
AGENT_JOBSTATUS . The older agent.jobstatus is
available for backwards compatibility.

Agent.MachineName The name of the machine on which the agent is installed.

Agent.Name The name of the agent that is registered with the pool.
If you are using a self-hosted agent, then this name is
specified by you. See agents.

Agent.OS The operating system of the agent host. Valid values are:
Windows_NT
Darwin
Linux

If you're running in a container, the agent host and container


may be running different operating systems.

Agent.OSArchitecture The operating system processor architecture of the agent host.


Valid values are:
X86
X64
ARM

Agent.TempDirectory A temporary folder that is cleaned after each pipeline job.


This directory is used by tasks such as .NET Core CLI task
to hold temporary items like test results before they are
published.
For example: /home/vsts/work/_temp for Ubuntu

Agent.ToolsDirectory The directory used by tasks such as Node Tool Installer and
Use Python Version to switch between multiple versions of a
tool. These tasks will add tools from this directory to PATH so
that subsequent build steps can use them.

Learn about managing this directory on a self-hosted agent.

Agent.WorkFolder The working directory for this agent. For example:


c:\agent_work .

Note: This directory is not guaranteed to be writable by


pipeline tasks (eg. when mapped into a container)

Build variables (DevOps Services)

VARIABLE                                  DESCRIPTION                                  AVAILABLE IN TEMPLATES?


Build.ArtifactStagingDirectory The local path on the agent where No
any artifacts are copied to before
being pushed to their destination.
For example: c:\agent_work\1\a

A typical way to use this folder is to


publish your build artifacts with the
Copy files and Publish build artifacts
tasks.

Note: Build.ArtifactStagingDirectory
and Build.StagingDirectory are
interchangeable. This directory is
purged before each new build, so
you don't have to clean it up
yourself.

See Artifacts in Azure Pipelines.

This variable is agent-scoped, and


can be used as an environment
variable in a script and as a
parameter in a build task, but not as
part of the build number or as a
version control tag.

Build.BuildId The ID of the record for the completed No


build.

Build.BuildNumber The name of the completed build, also No


known as the run number. You can
specify what is included in this value.

A typical use of this variable is to make


it part of the label format, which you
specify on the repository tab.

Note: This value can contain


whitespace or other invalid label
characters. In these cases, the label
format will fail.

This variable is agent-scoped, and


can be used as an environment
variable in a script and as a
parameter in a build task, but not as
part of the build number or as a
version control tag.

Build.BuildUri The URI for the build. For example: No


vstfs:///Build/Build/1430 .

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.
Build.BinariesDirectory The local path on the agent you can use No
as an output folder for compiled
binaries.

By default, new build pipelines are not


set up to clean this directory. You can
define your build to clean it up on the
Repository tab.

For example: c:\agent_work\1\b .

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.

Build.ContainerId The ID of the container for your artifact. No


When you upload an artifact in your
pipeline, it is added to a container that is
specific for that particular artifact.

Build.DefinitionName The name of the build pipeline. Yes

Note: This value can contain


whitespace or other invalid label
characters. In these cases, the label
format will fail.

Build.DefinitionVersion The version of the build pipeline. Yes

Build.QueuedBy See "How are the identity variables set?". Yes

Note: This value can contain


whitespace or other invalid label
characters. In these cases, the label
format will fail.

Build.QueuedById See "How are the identity variables set?". Yes


Build.Reason The event that caused the build to run. Yes
Manual : A user manually
queued the build.
IndividualCI : Continuous
integration (CI) triggered by a
Git push or a TFVC check-in.
BatchedCI : Continuous
integration (CI) triggered by a
Git push or a TFVC check-in, and
the Batch changes was
selected.
Schedule : Scheduled trigger.
ValidateShelveset : A user
manually queued the build of a
specific TFVC shelveset.
CheckInShelveset : Gated
check-in trigger.
PullRequest : The build was
triggered by a Git branch policy
that requires a build.
ResourceTrigger : The build
was triggered by a resource
trigger or it was triggered by
another build.
See Build pipeline triggers, Improve code
quality with branch policies.

Build.Repository.Clean The value you've selected for Clean in No


the source repository settings.

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.

Build.Repository.LocalPath The local path on the agent where No


your source code files are
downloaded. For example:
c:\agent_work\1\s

By default, new build pipelines


update only the changed files. You
can modify how files are
downloaded on the Repository tab.
Important note: if you only check
out one Git repository, this path will
be the exact path to the code. If you
check out multiple repositories, it
will revert to its default value, which
is $(Pipeline.Workspace)/s .

This variable is agent-scoped, and


can be used as an environment
variable in a script and as a
parameter in a build task, but not as
part of the build number or as a
version control tag.
This variable is synonymous with
Build.SourcesDirectory.
Build.Repository.ID The unique identifier of the repository. No

This won't change, even if the name of


the repository does.

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.

Build.Repository.Name The name of the triggering repository. No

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.

Build.Repository.Provider The type of the triggering repository. No


TfsGit : TFS Git repository
TfsVersionControl : Team
Foundation Version Control
Git : Git repository hosted on
an external server
GitHub
Svn : Subversion
This variable is agent-scoped, and can
be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.

Build.Repository.Tfvc.Workspace Defined if your repository is Team No


Foundation Version Control. The name
of the TFVC workspace used by the
build agent.

For example, if the Agent.BuildDirectory


is c:\agent_work\12 and the Agent.Id
is 8 , the workspace name could be:
ws_12_8

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.

Build.Repository.Uri The URL for the triggering repository. No


For example:
Git:
https://dev.azure.com/fabrikamfiber/_git/Scripts
TFVC:
https://dev.azure.com/fabrikamfiber/

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.
Build.RequestedFor See "How are the identity variables set?". Yes

Note: This value can contain


whitespace or other invalid label
characters. In these cases, the label
format will fail.

Build.RequestedForEmail See "How are the identity variables set?". Yes

Build.RequestedForId See "How are the identity variables set?". Yes

Build.SourceBranch The branch of the triggering repo the Yes


build was queued for. Some examples:
Git repo branch:
refs/heads/master
Git repo pull request:
refs/pull/1/merge
TFVC repo branch:
$/teamproject/main
TFVC repo gated check-in:
Gated_2016-06-
06_05.20.51.4369;[email protected]
TFVC repo shelveset build:
myshelveset;[email protected]
When your pipeline is triggered
by a tag:
refs/tags/your-tag-name

When you use this variable in your build


number format, the forward slash
characters ( / ) are replaced with
underscore characters ( _ ).

Note: In TFVC, if you are running a


gated check-in build or manually
building a shelveset, you cannot use this
variable in your build number format.
Build.SourceBranchName The name of the branch in the Yes
triggering repo the build was queued
for.
Git repo branch or pull request:
The last path segment in the ref.
For example, in
refs/heads/master this value
is master . In
refs/heads/feature/tools
this value is tools .
TFVC repo branch: The last path
segment in the root server path
for the workspace. For example,
in $/teamproject/main this
value is main .
TFVC repo gated check-in or
shelveset build is the name of
the shelveset. For example,
Gated_2016-06-
06_05.20.51.4369;[email protected]
or
myshelveset;[email protected]
.
Note: In TFVC, if you are running a
gated check-in build or manually
building a shelveset, you cannot use this
variable in your build number format.

Build.SourcesDirectory The local path on the agent where No


your source code files are
downloaded. For example:
c:\agent_work\1\s

By default, new build pipelines


update only the changed files.
Important note: if you only check
out one Git repository, this path will
be the exact path to the code. If you
check out multiple repositories, it
will revert to its default value, which
is $(Pipeline.Workspace)/s .

This variable is agent-scoped, and


can be used as an environment
variable in a script and as a
parameter in a build task, but not as
part of the build number or as a
version control tag.
This variable is synonymous with
Build.Repository.LocalPath.

Build.SourceVersion The latest version control change of the Yes


triggering repo that is included in this
build.
Git: The commit ID.
TFVC: the changeset.
This variable is agent-scoped, and can
be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.
Build.SourceVersionMessage The comment of the commit or No
changeset for the triggering repo. We
truncate the message to the first line or
200 characters, whichever is shorter.
This variable is agent-scoped, and
can be used as an environment
variable in a script and as a
parameter in a build task, but not as
part of the build number or as a
version control tag. Also, this
variable is only available on the step
level and is neither available in the
job nor stage levels (i.e. the message
is not extracted until the job has
started and checked out the code).
Note: This variable is available in TFS
2015.4.

Build.StagingDirectory The local path on the agent where No


any artifacts are copied to before
being pushed to their destination.
For example: c:\agent_work\1\a

A typical way to use this folder is to


publish your build artifacts with the
Copy files and Publish build artifacts
tasks.

Note: Build.ArtifactStagingDirectory
and Build.StagingDirectory are
interchangeable. This directory is
purged before each new build, so
you don't have to clean it up
yourself.

See Artifacts in Azure Pipelines.

This variable is agent-scoped, and


can be used as an environment
variable in a script and as a
parameter in a build task, but not as
part of the build number or as a
version control tag.

Build.Repository.Git.SubmoduleCheckout The value you've selected for Checkout No


submodules on the repository tab.
With multiple repos checked out, this
value tracks the triggering repository's
setting.

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.
Build.SourceTfvcShelveset Defined if your repository is Team No
Foundation Version Control.

If you are running a gated build or a


shelveset build, this is set to the name
of the shelveset you are building.

Note: This variable yields a value that is


invalid for build use in a build number
format.

Build.TriggeredBy.BuildId If the build was triggered by another No


build, then this variable is set to the
BuildID of the triggering build. In Classic
pipelines, this variable is triggered by a
build completion trigger.

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.

Build.TriggeredBy.DefinitionId If the build was triggered by another No


build, then this variable is set to the
DefinitionID of the triggering build. In
Classic pipelines, this variable is
triggered by a build completion trigger.

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.

Build.TriggeredBy.DefinitionName If the build was triggered by another No


build, then this variable is set to the
name of the triggering build pipeline. In
Classic pipelines, this variable is
triggered by a build completion trigger.

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.

Build.TriggeredBy.BuildNumber If the build was triggered by another No


build, then this variable is set to the
number of the triggering build. In
Classic pipelines, this variable is
triggered by a build completion trigger.

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.
Build.TriggeredBy.ProjectID If the build was triggered by another No
build, then this variable is set to ID of
the project that contains the triggering
build. In Classic pipelines, this variable is
triggered by a build completion trigger.

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.

Common.TestResultsDirectory The local path on the agent where the No


test results are created. For example:
c:\agent_work\1\TestResults

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.

Pipeline variables (DevOps Services)


VARIABLE                                  DESCRIPTION

Pipeline.Workspace Workspace directory for a particular pipeline. This variable has


the same value as Agent.BuildDirectory .

For example, /home/vsts/work/1 .

Deployment job variables (DevOps Services)


These variables are scoped to a specific Deployment job and will be resolved only at job execution time.

VARIABLE                                  DESCRIPTION

Environment.Name Name of the environment targeted in the deployment job to


run the deployment steps and record the deployment history.
For example, smarthotel-dev .

Environment.Id ID of the environment targeted in the deployment job. For


example, 10 .

Environment.ResourceName Name of the specific resource within the environment targeted


in the deployment job to run the deployment steps and record
the deployment history. For example, bookings which is a
Kubernetes namespace that has been added as a resource to
the environment smarthotel-dev .

Environment.ResourceId ID of the specific resource within the environment targeted in


the deployment job to run the deployment steps. For example,
4 .
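
For example, a deployment job can read these values while it runs. A minimal sketch follows; smarthotel-dev is the environment name from the table above and is assumed to already exist in your project:

jobs:
- deployment: DeployWeb
  pool:
    vmImage: 'ubuntu-latest'
  environment: smarthotel-dev
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo "Deploying to $(Environment.Name) (ID $(Environment.Id))"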

System variables (DevOps Services)


VARIABLE                                  DESCRIPTION                                  AVAILABLE IN TEMPLATES?
System.AccessToken Use the OAuth token to access the REST Yes
API.

Use System.AccessToken from YAML


scripts.

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.

System.CollectionId The GUID of the TFS collection or Azure Yes


DevOps organization.

System.CollectionUri The URI of the TFS collection or Azure Yes


DevOps organization. For example:
https://dev.azure.com/fabrikamfiber/
.

System.DefaultWorkingDirectory The local path on the agent where No


your source code files are
downloaded. For example:
c:\agent_work\1\s

By default, new build pipelines


update only the changed files. You
can modify how files are
downloaded on the Repository tab.

This variable is agent-scoped. It can


be used as an environment variable
in a script and as a parameter in a
build task, but not as part of the
build number or as a version control
tag.

System.DefinitionId The ID of the build pipeline. Yes

System.HostType Set to build if the pipeline is a build. Yes


For a release, the values are
deployment for a Deployment group
job, gates during evaluation of gates,
and release for other (Agent and
Agentless) jobs.

System.JobAttempt Set to 1 the first time this job is No


attempted, and increments every time
the job is retried.

System.JobDisplayName The human-readable name given to a No


job.

System.JobId A unique identifier for a single attempt No


of a single job.

System.JobName The name of the job, typically used for No


expressing dependencies and accessing
output variables.
System.PhaseAttempt Set to 1 the first time this phase is No
attempted, and increments every time
the job is retried.

Note: "Phase" is a mostly-redundant


concept which represents the design-
time for a job (whereas job was the
runtime version of a phase). We've
mostly removed the concept of "phase"
from Azure Pipelines. Matrix and multi-
config jobs are the only place where
"phase" is still distinct from "job". One
phase can instantiate multiple jobs
which differ only in their inputs.

System.PhaseDisplayName The human-readable name given to a No


phase.

System.PhaseName A string-based identifier for a job, No


typically used for expressing
dependencies and accessing output
variables.

System.StageAttempt Set to 1 the first time this stage is No


attempted, and increments every time
the job is retried.

System.StageDisplayName The human-readable name given to a No


stage.

System.StageName A string-based identifier for a stage, Yes


typically used for expressing
dependencies and accessing output
variables.

System.PullRequest.IsFork If the pull request is from a fork of the Yes


repository, this variable is set to True .
Otherwise, it is set to False .

System.PullRequest.PullRequestId The ID of the pull request that caused No


this build. For example: 17 . (This
variable is initialized only if the build ran
because of a Git PR affected by a branch
policy).

System.PullRequest.PullRequestNumber The number of the pull request that No


caused this build. This variable is
populated for pull requests from GitHub
which have a different pull request ID
and pull request number. This variable is
only available in a YAML pipeline if the
PR is affected by a branch policy.

System.PullRequest.SourceBranch The branch that is being reviewed in a No


pull request. For example:
refs/heads/users/raisa/new-
feature
. (This variable is initialized only if the
build ran because of a Git PR affected by
a branch policy). This variable is only
available in a YAML pipeline if the PR is
affected by a branch policy.
System.PullRequest.SourceRepositoryURI The URL to the repo that contains the No
pull request. For example:
https://dev.azure.com/ouraccount/_git/OurProject
.

System.PullRequest.TargetBranch The branch that is the target of a pull No


request. For example:
refs/heads/master when your
repository is in Azure Repos and
master when your repository is in
GitHub. This variable is initialized only if
the build ran because of a Git PR
affected by a branch policy. This variable
is only available in a YAML pipeline if the
PR is affected by a branch policy.

System.TeamFoundationCollectionUri The URI of the TFS collection or Azure Yes


DevOps organization. For example:
https://dev.azure.com/fabrikamfiber/
.

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.

System.TeamProject The name of the project that contains Yes


this build.

System.TeamProjectId The ID of the project that this build Yes


belongs to.

TF_BUILD Set to True if the script is being run by No


a build task.

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.

Agent variables (DevOps Server 2020)


NOTE
You can use agent variables as environment variables in your scripts and as parameters in your build tasks. You cannot use
them to customize the build number or to apply a version control label or tag.

VARIABLE                                  DESCRIPTION

Agent.BuildDirectory The local path on the agent where all folders for a given
build pipeline are created. This variable has the same value
as Pipeline.Workspace .
For example: /home/vsts/work/1

Agent.HomeDirectory The directory the agent is installed into. This contains the
agent software. For example: c:\agent .

Agent.Id The ID of the agent.


Agent.JobName The name of the running job. This will usually be "Job" or
"__default", but in multi-config scenarios, will be the
configuration.

Agent.JobStatus The status of the build.


Canceled
Failed
Succeeded
SucceededWithIssues (partially successful)
The environment variable should be referenced as
AGENT_JOBSTATUS . The older agent.jobstatus is
available for backwards compatibility.

Agent.MachineName The name of the machine on which the agent is installed.

Agent.Name The name of the agent that is registered with the pool.
If you are using a self-hosted agent, then this name is
specified by you. See agents.

Agent.OS The operating system of the agent host. Valid values are:
Windows_NT
Darwin
Linux
If you're running in a container, the agent host and container
may be running different operating systems.

Agent.OSArchitecture The operating system processor architecture of the agent host.


Valid values are:
X86
X64
ARM

Agent.TempDirectory A temporary folder that is cleaned after each pipeline job.


This directory is used by tasks such as .NET Core CLI task
to hold temporary items like test results before they are
published.
For example: /home/vsts/work/_temp for Ubuntu

Agent.ToolsDirectory The directory used by tasks such as Node Tool Installer and
Use Python Version to switch between multiple versions of a
tool. These tasks will add tools from this directory to PATH so
that subsequent build steps can use them.

Learn about managing this directory on a self-hosted agent.

Agent.WorkFolder The working directory for this agent. For example:


c:\agent_work .

Note: This directory is not guaranteed to be writable by


pipeline tasks (eg. when mapped into a container)

Build variables (DevOps Server 2020)

VARIABLE                                  DESCRIPTION                                  AVAILABLE IN TEMPLATES?


Build.ArtifactStagingDirectory The local path on the agent where No
any artifacts are copied to before
being pushed to their destination.
For example: c:\agent_work\1\a

A typical way to use this folder is to


publish your build artifacts with the
Copy files and Publish build artifacts
tasks.

Note: Build.ArtifactStagingDirectory
and Build.StagingDirectory are
interchangeable. This directory is
purged before each new build, so
you don't have to clean it up
yourself.

See Artifacts in Azure Pipelines.

This variable is agent-scoped, and


can be used as an environment
variable in a script and as a
parameter in a build task, but not as
part of the build number or as a
version control tag.

Build.BuildId The ID of the record for the completed No


build.

Build.BuildNumber The name of the completed build, also No


known as the run number. You can
specify what is included in this value.

A typical use of this variable is to make


it part of the label format, which you
specify on the repository tab.

Note: This value can contain


whitespace or other invalid label
characters. In these cases, the label
format will fail.

This variable is agent-scoped, and


can be used as an environment
variable in a script and as a
parameter in a build task, but not as
part of the build number or as a
version control tag.

Build.BuildUri The URI for the build. For example: No


vstfs:///Build/Build/1430 .

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.
Build.BinariesDirectory The local path on the agent you can use No
as an output folder for compiled
binaries.

By default, new build pipelines are not


set up to clean this directory. You can
define your build to clean it up on the
Repository tab.

For example: c:\agent_work\1\b .

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.

Build.ContainerId The ID of the container for your artifact. No


When you upload an artifact in your
pipeline, it is added to a container that is
specific for that particular artifact.

Build.DefinitionName The name of the build pipeline. Yes

Note: This value can contain


whitespace or other invalid label
characters. In these cases, the label
format will fail.

Build.DefinitionVersion The version of the build pipeline. Yes

Build.QueuedBy See "How are the identity variables set?". Yes

Note: This value can contain


whitespace or other invalid label
characters. In these cases, the label
format will fail.

Build.QueuedById See "How are the identity variables set?". Yes


Build.Reason The event that caused the build to run. Yes
Manual : A user manually
queued the build.
IndividualCI : Continuous
integration (CI) triggered by a
Git push or a TFVC check-in.
BatchedCI : Continuous
integration (CI) triggered by a
Git push or a TFVC check-in, and
the Batch changes was
selected.
Schedule : Scheduled trigger.
ValidateShelveset : A user
manually queued the build of a
specific TFVC shelveset.
CheckInShelveset : Gated
check-in trigger.
PullRequest : The build was
triggered by a Git branch policy
that requires a build.
ResourceTrigger : The build
was triggered by a resource
trigger or it was triggered by
another build.
See Build pipeline triggers, Improve code
quality with branch policies.

Build.Repository.Clean The value you've selected for Clean in No


the source repository settings.

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.

Build.Repository.LocalPath The local path on the agent where No


your source code files are
downloaded. For example:
c:\agent_work\1\s

By default, new build pipelines


update only the changed files. You
can modify how files are
downloaded on the Repository tab.
Important note: if you only check
out one Git repository, this path will
be the exact path to the code. If you
check out multiple repositories, it
will revert to its default value, which
is $(Pipeline.Workspace)/s .

This variable is agent-scoped, and


can be used as an environment
variable in a script and as a
parameter in a build task, but not as
part of the build number or as a
version control tag.
This variable is synonymous with
Build.SourcesDirectory.
Build.Repository.ID The unique identifier of the repository. No

This won't change, even if the name of


the repository does.

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.

Build.Repository.Name The name of the triggering repository. No

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.

Build.Repository.Provider The type of the triggering repository. No


TfsGit : TFS Git repository
TfsVersionControl : Team
Foundation Version Control
Git : Git repository hosted on
an external server
GitHub
Svn : Subversion
This variable is agent-scoped, and can
be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.

Build.Repository.Tfvc.Workspace Defined if your repository is Team No


Foundation Version Control. The name
of the TFVC workspace used by the
build agent.

For example, if the Agent.BuildDirectory


is c:\agent_work\12 and the Agent.Id
is 8 , the workspace name could be:
ws_12_8

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.

Build.Repository.Uri The URL for the triggering repository. No


For example:
Git:
https://dev.azure.com/fabrikamfiber/_git/Scripts
TFVC:
https://dev.azure.com/fabrikamfiber/

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.
Build.RequestedFor See "How are the identity variables set?". Yes

Note: This value can contain


whitespace or other invalid label
characters. In these cases, the label
format will fail.

Build.RequestedForEmail See "How are the identity variables set?". Yes

Build.RequestedForId See "How are the identity variables set?". Yes

Build.SourceBranch The branch of the triggering repo the Yes


build was queued for. Some examples:
Git repo branch:
refs/heads/master
Git repo pull request:
refs/pull/1/merge
TFVC repo branch:
$/teamproject/main
TFVC repo gated check-in:
Gated_2016-06-
06_05.20.51.4369;[email protected]
TFVC repo shelveset build:
myshelveset;[email protected]
When your pipeline is triggered
by a tag:
refs/tags/your-tag-name

When you use this variable in your build


number format, the forward slash
characters ( / ) are replaced with
underscore characters ( _ ).

Note: In TFVC, if you are running a


gated check-in build or manually
building a shelveset, you cannot use this
variable in your build number format.

Build.SourceBranchName The name of the branch in the Yes


triggering repo the build was queued
for.
Git repo branch or pull request:
The last path segment in the ref.
For example, in
refs/heads/master this value
is master . In
refs/heads/feature/tools
this value is tools .
TFVC repo branch: The last path
segment in the root server path
for the workspace. For example,
in $/teamproject/main this
value is main .
TFVC repo gated check-in or
shelveset build is the name of
the shelveset. For example,
Gated_2016-06-
06_05.20.51.4369;[email protected]
or
myshelveset;[email protected]
.
Note: In TFVC, if you are running a
gated check-in build or manually
building a shelveset, you cannot use this
variable in your build number format.
Build.SourcesDirectory The local path on the agent where No
your source code files are
downloaded. For example:
c:\agent_work\1\s

By default, new build pipelines


update only the changed files. You
can modify how files are
downloaded on the Repository tab.
Important note: if you only check
out one Git repository, this path will
be the exact path to the code. If you
check out multiple repositories, it
will revert to its default value, which
is $(Pipeline.Workspace)/s .

This variable is agent-scoped, and


can be used as an environment
variable in a script and as a
parameter in a build task, but not as
part of the build number or as a
version control tag.
This variable is synonymous with
Build.Repository.LocalPath.

Build.SourceVersion The latest version control change of the Yes


triggering repo that is included in this
build.
Git: The commit ID.
TFVC: the changeset.
This variable is agent-scoped, and can
be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.

Build.SourceVersionMessage The comment of the commit or No


changeset for the triggering repo. We
truncate the message to the first line or
200 characters, whichever is shorter.
This variable is agent-scoped, and
can be used as an environment
variable in a script and as a
parameter in a build task, but not as
part of the build number or as a
version control tag. Also, this
variable is only available on the step
level and is neither available in the
job nor stage levels (i.e. the message
is not extracted until the job has
started and checked out the code).
Note: This variable is available in TFS
2015.4.
Build.StagingDirectory The local path on the agent where No
any artifacts are copied to before
being pushed to their destination.
For example: c:\agent_work\1\a

A typical way to use this folder is to


publish your build artifacts with the
Copy files and Publish build artifacts
tasks.

Note: Build.ArtifactStagingDirectory
and Build.StagingDirectory are
interchangeable. This directory is
purged before each new build, so
you don't have to clean it up
yourself.

See Artifacts in Azure Pipelines.

This variable is agent-scoped, and


can be used as an environment
variable in a script and as a
parameter in a build task, but not as
part of the build number or as a
version control tag.

Build.Repository.Git.SubmoduleCheckout The value you've selected for Checkout No


submodules on the repository tab.
With multiple repos checked out, this
value tracks the triggering repository's
setting.

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.

Build.SourceTfvcShelveset Defined if your repository is Team No


Foundation Version Control.

If you are running a gated build or a


shelveset build, this is set to the name
of the shelveset you are building.

Note: This variable yields a value that is


invalid for build use in a build number
format.

Build.TriggeredBy.BuildId If the build was triggered by another No


build, then this variable is set to the
BuildID of the triggering build. In Classic
pipelines, this variable is triggered by a
build completion trigger.

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.
Build.TriggeredBy.DefinitionId If the build was triggered by another No
build, then this variable is set to the DefinitionID of the triggering build. In Classic
pipelines, this variable is triggered by a build completion trigger.

This variable is agent-scoped, and can be used as an environment variable in a script
and as a parameter in a build task, but not as part of the build number or as a version
control tag.

Build.TriggeredBy.DefinitionName If the build was triggered by another No
build, then this variable is set to the name of the triggering build pipeline. In Classic
pipelines, this variable is triggered by a build completion trigger.

This variable is agent-scoped, and can be used as an environment variable in a script
and as a parameter in a build task, but not as part of the build number or as a version
control tag.

Build.TriggeredBy.BuildNumber If the build was triggered by another No
build, then this variable is set to the number of the triggering build. In Classic
pipelines, this variable is triggered by a build completion trigger.

This variable is agent-scoped, and can be used as an environment variable in a script
and as a parameter in a build task, but not as part of the build number or as a version
control tag.

Build.TriggeredBy.ProjectID If the build was triggered by another No
build, then this variable is set to the ID of the project that contains the triggering
build. In Classic pipelines, this variable is triggered by a build completion trigger.

This variable is agent-scoped, and can be used as an environment variable in a script
and as a parameter in a build task, but not as part of the build number or as a version
control tag.

Common.TestResultsDirectory The local path on the agent where the No
test results are created. For example: c:\agent_work\1\TestResults

This variable is agent-scoped, and can be used as an environment variable in a script
and as a parameter in a build task, but not as part of the build number or as a version
control tag.

Pipeline variables (DevOps Server 2020)


VARIABLE    DESCRIPTION

Pipeline.Workspace Workspace directory for a particular pipeline. This variable has
the same value as Agent.BuildDirectory .

For example, /home/vsts/work/1 .
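
As a minimal sketch (the artifact name WebApp is only an example), a later job might consume pipeline artifacts from this location:

steps:
# The download shortcut places pipeline artifacts under $(Pipeline.Workspace)/<artifact name>
- download: current
  artifact: WebApp
- script: ls '$(Pipeline.Workspace)/WebApp'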


Deployment job variables (DevOps Server 2020)
These variables are scoped to a specific Deployment job and will be resolved only at job execution time.

VARIABLE    DESCRIPTION

Environment.Name Name of the environment targeted in the deployment job to
run the deployment steps and record the deployment history.
For example, smarthotel-dev .

Environment.Id ID of the environment targeted in the deployment job. For
example, 10 .

Environment.ResourceName Name of the specific resource within the environment targeted
in the deployment job to run the deployment steps and record the deployment history.
For example, bookings , which is a Kubernetes namespace that has been added as a
resource to the environment smarthotel-dev .

Environment.ResourceId ID of the specific resource within the environment targeted in
the deployment job to run the deployment steps. For example, 4 .
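
For example, a minimal deployment job sketch that surfaces these variables at run time (the environment name smarthotel-dev comes from the examples above; the job name is illustrative):

jobs:
- deployment: DeployWeb
  environment: smarthotel-dev
  strategy:
    runOnce:
      deploy:
        steps:
        # Resolved only while the deployment job is executing
        - script: echo "Deploying to $(Environment.Name) (environment id $(Environment.Id))"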

System variables (DevOps Server 2020)


VARIABLE    DESCRIPTION    AVAILABLE IN TEMPLATES?

System.AccessToken Use the OAuth token to access the REST Yes
API.

Use System.AccessToken from YAML scripts.

This variable is agent-scoped, and can be used as an environment variable in a script
and as a parameter in a build task, but not as part of the build number or as a version
control tag.
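
A minimal sketch of mapping the token into a script's environment (the REST call and API version shown are illustrative):

steps:
- bash: |
    # The token is only visible to this step because it is mapped in explicitly below.
    curl -s -u :$SYSTEM_ACCESSTOKEN \
      "$(System.TeamFoundationCollectionUri)$(System.TeamProject)/_apis/build/builds?api-version=6.0"
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)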

System.CollectionId The GUID of the TFS collection or Azure Yes
DevOps organization.

System.CollectionUri A string Team Foundation Server Yes
collection URI.

System.DefaultWorkingDirectory The local path on the agent where No
your source code files are downloaded. For example: c:\agent_work\1\s

By default, new build pipelines update only the changed files. You can modify how
files are downloaded on the Repository tab.

This variable is agent-scoped. It can be used as an environment variable in a script
and as a parameter in a build task, but not as part of the build number or as a version
control tag.

System.DefinitionId The ID of the build pipeline. Yes


System.HostType Set to build if the pipeline is a build. Yes
For a release, the values are
deployment for a Deployment group
job, gates during evaluation of gates,
and release for other (Agent and
Agentless) jobs.

System.JobAttempt Set to 1 the first time this job is No


attempted, and increments every time
the job is retried.

System.JobDisplayName The human-readable name given to a No


job.

System.JobId A unique identifier for a single attempt No


of a single job.

System.JobName The name of the job, typically used for No


expressing dependencies and accessing
output variables.

System.PhaseAttempt Set to 1 the first time this phase is No


attempted, and increments every time
the job is retried.

Note: "Phase" is a mostly-redundant


concept which represents the design-
time for a job (whereas job was the
runtime version of a phase). We've
mostly removed the concept of "phase"
from Azure Pipelines. Matrix and multi-
config jobs are the only place where
"phase" is still distinct from "job". One
phase can instantiate multiple jobs
which differ only in their inputs.
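
For instance, a minimal matrix sketch in which one phase fans out into two jobs that differ only in their inputs (the pool image names are illustrative):

jobs:
- job: Build
  strategy:
    matrix:
      linux:
        imageName: 'ubuntu-latest'
      windows:
        imageName: 'windows-latest'
  pool:
    vmImage: $(imageName)
  steps:
  - script: echo "Running $(System.JobName), attempt $(System.JobAttempt)"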

System.PhaseDisplayName The human-readable name given to a No


phase.

System.PhaseName A string-based identifier for a job, No


typically used for expressing
dependencies and accessing output
variables.

System.StageAttempt Set to 1 the first time this stage is No


attempted, and increments every time
the job is retried.

System.StageDisplayName The human-readable name given to a No


stage.

System.StageName A string-based identifier for a stage, Yes


typically used for expressing
dependencies and accessing output
variables.

System.PullRequest.IsFork If the pull request is from a fork of the Yes


repository, this variable is set to True .
Otherwise, it is set to False .
System.PullRequest.PullRequestId The ID of the pull request that caused No
this build. For example: 17 . (This
variable is initialized only if the build ran
because of a Git PR affected by a branch
policy).

System.PullRequest.PullRequestNumber The number of the pull request that No
caused this build. This variable is populated for pull requests from GitHub, which
have a different pull request ID and pull request number. This variable is only
available in a YAML pipeline if the PR is affected by a branch policy.

System.PullRequest.SourceBranch The branch that is being reviewed in a No


pull request. For example:
refs/heads/users/raisa/new-
feature
. (This variable is initialized only if the
build ran because of a Git PR affected by
a branch policy). This variable is only
available in a YAML pipeline if the PR is
affected by a branch policy.

System.PullRequest.SourceRepositoryURI The URL to the repo that contains the No


pull request. For example:
https://dev.azure.com/ouraccount/_git/OurProject
.

System.PullRequest.TargetBranch The branch that is the target of a pull No


request. For example:
refs/heads/master when your
repository is in Azure Repos and
master when your repository is in
GitHub. This variable is initialized only if
the build ran because of a Git PR
affected by a branch policy. This variable
is only available in a YAML pipeline if the
PR is affected by a branch policy.
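
For example, a sketch of gating a step on pull request validation using these variables (the echoed message is illustrative):

steps:
- script: echo "Validating PR from $(System.PullRequest.SourceBranch) into $(System.PullRequest.TargetBranch)"
  condition: eq(variables['Build.Reason'], 'PullRequest')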

System.TeamFoundationCollectionUri The URI of the team foundation Yes


collection. For example:
https://dev.azure.com/fabrikamfiber/

This variable is agent-scoped, and can


be used as an environment variable in a
script and as a parameter in a build task,
but not as part of the build number or
as a version control tag.

System.TeamProject The name of the project that contains Yes


this build.

System.TeamProjectId The ID of the project that this build Yes


belongs to.

TF_BUILD Set to True if the script is being run by No
a build task.

This variable is agent-scoped, and can be used as an environment variable in a script
and as a parameter in a build task, but not as part of the build number or as a version
control tag.
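
A minimal sketch of using TF_BUILD so the same script behaves sensibly both locally and inside a pipeline:

- bash: |
    if [ "$TF_BUILD" = "True" ]; then
      echo "Running under Azure Pipelines"
    else
      echo "Running outside a build task"
    fi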
Agent variables (DevOps Server 2019)
NOTE
You can use agent variables as environment variables in your scripts and as parameters in your build tasks. You cannot use
them to customize the build number or to apply a version control label or tag.

VARIABLE    DESCRIPTION

Agent.BuildDirectory The local path on the agent where all folders for a given
build pipeline are created.
For example: c:\agent_work\1

Agent.HomeDirectory The directory the agent is installed into. This contains the
agent software. For example: c:\agent .

Agent.Id The ID of the agent.

Agent.JobName The name of the running job. This will usually be "Job" or
"__default", but in multi-config scenarios, will be the
configuration.

Agent.JobStatus The status of the build.
Canceled
Failed
Succeeded
SucceededWithIssues (partially successful)
The environment variable should be referenced as AGENT_JOBSTATUS . The older
agent.jobstatus is available for backwards compatibility.
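
For instance, a small sketch that reports the status even when earlier steps fail (condition: always() is what keeps the step running):

steps:
- bash: echo "Job finished with status $AGENT_JOBSTATUS"
  condition: always()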

Agent.MachineName The name of the machine on which the agent is installed.

Agent.Name The name of the agent that is registered with the pool.
If you are using a self-hosted agent, then this name is
specified by you. See agents.

Agent.OS The operating system of the agent host. Valid values are:
Windows_NT
Darwin
Linux
If you're running in a container, the agent host and container
may be running different operating systems.

Agent.OSArchitecture The operating system processor architecture of the agent host.


Valid values are:
X86
X64
ARM

Agent.TempDirectory A temporary folder that is cleaned after each pipeline job. This
directory is used by tasks such as .NET Core CLI task to hold
temporary items like test results before they are published.
Agent.ToolsDirectory The directory used by tasks such as Node Tool Installer and
Use Python Version to switch between multiple versions of a
tool. These tasks will add tools from this directory to PATH so
that subsequent build steps can use them.

Learn about managing this directory on a self-hosted agent.
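
As a sketch of the tool-installer pattern described here (the Python version is illustrative):

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.x'
- script: |
    echo "Tool cache lives under $(Agent.ToolsDirectory)"
    python --version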

Agent.WorkFolder The working directory for this agent. For example:


c:\agent_work .

This directory is not guaranteed to be writable by pipeline
tasks (e.g., when mapped into a container).

Build variables (DevOps Server 2019)

VARIABLE    DESCRIPTION

Build.ArtifactStagingDirectory The local path on the agent where any artifacts are copied
to before being pushed to their destination. For example:
c:\agent_work\1\a

A typical way to use this folder is to publish your build


artifacts with the Copy files and Publish build artifacts
tasks.

Note: Build.ArtifactStagingDirectory and


Build.StagingDirectory are interchangeable. This directory
is purged before each new build, so you don't have to
clean it up yourself.

See Artifacts in Azure Pipelines.

This variable is agent-scoped. It can be used as an


environment variable in a script and as a parameter in a
build task, but not as part of the build number or as a
version control tag.

Build.BuildId The ID of the record for the completed build.

Build.BuildNumber The name of the completed build. You can specify the build
number format that generates this value in the pipeline options.

A typical use of this variable is to make it part of the label
format, which you specify on the repository tab.

Note: This value can contain whitespace or other invalid
label characters. In these cases, the label format will fail.

This variable is agent-scoped. It can be used as an
environment variable in a script and as a parameter in a
build task, but not as part of the build number or as a
version control tag.
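
In a YAML pipeline, the build number format is set with the top-level name property; a minimal sketch (the format string is only an example):

name: $(Date:yyyyMMdd)$(Rev:.r)

steps:
- script: echo "This run is $(Build.BuildNumber)"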

Build.BuildUri The URI for the build. For example:


vstfs:///Build/Build/1430 .

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.
Build.BinariesDirectory The local path on the agent you can use as an output folder
for compiled binaries.

By default, new build pipelines are not set up to clean this


directory. You can define your build to clean it up on the
Repository tab.

For example: c:\agent_work\1\b .

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.DefinitionName The name of the build pipeline.

Note: This value can contain whitespace or other invalid


label characters. In these cases, the label format will fail.

Build.DefinitionVersion The version of the build pipeline.

Build.QueuedBy See "How are the identity variables set?".

Note: This value can contain whitespace or other invalid


label characters. In these cases, the label format will fail.

Build.QueuedById See "How are the identity variables set?".

Build.Reason The event that caused the build to run.
Manual : A user manually queued the build.
IndividualCI : Continuous integration (CI) triggered by a Git push or a TFVC check-in.
BatchedCI : Continuous integration (CI) triggered by a Git push or a TFVC check-in, and the Batch changes option was selected.
Schedule : Scheduled trigger.
ValidateShelveset : A user manually queued the build of a specific TFVC shelveset.
CheckInShelveset : Gated check-in trigger.
PullRequest : The build was triggered by a Git branch policy that requires a build.
BuildCompletion : The build was triggered by another build.
See Build pipeline triggers, Improve code quality with branch policies.
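
For example, a sketch of branching on the trigger (the script name run-full-tests.sh is hypothetical):

steps:
- script: echo "Triggered by $(Build.Reason)"
# Skip the long-running suite for pull request validation builds
- script: ./run-full-tests.sh
  condition: ne(variables['Build.Reason'], 'PullRequest')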

Build.Repository.Clean The value you've selected for Clean in the source repository
settings.

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.
Build.Repository.LocalPath The local path on the agent where your source code files
are downloaded. For example: c:\agent_work\1\s

By default, new build pipelines update only the changed


files. You can modify how files are downloaded on the
Repository tab.

This variable is agent-scoped. It can be used as an


environment variable in a script and as a parameter in a
build task, but not as part of the build number or as a
version control tag.
This variable is synonymous with Build.SourcesDirectory.

Build.Repository.Name The name of the repository.

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.Repository.Provider The type of repository you selected.


TfsGit : TFS Git repository
TfsVersionControl : Team Foundation Version
Control
Git : Git repository hosted on an external server
GitHub
Svn : Subversion
This variable is agent-scoped. It can be used as an environment
variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.Repository.Tfvc.Workspace Defined if your repository is Team Foundation Version Control.


The name of the TFVC workspace used by the build agent.

For example, if the Agent.BuildDirectory is c:\agent_work\12


and the Agent.Id is 8 , the workspace name could be:
ws_12_8

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.Repository.Uri The URL for the repository. For example:


Git:
https://dev.azure.com/fabrikamfiber/_git/Scripts
TFVC: https://dev.azure.com/fabrikamfiber/

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.RequestedFor See "How are the identity variables set?".

Note: This value can contain whitespace or other invalid


label characters. In these cases, the label format will fail.

Build.RequestedForEmail See "How are the identity variables set?".

Build.RequestedForId See "How are the identity variables set?".


Build.SourceBranch The branch the build was queued for. Some examples:
Git repo branch: refs/heads/master
Git repo pull request: refs/pull/1/merge
TFVC repo branch: $/teamproject/main
TFVC repo gated check-in:
Gated_2016-06-
06_05.20.51.4369;[email protected]
TFVC repo shelveset build:
myshelveset;[email protected]

When you use this variable in your build number format, the
forward slash characters ( / ) are replaced with underscore
characters ( _ ).

Note: In TFVC, if you are running a gated check-in build or
manually building a shelveset, you cannot use this variable in
your build number format.

Build.SourceBranchName The name of the branch the build was queued for.
Git repo branch or pull request: The last path segment
in the ref. For example, in refs/heads/master this
value is master . In refs/heads/feature/tools this
value is tools .
TFVC repo branch: The last path segment in the root
server path for the workspace. For example in
$/teamproject/main this value is main .
TFVC repo gated check-in or shelveset build is the
name of the shelveset. For example,
Gated_2016-06-
06_05.20.51.4369;[email protected]
or myshelveset;[email protected] .
Note: In TFVC, if you are running a gated check-in build or
manually building a shelveset, you cannot use this variable in
your build number format.
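
A quick sketch showing the difference between the two branch variables:

steps:
- script: |
    echo "Full ref:    $(Build.SourceBranch)"
    echo "Branch name: $(Build.SourceBranchName)"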

Build.SourcesDirectory The local path on the agent where your source code files
are downloaded. For example: c:\agent_work\1\s

By default, new build pipelines update only the changed


files. You can modify how files are downloaded on the
Repository tab.

This variable is agent-scoped. It can be used as an


environment variable in a script and as a parameter in a
build task, but not as part of the build number or as a
version control tag.
This variable is synonymous with
Build.Repository.LocalPath.

Build.SourceVersion The latest version control change that is included in this build.
Git: The commit ID.
TFVC: the changeset.
This variable is agent-scoped. It can be used as an environment
variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.
Build.SourceVersionMessage The comment of the commit or changeset. We truncate the
message to the first line or 200 characters, whichever is
shorter.
This variable is agent-scoped. It can be used as an
environment variable in a script and as a parameter in a
build task, but not as part of the build number or as a
version control tag.
Note: This variable is available in TFS 2015.4.

Build.StagingDirectory The local path on the agent where any artifacts are copied
to before being pushed to their destination. For example:
c:\agent_work\1\a

A typical way to use this folder is to publish your build


artifacts with the Copy files and Publish build artifacts
tasks.

Note: Build.ArtifactStagingDirectory and


Build.StagingDirectory are interchangeable. This directory
is purged before each new build, so you don't have to
clean it up yourself.

See Artifacts in Azure Pipelines.

This variable is agent-scoped. It can be used as an


environment variable in a script and as a parameter in a
build task, but not as part of the build number or as a
version control tag.

Build.Repository.Git.SubmoduleCheckout The value you've selected for Checkout submodules on the


repository tab.

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.SourceTfvcShelveset Defined if your repository is Team Foundation Version Control.

If you are running a gated build or a shelveset build, this is set


to the name of the shelveset you are building.

Note: This variable yields a value that is invalid for use in a build number format.

Build.TriggeredBy.BuildId If the build was triggered by another build, then this variable is
set to the BuildID of the triggering build.

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.TriggeredBy.DefinitionId If the build was triggered by another build, then this variable is
set to the DefinitionID of the triggering build.

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.TriggeredBy.DefinitionName If the build was triggered by another build, then this variable is
set to the name of the triggering build pipeline.

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.
Build.TriggeredBy.BuildNumber If the build was triggered by another build, then this variable is
set to the number of the triggering build.

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.TriggeredBy.ProjectID If the build was triggered by another build, then this variable is
set to the ID of the project that contains the triggering build.

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Common.TestResultsDirectory The local path on the agent where the test results are created.
For example: c:\agent_work\1\TestResults

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

System variables (DevOps Server 2019)


VARIABLE    DESCRIPTION

System.AccessToken Use the OAuth token to access the REST API.

Use System.AccessToken from YAML scripts.

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

System.CollectionId The GUID of the TFS collection or Azure DevOps organization

System.DefaultWorkingDirectory The local path on the agent where your source code files
are downloaded. For example: c:\agent_work\1\s

By default, new build pipelines update only the changed


files. You can modify how files are downloaded on the
Repository tab.

This variable is agent-scoped. It can be used as an


environment variable in a script and as a parameter in a
build task, but not as part of the build number or as a
version control tag.

System.DefinitionId The ID of the build pipeline.

System.HostType Set to build if the pipeline is a build. For a release, the values
are deployment for a Deployment group job and release
for an Agent job.

System.PullRequest.IsFork If the pull request is from a fork of the repository, this variable
is set to True . Otherwise, it is set to False .

System.PullRequest.PullRequestId The ID of the pull request that caused this build. For example:
17 . (This variable is initialized only if the build ran because of
a Git PR affected by a branch policy.)
System.PullRequest.PullRequestNumber The number of the pull request that caused this build. This
variable is populated for pull requests from GitHub which have
a different pull request ID and pull request number.

System.PullRequest.SourceBranch The branch that is being reviewed in a pull request. For


example: refs/heads/users/raisa/new-feature . (This
variable is initialized only if the build ran because of a Git PR
affected by a branch policy.)

System.PullRequest.SourceRepositoryURI The URL to the repo that contains the pull request. For
example:
https://dev.azure.com/ouraccount/_git/OurProject . (This
variable is initialized only if the build ran because of an Azure
Repos Git PR affected by a branch policy. It is not initialized for
GitHub PRs.)

System.PullRequest.TargetBranch The branch that is the target of a pull request. For example:
refs/heads/master . This variable is initialized only if the build
ran because of a Git PR affected by a branch policy.

System.TeamFoundationCollectionUri The URI of the team foundation collection. For example:


https://dev.azure.com/fabrikamfiber/ .

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

System.TeamProject The name of the project that contains this build.

System.TeamProjectId The ID of the project that this build belongs to.

TF_BUILD Set to True if the script is being run by a build task.

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Agent variables (TFS 2018)


NOTE
You can use agent variables as environment variables in your scripts and as parameters in your build tasks. You cannot use
them to customize the build number or to apply a version control label or tag.

VARIABLE    DESCRIPTION

Agent.BuildDirectory The local path on the agent where all folders for a given
build pipeline are created.
For example: c:\agent_work\1

Agent.HomeDirectory The directory the agent is installed into. This contains the
agent software. For example: c:\agent .

Agent.Id The ID of the agent.


Agent.JobStatus The status of the build.
Canceled
Failed
Succeeded
SucceededWithIssues (partially successful)
The environment variable should be referenced as
AGENT_JOBSTATUS . The older agent.jobstatus is
available for backwards compatibility.

Agent.MachineName The name of the machine on which the agent is installed.

Agent.Name The name of the agent that is registered with the pool.
This name is specified by you. See agents.

Agent.TempDirectory A temporary folder that is cleaned after each pipeline job. This
directory is used by tasks such as .NET Core CLI task to hold
temporary items like test results before they are published.

Agent.ToolsDirectory The directory used by tasks such as Node Tool Installer and
Use Python Version to switch between multiple versions of a
tool. These tasks will add tools from this directory to PATH so
that subsequent build steps can use them.

Learn about managing this directory on a self-hosted agent.

Agent.WorkFolder The working directory for this agent. For example:


c:\agent_work .

Build variables (TFS 2018)

VARIABLE    DESCRIPTION

Build.ArtifactStagingDirectory The local path on the agent where any artifacts are copied
to before being pushed to their destination. For example:
c:\agent_work\1\a

A typical way to use this folder is to publish your build


artifacts with the Copy files and Publish build artifacts
tasks.

Note: Build.ArtifactStagingDirectory and


Build.StagingDirectory are interchangeable. This directory
is purged before each new build, so you don't have to
clean it up yourself.

See Artifacts in Azure Pipelines.

This variable is agent-scoped. It can be used as an


environment variable in a script and as a parameter in a
build task, but not as part of the build number or as a
version control tag.

Build.BuildId The ID of the record for the completed build.


Build.BuildNumber The name of the completed build. You can specify the build
number format that generates this value in the pipeline
options.

A typical use of this variable is to make it part of the label


format, which you specify on the repository tab.

Note: This value can contain whitespace or other invalid


label characters. In these cases, the label format will fail.

This variable is agent-scoped. It can be used as an


environment variable in a script and as a parameter in a
build task, but not as a version control tag.

Build.BuildUri The URI for the build. For example:


vstfs:///Build/Build/1430 .

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.BinariesDirectory The local path on the agent you can use as an output folder
for compiled binaries.

By default, new build pipelines are not set up to clean this


directory. You can define your build to clean it up on the
Repository tab.

For example: c:\agent_work\1\b .

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.DefinitionName The name of the build pipeline.


Note: This value can contain whitespace or other invalid
label characters. In these cases, the label format will fail.

Build.DefinitionVersion The version of the build pipeline.

Build.QueuedBy See "How are the identity variables set?".


Note: This value can contain whitespace or other invalid
label characters. In these cases, the label format will fail.

Build.QueuedById See "How are the identity variables set?".


Build.Reason The event that caused the build to run.
Manual : A user manually queued the build.
IndividualCI : Continuous integration (CI)
triggered by a Git push or a TFVC check-in.
BatchedCI : Continuous integration (CI) triggered
by a Git push or a TFVC check-in, and the Batch
changes was selected.
Schedule : Scheduled trigger.
ValidateShelveset : A user manually queued the
build of a specific TFVC shelveset.
CheckInShelveset : Gated check-in trigger.
PullRequest : The build was triggered by a Git branch
policy that requires a build.
See Build pipeline triggers, Improve code quality with branch
policies.

Build.Repository.Clean The value you've selected for Clean in the source repository
settings.

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.Repository.LocalPath The local path on the agent where your source code files
are downloaded. For example: c:\agent_work\1\s

By default, new build pipelines update only the changed


files. You can modify how files are downloaded on the
Repository tab.

This variable is agent-scoped. It can be used as an


environment variable in a script and as a parameter in a
build task, but not as part of the build number or as a
version control tag.
This variable is synonymous with Build.SourcesDirectory.

Build.Repository.Name The name of the repository.

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.Repository.Provider The type of repository you selected.


TfsGit : TFS Git repository
TfsVersionControl : Team Foundation Version
Control
Git : Git repository hosted on an external server
Svn : Subversion

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.
Build.Repository.Tfvc.Workspace Defined if your repository is Team Foundation Version Control.
The name of the TFVC workspace used by the build agent.

For example, if the Agent.BuildDirectory is c:\agent_work\12


and the Agent.Id is 8 , the workspace name could be:
ws_12_8

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.Repository.Uri The URL for the repository. For example:


Git:
https://fabrikamfiber/tfs/DefaultCollection/Scripts/_git/Scripts
TFVC:
https://fabrikamfiber/tfs/DefaultCollection/

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.RequestedFor See "How are the identity variables set?".


Note: This value can contain whitespace or other invalid
label characters. In these cases, the label format will fail.

Build.RequestedForEmail See "How are the identity variables set?".

Build.RequestedForId See "How are the identity variables set?".

Build.SourceBranch The branch the build was queued for. Some examples:
Git repo branch: refs/heads/master
Git repo pull request: refs/pull/1/merge
TFVC repo branch: $/teamproject/main
TFVC repo gated check-in:
Gated_2016-06-
06_05.20.51.4369;[email protected]
TFVC repo shelveset build:
myshelveset;[email protected]

When you use this variable in your build number format, the
forward slash characters ( / ) are replaced with underscore
characters ( _ ).

Note: In TFVC, if you are running a gated check-in build or


manually building a shelveset, you cannot use this variable in
your build number format.

Build.SourceBranchName The name of the branch the build was queued for.
Git repo branch or pull request: The last path segment
in the ref. For example, in refs/heads/master this
value is master . In refs/heads/feature/tools this
value is tools .
TFVC repo branch: The last path segment in the root
server path for the workspace. For example in
$/teamproject/main this value is main .
TFVC repo gated check-in or shelveset build is the
name of the shelveset. For example,
Gated_2016-06-
06_05.20.51.4369;[email protected]
or myshelveset;[email protected] .
Note: In TFVC, if you are running a gated check-in build or
manually building a shelveset, you cannot use this variable in
your build number format.
Build.SourcesDirectory The local path on the agent where your source code files
are downloaded. For example: c:\agent_work\1\s

By default, new build pipelines update only the changed


files. You can modify how files are downloaded on the
Repository tab.

This variable is agent-scoped. It can be used as an


environment variable in a script and as a parameter in a
build task, but not as part of the build number or as a
version control tag.
This variable is synonymous with
Build.Repository.LocalPath.

Build.SourceVersion The latest version control change that is included in this build.
Git: The commit ID.
TFVC: the changeset.
This variable is agent-scoped. It can be used as an environment
variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.SourceVersionMessage The comment of the commit or changeset. We truncate the


message to the first line or 200 characters, whichever is
shorter.
This variable is agent-scoped, and can be used as an
environment variable in a script and as a parameter in a
build task, but not as part of the build number or as a
version control tag.
Note: This variable is available in TFS 2015.4.

Build.StagingDirectory The local path on the agent where any artifacts are copied
to before being pushed to their destination. For example:
c:\agent_work\1\a

A typical way to use this folder is to publish your build


artifacts with the Copy files and Publish build artifacts
tasks.

Note: Build.ArtifactStagingDirectory and


Build.StagingDirectory are interchangeable. This directory
is purged before each new build, so you don't have to
clean it up yourself.

See Artifacts in Azure Pipelines.

This variable is agent-scoped. It can be used as an


environment variable in a script and as a parameter in a
build task, but not as part of the build number or as a
version control tag.

Build.Repository.Git.SubmoduleCheckout The value you've selected for Checkout submodules on the


repository tab.

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.
Build.SourceTfvcShelveset Defined if your repository is Team Foundation Version Control.

If you are running a gated build or a shelveset build, this is set


to the name of the shelveset you are building.

Note: This variable yields a value that is invalid for use in a build number format.

Common.TestResultsDirectory The local path on the agent where the test results are created.
For example: c:\agent_work\1\TestResults

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

System variables (TFS 2018)


VARIABLE    DESCRIPTION

System.AccessToken Use the OAuth token to access the REST API.

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

System.CollectionId The GUID of the TFS collection or Azure DevOps organization

System.DefaultWorkingDirectory The local path on the agent where your source code files
are downloaded. For example: c:\agent_work\1\s

By default, new build pipelines update only the changed


files. You can modify how files are downloaded on the
Repository tab.

This variable is agent-scoped. It can be used as an


environment variable in a script and as a parameter in a
build task, but not as part of the build number or as a
version control tag.

System.DefinitionId The ID of the build pipeline.

System.HostType Set to build if the pipeline is a build or release if the


pipeline is a release.

System.PullRequest.IsFork If the pull request is from a fork of the repository, this variable
is set to True . Otherwise, it is set to False . Available in TFS
2018.2 .

System.PullRequest.PullRequestId The ID of the pull request that caused this build. For example:
17 . (This variable is initialized only if the build ran because of
a Git PR affected by a branch policy.)

System.PullRequest.SourceBranch The branch that is being reviewed in a pull request. For


example: refs/heads/users/raisa/new-feature . (This
variable is initialized only if the build ran because of a Git PR
affected by a branch policy.)
System.PullRequest.SourceRepositoryURI The URL to the repo that contains the pull request. For
example:
http://our-
server:8080/tfs/DefaultCollection/_git/OurProject
. (This variable is initialized only if the build ran because of an
Azure Repos Git PR affected by a branch policy.)

System.PullRequest.TargetBranch The branch that is the target of a pull request. For example:
refs/heads/master . This variable is initialized only if the build
ran because of a Git PR affected by a branch policy.

System.TeamFoundationCollectionUri The URI of the team foundation collection. For example:


http://our-server:8080/tfs/DefaultCollection/ .

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

System.TeamProject The name of the project that contains this build.

System.TeamProjectId The ID of the project that this build belongs to.

TF_BUILD Set to True if the script is being run by a build task.

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Agent variables (TFS 2017)


NOTE
You can use agent variables as environment variables in your scripts and as parameters in your build tasks. You cannot use
them to customize the build number or to apply a version control label or tag.

VARIABLE    DESCRIPTION

Agent.BuildDirectory The local path on the agent where all folders for a given
build pipeline are created.
For example: c:\agent_work\1

Agent.ComputerName The name of the machine on which the agent is installed.

Agent.HomeDirectory The directory the agent is installed into. This contains the
agent software. For example: c:\agent .

Agent.Id The ID of the agent.

Agent.JobStatus The status of the build.


Canceled
Failed
Succeeded
SucceededWithIssues (partially successful)
The environment variable should be referenced as
AGENT_JOBSTATUS . The older agent.jobstatus is
available for backwards compatibility.
Agent.Name The name of the agent that is registered with the pool.
This name is specified by you. See agents.

Agent.WorkFolder The working directory for this agent. For example:


c:\agent_work .

Build variables (TFS 2017)

VARIABLE    DESCRIPTION

Build.ArtifactStagingDirectory The local path on the agent where any artifacts are copied
to before being pushed to their destination. For example:
c:\agent_work\1\a

A typical way to use this folder is to publish your build


artifacts with the Copy files and Publish build artifacts
tasks.

Note: Build.ArtifactStagingDirectory and


Build.StagingDirectory are interchangeable. This directory
is purged before each new build, so you don't have to
clean it up yourself.

See Artifacts in Azure Pipelines.

This variable is agent-scoped. It can be used as an


environment variable in a script and as a parameter in a
build task, but not as part of the build number or as a
version control tag.

Build.BuildId The ID of the record for the completed build.

Build.BuildNumber The name of the completed build. You can specify the build
number format that generates this value in the pipeline
options.

A typical use of this variable is to make it part of the label


format, which you specify on the repository tab.

Note: This value can contain whitespace or other invalid


label characters. In these cases, the label format will fail.

This variable is agent-scoped. It can be used as an


environment variable in a script and as a parameter in a
build task, but not as a version control tag.

Build.BuildUri The URI for the build. For example:


vstfs:///Build/Build/1430 .

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.
Build.BinariesDirectory The local path on the agent you can use as an output folder
for compiled binaries.

By default, new build pipelines are not set up to clean this


directory. You can define your build to clean it up on the
Repository tab.

For example: c:\agent_work\1\b .

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.DefinitionName The name of the build pipeline.


Note: This value can contain whitespace or other invalid
label characters. In these cases, the label format will fail.

Build.DefinitionVersion The version of the build pipeline.

Build.QueuedBy See "How are the identity variables set?".


Note: This value can contain whitespace or other invalid
label characters. In these cases, the label format will fail.

Build.QueuedById See "How are the identity variables set?".

Build.Reason The event that caused the build to run. Available in TFS
2017.3 .
Manual : A user manually queued the build.
IndividualCI : Continuous integration (CI)
triggered by a Git push or a TFVC check-in.
BatchedCI : Continuous integration (CI) triggered
by a Git push or a TFVC check-in, and the Batch
changes was selected.
Schedule : Scheduled trigger.
ValidateShelveset : A user manually queued the
build of a specific TFVC shelveset.
CheckInShelveset : Gated check-in trigger.
PullRequest : The build was triggered by a Git branch
policy that requires a build.
See Build pipeline triggers, Improve code quality with branch
policies.

Build.Repository.Clean The value you've selected for Clean in the source repository
settings.

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.Repository.LocalPath The local path on the agent where your source code files
are downloaded. For example: c:\agent_work\1\s

By default, new build pipelines update only the changed


files. You can modify how files are downloaded on the
Repository tab.

This variable is agent-scoped. It can be used as an


environment variable in a script and as a parameter in a
build task, but not as part of the build number or as a
version control tag.
This variable is synonymous with Build.SourcesDirectory.
Build.Repository.Name The name of the repository.

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.Repository.Provider The type of repository you selected.


TfsGit : TFS Git repository
TfsVersionControl : Team Foundation Version
Control
Git : Git repository hosted on an external server
Svn : Subversion

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.Repository.Tfvc.Workspace Defined if your repository is Team Foundation Version Control.


The name of the TFVC workspace used by the build agent.

For example, if the Agent.BuildDirectory is c:\agent_work\12


and the Agent.Id is 8 , the workspace name could be:
ws_12_8

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.Repository.Uri The URL for the repository. For example:


Git:
https://fabrikamfiber/tfs/DefaultCollection/Scripts/_git/Scripts
TFVC:
https://fabrikamfiber/tfs/DefaultCollection/

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.RequestedFor See "How are the identity variables set?".


Note: This value can contain whitespace or other invalid
label characters. In these cases, the label format will fail.

Build.RequestedForEmail See "How are the identity variables set?".

Build.RequestedForId See "How are the identity variables set?".

Build.SourceBranch The branch the build was queued for. Some examples:
Git repo branch: refs/heads/master
Git repo pull request: refs/pull/1/merge
TFVC repo branch: $/teamproject/main
TFVC repo gated check-in:
Gated_2016-06-
06_05.20.51.4369;[email protected]
TFVC repo shelveset build:
myshelveset;[email protected]

When you use this variable in your build number format, the
forward slash characters ( / ) are replaced with underscore
characters ( _ ).

Note: In TFVC, if you are running a gated check-in build or


manually building a shelveset, you cannot use this variable in
your build number format.
Build.SourceBranchName The name of the branch the build was queued for.
Git repo branch or pull request: The last path segment
in the ref. For example, in refs/heads/master this
value is master . In refs/heads/feature/tools this
value is tools .
TFVC repo branch: The last path segment in the root
server path for the workspace. For example in
$/teamproject/main this value is main .
TFVC repo gated check-in or shelveset build is the
name of the shelveset. For example,
Gated_2016-06-
06_05.20.51.4369;[email protected]
or myshelveset;[email protected] .
Note: In TFVC, if you are running a gated check-in build or
manually building a shelveset, you cannot use this variable in
your build number format.

Build.SourcesDirectory The local path on the agent where your source code files
are downloaded. For example: c:\agent_work\1\s

By default, new build pipelines update only the changed


files. You can modify how files are downloaded on the
Repository tab.

This variable is agent-scoped. It can be used as an


environment variable in a script and as a parameter in a
build task, but not as part of the build number or as a
version control tag.
This variable is synonymous with
Build.Repository.LocalPath.

Build.SourceVersion The latest version control change that is included in this build.
Git: The commit ID.
TFVC: the changeset.
This variable is agent-scoped. It can be used as an environment
variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.SourceVersionMessage The comment of the commit or changeset. We truncate the


message to the first line or 200 characters, whichever is
shorter.
This variable is agent-scoped, and can be used as an
environment variable in a script and as a parameter in a
build task, but not as part of the build number or as a
version control tag.
Note: This variable is available in TFS 2015.4.
Build.StagingDirectory The local path on the agent where any artifacts are copied
to before being pushed to their destination. For example:
c:\agent_work\1\a

A typical way to use this folder is to publish your build


artifacts with the Copy files and Publish build artifacts
tasks.

Note: Build.ArtifactStagingDirectory and


Build.StagingDirectory are interchangeable. This directory
is purged before each new build, so you don't have to
clean it up yourself.

See Artifacts in Azure Pipelines.

This variable is agent-scoped. It can be used as an


environment variable in a script and as a parameter in a
build task, but not as part of the build number or as a
version control tag.

Build.Repository.Git.SubmoduleCheckout The value you've selected for Checkout submodules on the


repository tab.

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.SourceTfvcShelveset Defined if your repository is Team Foundation Version Control.

If you are running a gated build or a shelveset build, this is set


to the name of the shelveset you are building.

Note: This variable yields a value that is invalid for use in a build number format.

Common.TestResultsDirectory The local path on the agent where the test results are created.
For example: c:\agent_work\1\TestResults

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

System variables (TFS 2017)


VARIABLE    DESCRIPTION

System.AccessToken Use the OAuth token to access the REST API.

System.CollectionId The GUID of the TFS collection or Azure DevOps organization

System.DefaultWorkingDirectory The local path on the agent where your source code files
are downloaded. For example: c:\agent_work\1\s

By default, new build pipelines update only the changed


files. You can modify how files are downloaded on the
Repository tab.

This variable is agent-scoped. It can be used as an


environment variable in a script and as a parameter in a
build task, but not as part of the build number or as a
version control tag.

System.DefinitionId The ID of the build pipeline.


System.HostType Set to build if the pipeline is a build or release if the
pipeline is a release.

System.PullRequest.PullRequestId The ID of the pull request that caused this build. For example:
17 . (This variable is initialized only if the build ran because of
a Git PR affected by a branch policy.)

System.PullRequest.SourceBranch The branch that is being reviewed in a pull request. For


example: refs/heads/users/raisa/new-feature . (This
variable is initialized only if the build ran because of a Git PR
affected by a branch policy.)

System.PullRequest.SourceRepositoryURI The URL to the repo that contains the pull request. For
example:
http://our-
server:8080/tfs/DefaultCollection/_git/OurProject
. (This variable is initialized only if the build ran because of an
Azure Repos Git PR affected by a branch policy.)

System.PullRequest.TargetBranch The branch that is the target of a pull request. For example:
refs/heads/master . This variable is initialized only if the build
ran because of a Git PR affected by a branch policy.

System.TeamFoundationCollectionUri The URI of the team foundation collection. For example:


http://our-server:8080/tfs/DefaultCollection/ .

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

System.TeamProject The name of the project that contains this build.

System.TeamProjectId The ID of the project that this build belongs to.

TF_BUILD Set to True if the script is being run by a build task.

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Agent variables (TFS 2015)


NOTE
You can use agent variables as environment variables in your scripts and as parameters in your build tasks. You cannot use
them to customize the build number or to apply a version control label or tag.

VARIABLE    DESCRIPTION

Agent.BuildDirectory The local path on the agent where all folders for a given
build pipeline are created.
For example:
TFS 2015.4:
C:\TfsData\Agents\Agent-MACHINENAME_work\1
TFS 2015 RTM user-installed agent:
C:\Agent_work\6c3842c6
TFS 2015 RTM built-in agent:
C:\TfsData\Build_work\6c3842c6
Agent.HomeDirectory The directory the agent is installed into. This contains the
agent software.
For example:
TFS 2015.4: C:\TfsData\Agents\Agent-MACHINENAME
TFS 2015 RTM user-installed agent: C:\Agent
TFS 2015 RTM built-in agent:
C:\Program Files\Microsoft Team Foundation
Server 14.0\Build

Agent.Id The ID of the agent.

Agent.JobStatus The status of the build.


Canceled
Failed
Succeeded
SucceededWithIssues (partially successful)
Note: The environment variable can be referenced only as
agent.jobstatus . AGENT_JOBSTATUS was not present in
TFS 2015.

Agent.MachineName The name of the machine on which the agent is installed. This
variable is available in TFS 2015.4 , not in TFS 2015 RTM .

Agent.Name The name of the agent that is registered with the pool.
This name is specified by you. See agents.

Agent.WorkFolder The working directory for this agent. For example:


c:\agent_work .

Build variables (TFS 2015)

VARIABLE    DESCRIPTION


Build.ArtifactStagingDirectory The local path on the agent where any artifacts are copied to
before being pushed to their destination.

A typical way to use this folder is to publish your build artifacts


with the Copy files and Publish build artifacts tasks. See
Artifacts in Azure Pipelines.

For example:
TFS 2015.4:
C:\TfsData\Agents\Agent-MACHINENAME_work\1\a
TFS 2015 RTM default agent:
C:\TfsData\Build_work\6c3842c6\artifacts
TFS 2015 RTM agent installed by you:
C:\Agent_work\6c3842c6\artifacts

This directory is purged before each new build, so you don't


have to clean it up yourself.

In TFS 2015.4 , Build.ArtifactStagingDirectory and


Build.StagingDirectory are interchangeable.

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.BuildId The ID of the record for the completed build.

Build.BuildNumber The name of the completed build. You can specify the build
number format that generates this value in the pipeline
options.
A typical use of this variable is to make it part of the label
format, which you specify on the repository tab.
Note: This value can contain whitespace or other invalid
label characters. In these cases, the label format will fail.

This variable is agent-scoped. It can be used as an


environment variable in a script and as a parameter in a
build task, but not as part of a version control tag.

Build.BuildUri The URI for the build. For example:


vstfs:///Build/Build/1430 .

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.BinariesDirectory The local path on the agent you can use as an output folder
for compiled binaries. Available in TFS 2015.4 .

By default, new build pipelines are not set up to clean this


directory. You can define your build to clean it up on the
Repository tab.

For example:
C:\TfsData\Agents\Agent-MACHINENAME_work\1\b

This variable is agent-scoped. It can be used as an environment


variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.DefinitionName The name of the build pipeline.


Note: This value can contain whitespace or other invalid
label characters. In these cases, the label format will fail.
Build.DefinitionVersion The version of the build pipeline.

Build.QueuedBy See "How are the identity variables set?".


Note: This value can contain whitespace or other invalid
label characters. In these cases, the label format will fail.

Build.QueuedById See "How are the identity variables set?".

Build.Repository.Clean The value you've selected for Clean in the source repository
settings.

This variable is agent-scoped. It can be used as an environment
variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.Repository.LocalPath The local path on the agent where your source code files
are downloaded. For example: c:\agent_work\1\s

By default, new build pipelines update only the changed
files. You can modify how files are downloaded on the
Repository tab.

This variable is agent-scoped. It can be used as an
environment variable in a script and as a parameter in a
build task, but not as part of the build number or as a
version control tag.
This variable is synonymous with Build.SourcesDirectory.

Build.Repository.Name The name of the repository.

This variable is agent-scoped. It can be used as an environment
variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.Repository.Provider The type of repository you selected.

TfsGit : TFS Git repository
TfsVersionControl : Team Foundation Version Control
Git : Git repository hosted on an external server
Svn : Subversion (available on TFS 2015.4)

This variable is agent-scoped. It can be used as an environment
variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.Repository.Tfvc.Workspace Defined if your repository is Team Foundation Version Control.

The name of the TFVC workspace used by the build agent.

For example, if the Agent.BuildDirectory is c:\agent_work\12
and the Agent.Id is 8 , the workspace name could be: ws_12_8

This variable is agent-scoped. It can be used as an environment
variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.Repository.Uri The URL for the repository. For example:
Git: https://fabrikamfiber/tfs/DefaultCollection/Scripts/_git/Scripts
TFVC: https://fabrikamfiber/tfs/DefaultCollection/

This variable is agent-scoped. It can be used as an environment
variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.RequestedFor See "How are the identity variables set?".

Note: This value can contain whitespace or other invalid
label characters. In these cases, the label format will fail.

Build.RequestedForId See "How are the identity variables set?".

Build.SourceBranch The branch the build was queued for. Some examples:
Git repo branch: refs/heads/master
Git repo pull request: refs/pull/1/merge
TFVC repo branch: $/teamproject/main
TFVC repo gated check-in:
Gated_2016-06-06_05.20.51.4369;[email protected]
TFVC repo shelveset build:
myshelveset;[email protected]

When you use this variable in your build number format, the
forward slash characters ( / ) are replaced with underscore
characters ( _ ).

Note: In TFVC, if you are running a gated check-in build or
manually building a shelveset, you cannot use this variable in
your build number format.

Build.SourceBranchName The name of the branch the build was queued for.
Git repo branch or pull request: The last path segment
in the ref. For example, in refs/heads/master this
value is master . In refs/heads/feature/tools this
value is tools .
TFVC repo branch: The last path segment in the root
server path for the workspace. For example in
$/teamproject/main this value is main .
TFVC repo gated check-in or shelveset build is the
name of the shelveset. For example,
Gated_2016-06-
06_05.20.51.4369;[email protected]
or myshelveset;[email protected] .
Note: In TFVC, if you are running a gated check-in build or
manually building a shelveset, you cannot use this variable in
your build number format.
Build.SourcesDirectory The local path on the agent where your source code files
are downloaded. For example: c:\agent_work\1\s

By default, new build pipelines update only the changed
files. You can modify how files are downloaded on the
Repository tab.

This variable is agent-scoped. It can be used as an
environment variable in a script and as a parameter in a
build task, but not as part of the build number or as a
version control tag.
This variable is synonymous with
Build.Repository.LocalPath.

Build.SourcesDirectoryHash Note: This variable is available in TFS 2015 RTM, but not in TFS
2015.4.

Build.SourceVersion The latest version control change that is included in this build.
Git: The commit ID.
TFVC: the changeset.
This variable is agent-scoped. It can be used as an environment
variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.SourceVersionMessage The comment of the commit or changeset. We truncate the
message to the first line or 200 characters, whichever is
shorter.
This variable is agent-scoped, and can be used as an
environment variable in a script and as a parameter in a
build task, but not as part of the build number or as a
version control tag.
Note: This variable is available in TFS 2015.4.

Build.StagingDirectory TFS 2015 RTM

The local path on the agent you can use as an output folder
for compiled binaries. For example:
C:\TfsData\Build_work\6c3842c6\staging .

By default, new build pipelines are not set up to clean this
directory. You can define your build to clean it up on the
Repository tab.

TFS 2015.4

The local path on the agent where any artifacts are copied to
before being pushed to their destination. For example:
C:\TfsData\Agents\Agent-MACHINENAME_work\1\a

This directory is purged before each new build, so you don't
have to clean it up yourself.

A typical way to use this folder is to publish your build artifacts
with the Copy files and Publish build artifacts tasks. See
Artifacts in Azure Pipelines.

In TFS 2015.4 , Build.ArtifactStagingDirectory and
Build.StagingDirectory are interchangeable.

All versions of TFS 2015

This variable is agent-scoped. It can be used as an environment
variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.
Build.Repository.Git.SubmoduleCheckout The value you've selected for Checkout submodules on the
repository tab.

This variable is agent-scoped. It can be used as an environment
variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

Build.SourceTfvcShelveset Defined if your repository is Team Foundation Version Control.

If you are running a gated build or a shelveset build, this is set
to the name of the shelveset you are building.

Note: This variable yields a value that is invalid for build use in
a build number format.

Common.TestResultsDirectory The local path on the agent where the test results are created.
For example: c:\agent_work\1\TestResults . Available in
TFS 2015.4 .

This variable is agent-scoped. It can be used as an environment
variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

System variables (TFS 2015)


VARIABLE    DESCRIPTION

System.AccessToken Available in TFS 2015.4 . Use the OAuth token to access the
REST API.

This variable is agent-scoped. It can be used as an environment
variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

System.CollectionId The GUID of the TFS collection or Azure DevOps organization.

System.DefaultWorkingDirectory The local path on the agent where your source code files
are downloaded. For example: c:\agent_work\1\s

By default, new build pipelines update only the changed
files. You can modify how files are downloaded on the
Repository tab.

This variable is agent-scoped. It can be used as an
environment variable in a script and as a parameter in a
build task, but not as part of the build number or as a
version control tag.

System.DefinitionId The ID of the build pipeline.

System.HostType Set to build if the pipeline is a build or release if the
pipeline is a release.

System.PullRequest.PullRequestId The ID of the pull request that caused this build. For example:
17 . (This variable is initialized only if the build ran because of
a Git PR affected by a branch policy.)

System.PullRequest.SourceBranch The branch that is being reviewed in a pull request. For
example: refs/heads/users/raisa/new-feature . (This
variable is initialized only if the build ran because of a Git PR
affected by a branch policy.)

System.PullRequest.SourceRepositoryURI The URL to the repo that contains the pull request. For
example: http://our-server:8080/tfs/DefaultCollection/_git/OurProject .
(This variable is initialized only if the build ran because of an
Azure Repos Git PR affected by a branch policy.)

System.PullRequest.TargetBranch The branch that is the target of a pull request. For example:
refs/heads/master . This variable is initialized only if the build
ran because of a Git PR affected by a branch policy.

System.TeamFoundationCollectionUri The URI of the Team Foundation collection. For example:
http://our-server:8080/tfs/DefaultCollection/ .

This variable is agent-scoped. It can be used as an environment
variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

System.TeamProject The name of the project that contains this build.

System.TeamProjectId The ID of the project that this build belongs to.

TF_BUILD Set to True if the script is being run by a build task.

This variable is agent-scoped. It can be used as an environment
variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.

How are the identity variables set?


The value depends on what caused the build.

If the build is triggered in Git or TFVC by the Continuous Integration (CI) triggers:
Build.QueuedBy and Build.QueuedById are based on the system identity, for example
[DefaultCollection]\Project Collection Service Accounts.
Build.RequestedFor and Build.RequestedForId are based on the person who pushed or checked in the changes.

If the build is triggered in Git by a branch policy build:
Build.QueuedBy and Build.QueuedById are based on the system identity, for example
[DefaultCollection]\Project Collection Service Accounts.
Build.RequestedFor and Build.RequestedForId are based on the person who checked in the changes.

If the build is triggered in TFVC by a gated check-in trigger:
Build.QueuedBy and Build.QueuedById are based on the person who checked in the changes.
Build.RequestedFor and Build.RequestedForId are based on the person who checked in the changes.

If the build is triggered in Git or TFVC by the Scheduled triggers:
Build.QueuedBy and Build.QueuedById are based on the system identity, for example
[DefaultCollection]\Project Collection Service Accounts.
Build.RequestedFor and Build.RequestedForId are based on the system identity, for example
[DefaultCollection]\Project Collection Service Accounts.

If the build is triggered because you clicked the Queue build button:
Build.QueuedBy and Build.QueuedById are based on you.
Build.RequestedFor and Build.RequestedForId are based on you.
Runtime parameters

Runtime parameters let you have more control over what values can be passed to a pipeline. With runtime
parameters you can:
Supply different values to scripts and tasks at runtime
Control parameter types, ranges allowed, and defaults
Dynamically select jobs and stages with template expressions
You can specify parameters in templates and in the pipeline. Parameters have data types such as number and
string, and they can be restricted to a subset of values. The parameters section in a YAML defines what parameters
are available.
Parameters are only available at template parsing time. Parameters are expanded just before the pipeline runs so
that values surrounded by ${{ }} are replaced with parameter values. Use variables if you need your values to be
more widely available during your pipeline run.
Parameters must contain a name and data type. Parameters cannot be optional. A default value needs to be
assigned in your YAML file or when you run your pipeline.

Use parameters in pipelines


Set runtime parameters at the beginning of a YAML. This example pipeline accepts the value of image and then
outputs the value in the job. The trigger is set to none so that you can select the value of image when you
manually trigger your pipeline to run.

parameters:
- name: image
  displayName: Pool Image
  type: string
  default: ubuntu-latest
  values:
  - windows-latest
  - vs2017-win2016
  - ubuntu-latest
  - ubuntu-16.04
  - macOS-latest
  - macOS-10.14

trigger: none

jobs:
- job: build
  displayName: build
  pool:
    vmImage: ${{ parameters.image }}
  steps:
  - script: echo building $(Build.BuildNumber) with ${{ parameters.image }}

When the pipeline runs, you select the Pool Image. If you do not make a selection, the default option,
ubuntu-latest , is used.
Use conditionals with parameters
You can also use parameters as part of conditional logic. With conditionals, part of a YAML will only run if it meets
the if criteria.
Use parameters to determine what steps run
This pipeline only runs a step when the boolean parameter test is true.
parameters:
- name: image
  displayName: Pool Image
  default: ubuntu-latest
  values:
  - windows-latest
  - vs2017-win2016
  - ubuntu-latest
  - ubuntu-16.04
  - macOS-latest
  - macOS-10.14
- name: test
  displayName: Run Tests?
  type: boolean
  default: false

trigger: none

jobs:
- job: build
  displayName: Build and Test
  pool:
    vmImage: ${{ parameters.image }}
  steps:
  - script: echo building $(Build.BuildNumber)
  - ${{ if eq(parameters.test, true) }}:
    - script: echo "Running all the tests"

Use parameters to set what configuration is used


You can also use parameters to set which job runs. In this example, a different job runs depending on the value of
the configs parameter.

parameters:
- name: configs
  type: string
  default: 'x86,x64'

trigger: none

jobs:
- ${{ if contains(parameters.configs, 'x86') }}:
  - job: x86
    steps:
    - script: echo Building x86...
- ${{ if contains(parameters.configs, 'x64') }}:
  - job: x64
    steps:
    - script: echo Building x64...
- ${{ if contains(parameters.configs, 'arm') }}:
  - job: arm
    steps:
    - script: echo Building arm...

Selectively exclude a stage


You can also use parameters to set whether a stage runs. In this example, the Performance Test stage runs if the
parameter runPerfTests is true.
parameters:
- name: runPerfTests
  type: boolean
  default: false

trigger: none

stages:
- stage: Build
  displayName: Build
  jobs:
  - job: Build
    steps:
    - script: echo running Build

- stage: UnitTest
  displayName: Unit Test
  dependsOn: Build
  jobs:
  - job: UnitTest
    steps:
    - script: echo running UnitTest

- ${{ if eq(parameters.runPerfTests, true) }}:
  - stage: PerfTest
    displayName: Performance Test
    dependsOn: Build
    jobs:
    - job: PerfTest
      steps:
      - script: echo running PerfTest

- stage: Deploy
  displayName: Deploy
  dependsOn: UnitTest
  jobs:
  - job: Deploy
    steps:
    - script: echo running Deploy

Loop through parameters


You can also loop through your string, number, and boolean parameters.
Script
PowerShell
In this example, you loop through parameters and print out each parameter name and value.
# start.yaml
parameters:
- name: myStringName
  type: string
  default: a string value
- name: myMultiString
  type: string
  default: default
  values:
  - default
  - ubuntu
- name: myNumber
  type: number
  default: 2
  values:
  - 1
  - 2
  - 4
  - 8
  - 16
- name: myBoolean
  type: boolean
  default: true

steps:
- ${{ each parameter in parameters }}:
  - script: echo ${{ parameter.Key }}
  - script: echo ${{ parameter.Value }}

# azure-pipeline.yaml
trigger: none

extends:
  template: start.yaml

Check for an empty parameter object


You can use the length() expression to check whether an object parameter has no value.

parameters:
- name: foo
  type: object
  default: []

steps:
- checkout: none
- ${{ if eq(length(parameters.foo), 0) }}:
  - script: echo Foo is empty
    displayName: Foo is empty

Parameter data types


DATA TYPE    NOTES

string    string

number    may be restricted to values: , otherwise any number-like string is accepted

boolean    true or false

object    any YAML structure

step    a single step

stepList    sequence of steps

job    a single job

jobList    sequence of jobs

deployment    a single deployment job

deploymentList    sequence of deployment jobs

stage    a single stage

stageList    sequence of stages

The step, stepList, job, jobList, deployment, deploymentList, stage, and stageList data types all use standard YAML
schema format. This example includes string, number, boolean, object, step, and stepList.
parameters:
- name: myString
  type: string
  default: a string
- name: myMultiString
  type: string
  default: default
  values:
  - default
  - ubuntu
- name: myNumber
  type: number
  default: 2
  values:
  - 1
  - 2
  - 4
  - 8
  - 16
- name: myBoolean
  type: boolean
  default: true
- name: myObject
  type: object
  default:
    foo: FOO
    bar: BAR
    things:
    - one
    - two
    - three
    nested:
      one: apple
      two: pear
      count: 3
- name: myStep
  type: step
  default:
    script: echo my step
- name: mySteplist
  type: stepList
  default:
  - script: echo step one
  - script: echo step two

trigger: none

jobs:
- job: stepList
  steps: ${{ parameters.mySteplist }}
- job: myStep
  steps:
  - ${{ parameters.myStep }}
Classic release and artifacts variables

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Classic release and artifacts variables are a convenient way to exchange and transport data throughout your
pipeline. Each variable is stored as a string and its value can change between runs of your pipeline.
Variables are different from Runtime parameters which are only available at template parsing time.

NOTE
This is a reference article that covers the classic release and artifacts variables. To understand variables in YAML pipelines, see
user-defined variables.

As you compose the tasks for deploying your application into each stage in your DevOps CI/CD processes, variables
will help you to:
Define a more generic deployment pipeline once, and then customize it easily for each stage. For example, a
variable can be used to represent the connection string for web deployment, and the value of this variable
can be changed from one stage to another. These are custom variables .
Use information about the context of the particular release, stage, artifacts, or agent in which the deployment
pipeline is being run. For example, your script may need access to the location of the build to download it, or
to the working directory on the agent to create temporary files. These are default variables .

TIP
You can view the current values of all variables for a release, and use a default variable to run a release in debug mode.

Default variables
Information about the execution context is made available to running tasks through default variables. Your tasks
and scripts can use these variables to find information about the system, release, stage, or agent they are running
in. With the exception of System.Debug , these variables are read-only and their values are automatically set by the
system. Some of the most significant variables are described in the following tables. To view the full list, see View
the current values of all variables.

Default variables - System


VARIABLE NAME    DESCRIPTION

System.TeamFoundationServerUri The URL of the service connection in TFS or Azure Pipelines.


Use this from your scripts or tasks to call Azure Pipelines REST
APIs.

Example: https://fabrikam.vsrm.visualstudio.com/

System.TeamFoundationCollectionUri The URL of the Team Foundation collection or Azure Pipelines.


Use this from your scripts or tasks to call REST APIs on other
services such as Build and Version control.

Example: https://dev.azure.com/fabrikam/

System.CollectionId The ID of the collection to which this build or release belongs.


Not available in TFS 2015.

Example: 6c6f3423-1c84-4625-995a-f7f143a1e43d

System.DefinitionId The ID of the release pipeline to which the current release
belongs. Not available in TFS 2015.

Example: 1

System.TeamProject The name of the project to which this build or release belongs.

Example: Fabrikam

System.TeamProjectId The ID of the project to which this build or release belongs.


Not available in TFS 2015.

Example: 79f5c12e-3337-4151-be41-a268d2c73344

System.ArtifactsDirectory The directory to which artifacts are downloaded during
deployment of a release. The directory is cleared before every
deployment if it requires artifacts to be downloaded to the
agent. Same as Agent.ReleaseDirectory and
System.DefaultWorkingDirectory.

Example: C:\agent\_work\r1\a

System.DefaultWorkingDirectory The directory to which artifacts are downloaded during
deployment of a release. The directory is cleared before every
deployment if it requires artifacts to be downloaded to the
agent. Same as Agent.ReleaseDirectory and
System.ArtifactsDirectory.

Example: C:\agent\_work\r1\a

System.WorkFolder The working directory for this agent, where subfolders are
created for every build or release. Same as
Agent.RootDirectory and Agent.WorkFolder.

Example: C:\agent\_work

System.Debug This is the only system variable that can be set by the users.
Set this to true to run the release in debug mode to assist in
fault-finding.

Example: true

Default variables - Release


VARIABLE NAME    DESCRIPTION

Release.AttemptNumber The number of times this release is deployed in this stage. Not
available in TFS 2015.

Example: 1

Release.DefinitionEnvironmentId The ID of the stage in the corresponding release pipeline. Not
available in TFS 2015.

Example: 1

Release.DefinitionId The ID of the release pipeline to which the current release
belongs. Not available in TFS 2015.

Example: 1

Release.DefinitionName The name of the release pipeline to which the current release
belongs.

Example: fabrikam-cd

Release.Deployment.RequestedFor The display name of the identity that triggered (started) the
deployment currently in progress. Not available in TFS 2015.

Example: Mateo Escobedo

Release.Deployment.RequestedForId The ID of the identity that triggered (started) the deployment
currently in progress. Not available in TFS 2015.

Example: 2f435d07-769f-4e46-849d-10d1ab9ba6ab

Release.DeploymentID The ID of the deployment. Unique per job.

Example: 254

Release.DeployPhaseID The ID of the phase where deployment is running.

Example: 127

Release.EnvironmentId The ID of the stage instance in a release to which the
deployment is currently in progress.

Example: 276

Release.EnvironmentName The name of the stage to which deployment is currently in
progress.

Example: Dev

Release.EnvironmentUri The URI of the stage instance in a release to which deployment
is currently in progress.

Example: vstfs://ReleaseManagement/Environment/276

Release.Environments.{stage-name}.status The deployment status of the stage.

Example: InProgress

Release.PrimaryArtifactSourceAlias The alias of the primary artifact source

Example: fabrikam\_web

Release.Reason The reason for the deployment. Supported values are:
ContinuousIntegration - the release started in Continuous
Deployment after a build completed.
Manual - the release started manually.
None - the deployment reason has not been specified.
Scheduled - the release started from a schedule.

Release.ReleaseDescription The text description provided at the time of the release.

Example: Critical security patch

Release.ReleaseId The identifier of the current release record.

Example: 118

Release.ReleaseName The name of the current release.

Example: Release-47

Release.ReleaseUri The URI of current release.

Example: vstfs://ReleaseManagement/Release/118

Release.ReleaseWebURL The URL for this release.

Example:
https://dev.azure.com/fabrikam/f3325c6c/_release?
releaseId=392&_a=release-summary

Release.RequestedFor The display name of identity that triggered the release.

Example: Mateo Escobedo

Release.RequestedForEmail The email address of identity that triggered the release.

Example: [email protected]

Release.RequestedForId The ID of identity that triggered the release.

Example: 2f435d07-769f-4e46-849d-10d1ab9ba6ab

Release.SkipArtifactDownload Boolean value that specifies whether or not to skip
downloading of artifacts to the agent.

Example: FALSE

Release.TriggeringArtifact.Alias The alias of the artifact which triggered the release. This is
empty when the release was scheduled or triggered manually.

Example: fabrikam\_app

Default variables - Release stage


VARIABLE NAME    DESCRIPTION

Release.Environments.{stage name}.Status The status of deployment of this release within a specified
stage. Not available in TFS 2015.

Example: NotStarted

Default variables - Agent


VARIABLE NAME    DESCRIPTION

Agent.Name The name of the agent as registered with the agent pool. This
is likely to be different from the computer name.

Example: fabrikam-agent

Agent.MachineName The name of the computer on which the agent is configured.

Example: fabrikam-agent

Agent.Version The version of the agent software.

Example: 2.109.1

Agent.JobName The name of the job that is running, such as Release or Build.

Example: Release

Agent.HomeDirectory The folder where the agent is installed. This folder contains the
code and resources for the agent.

Example: C:\agent

Agent.ReleaseDirectory The directory to which artifacts are downloaded during
deployment of a release. The directory is cleared before every
deployment if it requires artifacts to be downloaded to the
agent. Same as System.ArtifactsDirectory and
System.DefaultWorkingDirectory.

Example: C:\agent\_work\r1\a

Agent.RootDirectory The working directory for this agent, where subfolders are
created for every build or release. Same as Agent.WorkFolder
and System.WorkFolder.

Example: C:\agent\_work

Agent.WorkFolder The working directory for this agent, where subfolders are
created for every build or release. Same as
Agent.RootDirectory and System.WorkFolder.

Example: C:\agent\_work

Agent.DeploymentGroupId The ID of the deployment group the agent is registered with.


This is available only in deployment group jobs. Not available
in TFS 2018 Update 1.

Example: 1

Default variables - General Artifact


For each artifact that is referenced in a release, you can use the following artifact variables. Not all variables are
meaningful for each artifact type. The table below lists the default artifact variables and provides examples of the
values that they have depending on the artifact type. If an example is empty, it implies that the variable is not
populated for that artifact type.
Replace the {alias} placeholder with the value you specified for the artifact alias or with the default value
generated for the release pipeline.

VARIABLE NAME    DESCRIPTION

Release.Artifacts.{alias}.DefinitionId The identifier of the build pipeline or repository.

Azure Pipelines example: 1


GitHub example: fabrikam/asp

Release.Artifacts.{alias}.DefinitionName The name of the build pipeline or repository.

Azure Pipelines example: fabrikam-ci


TFVC example: $/fabrikam
Git example: fabrikam
GitHub example: fabrikam/asp (main)

Release.Artifacts.{alias}.BuildNumber The build number or the commit identifier.

Azure Pipelines example: 20170112.1


Jenkins/TeamCity example: 20170112.1
TFVC example: Changeset 3
Git example: 38629c964
GitHub example: 38629c964

Release.Artifacts.{alias}.BuildId The build identifier.

Azure Pipelines example: 130


Jenkins/TeamCity example: 130
GitHub example:
38629c964d21fe405ef830b7d0220966b82c9e11

Release.Artifacts.{alias}.BuildURI The URL for the build.

Azure Pipelines example: vstfs://build-release/Build/130


GitHub example: https://github.com/fabrikam/asp

Release.Artifacts.{alias}.SourceBranch The full path and name of the branch from which the source
was built.

Azure Pipelines example: refs/heads/main

Release.Artifacts.{alias}.SourceBranchName The name only of the branch from which the source was built.

Azure Pipelines example: main

Release.Artifacts.{alias}.SourceVersion The commit that was built.

Azure Pipelines example:


bc0044458ba1d9298cdc649cb5dcf013180706f7

Release.Artifacts.{alias}.Repository.Provider The type of repository from which the source was built.

Azure Pipelines example: Git

Release.Artifacts.{alias}.RequestedForID The identifier of the account that triggered the build.

Azure Pipelines example:


2f435d07-769f-4e46-849d-10d1ab9ba6ab

Release.Artifacts.{alias}.RequestedFor The name of the account that requested the build.

Azure Pipelines example: Mateo Escobedo

Release.Artifacts.{alias}.Type The type of artifact source, such as Build.

Azure Pipelines example: Build


Jenkins example: Jenkins
TeamCity example: TeamCity
TFVC example: TFVC
Git example: Git
GitHub example: GitHub

Release.Artifacts.{alias}.PullRequest.TargetBranch The full path and name of the branch that is the target of a
pull request. This variable is initialized only if the release is
triggered by a pull request flow.

Azure Pipelines example: refs/heads/main

Release.Artifacts.{alias}.PullRequest.TargetBranchName The name only of the branch that is the target of a pull
request. This variable is initialized only if the release is
triggered by a pull request flow.

Azure Pipelines example: main

See also Artifact source alias

Default variables - Primary Artifact


You designate one of the artifacts as a primary artifact in a release pipeline. For the designated primary artifact,
Azure Pipelines populates the following variables.

VARIABLE NAME    SAME AS

Build.DefinitionId Release.Artifacts.{Primary artifact alias}.DefinitionId

Build.DefinitionName Release.Artifacts.{Primary artifact alias}.DefinitionName

Build.BuildNumber Release.Artifacts.{Primary artifact alias}.BuildNumber

Build.BuildId Release.Artifacts.{Primary artifact alias}.BuildId

Build.BuildURI Release.Artifacts.{Primary artifact alias}.BuildURI

Build.SourceBranch Release.Artifacts.{Primary artifact alias}.SourceBranch

Build.SourceBranchName Release.Artifacts.{Primary artifact alias}.SourceBranchName

Build.SourceVersion Release.Artifacts.{Primary artifact alias}.SourceVersion

Build.Repository.Provider Release.Artifacts.{Primary artifact alias}.Repository.Provider

Build.RequestedForID Release.Artifacts.{Primary artifact alias}.RequestedForID

Build.RequestedFor Release.Artifacts.{Primary artifact alias}.RequestedFor

Build.Type Release.Artifacts.{Primary artifact alias}.Type

Build.PullRequest.TargetBranch Release.Artifacts.{Primary artifact alias}.PullRequest.TargetBranch

Build.PullRequest.TargetBranchName Release.Artifacts.{Primary artifact alias}.PullRequest.TargetBranchName

Using default variables


You can use the default variables in two ways - as parameters to tasks in a release pipeline or in your scripts.
You can directly use a default variable as an input to a task. For example, to pass
Release.Artifacts.{Artifact alias}.DefinitionName for the artifact source whose alias is ASPNET4.CI to a task, you
would use $(Release.Artifacts.ASPNET4.CI.DefinitionName) .

To use a default variable in your script, you must first replace the . in the default variable names with _ . For
example, to print the value of artifact variable Release.Artifacts.{Artifact alias}.DefinitionName for the artifact
source whose alias is ASPNET4.CI in a PowerShell script, you would use
$env:RELEASE_ARTIFACTS_ASPNET4_CI_DEFINITIONNAME .

Note that the original name of the artifact source alias, ASPNET4.CI , is replaced by ASPNET4_CI .
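For illustration only (not part of the original article), a minimal PowerShell step using the same ASPNET4.CI alias could read the script form of the variable like this:

# Task input form (used directly in task parameters): $(Release.Artifacts.ASPNET4.CI.DefinitionName)
# Script form: "." in both the variable name and the alias is replaced with "_"
Write-Host "Primary artifact definition: $env:RELEASE_ARTIFACTS_ASPNET4_CI_DEFINITIONNAME"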
View the current values of all variables
1. Open the pipelines view of the summary for the release, and choose the stage you are interested in. In the
list of steps, choose Initialize job .

2. This opens the log for this step. Scroll down to see the values used by the agent for this job.
Run a release in debug mode
Show additional information as a release executes and in the log files by running the entire release, or just the tasks
in an individual release stage, in debug mode. This can help you resolve issues and failures.
To initiate debug mode for an entire release, add a variable named System.Debug with the value true to the
Variables tab of a release pipeline.
To initiate debug mode for a single stage, open the Configure stage dialog from the shortcut menu of the
stage and add a variable named System.Debug with the value true to the Variables tab.
Alternatively, create a variable group containing a variable named System.Debug with the value true and
link this variable group to a release pipeline.

TIP
If you get an error related to an Azure RM service connection, see How to: Troubleshoot Azure Resource Manager service
connections.

Custom variables
Custom variables can be defined at various scopes.
Share values across all of the definitions in a project by using variable groups. Choose a variable group when
you need to use the same values across all the definitions, stages, and tasks in a project, and you want to be
able to change the values in a single place. You define and manage variable groups in the Library tab.
Share values across all of the stages by using release pipeline variables . Choose a release pipeline
variable when you need to use the same value across all the stages and tasks in the release pipeline, and you
want to be able to change the value in a single place. You define and manage these variables in the
Variables tab in a release pipeline. In the Pipeline Variables page, open the Scope drop-down list and select
"Release". By default, when you add a variable, it is set to Release scope.
Share values across all of the tasks within one specific stage by using stage variables . Use a stage-level
variable for values that vary from stage to stage (and are the same for all the tasks in a stage). You define
and manage these variables in the Variables tab of a release pipeline. In the Pipeline Variables page, open
the Scope drop-down list and select the required stage. When you add a variable, set the Scope to the
appropriate environment.
Using custom variables at project, release pipeline, and stage scope helps you to:
Avoid duplication of values, making it easier to update all occurrences as one operation.
Store sensitive values in a way that they cannot be seen or changed by users of the release pipelines.
Designate a configuration property to be a secure (secret) variable by selecting the (padlock) icon next to
the variable.

IMPORTANT
The values of the hidden (secret) variables are securely stored on the server and cannot be viewed by users after they
are saved. During a deployment, the Azure Pipelines release service decrypts these values when referenced by the
tasks and passes them to the agent over a secure HTTPS channel.

NOTE
Creating custom variables can overwrite standard variables. For example, the PowerShell Path environment variable. If you
create a custom Path variable on a Windows agent, it will overwrite the $env:Path variable and PowerShell won't be able
to run.

Using custom variables


To use custom variables in your build and release tasks, simply enclose the variable name in parentheses and
precede it with a $ character. For example, if you have a variable named adminUserName , you can insert the
current value of that variable into a parameter of a task as $(adminUserName) .
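As a quick sketch (adminUserName is the hypothetical variable named above), an inline script or task argument could reference it like this; the value is substituted before the script runs:

Write-Host "Deploying as $(adminUserName)"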

NOTE
At present, variables in different groups that are linked to a pipeline in the same scope (e.g., job or stage) will collide and the
result may be unpredictable. Ensure that you use different names for variables across all your variable groups.

You can use custom variables to prompt for values during the execution of a release. For more information, see
Approvals.
Define and modify your variables in a script
To define or modify a variable from a script, use the task.setvariable logging command. Note that the updated
variable value is scoped to the job being executed, and does not flow across jobs or stages. Variable names are
transformed to uppercase, and the characters "." and " " are replaced by "_".
For example, Agent.WorkFolder becomes AGENT_WORKFOLDER . On Windows, you access this as
%AGENT_WORKFOLDER% or $env:AGENT_WORKFOLDER . On Linux and macOS, you use $AGENT_WORKFOLDER .

TIP
You can run a script on a:
Windows agent using either a Batch script task or PowerShell script task.
macOS or Linux agent using a Shell script task.

Batch
PowerShell
Shell
Batch script

Set the sauce and secret.Sauce variables


@echo ##vso[task.setvariable variable=sauce]crushed tomatoes
@echo ##vso[task.setvariable variable=secret.Sauce;issecret=true]crushed tomatoes with garlic

Read the variables


Arguments

"$(sauce)" "$(secret.Sauce)"

Script

@echo off
set sauceArgument=%~1
set secretSauceArgument=%~2
@echo No problem reading %sauceArgument% or %SAUCE%
@echo But I cannot read %SECRET_SAUCE%
@echo But I can read %secretSauceArgument% (but the log is redacted so I do not spoil the secret)

Console output from reading the variables:

No problem reading crushed tomatoes or crushed tomatoes
But I cannot read
But I can read ******** (but the log is redacted so I do not spoil the secret)
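The PowerShell tab of this example (not reproduced in this extract) follows the same pattern; a minimal sketch using the same sauce and secret.Sauce variables and the same logging command would be:

# Set the sauce and secret.Sauce variables from a PowerShell script task
Write-Host "##vso[task.setvariable variable=sauce]crushed tomatoes"
Write-Host "##vso[task.setvariable variable=secret.Sauce;issecret=true]crushed tomatoes with garlic"

As in the Batch example, a later step can read $env:SAUCE directly, but the secret value must be passed in explicitly (for example, as a task argument) because secret variables are not mapped to environment variables automatically.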

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and feel free to post your questions, search for answers, or suggest a feature on
our Azure DevOps Developer Community. Support page.
Use secrets from Azure Key Vault in Azure Pipelines

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019

NOTE
This tutorial will guide you through working with Azure key vault in your pipeline. Another way of working with secrets is
using Secret variables in your Azure Pipeline or referencing secrets in a variable group.

Azure Key Vault helps teams to securely store and manage sensitive information such as API keys, passwords,
certificates, etc.
In this tutorial, you will learn about:
Creating an Azure Key Vault using the Azure CLI
Adding a secret and configuring access to Azure key vault
Using secrets in your pipeline

Prerequisites
An Azure DevOps organization. If you don't have one, you can create one for free.

Create an Azure Key Vault


Azure key vaults can be created and managed through the Azure portal or Azure CLI. We will use Azure CLI in this tutorial.
Sign in to the Azure Portal, and then select the Cloud Shell button in the upper-right corner.
1. If you have more than one Azure subscription associated with your account, use the command below to
specify a default subscription. You can use az account list to generate a list of your subscriptions.

az account set --subscription <your_subscription_name_or_ID>

2. Run the following command to set a default Azure region for your subscription. You can use
az account list-locations to generate a list of available regions.

az configure --defaults location=<your_region>

For example, this command will select the westus2 region:

az configure --defaults location=westus2

3. Run the following command to create a new resource group.

az group create --name <your-resource-group>

4. Run the following command to create a new key vault.


az keyvault create \
--name <your-key-vault> \
--resource-group <your-resource-group>

5. Run the following command to create a new secret in your key vault. Secrets are stored as a key value pair. In
the example below, Password is the key and mysecretpassword is the value.

az keyvault secret set \
  --name "Password" \
  --value "mysecretpassword" \
  --vault-name <your-key-vault>

Create a project
Sign in to Azure Pipelines. Your browser will then navigate to https://dev.azure.com/your-organization-name and
display your Azure DevOps dashboard.
If you don't have any projects in your organization yet, select Create a project to get started to create a new
project. Otherwise, select the New project button in the upper-right corner of the dashboard.

Create a repo
We will use YAML to create our pipeline but first we need to create a new repo.
1. Sign in to your Azure DevOps organization and navigate to your project.
2. Go to Repos , and then select Initialize to initialize a new repo with a README.

Create a new pipeline


1. Go to Pipelines , and then select New Pipeline .
2. Select Azure Repos Git .

3. Select the repo you created earlier. It should have the same name as your Azure DevOps project.
4. Select Starter pipeline .
5. The default pipeline will include a few scripts that run echo commands. Those are not needed so we can
delete them. Your new YAML file will now look like this:

trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

steps:

6. Select Show assistant to expand the assistant panel. This panel provides convenient and searchable list of
pipeline tasks.

7. Search for vault and select the Azure Key Vault task.

8. Select and authorize the Azure subscription you used to create your Azure key vault earlier. Select the key
vault and select Add to insert the task at the end of the pipeline. This task allows the pipeline to connect to
your Azure Key Vault and retrieve secrets to use as pipeline variables.

NOTE
Make secrets available to whole job feature is not currently supported in Azure DevOps Server 2019 and 2020.
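For reference, the task that the assistant inserts looks similar to the following sketch; the service connection and key vault names below are placeholders rather than values from this tutorial:

- task: AzureKeyVault@1
  inputs:
    azureSubscription: 'my-azure-subscription'  # authorized Azure service connection (placeholder)
    KeyVaultName: 'my-key-vault'                # the key vault created earlier (placeholder)
    SecretsFilter: '*'                          # download all secrets as pipeline variables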
9. This step is optional. To verify the retrieval and processing of our secret through the pipeline, add the script
below to your YAML to write the secret to a text file and publish it for review. This is not recommended and it
is for demonstration purposes only.

- script: echo $(Password) > secret.txt

- publish: secret.txt

TIP
YAML is very particular about formatting and indentation. Make sure your YAML file is indented properly.

10. Do not save or run the pipeline yet. It will fail because the pipeline does not have permissions to access the
key vault yet. Keep this browser tab open, we will resume once we set up the key vault permissions.

Set up Azure Key Vault access policies


1. Go to Azure portal.
2. Use the search bar to search for the key vault you created earlier.

3. Under Settings , select Access policies .

4. Select Add Access Policy to add a new policy.
5. For Secret permissions , select Get and List .
6. Select the option to select a principal and search for yours.
A security principal is an object that represents a user, group, service, or application that's requesting access
to Azure resources. Azure assigns a unique object ID to every security principal. The default naming
convention is [Azure DevOps account name]-[Azure DevOps project name]-[subscription ID] so if your account
is "https://dev.azure.com/Contoso" and your team project is "AzureKeyVault", your principal would look
something like this Contoso-AzureKeyVault-[subscription ID] .

TIP
You may need to minimize the Azure CLI panel to see the Select button.

7. Select Add to create the access policy.


8. Select Save .

Run and review the pipeline


1. Return to the open pipeline tab where we left off.
2. Select Save then Save again to commit your changes and trigger the pipeline.

NOTE
You may be asked to allow the pipeline to access Azure resources, if prompted select Allow . You will only have to
approve it once.

3. Select the CmdLine job to view the logs. Note that the actual secret is not part of the logs.

4. Return to pipeline summary and select the published artifact.

5. Under Job select the secret.txt file to view it.


6. The text file contains our secret: mysecretpassword . This concludes our verification step that we mentioned
earlier.

Clean up resources
Follow the steps below to delete the resources you created:
1. If you created a new organization to host your project, see how to delete your organization; otherwise, delete
your project.
2. All Azure resources created during this tutorial are hosted under a single resource group
PipelinesKeyVaultResourceGroup . Run the following command to delete the resource group and all of its
resources.

az group delete --name PipelinesKeyVaultResourceGroup

Next steps
Architect secure infrastructure in Azure
Secure your cloud data
Release approvals and gates overview

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

A release pipeline specifies the end-to-end release pipeline for an app to be deployed across a range of stages.
Deployments to each stage are fully automated by using jobs and tasks.
Approvals and gates give you additional control over the start and completion of the deployment pipeline.
Each stage in a release pipeline can be configured with pre-deployment and post-deployment conditions that
can include waiting for users to manually approve or reject deployments, and checking with other automated
systems until specific conditions are verified. In addition, you can configure a manual intervention to pause the
deployment pipeline and prompt users to carry out manual tasks, then resume or reject the deployment.

At present, gates are available only in Azure Pipelines.

The following diagram shows how these features are combined in a stage of a release pipeline.

By using approvals, gates, and manual intervention you can take full control of your releases to meet a wide
range of deployment requirements. Typical scenarios where approvals, gates, and manual intervention are
useful include the following.
SCENARIO    FEATURE(S) TO USE

Some users must manually validate the change request and Pre-deployment approvals
approve the deployment to a stage.

Some users must manually sign out from the app after Post-deployment approvals
deployment before the release is promoted to other stages.

You want to ensure there are no active issues in the work Pre-deployment gates
item or problem management system before deploying a
build to a stage.

You want to ensure there are no incidents from the Post-deployment gates
monitoring or incident management system for the app
after it's been deployed, before promoting the release.

After deployment, you want to wait for a specified time Post-deployment gates and post-deployment approvals
before prompting some users for a manual sign out.

During the deployment pipeline, a user must manually follow Manual Intervention
specific instructions and then resume the deployment.

During the deployment pipeline, you want to prompt the Manual Intervention
user to enter a value for a parameter used by the
deployment tasks, or allow the user to edit the details of this
release.

During the deployment pipeline, you want to wait for            Planned
monitoring or information portals to detect any active
incidents, before continuing with other deployment jobs.

You can combine all three techniques within a release pipeline to fully achieve your own deployment
requirements.
In addition, you can install an extension that integrates with Ser viceNow to help you control and manage your
deployments though Service Management methodologies such as ITIL. For more information, see Release
deployment control using ServiceNow.

NOTE
The time delay before the pre-deployment gates are executed is capped at 48 hours. If you need to delay the overall
launch of your gates instead, it is recommended to use a delay task in your release pipeline.

# Delay
# Delay further execution of a workflow by a fixed time
jobs:
- job: RunsOnServer
  pool: Server
  steps:
  - task: Delay@1
    inputs:
      delayForMinutes: '0'

Related articles
Approvals
Gates
Manual intervention
ServiceNow release and deployment control
Stages
Triggers
Release pipelines and releases

Additional resources
Video: Deploy quicker and safer with gates in Azure Pipelines
Configure your release pipelines for safe deployments

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and feel free to post your questions, search for answers, or suggest a feature
on our Azure DevOps Developer Community. Support page.
Define approvals and checks

Azure Pipelines
A pipeline is made up of stages. A pipeline author can control whether a stage should run by defining conditions
on the stage. Another way to control if and when a stage should run is through approvals and checks .
Pipelines rely on resources such as environments, service connections, agent pools, variable groups, and secure
files. Checks enable the resource owner to control if and when a stage in any pipeline can consume a resource. As
an owner of a resource, you can define checks that must be satisfied before a stage consuming that resource can
start. For example, a manual approval check on an environment would ensure that deployment to that
environment only happens after the designated user(s) has reviewed the changes being deployed.
A stage can consist of many jobs, and each job can consume several resources. Before the execution of a stage can
begin, all checks on all the resources used in that stage must be satisfied. Azure Pipelines pauses the execution of
a pipeline prior to each stage, and waits for all pending checks to be completed. Checks are re-evaluated based
on the retry interval specified in each check. If all checks are not successful by the specified timeout, then that
stage is not executed. If any of the checks terminally fails (for example, if you reject an approval on one of the
resources), then that stage is not executed.
Approvals and other checks are not defined in the YAML file. Users modifying the pipeline YAML file cannot modify
the checks performed before the start of a stage. Administrators of resources manage checks using the web interface
of Azure Pipelines.

IMPORTANT
Checks can be configured on environments, service connections and agent pools.

Approvals
You can manually control when a stage should run using approval checks. This is commonly used to control
deployments to production environments.
1. In your Azure DevOps project, go to the resource (for example, an environment) that needs to be protected.
2. Navigate to Approvals and Checks for the resource.
3. Select Create , provide the approvers and an optional message, and select Create again to complete the
addition of the manual approval check.
You can add multiple approvers to an environment. These approvers can be individual users or groups of users.
When a group is specified as an approver, only one of the users in that group needs to approve for the run to
move forward.
Using the advanced options, you can configure the minimum number of approvers needed to complete the approval. A group
is considered as one approver.
You can also restrict the user who requested (initiated or created) the run from completing the approval. This
option is commonly used for segregation of roles amongst the users.
When you run a pipeline, the execution of that run pauses before entering a stage that uses the environment.
Users configured as approvers must review and approve or reject the deployment. If you have multiple runs
executing simultaneously, you must approve or reject each of them independently. If all required approvals are
not complete within the Timeout specified for the approval and all other checks succeed, the stage is marked
skipped.

Branch control
Using the branch control check, you can ensure all the resources linked with the pipeline are built from the
allowed branches and that those branches have protection enabled. This helps control the release readiness and
quality of deployments. If multiple resources are linked with the pipeline, the source for all the resources is
verified. If you have linked another pipeline, then the branch of the specific run being deployed is verified for
protection.
To define the branch control check:
1. In your Azure DevOps project, go to the resource (for example, an environment) that needs to be protected.
2. Navigate to Approvals and Checks for the resource.
3. Choose the Branch control check and provide a comma separated list of allowed branches. You can
mandate that the branch should have protection enabled and the behavior of the check in case protection
status for one of the branches is not known.
At run time, the check validates branches for all linked resources in the run against the allowed list. If any
one of the branches does not match the criteria, the check fails and the stage is marked failed.

NOTE
The check requires the branch names to be fully qualified. Make sure the format for branch name is
refs/heads/<branch name>
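For example, an allowed-branches list in this fully qualified format might look like the following (the branch names are placeholders):

refs/heads/main, refs/heads/release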

Business hours
In case you want all deployments to your environment to happen in a specific time window only, then business
hours check is the ideal solution. When you run a pipeline, the execution of the stage that uses the resource waits
for business hours. If you have multiple runs executing simultaneously, each of them is independently verified. At
the start of the business hours, the check is marked successful for all the runs.
If execution of the stage has not started at the end of business hours (held up by some other check), then the
business hours approval is automatically withdrawn and a re-evaluation is scheduled for the next day. The check
fails if execution of the stage does not start within the Timeout period specified for the check, and the stage is
marked failed.

Invoke Azure function


Azure functions are the serverless computation platform offered by Azure. With Azure functions, you can run
small pieces of code (called "functions") without worrying about application infrastructure. Given the high
flexibility, Azure functions provide a great way to author your own checks. You include the logic of the check in
an Azure function such that each execution is triggered by an HTTP request, has a short execution time, and returns a
response. While defining the check, you can parse the response body to infer if the check is successful. The
evaluation can be repeated periodically using the Time between evaluations setting in control options. Learn More
The check fails if the stage has not started execution within the specified Timeout period.

NOTE
User-defined pipeline variables are not accessible to the check. You can only access the pre-defined variables and variables
from the linked variable group in the request body.

Invoke REST API


Invoke REST API check enables you to integrate with any of your existing services. Periodically, make a call to a
REST API and continue if it returns a successful response. Learn More
The evaluation can be repeated periodically using the Time between evaluations setting in control options. The
check fails if the stage has not started execution within the specified Timeout period.
NOTE
User-defined pipeline variables are not accessible to the check. You can only access the pre-defined variables and variables
from the linked variable group in the request body.

Query Azure Monitor Alerts


Azure Monitor offers visualization, query, routing, alerting, autoscale, and automation on data from the Azure
infrastructure and each individual Azure resource. Alerts are a standard means to detect issues with the health of
infrastructure or application, and take corrective actions. Canary deployments and staged rollouts are common
deployment strategies used to lower risk of regressions to critical applications. After deploying to a stage (set of
customers), the application is observed for a period of time. Health of the application after deployment is used to
decide whether the update should be made to the next stage or not.
Query Azure Monitor Alerts helps you observe Azure Monitor and ensure no alerts are raised for the application
after a deployment. The check succeeds if no alert rules are activated at the time of evaluation. Learn More
The evaluation is repeated after the Time between evaluations setting in control options. The check fails if the
stage has not started execution within the specified Timeout period.

Required template
With the required template check, you can enforce pipelines to use a specific YAML template. When this check is
in place, a pipeline will fail if it doesn't extend from the referenced template.
To define a required template approval:
1. In your Azure DevOps project, go to the service connection that you want to restrict.
2. Open Approvals and Checks in the menu next to Edit .
3. In the Add your first check menu, select Required template .
4. Enter details on how to get to your required template file.
Repository type: The location of your repository (GitHub, Azure, or Bitbucket).
Repository: The name of your repository that contains your template.
Ref: The branch or tag of the required template.
Path to required template: The name of your template.
You can have multiple required templates for the same service connection. In this example, the required template
is required.yml .
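As a hedged illustration, a pipeline that satisfies such a check might extend the referenced template as in the sketch below; the repository alias, project, and branch names are assumptions, not values taken from this article.

resources:
  repositories:
  - repository: templates                 # hypothetical alias for the repo that hosts required.yml
    type: git
    name: MyProject/SharedTemplates       # assumed Azure Repos project/repository
    ref: refs/heads/main

extends:
  template: required.yml@templates        # the required template check passes only if the pipeline extends this file
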
Evaluate artifact
You can evaluate artifact(s) to be deployed to an environment against custom policies.

NOTE
Currently, this works with container image artifacts only

To define a custom policy evaluation over the artifact(s), follow the steps below.
1. In your Azure DevOps Services project, navigate to the environment that needs to be protected. Learn more
about creating an environment.

2. Navigate to Approvals and checks for the environment.


3. Select Evaluate artifact.

4. Paste the policy definition and click Save . See more about writing policy definitions.

When you run a pipeline, the execution of that run pauses before entering a stage that uses the environment. The
specified policy is evaluated against the available image metadata. The check passes when the policy is successful
and fails otherwise. The stage is marked failed if the check fails.
Passed
Failed

You can also see the complete logs of the policy checks from the pipeline view.

Exclusive lock
The exclusive lock check allows only a single run from the pipeline to proceed. All stages in all runs of that
pipeline which use the resource are paused. When the stage using the lock completes, another stage can
proceed to use the resource; only one stage is allowed to continue at a time. Any other stages that tried to take
the lock are cancelled.
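Checks such as the exclusive lock are configured on the resource itself (for example, an environment); a YAML pipeline opts in simply by targeting that resource. A minimal sketch, assuming a hypothetical environment named production with the check configured on it:

stages:
- stage: Deploy
  jobs:
  - deployment: DeployWeb
    environment: production       # the exclusive lock check configured on this environment is evaluated here
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo Deploying...
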

FAQ
The checks defined did not start. What happened?
The evaluation of checks starts once the stage conditions are satisfied. Confirm that the run of the stage started
after the checks were added on the resource, and that the resource is consumed in the stage.
How can I use checks for scheduling a stage?
Using the business hours check, you can control the time at which stage execution starts. You can achieve the same
behavior as a predefined schedule on a stage in designer releases.
How can I take advance approvals for a stage scheduled to run in the future?
This scenario can be enabled as follows:
1. The business hours check enables all stages deploying to a resource to be scheduled for execution within the
specified time window.
2. When approvals are also configured on the same resource, the stage waits for approvals before starting.
3. You can configure both checks on a resource. The stage waits on both approvals and business hours; it
starts in the next scheduled window after approvals are complete.
Can I wait for completion of security scanning on the artifact being deployed?
In order to wait for completion of security scanning on the artifact being deployed, you would need to use an
external scanning service like AquaScan. The artifact being deployed would need to be uploaded at a location
accessible to the scanning service before the start of checks, and can be identified using pre-defined variables.
Using the Invoke REST API check, you can add a check to wait on the API in the security service and pass the
artifact identifier as an input.
How can I use output variables from previous stages in a check?
By default, only pre-defined variables are available to checks. You can use a linked variable group to access other
variables. The output variable from the previous stage can be written to the variable group and accessed in the
check.
Release deployment control using gates
2/26/2020 • 5 minutes to read • Edit Online

Azure Pipelines
Gates allow automatic collection of health signals from external services, and then promote the release when all
the signals are successful at the same time or stop the deployment on timeout. Typically, gates are used in
connection with incident management, problem management, change management, monitoring, and external
approval systems.

Scenarios for gates


Some scenarios and use cases for gates are:
Incident and issues management . Ensure the required status for work items, incidents, and issues. For
example, ensure deployment occurs only if no priority zero bugs exist, and validation that there are no active
incidents takes place after deployment.
Seek approvals outside Azure Pipelines . Notify non-Azure Pipelines users such as legal approval
departments, auditors, or IT managers about a deployment by integrating with approval collaboration systems
such as Microsoft Teams or Slack, and waiting for the approval to complete.
Quality validation . Query metrics from tests on the build artifacts such as pass rate or code coverage and
deploy only if they are within required thresholds.
Security scan on artifacts. Ensure security scans such as anti-virus checking, code signing, and policy
checking for build artifacts have completed. A gate might initiate the scan and wait for it to complete, or just
check for completion.
User experience relative to baseline . Using product telemetry, ensure the user experience hasn't
regressed from the baseline state. The experience level before the deployment could be considered a baseline.
Change management. Wait for change management procedures in a system such as ServiceNow to complete
before the deployment occurs.
Infrastructure health . Execute monitoring and validate the infrastructure against compliance rules after
deployment, or wait for healthy resource utilization and a positive security report.
Most of the health parameters vary over time, regularly changing their status from healthy to unhealthy and back
to healthy. To account for such variations, all the gates are periodically re-evaluated until all of them are successful
at the same time. The release execution and deployment does not proceed if all gates do not succeed in the same
interval and before the configured timeout.

Define a gate for a stage


You can enable gates at the start of a stage (in the Pre-deployment conditions ) or at the end of a stage (Post-
deployment conditions ), or both. For details of how to enable gates, see Configure a gate.
The Delay before evaluation is a time delay at the beginning of the gate evaluation process that allows the
gates to initialize, stabilize, and begin providing accurate results for the current deployment (see Gate evaluation
flows). For example:
For pre-deployment gates , the delay would be the time required for all bugs to be logged against the
artifacts being deployed.
For post-deployment gates , the delay would be the maximum of the time taken for the deployed app to
reach a steady operational state, the time taken for execution of all the required tests on the deployed stage,
and the time it takes for incidents to be logged after the deployment.
The following gates are available by default:
Invoke Azure function : Trigger execution of an Azure function and ensure a successful completion. For more
details, see Azure function task.
Query Azure Monitor alerts: Observe the configured Azure Monitor alert rules for active alerts. For more
details, see Azure Monitor task.
Invoke REST API : Make a call to a REST API and continue if it returns a successful response. For more details,
see HTTP REST API task.
Query Work items: Ensure the number of matching work items returned from a query is within a threshold.
For more details, see Work item query task.
Security and compliance assessment : Assess Azure Policy compliance on resources within the scope of a
given subscription and resource group, and optionally at a specific resource level. For more details, see
Security Compliance and Assessment task.
You can create your own gates with Marketplace extensions.
The evaluation options that apply to all the gates you've added are:
Time between re-evaluation of gates . The time interval between successive evaluations of the gates. At
each sampling interval, new requests are sent concurrently to each gate and the new results are evaluated. It is
recommended that the sampling interval is greater than the longest typical response time of the configured
gates to allow time for all responses to be received for evaluation.
Timeout after which gates fail . The maximum evaluation period for all gates. The deployment will be
rejected if the timeout is reached before all gates succeed during the same sampling interval.
Gates and approvals . Select the required order of execution for gates and approvals if you have configured
both. For pre-deployment conditions, the default is to prompt for manual (user) approvals first, then evaluate
gates afterwards. This saves the system from evaluating the gate functions if the release is rejected by the user.
For post-deployment conditions, the default is to evaluate gates and prompt for manual approvals only when
all gates are successful. This ensures the approvers have all the information required to approve.
For information about viewing gate results and logs, see View the logs for approvals and Monitor and track
deployments.
Gate evaluation flow examples
The following diagram illustrates the flow of gate evaluation where, after the initial stabilization delay period and
three sampling intervals, the deployment is approved.

The following diagram illustrates the flow of gate evaluation where, after the initial stabilization delay period, not
all gates have succeeded at each sampling interval. In this case, after the timeout period expires, the deployment
is rejected.

Video

Related articles
Approvals and gates overview
Manual intervention
Use approvals and gates to control your deployment
Security Compliance and Assessment task
Stages
Triggers

Additional resources
Video: Deploy quicker and safer with gates in Azure Pipelines
Configure your release pipelines for safe deployments
Tutorial: Use approvals and gates to control your deployment
Twitter sentiment as a release gate
GitHub issues as a release gate
Author custom gates. Library with examples

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and feel free to post your questions, search for answers, or suggest a feature on
our Azure DevOps Developer Community. Support page.
Use approvals and gates to control your deployment
2/26/2020 • 5 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

By using a combination of manual deployment approvals, gates, and manual intervention within a release pipeline
in Azure Pipelines and Team Foundation Server (TFS), you can quickly and easily configure a release pipeline with
all the control and auditing capabilities you require for your DevOps CI/CD processes.
In this tutorial, you learn about:
Extending the approval process with gates
Extending the approval process with manual intervention
Viewing and monitoring approvals and gates

Prerequisites
This tutorial extends the tutorial Define your multi-stage continuous deployment (CD) pipeline. You must have
completed that tutorial first.
You'll also need a work item quer y that returns some work items from Azure Pipelines or TFS. This query is used
in the gate you will configure. You can use one of the built-in queries, or create a new one just for this gate to use.
For more information, see Create managed queries with the query editor.
In the previous tutorial, you saw a simple use of manual approvals to allow an administrator to confirm that a
release is ready to deploy to the production stage. In this tutorial, you'll see some additional and more powerful
ways to configure approvals for releases and deployments by using manual intervention and gates. For more
information about the ways you can configure approvals for a release, see Approvals and gates overview.

Configure a gate
First, you will extend the approval process for the release by adding a gate. Gates allow you to configure
automated calls to external services, where the results are used to approve or reject a deployment. You can use
gates to ensure that the release meets a wide range or criteria, without requiring user intervention.
1. In the Releases tab of Azure Pipelines , select your release pipeline and choose Edit to open the pipeline
editor.
2. Choose the pre-deployment conditions icon for the Production stage to open the conditions panel. Enable
gates by using the switch control in the Gates section.

3. To allow gate functions to initialize and stabilize (it may take some time for them to begin returning
accurate results), you configure a delay before the results are evaluated and used to determine if the
deployment should be approved or rejected. For this example, so that you can see a result reasonably
quickly, set the delay to a short period such as one minute.

4. Choose + Add and select the Query Work Items gate.


5. Configure the gate by selecting an existing work item query. You can use one of the built-in Azure Pipelines
and TFS queries, or create your own query. Depending on how many work items you expect it to return, set
the maximum and minimum thresholds (run the query in the Work hub if you're not sure what to expect).
You'll need to open the Advanced section to see the Lower Threshold setting. You can also set an
Output Variable to be returned from the gate task. For more details about the gate arguments, see
Work Item Query task.

6. Open the Evaluation options section and specify the timeout and the sampling interval. For this example,
choose short periods so that you can see the results reasonably quickly. The minimum values you can
specify are 6 minutes timeout and 5 minutes sampling interval.
The sampling interval and timeout work together so that the gates will call their functions at suitable
intervals, and reject the deployment if they don't all succeed during the same sampling interval and
within the timeout period. For more details, see Gates.

7. Save your release pipeline.

For more information about using other types of approval gates, see Approvals and gates.

Configure a manual intervention


Sometimes, you may need to introduce manual intervention into a release pipeline. For example, there may be
tasks that cannot be accomplished automatically such as confirming network conditions are appropriate, or that
specific hardware or software is in place, before you approve a deployment. You can do this by using the Manual
Intervention task in your pipeline.
1. In the release pipeline editor, open the Tasks editor for the QA stage.

2. Choose the ellipses (...) in the QA deployment pipeline bar and then choose Add agentless job .
Several tasks, including the Manual Intervention task, can be used only in an agentless job.
3. Drag and drop the new agentless job to the start of the QA process, before the existing agent job. Then
choose + in the Agentless job bar and add a Manual Intervention task to the job.

4. Configure the task by entering a message (the Instructions ) to display when it executes and pauses the
release pipeline.
Notice that you can specify a list of users who will receive a notification that the deployment is waiting for
manual approval. You can also specify a timeout and the action (approve or reject) that will occur if there is
no user response within the timeout period. For more details, see Manual Intervention task.
5. Save the release pipeline and then start a new release.

View the logs for approvals


You typically need to validate and audit a release and the associated deployments after it has completed, or even
during the deployment pipeline. This is useful when debugging a problematic deployment, or when checking
when and by whom approvals were granted. The comprehensive logging capabilities provide this information.
1. Open the release summary for the release you just created. You can do this by choosing the link in the
information bar in the release editor after you create the release, or directly from the Releases tab of
Azure Pipelines .
2. You'll see the live status for each step in the release pipeline. It indicates that a manual intervention is
pending (this pre-deployment approval was configured in the previous tutorial Define your multi-stage
continuous deployment pipeline). Choose the Resume link.

3. You see the intervention message, and can choose to resume or reject the deployment. Enter some text
response to the intervention and choose Resume .

4. Go back to the pipeline view of the release. After deployment to the QA stage succeeds, you see the pre-
deployment approval pending message for the Production environment.
5. Enter your approval message and choose Approve to continue the deployment.

6. Go back to the pipeline view of the release. Now you see that the gates are being processed before the
release continues.

7. After the gate evaluation has successfully completed, the deployment occurs for the Production stage.
Choose the Production stage icon in the release summary to see more details of the approvals and gate
evaluations.
Altogether, by using a combination of manual approvals, approval gates, and the manual intervention task, you've
seen how you can configure a release pipeline with all the control and auditing capabilities you may require.

Next step
Integrate with ServiceNow change management
Release deployment control using approvals
2/26/2020 • 3 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

When a release is created from a release pipeline that defines approvals, the deployment stops at each point
where approval is required until the specified approver grants approval or rejects the release (or re-assigns the
approval to another user). You can enable manual deployment approvals for each stage in a release pipeline.

Define a deployment approval


You can define approvals at the start of a stage (pre-deployment approvers), at the end of a stage (post-
deployment approvers), or both. For details of how to define and use approvals, see Add approvals within a
release pipeline.
For a pre-deployment approval, choose the icon at the entry point of the stage and enable pre-deployment
approvers.
For a post-deployment approval, choose the icon at the exit point of the stage and enable post-deployment
approvers.
You can add multiple approvers for both pre-deployment and post-deployment settings. These approvers can be
individual users or groups of users. These users must have the View releases permission.
When a group is specified as an approver, only one of the users in that group needs to approve for the
deployment to occur or the release to move forward.
If you are using Azure Pipelines , you can use local groups managed in Azure Pipelines or Azure Active
Directory (Azure AD) groups if they have been added into Azure Pipelines.
If you are using Team Foundation Server (TFS), you can use local groups managed in TFS or Active
Directory (AD) groups if they have been added into TFS.
The creator of a deployment is considered to be a separate user role for deployments. For more details, see
Release permissions. Either the release creator or the deployment creator can be restricted from approving
deployments.
If no approval is granted within the Timeout specified for the approval, the deployment is rejected.
Use the Approval policies to:
Specify that the user who requested (initiated or created) the release cannot approve it. If you are
experimenting with approvals, uncheck this option so that you can approve or reject your own deployments.
For information about the ID of the requester for CI/CD releases, see How are the identity variables set?
Force a revalidation of the user identity to take into account recently changed permissions.
Reduce user workload by automatically approving subsequent prompts if the specified user has already
approved the deployment to a previous stage in the pipeline (applies to pre-deployment approvals only). Take
care when using this option; for example, you may want to require a user to physically approve a deployment
to production even though that user has previously approved a deployment to a QA stage in the same release
pipeline.
For information about approving or rejecting deployments, and viewing approval logs, see Create a release, View
the logs for approvals, and Monitor and track deployments.
Approval notifications
Notifications such as an email message can be sent to the approver(s) defined for each approval step. Configure
recipients and settings in the Notifications section of the project settings page.

The link in the email message opens the Summar y page for the release where the user can approve or reject the
release.

Related articles
Approvals and gates overview
Manual intervention
Stages
Triggers

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and feel free to post your questions, search for answers, or suggest a feature on
our Azure DevOps Developer Community. Support page.
Pipeline run sequence
11/2/2020 • 12 minutes to read • Edit Online

Runs represent one execution of a pipeline. During a run, the pipeline is processed, and agents process one or
more jobs. A pipeline run includes jobs, steps, and tasks. Runs power both continuous integration (CI) and
continuous delivery (CD) pipelines.

When you run a pipeline, a lot of things happen under the covers. While you often won't need to know about
them, once in a while it's useful to have the big picture. At a high level, Azure Pipelines will:
Process the pipeline
Request one or more agents to run jobs
Hand off jobs to agents and collect the results
On the agent side, for each job, an agent will:
Get ready for the job
Run each step in the job
Report results to Azure Pipelines
Jobs may succeed, fail, or be canceled. There are also situations where a job may not complete. Understanding
how this happens can help you troubleshoot issues.
Let's break down each action one by one.

Process the pipeline

To turn a pipeline into a run, Azure Pipelines goes through several steps in this order:
1. First, expand templates and evaluate template expressions.
2. Next, evaluate dependencies at the stage level to pick the first stage(s) to run.
3. For each stage selected to run, two things happen:
All resources used in all jobs are gathered up and validated for authorization to run.
Evaluate dependencies at the job level to pick the first job(s) to run.
4. For each job selected to run, expand multi-configs (strategy: matrix or strategy: parallel in YAML) into
multiple runtime jobs (a minimal matrix sketch appears below).
5. For each runtime job, evaluate conditions to decide whether that job is eligible to run.
6. Request an agent for each eligible runtime job.
As runtime jobs complete, Azure Pipelines will see if there are new jobs eligible to run. If so, steps 4 - 6 repeat with
the new jobs. Similarly, as stages complete, steps 2 - 6 will be repeated for any new stages.
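For example, a minimal matrix strategy (step 4 above) that expands one job definition into two runtime jobs might look like the following; the job and variable names are illustrative only.

jobs:
- job: Build
  strategy:
    matrix:
      linux:
        imageName: ubuntu-latest
      windows:
        imageName: windows-latest
  pool:
    vmImage: $(imageName)
  steps:
  - script: echo Building on $(imageName)   # expands into two runtime jobs: "Build linux" and "Build windows"
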
This ordering helps answer a common question: why can't I use certain variables in my template parameters? Step
1, template expansion, operates solely on the text of the YAML document. Runtime variables don't exist during that
step. After step 1, template parameters have been completely resolved and no longer exist.
It also answers another common issue: why can't I use variables to resolve service connection / environment
names? Resources are authorized before a stage can start running, so stage- and job-level variables aren't
available. Pipeline-level variables can be used, but only those explicitly included in the pipeline. Variable groups
are themselves a resource subject to authorization, so their data is likewise not available when checking resource
authorization.
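A minimal sketch of the difference, assuming a hypothetical toolVersion parameter: template expressions (${{ }}) are resolved during step 1, while macro syntax ($( )) is resolved later, at runtime, on the agent.

parameters:
- name: toolVersion      # hypothetical parameter, resolved during template expansion
  type: string
  default: '1.0'

steps:
- script: echo Tool version is ${{ parameters.toolVersion }}   # resolved at compile time (step 1)
- script: echo Build number is $(Build.BuildNumber)            # resolved at runtime on the agent
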

Request an agent
Whenever Azure Pipelines needs to run a job, it will ask the pool for an agent. (Server jobs are an exception, since
they run on the Azure Pipelines server itself.) Microsoft-hosted and self-hosted agent pools work slightly
differently.
Microsoft-hosted agent pool requests
First, the service checks on your organization's parallel jobs. It adds up all running jobs on all Microsoft-hosted
agents and compares that with the number of parallel jobs purchased. If there are no available parallel slots, the
job has to wait on a slot to free up.
Once a parallel slot is available, the job is routed to the requested agent type. Conceptually, the Microsoft-hosted
pool is one giant, global pool of machines. (In reality, it's a number of different physical pools split by geography
and operating system type.) Based on the vmImage (in YAML) or pool name (in the classic editor) requested, an
agent is selected.
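For example, a YAML pipeline requests a Microsoft-hosted agent by image name; the image shown here is only one possibility.

pool:
  vmImage: ubuntu-latest      # requests an agent from the Microsoft-hosted pool with this image

steps:
- script: echo Running on a fresh Microsoft-hosted virtual machine
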

All agents in the Microsoft pool are fresh, new virtual machines which haven't run any pipelines before. When the
job completes, the agent VM will be discarded.
Self-hosted agent pool requests
Similar to the Microsoft-hosted pool, the service first checks on your organization's parallel jobs. It adds up all
running jobs on all self-hosted agents and compares that with the number of parallel jobs purchased. If there are
no available parallel slots, the job has to wait on a slot to free up.
Once a parallel slot is available, the self-hosted pool is examined for a compatible agent. Self-hosted agents offer
capabilities, which are strings indicating that particular software is installed or settings are configured. The
pipeline has demands, which are the capabilities required to run the job. If a free agent whose capabilities match
the pipeline's demands cannot be found, the job will continue waiting. If there are no agents in the pool whose
capabilities match the demands, the job will fail.
Self-hosted agents are typically re-used from run to run. This means that a pipeline job can have side effects:
warming up caches, having most commits already available in the local repo, and so on.
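As a sketch, a YAML job can declare demands so that it is routed only to self-hosted agents whose capabilities match; the pool name and capabilities below are assumptions.

pool:
  name: MySelfHostedPool            # hypothetical self-hosted pool
  demands:
  - java                            # agent must advertise a 'java' capability
  - Agent.OS -equals Linux          # agent must report Linux as its operating system

steps:
- script: echo Running on an agent that satisfied the demands
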

Prepare to run a job


Once an agent has accepted a job, it has some preparation work to do. The agent downloads (and caches for next
time) all the tasks needed to run the job. It creates working space on disk to hold the source code, artifacts, and
outputs used in the run. Then it begins running steps.

Run each step


Steps are run sequentially, one after another. Before a step can start, all the previous steps must be finished (or
skipped).

Steps are implemented by tasks. Tasks themselves are implemented as Node.js or PowerShell scripts. The task
system routes inputs and outputs to the backing scripts. It also provides some common services such as altering
the system path and creating new pipeline variables.
Each step runs in its own process, isolating it from the environment left by previous steps. Because of this process-
per-step model, environment variables are not preserved between steps. However, tasks and scripts have a
mechanism to communicate back to the agent: logging commands. When a task or script writes a logging
command to standard out, the agent will take whatever action is requested.
There is an agent command to create new pipeline variables. Pipeline variables will be automatically converted
into environment variables in the next step. In order to set a new variable myVar with a value of myValue , a script
can do this:

echo '##vso[task.setVariable variable=myVar]myValue'

Write-Host "##vso[task.setVariable variable=myVar]myValue"
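For instance, a variable set with the logging command in one step can be read in the following step, either as a macro or as an environment variable (a minimal sketch for a Linux agent):

steps:
- script: echo '##vso[task.setVariable variable=myVar]myValue'
  displayName: Set the pipeline variable
- script: |
    echo "macro value: $(myVar)"          # expanded by the agent before the script runs
    echo "environment value: $MYVAR"      # pipeline variables become uppercase environment variables
  displayName: Read the variable in the next step
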

Report and collect results


Each step can report warnings, errors, and failures. Errors and warnings are reported to the pipeline summary
page, marking the task as "succeeded with issues". Failures are also reported to the summary page, but they mark
the task as "failed". A step is a failure if it either explicitly reports failure (using a ##vso command) or ends the
script with a non-zero exit code.

As steps run, the agent is constantly sending output lines to the service. That's why you can see a live feed of the
console. At the end of each step, the entire output from the step is also uploaded as a log file. Logs can be
downloaded once the pipeline has finished. Other items that the agent can upload include artifacts and test
results. These are also available after the pipeline completes.

State and conditions


The agent keeps track of each step's success or failure. As steps succeed with issues or fail, the job's status will be
updated. The job always reflects the "worst" outcome from each of its steps: if a step fails, the job also fails.
Before running a step, the agent will check that step's condition to determine whether it should run. By default, a
step will only run when the job's status is succeeded or succeeded with issues. Many jobs have cleanup steps
which need to run no matter what else happened, so they can specify a condition of "always()". Cleanup steps
might also be set to run only on cancellation. A succeeding cleanup step cannot save the job from failing; jobs can
never go back to success after entering failure.
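For example, a cleanup step that must run regardless of earlier failures can be expressed as follows; the script names are placeholders.

steps:
- script: ./build.sh            # placeholder build script
  displayName: Build
- script: ./cleanup.sh          # placeholder cleanup script
  displayName: Clean up temporary resources
  condition: always()           # runs even if the Build step failed or the run was canceled
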

Timeouts and disconnects


Each job has a timeout. If the job has not completed in the specified time, the server will cancel the job. It will
attempt to signal the agent to stop, and it will mark the job as canceled. On the agent side, this means canceling all
remaining steps and uploading any remaining results.
Jobs have a grace period known as the cancel timeout in which to complete any cancellation work. (Remember,
steps can be marked to run even on cancellation.) After the timeout plus the cancel timeout, if the agent has not
reported that work has stopped, the server will mark the job as a failure.
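In YAML, both the job timeout and the cancel timeout (grace period) can be set explicitly; the values below are arbitrary examples, not defaults.

jobs:
- job: Build
  timeoutInMinutes: 60            # the server cancels the job if it runs longer than this
  cancelTimeoutInMinutes: 5       # grace period for cancellation work before the job is marked failed
  steps:
  - script: echo Building...
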
Because Azure Pipelines distributes work to agent machines, from time to time, agents may stop responding to
the server. This can happen if the agent's host machine goes away (power loss, VM turned off) or if there's a
network failure. To help detect these conditions, the agent sends a heartbeat message once per minute to let the
server know it's still operating. If the server doesn't receive a heartbeat for five consecutive minutes, it assumes
the agent will not come back. The job is marked as a failure, letting the user know they should re-try the pipeline.
Manage runs through the CLI
Using the Azure DevOps CLI, you can list the pipeline runs in your project and view details about a specific run.
You can also add and delete tags in your pipeline run.
Prerequisites
You must have installed the Azure DevOps CLI extension as described in Get started with Azure DevOps CLI.
Sign into Azure DevOps using az login .
For the examples in this article, set the default organization using
az devops configure --defaults organization=YourOrganizationURL .

List pipeline runs


List the pipeline runs in your project with the az pipelines runs list command. To get started, see Get started with
Azure DevOps CLI.

az pipelines runs list [--branch]
                       [--org]
                       [--pipeline-ids]
                       [--project]
                       [--query-order {FinishTimeAsc, FinishTimeDesc, QueueTimeAsc, QueueTimeDesc,
                                       StartTimeAsc, StartTimeDesc}]
                       [--reason {all, batchedCI, buildCompletion, checkInShelveset, individualCI, manual,
                                  pullRequest, schedule, triggered, userCreated, validateShelveset}]
                       [--requested-for]
                       [--result {canceled, failed, none, partiallySucceeded, succeeded}]
                       [--status {all, cancelling, completed, inProgress, none, notStarted, postponed}]
                       [--tags]
                       [--top]

Optional parameters
branch : Filter by builds for this branch.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
pipeline-ids : Space-separated IDs of definitions for which to list builds.
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .
query-order: Define the order in which pipeline runs are listed. Accepted values are FinishTimeAsc,
FinishTimeDesc, QueueTimeAsc, QueueTimeDesc, StartTimeAsc, and StartTimeDesc.
reason : Only list builds for this specified reason. Accepted values are batchedCI, buildCompletion,
checkInShelveset, individualCI, manual, pullRequest, schedule, triggered, userCreated, and validateShelveset.
requested-for : Limit to the builds requested for a specified user or group.
result : Limit to the builds with a specified result. Accepted values are canceled, failed, none, partiallySucceeded,
and succeeded.
status : Limit to the builds with a specified status. Accepted values are all, cancelling, completed, inProgress,
none, notStarted, and postponed.
tags : Limit to the builds with each of the specified tags. Space separated.
top : Maximum number of builds to list.
Example
The following command lists the first three pipeline runs which have a status of completed and a result of
succeeded , and returns the result in table format.
az pipelines runs list --status completed --result succeeded --top 3 --output table

Run ID    Number      Status     Result     Pipeline ID  Pipeline Name              Source Branch  Queued Time                 Reason
--------  ----------  ---------  ---------  -----------  -------------------------  -------------  --------------------------  ------
125       20200124.1  completed  succeeded  12           Githubname.pipelines-java  master         2020-01-23 18:56:10.067588  manual
123       20200123.2  completed  succeeded  12           Githubname.pipelines-java  master         2020-01-23 11:55:56.633450  manual
122       20200123.1  completed  succeeded  12           Githubname.pipelines-java  master         2020-01-23 11:48:05.574742  manual

Show pipeline run details


Show the details for a pipeline run in your project with the az pipelines runs show command. To get started, see
Get started with Azure DevOps CLI.

az pipelines runs show --id
                       [--open]
                       [--org]
                       [--project]

Parameters
id : Required. ID of the pipeline run.
open : Optional. Opens the build results page in your web browser.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .

Example
The following command shows details for the pipeline run with the ID 123 and returns the results in table format.
It also opens your web browser to the build results page.

az pipelines runs show --id 123 --open --output table

Run ID    Number      Status     Result     Pipeline ID  Pipeline Name              Source Branch  Queued Time                 Reason
--------  ----------  ---------  ---------  -----------  -------------------------  -------------  --------------------------  ------
123       20200123.2  completed  succeeded  12           Githubname.pipelines-java  master         2020-01-23 11:55:56.633450  manual

Add tag to pipeline run


Add a tag to a pipeline run in your project with the az pipelines runs tag add command. To get started, see Get
started with Azure DevOps CLI.

az pipelines runs tag add --run-id
                          --tags
                          [--org]
                          [--project]

Parameters
run-id : Required. ID of the pipeline run.
tags : Required. Tags to be added to the pipeline run (comma-separated values).
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .

Example
The following command adds the tag YAML to the pipeline run with the ID 123 and returns the result in JSON
format.

az pipelines runs tag add --run-id 123 --tags YAML --output json

[
"YAML"
]

List pipeline run tags


List the tags for a pipeline run in your project with the az pipelines runs tag list command. To get started, see Get
started with Azure DevOps CLI.

az pipelines runs tag list --run-id
                           [--org]
                           [--project]

Parameters
run-id : Required. ID of the pipeline run.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .

Example
The following command lists the tags for the pipeline run with the ID 123 and returns the result in table format.

az pipelines runs tag list --run-id 123 --output table

Tags
------
YAML

Delete tag from pipeline run


Delete a tag from a pipeline run in your project with the az pipelines runs tag delete command. To get started, see
Get started with Azure DevOps CLI.
az pipelines runs tag delete --run-id
--tag
[--org]
[--project]

Parameters
run-id : Required. ID of the pipeline run.
tag : Required. Tag to be deleted from the pipeline run.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .

Example
The following command deletes the YAML tag from the pipeline run with ID 123 .

az pipelines runs tag delete --run-id 123 --tag YAML


Access repositories, artifacts, and other resources
11/2/2020 • 13 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

At run-time, each job in a pipeline may access other resources in Azure DevOps. For example, a job may:
Check out source code from a Git repository
Add a tag to the repository
Access a feed in Azure Artifacts
Upload logs from the agent to the service
Upload test results and other artifacts from the agent to the service
Update a work item
Azure Pipelines uses job access tokens to perform these tasks. A job access token is a security token that is
dynamically generated by Azure Pipelines for each job at run time. The agent on which the job is running uses the
job access token in order to access these resources in Azure DevOps. You can control which resources your
pipeline has access to by controlling how permissions are granted to job access tokens.
The token's permissions are derived from (a) the job authorization scope and (b) the permissions you set on the project or
collection build service account.

Job authorization scope


You can set the job authorization scope to be collection or project . By setting the scope to collection , you
choose to let pipelines access all repositories in the collection or organization. By setting the scope to project , you
choose to restrict access to only those repositories that are in the same project as the pipeline.
Job authorization scope can be set for the entire Azure DevOps organization or for a specific project.

NOTE
In Azure DevOps Server 2020, Limit job authorization scope to current project applies only to YAML pipelines and
classic build pipelines. It does not apply to classic release pipelines. Classic release pipelines always run with project collection
scope.

To set job authorization scope for the organization:


Navigate to your organization settings page in the Azure DevOps user interface.
Select Settings under Pipelines .
Enable Limit job authorization scope to current project to limit the scope to project. This is the
recommended setting, as it enhances security for your pipelines.
To set job authorization scope for a specific project:
Navigate to your project settings page in the Azure DevOps user interface.
Select Settings under Pipelines .
Enable Limit job authorization scope to current project to limit the scope to project. This is the
recommended setting, as it enhances security for your pipelines.
To set job authorization scope at the organization level for all projects, choose Organization settings >
Pipelines > Settings .
To set job authorization scope for a specific project, choose Project settings > Pipelines > Settings .
Enable one or more of the following settings. Enabling these settings is recommended, as they enhance security for
your pipelines.
Limit job authorization scope to current project for non-release pipelines - This setting applies to
YAML pipelines and classic build pipelines, and does not apply to classic release pipelines.
Limit job authorization scope to current project for release pipelines - This setting applies to classic
release pipelines only.

NOTE
If the scope is set to project at the organization level, you cannot change the scope in each project.

IMPORTANT
If the scope is not restricted at either the organization level or project level, then every job in your YAML pipeline gets a
collection-scoped job access token. In other words, your pipeline has access to any repository in any project of your
organization. If an adversary is able to gain access to a single pipeline in a single project, they will be able to gain access to
any repository in your organization. This is why it is recommended that you restrict the scope at the highest level
(organization settings) in order to contain the attack to a single project.

If you use Azure DevOps Server 2019, then all YAML jobs run with the job authorization scope set to collection . In
other words, these jobs have access to all repositories in your project collection. You cannot change this in Azure
DevOps Server 2019.
YAML pipelines are not available in TFS.

NOTE
If your pipeline is in a public project , then the job authorization scope is automatically restricted to project no matter
what you configure in any setting. Jobs in a public project can access resources such as build artifacts or test results only
within the project and not from other projects of the organization.

Limit job authorization scope to referenced Azure DevOps repositories


In addition to the job authorization scope settings described in the previous section, Azure Pipelines provides a
Limit job authorization scope to referenced Azure DevOps repositories setting.
Pipelines can access any Azure DevOps repositories in authorized projects unless Limit job authorization scope
to referenced Azure DevOps repositories is enabled. With this option enabled, you can reduce the scope of
access for all pipelines to only Azure DevOps repositories explicitly referenced by a checkout step in the pipeline
job that uses that repository.
For more information, see Azure Repos Git repositories - Limit job authorization scope to referenced Azure
DevOps repositories.
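With that setting enabled, a job can access only the Azure DevOps repositories it explicitly declares and checks out. A hedged sketch (the repository and project names are assumptions):

resources:
  repositories:
  - repository: tools                      # hypothetical alias
    type: git
    name: OtherProject/ToolsRepo           # assumed Azure Repos project/repository

steps:
- checkout: self                           # the pipeline's own repository
- checkout: tools                          # explicit checkout authorizes access to this repository
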
IMPORTANT
Limit job authorization scope to referenced Azure DevOps repositories is enabled by default for new
organizations and projects created after May 2020.

Scoped build identities


Azure DevOps uses two built-in identities to execute pipelines.
A collection-scoped identity , which has access to all projects in the collection (or organization for Azure
DevOps Services)
A project-scoped identity , which has access to a single project
These identities are allocated permissions necessary to perform build/release execution time activities when
calling back to the Azure DevOps system. There are built-in default permissions, and you may also manage your
own permissions as needed.
The collection-scoped identity name has the following format:
Project Collection Build Service ({OrgName})
For example, if the organization name is fabrikam-tailspin , this account has the name
Project Collection Build Service (fabrikam-tailspin) .

The project-scoped identity name has the following format:


{Project Name} Build Service ({Org Name})
For example, if the organization name is fabrikam-tailspin and the project name is SpaceGameWeb , this account
has the name SpaceGameWeb Build Service (fabrikam-tailspin) .

By default, the collection-scoped identity is used, unless configured otherwise as described in the previous Job
authorization scope section.

Manage build service account permissions


One result of setting project-scoped access may be that the project-scoped identity does not have permissions to a
resource that the collection-scoped identity did have.
You may want to change the permissions of job access token in scenarios such as the following:
You want your pipeline to access a feed that is in a different project.
You want your pipeline to be restricted from changing code in the repository.
You want your pipeline to be restricted from creating work items.
To update the permissions of the job access token:
First, determine the job authorization scope for your pipeline. See the section above to understand job
authorization scope. If the job authorization scope is collection, then the corresponding build service
account to manage permissions on is Project Collection Build Service (your-collection-name). If the
job authorization scope is project, then the build service account to manage permissions on is Your-
project-name Build Service (your-collection-name).
To restrict or grant additional access to Project Collection Build Service (your-collection-name):
Select Manage security in the overflow menu on the Pipelines page.
Under Users, select Project Collection Build Service (your-collection-name).
Make any changes to the pipelines-related permissions for this account.
Navigate to organization settings for your Azure DevOps organization (or collection settings for your
project collection).
Select Permissions under Security .
Under the Users tab, look for Project Collection Build Service (your-collection-name).
Make any changes to the non-pipelines-related permissions for this account.
Since Project Collection Build Service (your-collection-name) is a user in your organization or
collection, you can add this account explicitly to any resource - for example, to a feed in Azure Artifacts.
To restrict or grant additional access to Your-project-name Build Service (your-collection-name):
The build service account on which you can manage permissions will only be created after you run the
pipeline once. Make sure that you already ran the pipeline once.
Select Manage security in the overflow menu on Pipelines page.
Under Users, select Your-project-name Build Service (your-collection-name).
Make any changes to the pipelines-related permissions for this account.
Navigate to organization settings for your Azure DevOps organization (or collection settings for your
project collection).
Select Permissions under Security .
Under the Users tab, look for Your-project-name Build Service (your-collection-name).
Make any changes to the non-pipelines-related permissions for this account.
Since Your-project-name Build Service (your-collection-name) is a user in your organization or
collection, you can add this account explicitly to any resource - for example, to a feed in Azure Artifacts.
Example - Configure permissions to access another repo in the same project collection
In this example, the fabrikam-tailspin/SpaceGameWeb project-scoped build identity is granted permission to access
the FabrikamFiber repository in the fabrikam-tailspin/FabrikamFiber project.
1. In the FabrikamFiber project, navigate to Project settings , Repositories , FabrikamFiber .

2. Choose the + icon, start to type in the name SpaceGameWeb, and select the SpaceGameWeb Build
Service account.
3. Configure the desired permissions for that user.

Example - Configure permissions to access other resources in the same project collection
In this example, the fabrikam-tailspin/SpaceGameWeb project-scoped build identity is granted permissions to access
other resources in the fabrikam-tailspin/FabrikamFiber project.
1. In the FabrikamFiber project, navigate to Project settings , Permissions .
2. Choose Users, start to type in the name SpaceGameWeb, and select the SpaceGameWeb Build
Service account. If you don't see any search results initially, select Expand search.

3. Configure the desired permissions for that user.


FAQ
How do I determine the job authorization scope of my YAML pipeline?
If your project is a public project, the job authorization scope is always project regardless of any other settings.
All YAML pipelines in Azure DevOps Server 2019 run under collection job authorization scope.
Check the Pipeline settings under your Azure DevOps Organization settings :
If Limit job authorization scope to current project is enabled, then the scope is project .
If Limit job authorization scope to current project is not enabled, then check the Pipeline settings
under your Project settings in Azure DevOps:
If Limit job authorization scope to current project is enabled, then the scope is project .
Otherwise, the scope is collection .
If the pipeline is in a private project, check the Pipeline settings under your Azure DevOps Organization
settings :
If Limit job authorization scope to current project for non-release pipelines is enabled, then
the scope is project .
If Limit job authorization scope to current project for non-release pipelines is not enabled,
then check the Pipeline settings under your Project settings in Azure DevOps:
If Limit job authorization scope to current project for non-release pipelines is enabled,
then the scope is project .
Otherwise, the scope is collection .
How do I determine the job authorization scope of my classic build pipeline?
If the pipeline is in a public project, then the job authorization scope is project regardless of any other settings.
Open the editor for the pipeline and navigate to the Options tab.
If the Build job authorization scope is Current project , then scope is project .
Otherwise, scope is collection .
Check the Pipeline settings under your Azure DevOps Organization settings :
If Limit job authorization scope to current project is enabled, then the scope is project .
If Limit job authorization scope to current project is not enabled, then check the Pipeline settings
under your Project settings in Azure DevOps:
If Limit job authorization scope to current project is enabled, then the scope is project .
If Limit job authorization scope to current project is not enabled, open the editor for the
pipeline, and navigate to the Options tab.
If the Build job authorization scope is Current project , then scope is project .
Otherwise, scope is collection .
If the pipeline is in a private project, check the Pipeline settings under your Azure DevOps Organization
settings :
If Limit job authorization scope to current project for non-release pipelines is enabled, then
the scope is project .
If Limit job authorization scope to current project for non-release pipelines is not enabled,
then check the Pipeline settings under your Project settings in Azure DevOps:
If Limit job authorization scope to current project for non-release pipelines is enabled,
then the scope is project .
If Limit job authorization scope to current project for non-release pipelines is not
enabled, open the editor for the pipeline, and navigate to the Options tab.
If the Build job authorization scope is Current project , then scope is project .
Otherwise, the scope is collection.
How do I determine the job authorization scope of my classic release pipeline?
Classic release pipelines in Azure DevOps Server 2020 and below run with collection scope.
If the pipeline is in a public project, then the job authorization scope is project regardless of any other settings.
If the pipeline is in a private project, check the Pipeline settings under your Azure DevOps Organization
settings :
If Limit job authorization scope to current project for release pipelines is enabled, then the
scope is project .
If Limit job authorization scope to current project for release pipelines is not enabled, then
check the Pipeline settings under your Project settings in Azure DevOps:
If Limit job authorization scope to current project for release pipelines is enabled, then
the scope is project .
Otherwise, the scope is collection .
Pipeline reports
11/2/2020 • 2 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019
Teams track their pipeline health and efficiency to ensure continuous delivery to their customers. You can gain
visibility into your team's pipeline(s) using Pipeline analytics. The source of information for pipeline analytics is the
set of runs for your pipeline. These analytics are accrued over a period of time, and form the basis of the rich
insights offered. Pipelines reports show you metrics, trends, and can help you identify insights to improve the
efficiency of your pipeline.

Prerequisites
Ensure that you have installed the Analytics Marketplace extension for Azure DevOps Server.

View pipeline reports


A summary of the pass rate can be viewed in the Analytics tab of a pipeline. To drill into the trend and insights,
click on the card to view the full report.
A summary of the pass rate and duration can be viewed in the Analytics tab of a pipeline. To drill into the trend
and insights, click on the card to view the full report.
Pipeline pass rate report
The Pipeline pass rate report provides a granular view of the pipeline pass rate and its trend over time. You can
also view which specific task failure contributes to a high number of pipeline run failures, and use that insight to fix
the top failing tasks.
The report contains the following sections:
Summar y : Provides the key metrics of pass rate of the pipeline over the specified period. The default view
shows data for 14 days, which you can modify.

Failure trend : Shows the number of failures per day. This data is divided by stages if multiple stages are
applicable for the pipeline.

Top failing tasks & their failed runs: Lists the top failing tasks, their trends, and pointers to their
failed runs. Analyze the failures in the build to fix your failing tasks and improve the pass rate of the pipeline.
Pipeline duration report
The Pipeline duration report shows how long your pipeline typically takes to complete successfully. You can
review the duration trend and analyze the top tasks by duration to optimize the duration of the pipeline.
Test failures report
The Test failures report provides a granular view of the top failing tests in the pipeline, along with the failure
details. For more information on this report, see Test failures.

Filters
Pipelines reports can be further filtered by date range or branch.
Date range : The default view shows data from the last 14 days. The filter helps change this range.
Branch filter : View the report for a particular branch or a set of branches.

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and get support via our Support page
Add widgets to a dashboard
11/2/2020 • 7 minutes to read • Edit Online

Azure DevOps Services | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Widgets smartly format data to provide easily consumable information. You add widgets to your team
dashboards to gain visibility into the status and trends occurring as you develop your software project.
Each widget provides access to a chart, user-configurable information, or a set of links that open a feature or
function. You can add one or more charts or widgets to your dashboard, up to 200 widgets total. You can add several
widgets at a time simply by selecting each one. See Manage dashboards to determine the permissions you need to
add and remove widgets from a dashboard.

Prerequisites
You must be a member of a project. If you don't have a project yet, create one.
If you haven't been added as a project member, get added now.
Anyone with access to a project, including stakeholders, can view dashboards.
To add, edit, or manage a team dashboard, you must have Basic access or greater and be a team admin, a
project admin, or have dashboard permissions. In general, you need to be a team admin for the currently
selected team to edit its dashboards; request your current team or project admin to add you as a team admin.

NOTE
Widgets specific to a service are disabled if the service they depend on has been disabled. For example, if Boards is disabled,
New Work item and all work tracking Analytics widgets are disabled and won't appear in the widget catalog. If Analytics is
disabled or not installed, then all Analytics widgets are disabled.
To re-enable a service, see Turn an Azure DevOps service on or off. For Analytics, see enable or install Analytics.

Select a dashboard
All dashboards are associated with a team. You need to be a team administrator, project administrator, or a team
member with permissions to modify a dashboard.
1. Open a web browser, connect to your project, and choose Overview > Dashboards. The dashboard
directory page opens.
If you need to switch to a different project, choose the Azure DevOps logo to browse all projects.
2. Select the team whose dashboards you want to view. To switch your team focus, see Switch project or team focus.
3. Choose the name of the dashboard you want to modify, for example the Work in Progress dashboard.

Add a widget
To add widgets to the dashboard, choose Edit .
The widget catalog will automatically open. Add all the widgets that you want and drag their tiles into the sequence
you want.
When you're finished with your additions, choose Done Editing to exit dashboard editing. This will dismiss the
widget catalog. You can then configure the widgets as needed.

TIP
When you're in dashboard edit mode, you can remove, rearrange, and configure widgets, as well as add new widgets. Once
you leave edit mode, the widget tiles remain locked, reducing the chances of accidentally moving a widget.

To remove a widget, choose the actions icon and select the Delete option from the menu.

You can also choose the edit icon to modify a dashboard and the add icon to add a widget to it.

The widget catalog describes all the available widgets, many of which are scoped to the selected team context.
Or, you can drag and drop a widget from the catalog onto the dashboard.

Add an Analytics widget


This example shows how to add the Velocity widget available from Analytics to a dashboard.
1. Connect to the web portal for your project and choose Overview > Dashboards.
If you need to switch to a different project, choose the Azure DevOps logo to browse all projects and
teams.
2. Make sure that the Analytics Marketplace extension has been installed. The Analytics widgets won't be
available until it is installed.
3. Choose the dashboard that you want to modify.
4. Choose Edit to modify a dashboard. The widget catalog opens.
5. In the right pane search box, type Velocity to quickly locate the Velocity widget within the widget catalog.
6. Choose the widget, then Add to add it to the dashboard. Or, you can drag-and-drop it onto the dashboard.
7. Next, configure the widget. For details, see the following articles:
Configure burndown or burnup
Configure cumulative flow
Configure lead/cycle time
Configure velocity
Configure test trend results

Configure a widget
Most widgets support configuration, which may include specifying the title, setting the widget size, and other
widget-specific variables.
To configure a widget, add the widget to a dashboard, choose the actions icon to open the menu, and select Configure.

Additional information is provided to configure the following widgets:


Burndown/burnup
Cumulative flow
Lead time or cycle time
Velocity widget
Test trend results

To configure a widget, add the widget to a dashboard and then choose the configure icon.

Once you've configured the widget, you can edit it by opening the actions menu.

Move or delete a widget


To move a widget, you need to enable the dashboard edit mode. To delete a widget, simply select the delete option
provided from the widget's options menu.
Just as you have to be a team or project admin to add items to a dashboard, you must have admin permissions to
remove items.
Choose Edit to modify your dashboard. You can then add widgets or drag tiles to reorder their sequence on the
dashboard.
To remove a widget, choose the actions icon and select the Delete option from the menu.

When you're finished with your changes, choose Done Editing to exit dashboard editing.


Copy a widget
You can copy a widget to the same dashboard or to another team dashboard. If you want to move widgets you
have configured to another dashboard, this is how you do it. Before you begin, add the dashboard you want to
copy or move the widget to. Once you've copied the widget, you can delete it from the current dashboard.
To copy a configured widget to another team dashboard, choose the actions icon, select Copy to
dashboard (labeled Add to dashboard in earlier versions), and then select the dashboard to copy it to.

Widget size
Some widgets are pre-sized and can't be changed. Others are configurable through their configuration dialog.
For example, the Chart for work items widget allows you to select an area size ranging from 2 x 2 to 4 x 4 (tiles).
Extensibility and Marketplace widgets
In addition to the widgets described in the Widget catalog, you can add widgets from the Marketplace, or create
your own widgets using the Widget REST APIs.
Disabled Marketplace widget
If your organization owner or project collection administrator disables a marketplace widget, it appears as
disabled on the dashboard.

To regain access to it, request your admin to reinstate or reinstall the widget.

Try this next


Review the widget catalog or Review Marketplace widgets

Related articles
Analytics-based widgets
What is Analytics?
Burndown guidance
Cumulative flow & lead/cycle time guidance
Velocity guidance
Widgets based on Analytics

Azure DevOps Services | Azure DevOps Server 2020 | Azure DevOps Server 2019
Analytics supports several dashboard widgets that take advantage of the power of the service. Once you enable or
install Analytics on a project collection, you can add these widgets to your dashboard. You must be an organization
owner or a member of the Project Collection Administrator group to add extensions or enable the service. Using
these widgets, you and your team can gain valuable insights into the health and status of your work.
You add an Analytics widget to a dashboard the same way you add any other type of widget. For details, see Add a
widget to your dashboard.

NOTE
If Boards is disabled, then Analytics views will also be disabled and all widgets associated with work item tracking won't
appear in the widget catalog and will become disabled. To re-enable a service, see Turn an Azure DevOps service on or off.

Burndown
The Burndown widget lets you display a trend of remaining work across multiple teams and multiple sprints. You
can use it to create a release burndown, a bug burndown, or a burndown on any scope of work over time. It will
help you answer questions like:
Will we complete the scope of work by the targeted completion date? If not, what is the projected completion
date?
What kind of scope creep does my project have?
What is the projected completion date for my project?
Burndown widget showing a release Burndown
To learn more, see Configure a Burndown or Burnup widget.

Burnup
The Burnup widget lets you display a trend of completed work across multiple teams and multiple sprints. You can
use it to create a release burnup, a bug burnup, or a burnup on any scope of work over time. When completed
work meets total scope, your project is done!
Burnup widget showing a release Burnup

To learn more, see Configure a Burndown or Burnup widget.


Sprint Burndown widget
The Analytics-based Sprint Burndown widget adds a team's burndown chart for a sprint to the dashboard. This
widget supports several configuration options, including selecting a team, iteration, and time period. Teams use the
burndown chart to mitigate risk and check for scope creep throughout the sprint cycle.
Sprint Burndown widget

To learn more, see Configure and monitor sprint burndown .

Cumulative Flow Diagram (CFD)


The CFD widget shows the count of work items (over time) for each column of a Kanban board. This allows you to
see patterns in your team's development cycle over time. It will help you answer questions like:
Is there a bottleneck in my process?
Am I consistently delivering value to my users?
Cumulative flow diagram widget showing 30 days of data
To learn more, see Cumulative flow diagram widget.

Cycle Time
The Cycle time widget will help you analyze the time it takes for your team to complete work items once they begin
actively working on them. A lower cycle time is typically indicative of a healthier team process. Using the Cycle time
widget you will be able to answer questions like:
On average, how long does it take my team to build a feature or fix a bug?
Are bugs costing my team a lot of development time?
Cycle time widget showing 30 days of data
To learn more, see Cycle time and lead time control charts.

Lead Time
The Lead time widget will help you analyze the time it takes to deliver work from your backlog. Lead time
measures the total time elapsed from the creation of work items to their completion. Using the Lead time widget
you will be able to answer questions like:
How long does it take for work requested by a customer to be delivered?
Did work items take longer than usual to complete?
Lead time widget showing 60 days of data
To learn more, see Cycle time and lead time control charts.

Velocity
The Velocity widget will help you learn how much work your team can complete during a sprint. The widget shows
the team's velocity by Story Points, work item count, or any custom field. You can also compare the work delivered
against your plan and track work completed late. Using the Velocity widget, you will be able to answer questions
like:
On average, what is the velocity of my team?
Is my team consistently delivering what we planned?
How much work can we commit to deliver in upcoming sprints?
Velocity widget showing 8 sprints of data based on Story Points
To learn more, see Configure and view Velocity widgets.

Test Results Trend (Advanced)


With the Test Results Trend (Advanced) widget, you can track the test quality of your pipelines over time. Tracking
test quality and improving test collateral are essential tasks to maintaining a healthy DevOps pipeline.
The widget shows a trend of your test results for either build or release pipelines. You can track the daily count of
tests, pass rates, and test duration. The highly configurable widget allows you to use it for a wide variety of
scenarios.
You can find outliers in your test results and answer questions like:
Which tests are taking longer to run than usual?
Which microservices are affecting my pass rate?
Test trend widget showing passed test results and pass rate for the last 7 days grouped by Priority
To learn more, see Configure a test results widget.
Azure Pipelines ecosystem support

Build and deploy your apps. Find guidance based on your language and platform.

Build your app


.NET Core

Anaconda

Android

ASP.NET

C/C++ with GCC

C/C++ with VC++

Containers

Go

Java

JavaScript and Node.js

PHP

Python

Ruby

UWP

Xamarin
Xcode


Deploy your app


Kubernetes

Azure Stack
Azure SQL database

Azure Web Apps

Linux VM

npm

NuGet

Virtual Machine Manager

VMware

Web App for Containers

Windows VM

Build, test, and deploy .NET Core apps

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use a pipeline to automatically build and test your .NET Core projects. Learn how to:
Set up your build environment with Microsoft-hosted or self-hosted agents.
Restore dependencies, build your project, and test with the .NET Core CLI task or a script.
Use the publish code coverage task to publish code coverage results.
Package and deliver your code with the .NET Core CLI task and the publish build artifacts task.
Publish to a NuGet feed.
Deploy your web app to Azure.

NOTE
For help with .NET Framework projects, see Build ASP.NET apps with .NET Framework.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

NOTE
This guidance applies to TFS version 2017.3 and newer.

Create your first pipeline


Are you new to Azure Pipelines? If so, then we recommend you try this section before moving on to other
sections.

Get the code


Fork this repo in GitHub:
Import this repo into your Git repo in Azure DevOps Server 2019:
Import this repo into your Git repo in TFS:

https://github.com/MicrosoftDocs/pipelines-dotnet-core

Sign in to Azure Pipelines


Sign in to Azure Pipelines. After you sign in, your browser goes to https://dev.azure.com/my-organization-name
and displays your Azure DevOps dashboard.
Within your selected organization, create a project. If you don't have any projects in your organization, you see a
Create a project to get started screen. Otherwise, select the Create Project button in the upper-right
corner of the dashboard.
Create the pipeline
1. Sign in to your Azure DevOps organization and navigate to your project.
2. Go to Pipelines , and then select Create Pipeline .
3. Walk through the steps of the wizard by first selecting GitHub as the location of your source code.
4. You might be redirected to GitHub to sign in. If so, enter your GitHub credentials.
5. When the list of repositories appears, select your repository.
6. You might be redirected to GitHub to install the Azure Pipelines app. If so, select Approve & install .

When the Configure tab appears, select ASP.NET Core .

1. When your new pipeline appears, take a look at the YAML to see what it does. When you're ready, select
Save and run .

2. You're prompted to commit a new azure-pipelines.yml file to your repository. After you're happy with the
message, select Save and run again.
If you want to watch your pipeline in action, select the build job.

You just created and ran a pipeline that we automatically created for you, because your code
appeared to be a good match for the ASP.NET Core template.

You now have a working YAML pipeline ( azure-pipelines.yml ) in your repository that's ready for you to
customize!
3. When you're ready to make changes to your pipeline, select it in the Pipelines page, and then Edit the
azure-pipelines.yml file.

4. See the sections below to learn some of the more common ways to customize your pipeline.
YAML
1. Add an azure-pipelines.yml file in your repository. Customize this snippet for your build.
trigger:
- master

pool: Default

variables:
buildConfiguration: 'Release'

# do this before all your .NET Core tasks


steps:
- task: DotNetCoreInstaller@2
inputs:
version: '2.2.402' # replace this value with the version that you need for your project
- script: dotnet build --configuration $(buildConfiguration)
displayName: 'dotnet build $(buildConfiguration)'

2. Create a pipeline (if you don't know how, see Create your first pipeline), and for the template select YAML .
3. Set the Agent pool and YAML file path for your pipeline.
4. Save the pipeline and queue a build. When the Build #nnnnnnnn.n has been queued message
appears, select the number link to see your pipeline in action.
5. When you're ready to make changes to your pipeline, Edit it.
6. See the sections below to learn some of the more common ways to customize your pipeline.
Classic
1. Create a pipeline (if you don't know how, see Create your first pipeline), and for the template select
Empty Pipeline .
2. In the task catalog, find and add the .NET Core task. This task will run dotnet build to build the code in
the sample repository.
3. Save the pipeline and queue a build. When the Build #nnnnnnnn.n has been queued message
appears, select the number link to see your pipeline in action.
You now have a working pipeline that's ready for you to customize!
4. When you're ready to make changes to your pipeline, Edit it.
5. See the sections below to learn some of the more common ways to customize your pipeline.

Build environment
You can use Azure Pipelines to build your .NET Core projects on Windows, Linux, or macOS without needing to
set up any infrastructure of your own. The Microsoft-hosted agents in Azure Pipelines have several released
versions of the .NET Core SDKs preinstalled.
Ubuntu 18.04 is set here in the YAML file.

pool:
vmImage: 'ubuntu-18.04' # examples of other options: 'macOS-10.15', 'windows-2019'

See Microsoft-hosted agents for a complete list of images and Pool for further examples.
The Microsoft-hosted agents don't include some of the older versions of the .NET Core SDK. They also don't
typically include prerelease versions. If you need these kinds of SDKs on Microsoft-hosted agents, add the
UseDotNet@2 task to your YAML file.
To install the preview version of the 5.0.x SDK for building and 3.0.x for running tests that target .NET Core 3.0.x,
add this snippet:

steps:
- task: UseDotNet@2
inputs:
version: '5.0.x'
includePreviewVersions: true # Required for preview versions

- task: UseDotNet@2
inputs:
version: '3.0.x'
packageType: runtime

If you are installing on a Windows agent, it will already have a .NET Core runtime on it. To install a newer SDK,
set performMultiLevelLookup to true in this snippet:

steps:
- task: UseDotNet@2
displayName: 'Install .NET Core SDK'
inputs:
version: 5.0.x
performMultiLevelLookup: true
includePreviewVersions: true # Required for preview versions

TIP
As an alternative, you can set up a self-hosted agent and save the cost of running the tool installer. See Linux, MacOS, or
Windows. You can also use self-hosted agents to save additional time if you have a large repository or you run
incremental builds. A self-hosted agent can also help you use preview or private SDKs that aren't officially
supported by Azure DevOps, or that are available only in your corporate or on-premises environments.

You can build your .NET Core projects by using the .NET Core SDK and runtime on Windows, Linux, or macOS.
Your builds run on a self-hosted agent. Make sure that you have the necessary version of the .NET Core SDK and
runtime installed on the agent.

Restore dependencies
NuGet is a popular way to depend on code that you don't build. You can download NuGet packages and project-
specific tools that are specified in the project file by running the dotnet restore command either through the
.NET Core task or directly in a script in your pipeline.
You can download NuGet packages from Azure Artifacts, NuGet.org, or some other external or internal NuGet
repository. The .NET Core task is especially useful to restore packages from authenticated NuGet feeds.
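If your feed doesn't require authentication, running the command directly in a script step is the simplest form. A minimal sketch:

steps:
- script: dotnet restore
  displayName: 'dotnet restore'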
This pipeline uses an artifact feed for dotnet restore in the .NET Core CLI task.
trigger:
- master

pool:
vmImage: 'windows-latest'

variables:
buildConfiguration: 'Release'

steps:
- task: DotNetCoreCLI@2
inputs:
command: 'restore'
feedsToUse: 'select'
vstsFeed: 'my-vsts-feed' # A series of numbers and letters

- task: DotNetCoreCLI@2
inputs:
command: 'build'
arguments: '--configuration $(buildConfiguration)'
displayName: 'dotnet build $(buildConfiguration)'

You can download NuGet packages from NuGet.org.


dotnet restore internally uses a version of NuGet.exe that is packaged with the .NET Core SDK. dotnet restore
can only restore packages specified in the .NET Core project .csproj files. If you also have a Microsoft .NET
Framework project in your solution or use package.json to specify your dependencies, you must also use the
NuGet task to restore those dependencies.
In .NET Core SDK version 2.0 and newer, packages are restored automatically when running other commands
such as dotnet build. However, you might still need to use the .NET Core task to restore packages if you use
an authenticated feed.
If your builds occasionally fail when restoring packages from NuGet.org due to connection issues, you can use
Azure Artifacts in conjunction with upstream sources and cache the packages. The credentials of the pipeline are
automatically used when connecting to Azure Artifacts. These credentials are typically derived from the Project
Collection Build Service account.
If you want to specify a NuGet repository, put the URLs in a NuGet.config file in your repository. If your feed is
authenticated, manage its credentials by creating a NuGet service connection in the Services tab under Project
Settings.
If you use Microsoft-hosted agents, you get a new machine every time you run a build, which means restoring
the packages every time. This restoration can take a significant amount of time. To mitigate this issue, you can
either use Azure Artifacts or a self-hosted agent, in which case you get the benefit of using the package cache.
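Another option is the pipeline Cache task, which can persist the NuGet package folder between runs on Microsoft-hosted agents. The following is a minimal sketch, assuming your projects opt into NuGet lock files and that you redirect the global packages folder into the pipeline workspace:

variables:
  # NUGET_PACKAGES redirects the NuGet global packages folder so the Cache task can save and restore it
  NUGET_PACKAGES: $(Pipeline.Workspace)/.nuget/packages

steps:
- task: Cache@2
  displayName: 'Cache NuGet packages'
  inputs:
    key: 'nuget | "$(Agent.OS)" | **/packages.lock.json'
    restoreKeys: |
      nuget | "$(Agent.OS)"
    path: $(NUGET_PACKAGES)

- script: dotnet restore --locked-mode
  displayName: 'dotnet restore'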
To restore packages from an external custom feed, use the .NET Core task:
# do this before your build tasks
steps:
- task: DotNetCoreCLI@2
displayName: Restore
inputs:
command: restore
projects: '**/*.csproj'
feedsToUse: config
nugetConfigPath: NuGet.config # Relative to root of the repository
externalFeedCredentials: <Name of the NuGet service connection>
# ...

For more information about NuGet service connections, see publish to NuGet feeds.
1. Select Tasks in the pipeline. Select the job that runs your build tasks. Then select + to add a new task to
that job.
2. In the task catalog, find and add the .NET Core task.
3. Select the task and, for Command , select restore .
4. Specify any other options you need for this task. Then save the build.

NOTE
Make sure the custom feed is specified in your NuGet.config file and that credentials are specified in the NuGet service
connection.

Build your project


You build your .NET Core project either by running the dotnet build command in your pipeline or by using the
.NET Core task.
To build your project by using the .NET Core task, add the following snippet to your azure-pipelines.yml file:

steps:
- task: DotNetCoreCLI@2
displayName: Build
inputs:
command: build
projects: '**/*.csproj'
arguments: '--configuration $(buildConfiguration)' # Update this to match your need

You can run any custom dotnet command in your pipeline. The following example shows how to install and use
a .NET global tool, dotnetsay:

steps:
- task: DotNetCoreCLI@2
displayName: 'Install dotnetsay'
inputs:
command: custom
custom: tool
arguments: 'install -g dotnetsay'

Build
1. Select Tasks in the pipeline. Select the job that runs your build tasks. Then select + to add a new task to
that job.
2. In the task catalog, find and add the .NET Core task.
3. Select the task and, for Command , select build or publish .
4. Specify any other options you need for this task. Then save the build.
Install a tool
To install a .NET Core global tool like dotnetsay in your build running on Windows, take the following steps:
1. Add the .NET Core task and set the following properties:
Command : custom.
Path to projects : leave empty.
Custom command : tool.
Arguments : install -g dotnetsay .
2. Add a Command Line task and set the following properties:
Script: dotnetsay .

Run your tests


If you have test projects in your repository, then use the .NET Core task to run unit tests by using testing
frameworks like MSTest, xUnit, and NUnit. For this functionality, the test project must reference
Microsoft.NET.Test.SDK version 15.8.0 or higher. Test results are automatically published to the service. These
results are then made available to you in the build summary and can be used for troubleshooting failed tests
and test-timing analysis.
Add the following snippet to your azure-pipelines.yml file:

steps:
# ...
# do this after other tasks such as building
- task: DotNetCoreCLI@2
inputs:
command: test
projects: '**/*Tests/*.csproj'
arguments: '--configuration $(buildConfiguration)'

An alternative is to run the dotnet test command with a specific logger and then use the Publish Test
Results task:

steps:
# ...
# do this after your tests have run
- script: dotnet test <test-project> --logger trx
- task: PublishTestResults@2
condition: succeededOrFailed()
inputs:
testRunner: VSTest
testResultsFiles: '**/*.trx'

Use the .NET Core task with Command set to test . Path to projects should refer to the test projects in your
solution.

Collect code coverage


If you're building on the Windows platform, code coverage metrics can be collected by using the built-in
coverage data collector. For this functionality, the test project must reference Microsoft.NET.Test.SDK version
15.8.0 or higher. If you use the .NET Core task to run tests, coverage data is automatically published to the
server. The .coverage file can be downloaded from the build summary for viewing in Visual Studio.
Add the following snippet to your azure-pipelines.yml file:

steps:
# ...
# do this after other tasks such as building
- task: DotNetCoreCLI@2
inputs:
command: test
projects: '**/*Tests/*.csproj'
arguments: '--configuration $(buildConfiguration) --collect "Code coverage"'

If you choose to run the dotnet test command, specify the test results logger and coverage options. Then use
the Publish Test Results task:

steps:
# ...
# do this after your tests have run
- script: dotnet test <test-project> --logger trx --collect "Code coverage"
- task: PublishTestResults@2
inputs:
testRunner: VSTest
testResultsFiles: '**/*.trx'

1. Add the .NET Core task to your build job and set the following properties:
Command : test.
Path to projects : Should refer to the test projects in your solution.
Arguments : --configuration $(BuildConfiguration) --collect "Code coverage" .
2. Ensure that the Publish test results option remains selected.
Collect code coverage metrics with Coverlet
If you're building on Linux or macOS, you can use Coverlet or a similar tool to collect code coverage metrics.
Code coverage results can be published to the server by using the Publish Code Coverage Results task. To
leverage this functionality, the coverage tool must be configured to generate results in Cobertura or JaCoCo
coverage format.
To run tests and publish code coverage with Coverlet:
Add a reference to the coverlet.msbuild NuGet package in your test project(s).
Add this snippet to your azure-pipelines.yml file:
- task: DotNetCoreCLI@2
displayName: 'dotnet test'
inputs:
command: 'test'
arguments: '--configuration $(buildConfiguration) /p:CollectCoverage=true
/p:CoverletOutputFormat=cobertura /p:CoverletOutput=$(Build.SourcesDirectory)/TestResults/Coverage/'
publishTestResults: true
projects: '**/test-library/*.csproj' # update with your test project directory

- task: PublishCodeCoverageResults@1
displayName: 'Publish code coverage report'
inputs:
codeCoverageTool: 'Cobertura'
summaryFileLocation: '$(Build.SourcesDirectory)/**/coverage.cobertura.xml'

Package and deliver your code


After you've built and tested your app, you can upload the build output to Azure Pipelines or TFS, create and
publish a NuGet package, or package the build output into a .zip file to be deployed to a web application.
Publish artifacts to Azure Pipelines
To publish the output of your .NET build:
Run dotnet publish --output $(Build.ArtifactStagingDirectory) on the CLI, or add the DotNetCoreCLI@2 task
with the publish command.
Publish the artifact by using the Publish artifact task.
Add the following snippet to your azure-pipelines.yml file:

steps:

- task: DotNetCoreCLI@2
inputs:
command: publish
publishWebProjects: True
arguments: '--configuration $(BuildConfiguration) --output $(Build.ArtifactStagingDirectory)'
zipAfterPublish: True

# this code takes all the files in $(Build.ArtifactStagingDirectory) and uploads them as an artifact of your build.
- task: PublishBuildArtifacts@1
inputs:
pathtoPublish: '$(Build.ArtifactStagingDirectory)'
artifactName: 'myWebsiteName'

NOTE
The DotNetCoreCLI@2 task has a publishWebProjects input that is set to true by default, which publishes all web
projects in your repo. You can find more help and information in the open source task on GitHub.

To copy additional files to Build directory before publishing, use Utility: copy files.
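For example, a Copy Files step placed before the publish step can stage extra files next to the published output. The pattern below is illustrative only; adjust it to the files you need:

- task: CopyFiles@2
  inputs:
    Contents: '**/*.config' # illustrative pattern; replace with the files you want to stage
    TargetFolder: '$(Build.ArtifactStagingDirectory)'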
Publish to a NuGet feed
To create and publish a NuGet package, add the following snippet:
steps:
# ...
# do this near the end of your pipeline in most cases
- script: dotnet pack /p:PackageVersion=$(version) # define version variable elsewhere in your pipeline
- task: NuGetAuthenticate@0
  inputs:
    nuGetServiceConnections: '<Name of the NuGet service connection>'
- task: NuGetCommand@2
inputs:
command: push
nuGetFeedType: external
publishFeedCredentials: '<Name of the NuGet service connection>'
versioningScheme: byEnvVar
versionEnvVar: version

For more information about versioning and publishing NuGet packages, see publish to NuGet feeds.
Deploy a web app
To create a .zip file archive that's ready for publishing to a web app, add the following snippet:

steps:
# ...
# do this after you've built your app, near the end of your pipeline in most cases
# for example, you do this before you deploy to an Azure web app on Windows
- task: DotNetCoreCLI@2
inputs:
command: publish
publishWebProjects: True
arguments: '--configuration $(BuildConfiguration) --output $(Build.ArtifactStagingDirectory)'
zipAfterPublish: True

To publish this archive to a web app, see Azure Web Apps deployment.
Publish artifacts to Azure Pipelines
To simply publish the output of your build to Azure Pipelines or TFS, use the Publish Artifacts task.
Publish to a NuGet feed
If you want to publish your code to a NuGet feed, take the following steps:
1. Use a .NET Core task with Command set to pack.
2. Publish your package to a NuGet feed.
Deploy a web app
1. Use a .NET Core task with Command set to publish.
2. Make sure you've selected the option to create a .zip file archive.
3. To publish this archive to a web app, see Azure Web Apps deployment.

Build an image and push to container registry


For your app, you can also build an image and push it to a container registry.

Troubleshooting
If you're able to build your project on your development machine, but you're having trouble building it on Azure
Pipelines or TFS, explore the following potential causes and corrective actions:
We don't install prerelease versions of the .NET Core SDK on Microsoft-hosted agents. After a new version of
the .NET Core SDK is released, it can take a few weeks for us to roll it out to all the datacenters that Azure
Pipelines runs on. You don't have to wait for us to finish this rollout. You can use the .NET Core Tool
Installer , as explained in this guidance, to install the desired version of the .NET Core SDK on Microsoft-
hosted agents.
Check that the versions of the .NET Core SDK and runtime on your development machine match those on
the agent. You can include a command-line script dotnet --version in your pipeline to print the version
of the .NET Core SDK (a minimal snippet follows this list). Either use the .NET Core Tool Installer, as explained in this guidance, to deploy
the same version on the agent, or update your projects and development machine to the newer version
of the .NET Core SDK.
You might be using some logic in the Visual Studio IDE that isn't encoded in your pipeline. Azure Pipelines
or TFS runs each of the commands you specify in the tasks one after the other in a new process. Look at
the logs from the Azure Pipelines or TFS build to see the exact commands that ran as part of the build.
Repeat the same commands in the same order on your development machine to locate the problem.
If you have a mixed solution that includes some .NET Core projects and some .NET Framework projects,
you should also use the NuGet task to restore packages specified in packages.config files. Similarly, you
should add MSBuild or Visual Studio Build tasks to build the .NET Framework projects.
If your builds fail intermittently while restoring packages, either NuGet.org is having issues, or there are
networking problems between the Azure datacenter and NuGet.org. These aren't under our control, and
you might need to explore whether using Azure Artifacts with NuGet.org as an upstream source
improves the reliability of your builds.
Occasionally, when we roll out an update to the hosted images with a new version of the .NET Core SDK
or Visual Studio, something might break your build. This can happen, for example, if a newer version or
feature of the NuGet tool is shipped with the SDK. To isolate these problems, use the .NET Core Tool
Installer task to specify the version of the .NET Core SDK that's used in your build.
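As noted in the list above, a one-line script step is enough to print the SDK version that the agent actually uses:

steps:
- script: dotnet --version
  displayName: 'Print the .NET Core SDK version'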

FAQ
Where can I learn more about Azure Artifacts and the TFS Package Management service?
Package Management in Azure Artifacts and TFS
Where can I learn more about .NET Core commands?
.NET Core CLI tools
Where can I learn more about running tests in my solution?
Unit testing in .NET Core projects
Where can I learn more about tasks?
Build and release tasks
Build ASP.NET apps with .NET Framework

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

NOTE
This article focuses on building .NET Framework projects with Azure Pipelines. For help with .NET Core projects, see .NET
Core.

NOTE
This guidance applies to TFS version 2017.3 and newer.

Create your first pipeline


Are you new to Azure Pipelines? If so, then we recommend you try this section before moving on to other
sections.

Get the code


Fork this repo in GitHub:
Import this repo into your Git repo in Azure DevOps Server 2019:
Import this repo into your Git repo in TFS:

https://github.com/Microsoft/devops-project-samples.git

The sample repo includes several different projects, and the sample application for this article is located in the
following path:

https://github.com/Microsoft/devops-project-samples

You will use the code in /dotnet/aspnet/webapp/ . Your azure-pipelines.yml file needs to run from within the
dotnet/aspnet/webapp/Application folder for the build to complete successfully.
The sample app is a Visual Studio solution that has two projects:
An ASP.NET Web Application project that targets .NET Framework 4.5
A Unit Test project
Sign in to Azure Pipelines
Sign in to Azure Pipelines. After you sign in, your browser goes to https://dev.azure.com/my-organization-name
and displays your Azure DevOps dashboard.
Within your selected organization, create a project. If you don't have any projects in your organization, you see a
Create a project to get started screen. Otherwise, select the Create Project button in the upper-right corner
of the dashboard.

NOTE
This scenario works on TFS, but some of the following instructions might not exactly match the version of TFS that you are
using. Also, you'll need to set up a self-hosted agent, possibly also installing software. If you are a new user, you might have
a better learning experience by trying this procedure out first using a free Azure DevOps organization. Then change the
selector in the upper-left corner of this page from Team Foundation Server to Azure DevOps .

After you have the sample code in your own repository, create a pipeline using the instructions in Create
your first pipeline and select the ASP.NET template. This automatically adds the tasks required to build the
code in the sample repository.
Save the pipeline and queue a build to see it in action.
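The exact YAML that the template generates can vary, but a build for this kind of project typically looks something like the following sketch. The task names are the standard restore, build, and test tasks; the variable values are assumptions you would adjust for your own solution:

pool:
  vmImage: 'windows-2019'

variables:
  solution: '**/*.sln'
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Release'

steps:
- task: NuGetToolInstaller@1

- task: NuGetCommand@2
  inputs:
    restoreSolution: '$(solution)'

- task: VSBuild@1
  inputs:
    solution: '$(solution)'
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'

- task: VSTest@2
  inputs:
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'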

Build environment
You can use Azure Pipelines to build your .NET Framework projects without needing to set up any infrastructure
of your own. The Microsoft-hosted agents in Azure Pipelines have several released versions of Visual Studio
preinstalled to help you build your projects.
Use windows-2019 for Windows Server 2019 with Visual Studio 2019
Use vs2017-win2016 for Windows Server 2016 with Visual Studio 2017

You can also use a self-hosted agent to run your builds. This is particularly helpful if you have a large repository
and you want to avoid downloading the source code to a fresh machine for every build.
Your builds run on a self-hosted agent. Make sure that you have the necessary version of the Visual Studio
installed on the agent.

Build multiple configurations


It is often required to build your app in multiple configurations. The following steps extend the example above to
build the app on four configurations: [Debug, x86], [Debug, x64], [Release, x86], [Release, x64]. A YAML sketch of
an equivalent matrix strategy follows the steps.
1. Click the Variables tab and modify these variables:
BuildConfiguration = debug, release
BuildPlatform = x86, x64
2. Select Tasks and click on the agent job to change the options for the job:
Select Multi-configuration .
Specify Multipliers: BuildConfiguration, BuildPlatform
3. Select Parallel if you have multiple build agents and want to build your configuration/platform pairings in
parallel.
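If you define your pipeline in YAML instead of the classic editor, a matrix strategy gives a roughly equivalent result. This is a minimal sketch; the job names and the single build step are illustrative assumptions:

strategy:
  matrix:
    Debug_x86:
      buildConfiguration: 'Debug'
      buildPlatform: 'x86'
    Debug_x64:
      buildConfiguration: 'Debug'
      buildPlatform: 'x64'
    Release_x86:
      buildConfiguration: 'Release'
      buildPlatform: 'x86'
    Release_x64:
      buildConfiguration: 'Release'
      buildPlatform: 'x64'

pool:
  vmImage: 'windows-2019'

steps:
- task: VSBuild@1
  inputs:
    solution: '**/*.sln'
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'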
Build, test, and deploy JavaScript and Node.js apps

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use a pipeline to build and test JavaScript and Node.js apps, and then deploy or publish to targets. Learn how
to:
Set up your build environment with Microsoft-hosted or self-hosted agents.
Use the npm task or a script to download packages for your build.
Implement JavaScript frameworks: Angular, React, or Vue.
Run unit tests and publish them with the publish test results task.
Use the publish code coverage task to publish code coverage results.
Publish npm packages with Azure artifacts.
Create a .zip file archive that is ready for publishing to a web app with the Archive Files task and deploy to
Azure.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are
called phases.

NOTE
This guidance applies to Team Foundation Server (TFS) version 2017.3 and newer.

Create your first pipeline


Are you new to Azure Pipelines? If so, then we recommend you try this section before moving on to other
sections.

Get the code


See an example
Fork this repo in GitHub:

https://github.com/MicrosoftDocs/pipelines-javascript

Sign in to Azure Pipelines


Sign in to Azure Pipelines. After you sign in, your browser goes to https://dev.azure.com/my-organization-name
and displays your Azure DevOps dashboard.
Within your selected organization, create a project. If you don't have any projects in your organization, you see
a Create a project to get started screen. Otherwise, select the Create Project button in the upper-right
corner of the dashboard.
Create the pipeline
1. The following code is a simple Node server implemented with the Express.js framework. Tests for the
app are written through the Mocha framework. To get started, fork this repo in GitHub.

https://github.com/MicrosoftDocs/pipelines-javascript

2. Sign in to your Azure DevOps organization and navigate to your project.


3. In your project, navigate to the Pipelines page. Then choose the action to create a new pipeline.
4. Walk through the steps of the wizard by first selecting GitHub as the location of your source code.
5. You might be redirected to GitHub to sign in. If so, enter your GitHub credentials.
6. When the list of repositories appears, select your Node.js sample repository.
7. Azure Pipelines will analyze the code in your repository and recommend Node.js template for your
pipeline. Select that template.
8. Azure Pipelines will generate a YAML file for your pipeline. Select Save and run , then select Commit
directly to the master branch , and then choose Save and run again.
9. A new run is started. Wait for the run to finish.
When you're done, you'll have a working YAML file ( azure-pipelines.yml ) in your repository that's ready for
you to customize.

TIP
To make changes to the YAML file as described in this topic, select the pipeline in the Pipelines page, and then Edit the
azure-pipelines.yml file.

YAML
1. The following code is a simple Node server implemented with the Express.js framework. Tests for the
app are written through the Mocha framework. To get started, fork this repo in GitHub.

https://github.com/MicrosoftDocs/pipelines-javascript

2. Add an azure-pipelines.yml file in your repository. This YAML assumes that you have Node.js with npm
installed on your server.

trigger:
- master

pool: Default

steps:
- script: |
    npm install
    npm run build
  displayName: 'npm install and build'

3. Create a pipeline (if you don't know how, see Create your first pipeline), and for the template select
YAML .
4. Set the Agent pool and YAML file path for your pipeline.
5. Save the pipeline and queue a build. When the Build #nnnnnnnn.n has been queued message
appears, select the number link to see your pipeline in action.
6. When you're ready to make changes to your pipeline, Edit it.
7. See the sections below to learn some of the more common ways to customize your pipeline.
Classic
1. The following code is a simple Node server implemented with the Express.js framework. Tests for the
app are written through the Mocha framework. To get started, fork this repo in GitHub.

https://github.com/MicrosoftDocs/pipelines-javascript

2. After you have the sample code in your own repository, create a pipeline by using the instructions in
Create your first pipeline and select the Empty process template.
3. Select Process under the Tasks tab in the pipeline editor and change the properties as follows:
Agent queue: Hosted Ubuntu 1604
4. Add the following tasks to the pipeline in the specified order:
npm
Command: install

npm
Display name: npm test
Command: custom
Command and arguments: test
Publish Test Results
Leave all the default values for properties
Archive Files
Root folder or file to archive: $(System.DefaultWorkingDirectory)
Prepend root folder name to archive paths: Unchecked
Publish Build Artifacts
Leave all the default values for properties
5. Save the pipeline and queue a build to see it in action.
Learn some of the common ways to customize your JavaScript build process.

Build environment
You can use Azure Pipelines to build your JavaScript apps without needing to set up any infrastructure of your
own. You can use either Windows or Linux agents to run your builds.
Update the following snippet in your azure-pipelines.yml file to select the appropriate image.

pool:
vmImage: 'ubuntu-latest' # examples of other options: 'macOS-10.15', 'vs2017-win2016'

Tools that you commonly use to build, test, and run JavaScript apps - like npm, Node, Yarn, and Gulp - are pre-
installed on Microsoft-hosted agents in Azure Pipelines. For the exact version of Node.js and npm that is
preinstalled, refer to Microsoft-hosted agents. To install a specific version of these tools on Microsoft-hosted
agents, add the Node Tool Installer task to the beginning of your process.
You can also use a self-hosted agent.
Use a specific version of Node.js
If you need a version of Node.js and npm that is not already installed on the Microsoft-hosted agent, use the
Node tool installer task. Add the following snippet to your azure-pipelines.yml file.

NOTE
The hosted agents are regularly updated, and setting up this task will result in spending significant time updating to a
newer minor version every time the pipeline is run. Use this task only when you need a specific Node version in your
pipeline.

- task: NodeTool@0
inputs:
versionSpec: '12.x' # replace this value with the version that you need for your project

If you need a version of Node.js/npm that is not already installed on the agent:
1. In the pipeline, select Tasks , choose the phase that runs your build tasks, and then select + to add a new
task to that phase.
2. In the task catalog, find and add the Node Tool Installer task.
3. Select the task and specify the version of the Node.js runtime that you want to install.
To update just the npm tool, run the npm i -g npm@version-number command in your build process.
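For example, as a script step in your YAML (keep the doc's placeholder and substitute the npm version you need):

- script: npm i -g npm@version-number # replace version-number; on Microsoft-hosted Linux agents, prefix with sudo
  displayName: 'Update npm'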
Use multiple node versions
You can build and test your app on multiple versions of Node by using a strategy and the Node tool installer
task.

pool:
vmImage: 'ubuntu-latest'
strategy:
matrix:
node_12_x:
node_version: 12.x
node_13_x:
node_version: 13.x

steps:
- task: NodeTool@0
inputs:
versionSpec: $(node_version)

- script: npm install

See multi-configuration execution.

Install tools on your build agent


If you have defined tools needed for your build as development dependencies in your project's package.json
or package-lock.json file, install these tools along with the rest of your project dependencies through npm.
This will install the exact version of the tools defined in the project, isolated from other versions that exist on
the build agent.
You can use a script or the npm task.
Using a script to install with package.json

- script: npm install --only=dev

Using the npm task to install with package.json

- task: Npm@1
inputs:
command: 'install'

Run tools installed this way by using npm's npx package runner, which will first look for tools installed this
way in its path resolution. The following example calls the mocha test runner but will look for the version
installed as a dev dependency before using a globally installed (through npm install -g ) version.

- script: npx mocha

To install tools that your project needs but that are not set as dev dependencies in package.json , call
npm install -g from a script stage in your pipeline.

The following example installs the latest version of the Angular CLI by using npm . The rest of the pipeline can
then use the ng tool from other script stages.

NOTE
On Microsoft-hosted Linux agents, preface the command with sudo , like sudo npm install -g .

- script: npm install -g @angular/cli

These tasks will run every time your pipeline runs, so be mindful of the impact that installing tools has on build
times. Consider configuring self-hosted agents with the version of the tools you need if overhead becomes a
serious impact to your build performance.
Use the npm or command line tasks in your pipeline to install tools on your build agent.

Dependency management
In your build, use Yarn or Azure Artifacts/TFS to download packages from the public npm registry, or from a
private npm registry that you specify in the .npmrc file.
npm
You can use NPM in a few ways to download packages for your build:
Directly run npm install in your pipeline. This is the simplest way to download packages from a registry
that does not need any authentication. If your build doesn't need development dependencies on the agent to
run, you can speed up build times with the --only=prod option to npm install .
Use an npm task. This is useful when you're using an authenticated registry.
Use an npm Authenticate task. This is useful when you run npm install from inside your task runners -
Gulp, Grunt, or Maven.
If you want to specify an npm registry, put the URLs in an .npmrc file in your repository. If your feed is
authenticated, manage its credentials by creating an npm service connection on the Services tab under
Project Settings.
To install npm packages by using a script in your pipeline, add the following snippet to azure-pipelines.yml .

- script: npm install

To use a private registry specified in your .npmrc file, add the following snippet to azure-pipelines.yml .

- task: Npm@1
inputs:
customEndpoint: <Name of npm service connection>

To pass registry credentials to npm commands via task runners such as Gulp, add the following task to
azure-pipelines.yml before you call the task runner.

- task: npmAuthenticate@0
inputs:
customEndpoint: <Name of npm service connection>

Use the npm or npm authenticate task in your pipeline to download and install packages.
If your builds occasionally fail because of connection issues when you're restoring packages from the npm
registry, you can use Azure Artifacts in conjunction with upstream sources, and cache the packages. The
credentials of the pipeline are automatically used when you're connecting to Azure Artifacts. These credentials
are typically derived from the Project Collection Build Service account.
If you're using Microsoft-hosted agents, you get a new machine every time you run a build - which means
restoring the dependencies every time.
This can take a significant amount of time. To mitigate this, you can use Azure Artifacts or a self-hosted agent.
You'll then get the benefit of using the package cache.
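One way to get a cache on Microsoft-hosted agents is the pipeline Cache task. A minimal sketch, assuming your repository has a package-lock.json at its root and that you point npm's cache directory into the pipeline workspace:

variables:
  # npm_config_cache redirects npm's cache directory so the Cache task can save and restore it
  npm_config_cache: $(Pipeline.Workspace)/.npm

steps:
- task: Cache@2
  displayName: 'Cache npm'
  inputs:
    key: 'npm | "$(Agent.OS)" | package-lock.json'
    restoreKeys: |
      npm | "$(Agent.OS)"
    path: $(npm_config_cache)

- script: npm ci
  displayName: 'npm ci'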
Yarn
Use a script stage to invoke Yarn to restore dependencies. Yarn is available preinstalled on some Microsoft-
hosted agents. You can install and configure it on self-hosted agents like any other tool.

- script: yarn install

Use the CLI or Bash task in your pipeline to invoke Yarn.

Run JavaScript compilers


Use compilers such as Babel and the TypeScript tsc compiler to convert your source code into versions that
are usable by the Node.js runtime or in web browsers.
If you have a script object set up in your project's package.json file that runs your compiler, invoke it in your
pipeline by using a script task.

- script: npm run compile

You can call compilers directly from the pipeline by using the script task. These commands will run from the
root of the cloned source-code repository.
- script: tsc --target ES6 --strict true --project tsconfigs/production.json

Use the npm task in your pipeline if you have a compile script defined in your project's package.json to build
the code. Use the Bash task to compile your code if you don't have a separate script defined in your project
configuration.
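A sketch of the npm task form, assuming your package.json defines a compile script (the script name is an assumption):

- task: Npm@1
  displayName: 'npm run compile'
  inputs:
    command: 'custom'
    customCommand: 'run compile' # assumes a "compile" script in package.json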

Run unit tests


Configure your pipelines to run your JavaScript tests so that they produce results formatted in the JUnit XML
format. You can then publish the results using the built-in publish test results task.
If your test framework doesn't support JUnit output, you'll need to add support through a partner reporting
module, such as mocha-junit-reporter. You can either update your test script to use the JUnit reporter, or if the
reporter supports command-line options, pass those into the task definition.
The following table lists the most commonly used test runners and the reporters that can be used to produce
XML results:

Test runner | Reporters to produce XML reports
mocha | mocha-junit-reporter, cypress-multi-reporters
jasmine | jasmine-reporters
jest | jest-junit, jest-junit-reporter
karma | karma-junit-reporter
Ava | tap-xunit

This example uses the mocha-junit-reporter and invokes mocha test directly by using a script. This produces
the JUnit XML output at the default location of ./test-results.xml .

- script: mocha test --reporter mocha-junit-reporter

If you have defined a test script in your project's package.json file, you can invoke it by using npm test .

- script: npm test

Publish test results


To publish the results, use the Publish Test Results task.

- task: PublishTestResults@2
condition: succeededOrFailed()
inputs:
testRunner: JUnit
testResultsFiles: '**/TEST-RESULTS.xml'

Publish code coverage results


If your test scripts run a code coverage tool such as Istanbul, add the Publish Code Coverage Results task to
publish code coverage results along with your test results. When you do this, you can find coverage metrics in
the build summary and download HTML reports for further analysis. The task expects Cobertura or JaCoCo
reporting output, so ensure that your code coverage tool runs with the necessary options to generate the right
output. (For example, --report cobertura .)

- task: PublishCodeCoverageResults@1
inputs:
codeCoverageTool: Cobertura # or JaCoCo
summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/*coverage.xml'
reportDirectory: '$(System.DefaultWorkingDirectory)/**/coverage'

Use the Publish Test Results and Publish Code Coverage Results tasks in your pipeline to publish test results
along with code coverage results by using Istanbul.
Set the Control Options for the Publish Test Results task to run the task even if a previous task has failed, unless
the deployment was canceled.

End-to-end browser testing


Run tests in headless browsers as part of your pipeline with tools like Protractor or Karma. Then publish the
results for the build to VSTS with these steps:
1. Install a headless browser testing driver such as headless Chrome or Firefox, or a browser mocking tool
such as PhantomJS, on the build agent.
2. Configure your test framework to use the headless browser/driver option of your choice according to the
tool's documentation.
3. Configure your test framework (usually with a reporter plug-in or configuration) to output JUnit-formatted
test results.
4. Set up a script task to run any CLI commands needed to start the headless browser instances.
5. Run the end-to-end tests in the pipeline stages along with your unit tests.
6. Publish the results by using the same Publish Test Results task alongside your unit tests, as in the sketch below.
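A minimal sketch of the script and publish steps, assuming a package.json script named test:e2e that is already configured for a headless browser, and a reporter that writes JUnit XML to a file matching e2e-results.xml (both names are assumptions):

steps:
- script: npm run test:e2e
  displayName: 'Run end-to-end tests in a headless browser'

- task: PublishTestResults@2
  condition: succeededOrFailed()
  inputs:
    testRunner: JUnit
    testResultsFiles: '**/e2e-results.xml'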

Package web apps


Package applications to bundle all your application modules with intermediate outputs and dependencies into
static assets ready for deployment. Add a pipeline stage after your compilation and tests to run a tool like
Webpack or ng build by using the Angular CLI.
The first example calls webpack . To have this work, make sure that webpack is configured as a development
dependency in your package.json project file. This will run webpack with the default configuration unless you
have a webpack.config.js file in the root folder of your project.

- script: webpack

The next example uses the npm task to call npm run build to call the build script object defined in the project
package.json. Using script objects in your project moves the logic for the build into the source code and out of
the pipeline.

- script: npm run build

Use the CLI or Bash task in your pipeline to invoke your packaging tool, such as webpack or Angular's
ng build .

JavaScript frameworks
Angular
For Angular apps, you can include Angular-specific commands such as ng test , ng build , and ng e2e . To use
Angular CLI commands in your pipeline, you need to install the angular/cli npm package on the build agent.

NOTE
On Microsoft-hosted Linux agents, preface the command with sudo , like sudo npm install -g .

- script: |
    npm install -g @angular/cli
    npm install
    ng build --prod

Add the following tasks to your pipeline:


npm
  Command: custom
  Command and arguments: install -g @angular/cli

npm
  Command: install

bash
  Type: inline
  Script: ng build --prod

For tests in your pipeline that require a browser to run (such as the ng test command in the starter app, which
runs Karma), you need to use a headless browser instead of a standard browser. In the Angular starter app:
1. Change the browsers entry in your karma.conf.js project file from browsers: ['Chrome'] to
browsers: ['ChromeHeadless'] .

2. Change the singleRun entry in your karma.conf.js project file from a value of false to true . This helps
make sure that the Karma process stops after it runs.
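
A minimal sketch of the corresponding pipeline step, assuming the Angular CLI is available in the workspace (the flags below override the config file and are optional if you already changed karma.conf.js as described):

# Sketch only: assumes @angular/cli is installed as a dependency of the project.
- script: |
    npm install
    npx ng test --watch=false --browsers=ChromeHeadless
  displayName: 'Run Angular unit tests (headless)'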
React and Vue
All the dependencies for your React and Vue apps are captured in your package.json file. Your azure-
pipelines.yml file contains the standard Node.js script:

- script: |
    npm install
    npm run build
  displayName: 'npm install and build'

The build files are in a new folder, dist (for Vue) or build (for React). This snippet builds an artifact, www , that
is ready for release. It uses the Node Installer, Copy Files, and Publish Build Artifacts tasks.

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: NodeTool@0
  inputs:
    versionSpec: '10.x'
  displayName: 'Install Node.js'

- script: |
    npm install
    npm run build
  displayName: 'npm install and build'

- task: CopyFiles@2
  inputs:
    Contents: 'build/**' # Pull the build directory (React)
    TargetFolder: '$(Build.ArtifactStagingDirectory)'

- task: PublishBuildArtifacts@1
  inputs:
    pathtoPublish: $(Build.ArtifactStagingDirectory) # dist or build files
    ArtifactName: 'www' # output artifact named www

To release, point your release task to the dist or build artifact and use the Azure Web App Deploy task.
Webpack
You can use a webpack configuration file to specify a compiler (such as Babel or TypeScript) to transpile JSX or
TypeScript to plain JavaScript, and to bundle your app.

- script: |
    npm install webpack webpack-cli --save-dev
    npx webpack --config webpack.config.js

Add the following tasks to your pipeline:


npm
  Command: custom
  Command and arguments: install -g webpack webpack-cli --save-dev

bash
  Type: inline
  Script: npx webpack --config webpack.config.js

Build task runners


It's common to use Gulp or Grunt as a task runner to build and test a JavaScript app.
Gulp
Gulp is preinstalled on Microsoft-hosted agents. To run the gulp command in the YAML file:

- script: gulp # include any additional options that are needed

If the steps in your gulpfile.js file require authentication with an npm registry:

- task: npmAuthenticate@0
  inputs:
    customEndpoint: <Name of npm service connection>

- script: gulp # include any additional options that are needed

Add the Publish Test Results task to publish JUnit or xUnit test results to the server.

- task: PublishTestResults@2
  inputs:
    testResultsFiles: '**/TEST-RESULTS.xml'
    testRunTitle: 'Test results for JavaScript using gulp'

Add the Publish Code Coverage Results task to publish code coverage results to the server. You can find
coverage metrics in the build summary, and you can download HTML reports for further analysis.

- task: PublishCodeCoverageResults@1
  inputs:
    codeCoverageTool: Cobertura
    summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/*coverage.xml'
    reportDirectory: '$(System.DefaultWorkingDirectory)/**/coverage'

The simplest way to create a pipeline if your app uses Gulp is to use the Node.js with gulp build template
when creating the pipeline. This will automatically add various tasks to invoke Gulp commands and to publish
artifacts. In the task, select Enable Code Coverage to enable code coverage by using Istanbul.
Grunt
Grunt is preinstalled on Microsoft-hosted agents. To run the grunt command in the YAML file:

- script: grunt # include any additional options that are needed

If the steps in your Gruntfile.js file require authentication with an npm registry:

- task: npmAuthenticate@0
  inputs:
    customEndpoint: <Name of npm service connection>

- script: grunt # include any additional options that are needed

The simplest way to create a pipeline if your app uses Grunt is to use the Node.js with Grunt build template
when creating the pipeline. This will automatically add various tasks to invoke Grunt commands and to publish
artifacts. In the task, select the Publish to TFS/Team Services option to publish test results, and select
Enable Code Coverage to enable code coverage by using Istanbul.

Package and deliver your code


After you have built and tested your app, you can upload the build output to Azure Pipelines, create and publish
an npm or Maven package, or package the build output into a .zip file to be deployed to a web application.
Publish files to Azure Pipelines
To simply upload the entire working directory of files, use the Publish Build Artifacts task and add the following
to your azure-pipelines.yml file.

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(System.DefaultWorkingDirectory)'

To upload a subset of files, first copy the necessary files from the working directory to a staging directory with
the Copy Files task, and then use the Publish Build Artifacts task.

- task: CopyFiles@2
  inputs:
    SourceFolder: '$(System.DefaultWorkingDirectory)'
    Contents: |
      **\*.js
      package.json
    TargetFolder: '$(Build.ArtifactStagingDirectory)'

- task: PublishBuildArtifacts@1

Publish a module to an npm registry


If your project's output is an npm module for use by other projects and not a web application, use the npm task
to publish the module to a local registry or to the public npm registry. You must provide a unique
name/version combination each time you publish, so keep this in mind when configuring publishing steps as
part of a release or development pipeline.
The first example assumes that you manage version information (such as through an npm version) through
changes to your package.json file in version control. This example uses the script task to publish to the public
registry.

- script: npm publish

The next example publishes to a custom registry defined in your repo's .npmrc file. You'll need to set up an
npm service connection to inject authentication credentials into the connection as the build runs.

- task: Npm@1
  inputs:
    command: publish
    publishRegistry: useExternalRegistry
    publishEndpoint: https://my.npmregistry.com

The final example publishes the module to an Azure DevOps Services package management feed.

- task: Npm@1
  inputs:
    command: publish
    publishRegistry: useFeed
    publishFeed: https://my.npmregistry.com

For more information about versioning and publishing npm packages, see Publish npm packages and How can
I version my npm packages as part of the build process?.
Deploy a web app
To create a .zip file archive that is ready for publishing to a web app, use the Archive Files task:

- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: '$(System.DefaultWorkingDirectory)'
    includeRootFolder: false

To publish this archive to a web app, see Azure web app deployment.
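
For illustration, a deployment step might look like the following sketch. The service connection name and app name are placeholders, not values from this guidance, and your artifact path may differ:

# Sketch only: 'my-azure-connection' and 'my-web-app' are placeholder names.
- task: AzureWebApp@1
  inputs:
    azureSubscription: 'my-azure-connection'
    appName: 'my-web-app'
    package: '$(Build.ArtifactStagingDirectory)/**/*.zip'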
Publish artifacts to Azure Pipelines
Use the Publish Build Artifacts task to publish files from your build to Azure Pipelines or TFS.
Publish to an npm registry
To create and publish an npm package, use the npm task. For more information about versioning and
publishing npm packages, see Publish npm packages.
Deploy a web app
To create a .zip file archive that is ready for publishing to a web app, use the Archive Files task. To publish this
archive to a web app, see Azure Web App deployment.

Build and push image to container registry


Once your source code is building successfully and your unit tests are in place and successful, you can also
build an image and push it to a container registry.
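
For example, a build-and-push step with the Docker task might look like this sketch; the service connection and repository names are placeholders, and your Dockerfile location and registry details will differ:

# Sketch only: assumes a Dockerfile in the repo and a Docker registry service connection.
- task: Docker@2
  inputs:
    containerRegistry: 'my-registry-connection'  # placeholder service connection name
    repository: 'my-app'                         # placeholder image repository
    command: 'buildAndPush'
    Dockerfile: '**/Dockerfile'
    tags: |
      $(Build.BuildId)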

Troubleshooting
If you can build your project on your development machine but are having trouble building it on Azure
Pipelines or TFS, explore the following potential causes and corrective actions:
Check that the versions of Node.js and the task runner on your development machine match those on
the agent. You can include command-line scripts such as node --version in your pipeline to check what
is installed on the agent. Either use the Node Tool Installer (as explained in this guidance) to deploy
the same version on the agent, or run npm install commands to update the tools to desired versions.
If your builds fail intermittently while you're restoring packages, either the npm registry is having issues
or there are networking problems between the Azure datacenter and the registry. These factors are not
under our control, and you might need to explore whether using Azure Artifacts with an npm registry as
an upstream source improves the reliability of your builds.
If you're using nvm to manage different versions of Node.js, consider switching to the Node Tool
Installer task instead. ( nvm is installed for historical reasons on the macOS image.) nvm manages
multiple Node.js versions by adding shell aliases and altering PATH , which interacts poorly with the way
Azure Pipelines runs each task in a new process. The Node Tool Installer task handles this model
correctly. However, if your work requires the use of nvm , you can add the following script to the
beginning of each pipeline:

steps:
- bash: |
    NODE_VERSION=12  # or whatever your preferred version is
    npm config delete prefix  # avoid a warning
    . ${NVM_DIR}/nvm.sh
    nvm use ${NODE_VERSION}
    nvm alias default ${NODE_VERSION}
    VERSION_PATH="$(nvm_version_path ${NODE_VERSION})"
    echo "##vso[task.prependPath]$VERSION_PATH"

Then node and other command-line tools will work for the rest of the pipeline job. In each step where you
need to use the nvm command, you'll need to start the script with:

- bash: |
    . ${NVM_DIR}/nvm.sh
    nvm <command>

FAQ
Where can I learn more about Azure Artifacts and the Package Management service?
Package Management in Azure Artifacts and TFS
Where can I learn more about tasks?
Build, release, and test tasks
How can I version my npm packages as part of the build process?
One option is to use a combination of version control and npm version. At the end of a pipeline run, you can
update your repo with the new version. In this YAML, there is a GitHub repo and the package gets deployed to
npmjs. Note that your build will fail if there is a mismatch between your package version on npmjs and your
package.json file.

variables:
  MAP_NPMTOKEN: $(NPMTOKEN) # Mapping secret var

trigger:
- none

pool:
  vmImage: 'ubuntu-latest'

steps: # Checking out connected repo
- checkout: self
  persistCredentials: true
  clean: true

- task: npmAuthenticate@0
  inputs:
    workingFile: .npmrc
    customEndpoint: 'my-npm-connection'

- task: NodeTool@0
  inputs:
    versionSpec: '12.x'
  displayName: 'Install Node.js'

- script: |
    npm install
  displayName: 'npm install'

- script: |
    npm pack
  displayName: 'Package for release'

- bash: | # Grab the package version
    v=`node -p "const p = require('./package.json'); p.version;"`
    echo "##vso[task.setvariable variable=packageVersion]$v"

- task: CopyFiles@2
  inputs:
    contents: '*.tgz'
    targetFolder: $(Build.ArtifactStagingDirectory)/npm
  displayName: 'Copy archives to artifacts staging directory'

- task: CopyFiles@2
  inputs:
    sourceFolder: '$(Build.SourcesDirectory)'
    contents: 'package.json'
    targetFolder: $(Build.ArtifactStagingDirectory)/npm
  displayName: 'Copy package.json'

- task: PublishBuildArtifacts@1
  inputs:
    pathtoPublish: '$(Build.ArtifactStagingDirectory)/npm'
    artifactName: npm
  displayName: 'Publish npm artifact'

- script: | # Config can be set in .npmrc
    npm config set //registry.npmjs.org/:_authToken=$(MAP_NPMTOKEN)
    npm config set scope "@myscope"
    # npm config list
    # npm --version
    npm version patch --force
    npm publish --access public

- task: CmdLine@2 # Push changes to GitHub (substitute your repo)
  inputs:
    script: |
      git config --global user.email "[email protected]"
      git config --global user.name "Azure Pipeline"
      git add package.json
      git commit -a -m "Test Commit from Azure DevOps"
      git push -u origin HEAD:master

Build Python apps
11/2/2020 • 6 minutes to read

Azure Pipelines
Use a pipeline to automatically build and test your Python apps or scripts. After those steps are done, you can
then deploy or publish your project.
If you want an end-to-end walkthrough, see Use CI/CD to deploy a Python web app to Azure App Service on
Linux.
To create and activate an Anaconda environment and install Anaconda packages with conda , see Run pipelines
with Anaconda environments.

Create your first pipeline


Are you new to Azure Pipelines? If so, then we recommend you try this section before moving on to other
sections.

Get the code


Import this repo into your Git repo:

https://github.com/Microsoft/python-sample-vscode-flask-tutorial

Sign in to Azure Pipelines


Sign in to Azure Pipelines. After you sign in, your browser goes to https://dev.azure.com/my-organization-name
and displays your Azure DevOps dashboard.
Within your selected organization, create a project. If you don't have any projects in your organization, you see a
Create a project to get started screen. Otherwise, select the Create Project button in the upper-right corner
of the dashboard.
Create the pipeline
1. Sign in to your Azure DevOps organization and navigate to your project.
2. Go to Pipelines , and then select Create Pipeline .
3. Walk through the steps of the wizard by first selecting GitHub as the location of your source code.
4. You might be redirected to GitHub to sign in. If so, enter your GitHub credentials.
5. When the list of repositories appears, select your repository.
6. You might be redirected to GitHub to install the Azure Pipelines app. If so, select Approve & install .

When the Configure tab appears, select Python package . This will create a Python package to test on
multiple Python versions.

7. When your new pipeline appears, take a look at the YAML to see what it does. When you're ready, select
Save and run .

8. You're prompted to commit a new azure-pipelines.yml file to your repository. After you're happy with the
message, select Save and run again.
If you want to watch your pipeline in action, select the build job.

You just created and ran a pipeline that we automatically created for you, because your code appeared
to be a good match for the Python package template.

You now have a working YAML pipeline ( azure-pipelines.yml ) in your repository that's ready for you to
customize!
9. When you're ready to make changes to your pipeline, select it in the Pipelines page, and then Edit the
azure-pipelines.yml file.

See the sections below to learn some of the more common ways to customize your pipeline.
YAML
1. Add an azure-pipelines.yml file in your repository. Customize this snippet for your build.

trigger:
- master

pool: Default

steps:
- script: python -m pip install --upgrade pip
displayName: 'Install dependencies'

- script: pip install -r requirements.txt
  displayName: 'Install requirements'

2. Create a pipeline (if you don't know how, see Create your first pipeline), and for the template select YAML .
3. Set the Agent pool and YAML file path for your pipeline.
4. Save the pipeline and queue a build. When the Build #nnnnnnnn.n has been queued message
appears, select the number link to see your pipeline in action.
5. When you're ready to make changes to your pipeline, Edit it.
6. See the sections below to learn some of the more common ways to customize your pipeline.

Build environment
You don't have to set up anything for Azure Pipelines to build Python projects. Python is preinstalled on
Microsoft-hosted build agents for Linux, macOS, or Windows. To see which Python versions are preinstalled, see
Use a Microsoft-hosted agent.
Use a specific Python version
To use a specific version of Python in your pipeline, add the Use Python Version task to azure-pipelines.yml. This
snippet sets the pipeline to use Python 3.6:

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.6'

Use multiple Python versions


To run a pipeline with multiple Python versions, for example to test a package against those versions, define a
job with a matrix of Python versions. Then set the UsePythonVersion task to reference the matrix variable.

jobs:
- job: 'Test'
  pool:
    vmImage: 'ubuntu-16.04' # other options: 'macOS-10.14', 'vs2017-win2016'
  strategy:
    matrix:
      Python27:
        python.version: '2.7'
      Python35:
        python.version: '3.5'
      Python36:
        python.version: '3.6'

  steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '$(python.version)'

You can add tasks to run using each Python version in the matrix.

Run Python scripts


To run Python scripts in your repository, use a script element and specify a filename. For example:

- script: python src/example.py

You can also run inline Python scripts with the Python Script task:

- task: PythonScript@0
  inputs:
    scriptSource: 'inline'
    script: |
      print('Hello world 1')
      print('Hello world 2')

To parameterize script execution, use the PythonScript task with arguments values to pass arguments into the
executing process. You can use sys.argv or the more sophisticated argparse library to parse the arguments.

- task: PythonScript@0
  inputs:
    scriptSource: inline
    script: |
      import sys
      print ('Executing script file is:', str(sys.argv[0]))
      print ('The arguments are:', str(sys.argv))
      import argparse
      parser = argparse.ArgumentParser()
      parser.add_argument("--world", help="Provide the name of the world to greet.")
      args = parser.parse_args()
      print ('Hello ', args.world)
    arguments: --world Venus

Install dependencies
You can use scripts to install specific PyPI packages with pip . For example, this YAML installs or upgrades pip
and the setuptools and wheel packages.

- script: python -m pip install --upgrade pip setuptools wheel
  displayName: 'Install tools'

Install requirements
After you update pip and friends, a typical next step is to install dependencies from requirements.txt:

- script: pip install -r requirements.txt
  displayName: 'Install requirements'

Run tests
You can use scripts to install and run various tests in your pipeline.
Run lint tests with flake8
To install or upgrade flake8 and use it to run lint tests, use this YAML:

- script: |
    python -m pip install flake8
    flake8 .
  displayName: 'Run lint tests'

Test with pytest and collect coverage metrics with pytest-cov


Use this YAML to install pytest and pytest-cov , run tests, output test results in JUnit format, and output code
coverage results in Cobertura XML format:

- script: |
    pip install pytest
    pip install pytest-cov
    pytest tests --doctest-modules --junitxml=junit/test-results.xml --cov=. --cov-report=xml --cov-report=html
  displayName: 'Test with pytest'

Run tests with Tox


Azure Pipelines can run parallel Tox test jobs to split up the work. On a development computer, you have to run
your test environments in series. This sample uses tox -e py to run whichever version of Python is active for
the current job.

- job:
  pool:
    vmImage: 'ubuntu-16.04'
  strategy:
    matrix:
      Python27:
        python.version: '2.7'
      Python35:
        python.version: '3.5'
      Python36:
        python.version: '3.6'
      Python37:
        python.version: '3.7'

  steps:
  - task: UsePythonVersion@0
    displayName: 'Use Python $(python.version)'
    inputs:
      versionSpec: '$(python.version)'

  - script: pip install tox
    displayName: 'Install Tox'

  - script: tox -e py
    displayName: 'Run Tox'

Publish test results


Add the Publish Test Results task to publish JUnit or xUnit test results to the server:

- task: PublishTestResults@2
  condition: succeededOrFailed()
  inputs:
    testResultsFiles: '**/test-*.xml'
    testRunTitle: 'Publish test results for Python $(python.version)'

Publish code coverage results


Add the Publish Code Coverage Results task to publish code coverage results to the server. You can see coverage
metrics in the build summary, and download HTML reports for further analysis.

- task: PublishCodeCoverageResults@1
  inputs:
    codeCoverageTool: Cobertura
    summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/coverage.xml'
    reportDirectory: '$(System.DefaultWorkingDirectory)/**/htmlcov'

Package and deliver code


To authenticate with twine , use the Twine Authenticate task to store authentication credentials in the
PYPIRC_PATH environment variable.

- task: TwineAuthenticate@0
  inputs:
    artifactFeed: '<Azure Artifacts feed name>'
    pythonUploadServiceConnection: '<twine service connection from external organization>'

Then, add a custom script that uses twine to publish your packages.

- script: |
    twine upload -r "<feed or service connection name>" --config-file $(PYPIRC_PATH) <package path/files>

You can also use Azure Pipelines to build an image for your Python app and push it to a container registry.

Related extensions
PyLint Checker (Darren Fuller)
Python Test (Darren Fuller)
Azure DevOps plugin for PyCharm (IntelliJ) (Microsoft)
Python in Visual Studio Code (Microsoft)
Use CI/CD to deploy a Python web app to Azure
App Service on Linux
11/2/2020 • 17 minutes to read

Azure Pipelines
In this article, you use Azure Pipelines continuous integration and continuous delivery (CI/CD) to deploy a Python
web app to Azure App Service on Linux. You begin by running app code from a GitHub repository locally. You then
provision a target App Service through the Azure portal. Finally, you create an Azure Pipelines CI/CD pipeline that
automatically builds the code and deploys it to the App Service whenever there's a commit to the repository.

Create a repository for your app code


If you already have a Python web app to use, make sure it's committed to a GitHub repository.

NOTE
If your app uses Django and a SQLite database, it won't work for this walkthrough. For more information, see considerations
for Django later in this article. If your Django app uses a separate database, you can use it with this walkthrough.

If you need an app to work with, you can fork and clone the repository at
https://github.com/Microsoft/python-sample-vscode-flask-tutorial. The code is from the tutorial Flask in Visual Studio Code.
To test the example app locally, from the folder containing the code, run the following appropriate commands for
your operating system:

# Mac/Linux
sudo apt-get install python3-venv # If needed
python3 -m venv .env
source .env/bin/activate
pip install -r requirements.txt
export set FLASK_APP=hello_app.webapp
python3 -m flask run

# Windows
py -3 -m venv .env
.env\scripts\activate
pip install -r requirements.txt
$env:FLASK_APP = "hello_app.webapp"
python -m flask run

Open a browser and navigate to http://localhost:5000 to view the app. When you're finished, close the browser, and
stop the Flask server with Ctrl+C.

Provision the target Azure App Service


The quickest way to create an App Service instance is to use the Azure command-line interface (CLI) through the
interactive Azure Cloud Shell. In the following steps, you use az webapp up to both provision the App Service and
perform the first deployment of your app.
1. Sign in to the Azure portal at https://portal.azure.com.
2. Open the Azure CLI by selecting the Cloud Shell button on the portal's toolbar:

3. The Cloud Shell appears along the bottom of the browser. Select Bash from the dropdown:

4. In the Cloud Shell, clone your repository using git clone . For the example app, use:

git clone https://github.com/<your-alias>/python-sample-vscode-flask-tutorial

Replace <your-alias> with the name of the GitHub account you used to fork the repository.

TIP
To paste into the Cloud Shell, use Ctrl+Shift+V, or right-click and select Paste from the context menu.

NOTE
The Cloud Shell is backed by an Azure Storage account in a resource group called cloud-shell-storage-<your-region>.
That storage account contains an image of the Cloud Shell's file system, which stores the cloned repository. There is a
small cost for this storage. You can delete the storage account at the end of this article, along with other resources
you create.

5. In the Cloud Shell, change directories into the repository folder that has your Python app, so the
az webapp up command will recognize the app as Python.

cd python-sample-vscode-flask-tutorial

6. In the Cloud Shell, use az webapp up to create an App Service and initially deploy your app.

az webapp up -n <your-appservice>

Change <your-appservice> to a name for your app service that's unique across Azure. Typically, you use a
personal or company name along with an app identifier, such as <your-name>-flaskpipelines . The app URL
becomes <your-appservice>.azurewebsites.net.
When the command completes, it shows JSON output in the Cloud Shell.
TIP
If you encounter a "Permission denied" error with a .zip file, you may have tried to run the command from a folder
that doesn't contain a Python app. The az webapp up command then tries to create a Windows app service plan,
and fails.

7. If your app uses a custom startup command, set the az webapp config property. For example, the python-
sample-vscode-flask-tutorial app contains a file named startup.txt that contains its specific startup
command, so you set the az webapp config property to startup.txt .
a. From the first line of output from the previous az webapp up command, copy the name of your
resource group, which is similar to <your-name>_rg_Linux_<your-region> .
b. Enter the following command, using your resource group name, your app service name, and your
startup file or command:

az webapp config set -g <your-resource-group> -n <your-appservice> --startup-file <your-startup-file-or-command>

Again, when the command completes, it shows JSON output in the Cloud Shell.
8. To see the running app, open a browser and go to http://<your-appservice>.azurewebsites.net. If you see a
generic page, wait a few seconds for the App Service to start, and refresh the page.

NOTE
For a detailed description of the specific tasks performed by the az webapp up command, see Provision an App
Service with single commands at the end of this article.

Create an Azure DevOps project and connect to Azure


To deploy to Azure App Service from Azure Pipelines, you need to establish a service connection between the two
services.
1. In a browser, go to dev.azure.com. If you don't yet have an account on Azure DevOps, select Start free and
get a free account. If you have an account already, select Sign in to Azure DevOps .

IMPORTANT
To simplify the service connection, use the same email address for Azure DevOps as you use for Azure.

2. Once you sign in, the browser displays your Azure DevOps dashboard, at the URL
https://dev.azure.com/<your-organization-name>. An Azure DevOps account can belong to one or more
organizations, which are listed on the left side of the Azure DevOps dashboard. If more than one
organization is listed, select the one you want to use for this walkthrough. By default, Azure DevOps creates
a new organization using the email alias you used to sign in.
A project is a grouping for boards, repositories, pipelines, and other aspects of Azure DevOps. If your
organization doesn't have any projects, enter the project name Flask Pipelines under Create a project to
get started, and then select Create project.
If your organization already has projects, select New project on the organization page. In the Create new
project dialog box, enter the project name Flask Pipelines, and select Create .
3. From the new project page, select Project settings from the left navigation.

4. On the Project Settings page, select Pipelines > Service connections, then select New service
connection, and then select Azure Resource Manager from the dropdown.
5. In the Add an Azure Resource Manager service connection dialog box:
a. Give the connection a name. Make note of the name to use later in the pipeline.
b. For Scope level , select Subscription .
c. Select the subscription for your App Service from the Subscription drop-down list.
d. Under Resource Group , select your resource group from the dropdown.
e. Make sure the option Allow all pipelines to use this connection is selected, and then select OK .
The new connection appears in the Service connections list, and is ready for Azure Pipelines to use from
the project.

NOTE
If you need to use an Azure subscription from a different email account, follow the instructions on Create an Azure
Resource Manager service connection with an existing service principal.

Create a Python-specific pipeline to deploy to App Service


1. From your project page left navigation, select Pipelines .
2. Select New pipeline :

3. On the Where is your code screen, select GitHub . You may be prompted to sign into GitHub.
4. On the Select a repository screen, select the repository that contains your app, such as your fork of the
example app.

5. You may be prompted to enter your GitHub password again as a confirmation, and then GitHub prompts
you to install the Azure Pipelines extension:

On this screen, scroll down to the Repository access section, choose whether to install the extension on all
repositories or only selected ones, and then select Approve and install :
6. On the Configure your pipeline screen, select Python to Linux Web App on Azure .
Your new pipeline appears. When prompted, select the Azure subscription in which you created your Web
App.
Select the Web App
Select Validate and configure
Azure Pipelines creates an azure-pipelines.yml file that defines your CI/CD pipeline as a series of stages,
jobs, and steps, where each step contains the details for different tasks and scripts. Take a look at the
pipeline to see what it does. Make sure all the default inputs are appropriate for your code.
YAML pipeline explained
The YAML file contains the following key elements:
The trigger at the top indicates the commits that trigger the pipeline, such as commits to the master
branch.
The variables that parameterize the YAML template

TIP
To avoid hard-coding specific variable values in your YAML file, you can define variables in the pipeline's web interface
instead. For more information, see Variables - Secrets.

The stages:

Build stage, which builds your project, and a Deploy stage, which deploys it to Azure as a Linux web app.
The Deploy stage also creates an Environment with a default name that's the same as the Web App. You can
choose to modify the environment name.
Each stage has a pool element that specifies one or more virtual machines (VMs) in which the pipeline runs
the steps . By default, the pool element contains only a single entry for an Ubuntu VM. You can use a pool
to run tests in multiple environments as part of the build, such as using different Python versions for
creating a package.
The steps element can contain children like task , which runs a specific task as defined in the Azure
Pipelines task reference, and script , which runs an arbitrary set of commands.
The first task under Build stage is UsePythonVersion, which specifies the version of Python to use on the
build agent. The @<n> suffix indicates the version of the task. The @0 indicates the preview version. Then we
have a script-based task that creates a virtual environment and installs dependencies from the
requirements.txt file.

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '$(pythonVersion)'
  displayName: 'Use Python $(pythonVersion)'

- script: |
    python -m venv antenv
    source antenv/bin/activate
    python -m pip install --upgrade pip
    pip install setup
    pip install -r requirements.txt
  workingDirectory: $(projectRoot)
  displayName: "Install requirements"

The next step creates the .zip file that the Deploy stage of the pipeline deploys. To create the .zip file,
add an ArchiveFiles task to the end of the YAML file:

- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: '$(Build.SourcesDirectory)'
    includeRootFolder: false
    archiveType: 'zip'
    archiveFile: '$(Build.ArtifactStagingDirectory)/Application$(Build.BuildId).zip'
    replaceExistingArchive: true
    verbose: # (no value); this input is optional

- publish: $(Build.ArtifactStagingDirectory)/Application$(Build.BuildId).zip
  displayName: 'Upload package'
  artifact: drop

You use $() in a parameter value to reference variables. The built-in Build.SourcesDirectory variable
contains the location on the build agent where the pipeline cloned the app code. The archiveFile
parameter indicates where to place the .zip file. In this case, the archiveFile parameter uses the built-in
variable Build.ArtifactStagingDirectory.

IMPORTANT
When deploying to Azure App Service, be sure to use includeRootFolder: false . Otherwise, the contents of the
.zip file are put in a folder named s, for "sources," which is replicated on the App Service. The App Service on Linux
container then can't find the app code.

Then we have the task to upload the artifacts.


In the Deploy stage, we use the deployment keyword to define a deployment job targeting an environment.
By using the template, an environment with the same name as the Web app is automatically created if it doesn't
already exist. Alternatively, you can pre-create the environment and provide the environmentName value.
Within the deployment job, first task is UsePythonVersion, which specifies the version of Python to use on
the build agent.
We then use the AzureWebApp task to deploy the .zip file to the App Service you identified by the
azureServiceConnectionId and webAppName variables at the beginning of the pipeline file. Paste the following
code at the end of the file:

jobs:
- deployment: DeploymentJob
  pool:
    vmImage: $(vmImageName)
  environment: $(environmentName)
  strategy:
    runOnce:
      deploy:
        steps:
        - task: UsePythonVersion@0
          inputs:
            versionSpec: '$(pythonVersion)'
          displayName: 'Use Python version'

        - task: AzureWebApp@1
          displayName: 'Deploy Azure Web App : {{ webAppName }}'
          inputs:
            azureSubscription: $(azureServiceConnectionId)
            appName: $(webAppName)
            package: $(Pipeline.Workspace)/drop/$(Build.BuildId).zip

            # The following parameter is specific to the Flask example code. You may
            # or may not need a startup command for your app.
            startUpCommand: 'gunicorn --bind=0.0.0.0 --workers=4 startup:app'

The StartupCommand parameter shown here is specific to the python-vscode-flask-tutorial example code,
which defines the app in the startup.py file. By default, Azure App Service looks for the Flask app object in a
file named app.py or application.py. If your code doesn't follow this pattern, you need to customize the
startup command. Django apps may not need customization at all. For more information, see How to
configure Python on Azure App Service - Customize startup command.
Also, because the python-vscode-flask-tutorial repository contains the same startup command in a file
named startup.txt, you could specify that file in the StartupCommand parameter rather than the command, by
using StartupCommand: 'startup.txt' .

Run the pipeline


You're now ready to try it out!
1. Select Save at upper right in the editor, and in the pop-up window, add a commit message and select Save .
2. Select Run on the pipeline editor, and select Run again in the Run pipeline dialog box. Azure Pipelines
queues another pipeline run, acquires an available build agent, and has that build agent run the pipeline.
The pipeline takes a few minutes to complete, especially the deployment steps. You should see green
checkmarks next to each of the steps.
If there's an error, you can quickly return to the YAML editor by selecting the vertical dots at upper right and
selecting Edit pipeline :
3. From the build page, select the Azure Web App task to display its output. To visit the deployed site, hold
down the Ctrl key and select the URL after App Service Application URL.
If you're using the Flask example, the app should appear as follows:

IMPORTANT
If your app fails because of a missing dependency, then your requirements.txt file was not processed during deployment. This
behavior happens if you created the web app directly on the portal rather than using the az webapp up command as
shown in this article.
The az webapp up command specifically sets the build action SCM_DO_BUILD_DURING_DEPLOYMENT to true . If you
provisioned the app service through the portal, however, this action is not automatically set.
The following steps set the action:
1. Open the Azure portal, select your App Service, then select Configuration .
2. Under the Application Settings tab, select New Application Setting .
3. In the popup that appears, set Name to SCM_DO_BUILD_DURING_DEPLOYMENT , set Value to true , and select OK .
4. Select Save at the top of the Configuration page.
5. Run the pipeline again. Your dependencies should be installed during deployment.

Run a post-deployment script


A post-deployment script can, for example, define environment variables expected by the app code. Add the script
as part of the app code and execute it using startup command.
To avoid hard-coding specific variable values in your YAML file, you can instead define variables in the pipeline's
web interface and then refer to the variable name in the script. For more information, see Variables - Secrets.

Considerations for Django


As noted earlier in this article, you can use Azure Pipelines to deploy Django apps to Azure App Service on Linux,
provided that you're using a separate database. You can't use a SQLite database, because App Service locks the
db.sqlite3 file, preventing both reads and writes. This behavior doesn't affect an external database.
As described in Configure Python app on App Service - Container startup process, App Service automatically looks
for a wsgi.py file within your app code, which typically contains the app object. If you need to customize the startup
command in any way, use the StartupCommand parameter in the AzureWebApp@1 step of your YAML pipeline file, as
described in the previous section.
When using Django, you typically want to migrate the data models using manage.py migrate after deploying the
app code. You can add startUpCommand with post-deployment script for this purpose:

startUpCommand: python3.6 manage.py migrate

Run tests on the build agent


As part of your build process, you may want to run tests on your app code. Tests run on the build agent, so you
probably need to first install your dependencies into a virtual environment on the build agent computer. After the
tests run, delete the virtual environment before you create the .zip file for deployment. The following script
elements illustrate this process. Place them before the ArchiveFiles@2 task in the azure-pipelines.yml file. For more
information, see Run cross-platform scripts.

# The | symbol is a continuation character, indicating a multi-line script.
# A single-line script can immediately follow "- script:".
- script: |
    python3.6 -m venv .env
    source .env/bin/activate
    pip3.6 install setuptools
    pip3.6 install -r requirements.txt

  # The displayName shows in the pipeline UI when a build runs
  displayName: 'Install dependencies on build agent'

- script: |
    # Put commands to run tests here
  displayName: 'Run tests'

- script: |
    echo Deleting .env
    deactivate
    rm -rf .env
  displayName: 'Remove .env before zip'

You can also use a task like PublishTestResults@2 to make test results appear in the pipeline results screen. For
more information, see Build Python apps - Run tests.
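
For example, if your tests write JUnit XML under a junit folder as in the pytest snippet shown earlier, the publish step might look like this sketch; the result path and run title are assumptions to adjust for your project:

# Sketch only: the junit/*.xml path assumes the pytest --junitxml output shown earlier.
- task: PublishTestResults@2
  condition: succeededOrFailed()
  inputs:
    testResultsFiles: 'junit/*.xml'
    testRunTitle: 'Python unit tests'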

Provision an App Service with single commands


The az webapp up command used earlier in this article is a convenient method to provision the App Service and
initially deploy your app in a single step. If you want more control over the deployment process, you can use single
commands to accomplish the same tasks. For example, you might want to use a specific name for the resource
group, or create an App Service within an existing App Service Plan.
The following steps perform the equivalent of the az webapp up command:
1. Create a resource group.
A resource group is a collection of related Azure resources. Creating a resource group makes it easy to
delete all those resources at once when you no longer need them. In the Cloud Shell, run the following
command to create a resource group in your Azure subscription. Set a location for the resource group by
specifying the value of <your-region> . JSON output appears in the Cloud Shell when the command
completes successfully.

az group create -l <your-region> -n <your-resource-group>

2. Create an App Service Plan.


An App Service runs inside a VM defined by an App Service Plan. Run the following command to create an
App Service Plan, substituting your own values for <your-resource-group> and <your-appservice-plan> . The
--is-linux is required for Python deployments. If you want a pricing plan other than the default F1 Free
plan, use the sku argument. The --sku B1 specifies the lower-price compute tier for the VM. You can easily
delete the plan later by deleting the resource group.

az appservice plan create -g <your-resource-group> -n <your-appservice-plan> --is-linux --sku B1

Again, you see JSON output in the Cloud Shell when the command completes successfully.
3. Create an App Service instance in the plan.
Run the following command to create the App Service instance in the plan, replacing <your-appservice>
with a name that's unique across Azure. Typically, you use a personal or company name along with an app
identifier, such as <your-name>-flaskpipelines . The command fails if the name is already in use. By assigning
the App Service to the same resource group as the plan, it's easy to clean up all the resources at once.

az webapp create -g <your-resource-group> -p <your-appservice-plan> -n <your-appservice> --runtime "Python|3.6"

NOTE
If you want to deploy your code at the same time you create the app service, you can use the
--deployment-source-url and --deployment-source-branch arguments with the az webapp create command.
For more information, see az webapp create.

TIP
If you see the error message "The plan (name) doesn't exist", and you're sure that the plan name is correct, check that
the resource group specified with the -g argument is also correct, and the plan you identify is part of that resource
group. If you misspell the resource group name, the command doesn't find the plan in that nonexistent resource
group, and gives this particular error.

4. If your app requires a custom startup command, use the az webapp config set command, as described
earlier in Provision the target Azure App Service. For example, to customize the App Service with your
resource group, app name, and startup command, run:

az webapp config set -g <your-resource-group> -n <your-appservice> --startup-file <your-startup-command-or-file>

The App Service at this point contains only default app code. You can now use Azure Pipelines to deploy
your specific app code.

Clean up resources
To avoid incurring ongoing charges for any Azure resources you created in this walkthrough, such as a B1 App
Service Plan, delete the resource group that contains the App Service and the App Service Plan. To delete the
resource group from the Azure portal, select Resource groups in the left navigation. In the resource group list,
select the ... to the right of the resource group you want to delete, select Delete resource group , and follow the
prompts.
You can also use az group delete in the Cloud Shell to delete resource groups.
To delete the storage account that maintains the file system for Cloud Shell, which incurs a small monthly charge,
delete the resource group that begins with cloud-shell-storage- .

Next steps
Build Python apps
Learn about build agents
Configure Python app on App Service
Run pipelines with Anaconda environments
11/2/2020 • 3 minutes to read

Azure Pipelines
This guidance explains how to set up and use Anaconda environments in your pipelines.

Get started
Follow these instructions to set up a pipeline for a sample Python app with Anaconda environment.
1. The code in the following repository is a simple Python app. To get started, fork this repo to your GitHub
account.

https://github.com/MicrosoftDocs/pipelines-anaconda

2. Sign in to your Azure DevOps organization and navigate to your project.


3. In your project, navigate to the Pipelines page. Then choose the action to create a new pipeline.
4. Walk through the steps of the wizard by first selecting GitHub as the location of your source code.
5. You might be redirected to GitHub to sign in. If so, enter your GitHub credentials.
6. When the list of repositories appears, select your sample repository.
7. Azure Pipelines will analyze the code in your repository and detect an existing azure-pipelines.yml file.
8. Select Run .
9. A new run is started. Wait for the run to finish.

TIP
To make changes to the YAML file as described in this topic, select the pipeline in the Pipelines page, and then Edit the
azure-pipelines.yml file.

Add conda to your system path


On hosted agents, conda is left out of PATH by default to keep its Python version from conflicting with other
installed versions. The task.prependpath agent command will make it available to all subsequent steps.
Hosted Ubuntu 16.04
Hosted macOS
Hosted VS2017

- bash: echo "##vso[task.prependpath]$CONDA/bin"
  displayName: Add conda to PATH

Create an environment
From command-line arguments
The conda create command will create an environment with the arguments you pass it.
Hosted Ubuntu 16.04
Hosted macOS
Hosted VS2017

- bash: conda create --yes --quiet --name myEnvironment
  displayName: Create Anaconda environment

From YAML
You can check in an environment.yml file to your repo that defines the configuration for an Anaconda environment.

- script: conda env create --quiet --file environment.yml
  displayName: Create Anaconda environment

NOTE
If you are using a self-hosted agent and don't remove the environment at the end, you'll get an error on the next build since
the environment already exists. To resolve, use the --force argument:
conda env create --quiet --force --file environment.yml .

Install packages from Anaconda


The following YAML installs the scipy package in the conda environment named myEnvironment .
Hosted Ubuntu 16.04
Hosted macOS
Hosted VS2017

- bash: |
    source activate myEnvironment
    conda install --yes --quiet --name myEnvironment scipy
  displayName: Install Anaconda packages

Run pipeline steps in an Anaconda environment


NOTE
Each build step runs in its own process. When you activate an Anaconda environment, it will edit PATH and make other
changes to its current process. Therefore, an Anaconda environment must be activated separately for each step.

Hosted Ubuntu 16.04


Hosted macOS
Hosted VS2017
- bash: |
    source activate myEnvironment
    python -m pytest --junitxml=junit/unit-test.xml
  displayName: pytest

- task: PublishTestResults@2
  inputs:
    testResultsFiles: 'junit/*.xml'
  condition: succeededOrFailed()

FAQs
Why am I getting a "Permission denied" error?
On Hosted macOS, the agent user doesn't have ownership of the directory where Miniconda is installed. For a fix,
see the "Hosted macOS" tab under Add conda to your system path.
Why does my build stop responding on a conda create or conda install step?
If you forget to pass --yes , conda will stop and wait for user interaction.
Why is my script on Windows stopping after it activates the environment?
On Windows, activate is a Batch script. You must use the call command to resume running your script after
activating. See examples of using call above.
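
For reference, a Windows variant of the install step might look like this sketch (it assumes the same myEnvironment environment created earlier and that conda's Scripts folder is on PATH):

# Sketch only: Windows agents need 'call' to resume the batch script after activation.
- script: |
    call activate myEnvironment
    conda install --yes --quiet --name myEnvironment scipy
  displayName: Install Anaconda packages (Windows)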
How can I run my tests with multiple versions of Python?
See Build Python apps in Azure Pipelines.
Build C++ Windows apps
2/26/2020 • 2 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

This guidance explains how to automatically build C++ projects for Windows.

NOTE
This guidance applies to TFS version 2017.3 and newer.

Example
This example shows how to build a C++ project. To start, import (into Azure Repos or TFS) or fork (into GitHub)
this repo:

https://github.com/adventworks/cpp-sample

NOTE
This scenario works on TFS, but some of the following instructions might not exactly match the version of TFS that you are
using. Also, you'll need to set up a self-hosted agent, possibly also installing software. If you are a new user, you might have
a better learning experience by trying this procedure out first using a free Azure DevOps organization. Then change the
selector in the upper-left corner of this page from Team Foundation Server to Azure DevOps .

After you have the sample code in your own repository, create a pipeline using the instructions in Create
your first pipeline and select the .NET Desktop template. This automatically adds the tasks required to
build the code in the sample repository.
Save the pipeline and queue a build to see it in action.

Build multiple configurations


It is often required to build your app in multiple configurations. The following steps extend the example above to
build the app on four configurations: [Debug, x86], [Debug, x64], [Release, x86], [Release, x64].
1. Click the Variables tab and modify these variables:
BuildConfiguration = debug, release

BuildPlatform = x86, x64

2. Select Tasks and click on the agent job . From the Execution plan section, select Multi-configuration to
change the options for the job:
Specify Multipliers: BuildConfiguration, BuildPlatform

Specify Maximum number of agents


3. Select Parallel if you have multiple build agents and want to build your configuration/platform pairings in
parallel.

Copy output
To copy the results of the build to Azure Pipelines or TFS, perform these steps:
1. Click the Copy Files task. Specify the following arguments:
Contents: **\$(BuildConfiguration)\**\?(*.exe|*.dll|*.pdb)
Build Java apps
11/2/2020 • 6 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017

NOTE
This guidance uses YAML-based pipelines available in Azure Pipelines. For TFS, use tasks that correspond to those used in
the YAML below.

This guidance explains how to automatically build Java projects. (If you're working on an Android project, see
Build, test, and deploy Android apps.)

Create your first pipeline


Are you new to Azure Pipelines? If so, then we recommend you try this section before moving on
to other sections.

Get the code


Fork this repo in GitHub, or import it into your Git repo in Azure DevOps Server or TFS:

https://github.com/MicrosoftDocs/pipelines-java

Sign in to Azure Pipelines


Sign in to Azure Pipelines. After you sign in, your browser goes to https://dev.azure.com/my-organization-name
and displays your Azure DevOps dashboard.
Within your selected organization, create a project. If you don't have any projects in your organization, you see a
Create a project to get started screen. Otherwise, select the Create Project button in the upper-right
corner of the dashboard.
Create the pipeline
1. Sign in to your Azure DevOps organization and navigate to your project.
2. Go to Pipelines , and then select Create Pipeline .
3. Walk through the steps of the wizard by first selecting GitHub as the location of your source code.
4. You might be redirected to GitHub to sign in. If so, enter your GitHub credentials.
5. When the list of repositories appears, select your repository.
6. You might be redirected to GitHub to install the Azure Pipelines app. If so, select Approve & install .

When the Configure tab appears, select Maven .

1. When your new pipeline appears, take a look at the YAML to see what it does. When you're ready, select
Save and run .

2. You're prompted to commit a new azure-pipelines.yml file to your repository. After you're happy with the
message, select Save and run again.
If you want to watch your pipeline in action, select the build job.

You just created and ran a pipeline that we automatically created for you, because your code
appeared to be a good match for the Maven template.

You now have a working YAML pipeline ( azure-pipelines.yml ) in your repository that's ready for you to
customize!
3. When you're ready to make changes to your pipeline, select it in the Pipelines page, and then Edit the
azure-pipelines.yml file.

4. See the sections below to learn some of the more common ways to customize your pipeline.
1. Create a pipeline (if you don't know how, see Create your first pipeline, and for the template select
Maven . This template automatically adds the tasks you need to build the code in the sample repository.
2. Save the pipeline and queue a build. When the Build #nnnnnnnn.n has been queued message
appears, select the number link to see your pipeline in action.
You now have a working pipeline that's ready for you to customize!
3. When you're ready to make changes to your pipeline, Edit it.
4. See the sections below to learn some of the more common ways to customize your pipeline.

Build environment
You can use Azure Pipelines to build Java apps without needing to set up any infrastructure of your own. You can
build on Windows, Linux, or MacOS images. The Microsoft-hosted agents in Azure Pipelines have modern JDKs
and other tools for Java pre-installed. To know which versions of Java are installed, see Microsoft-hosted agents.
Update the following snippet in your azure-pipelines.yml file to select the appropriate image.

pool:
  vmImage: 'ubuntu-16.04' # other options: 'macOS-10.14', 'vs2017-win2016'

See Microsoft-hosted agents for a complete list of images.


As an alternative to using Microsoft-hosted agents, you can set up self-hosted agents with Java installed. You can
also use self-hosted agents to save additional time if you have a large repository or you run incremental builds.
Your builds run on a self-hosted agent. Make sure that you have Java installed on the agent.
Build your code
Maven
To build with Maven, add the following snippet to your azure-pipelines.yml file. Change values, such as the
path to your pom.xml file, to match your project configuration. See the Maven task for more about these
options.

steps:
- task: Maven@3
  inputs:
    mavenPomFile: 'pom.xml'
    mavenOptions: '-Xmx3072m'
    javaHomeOption: 'JDKVersion'
    jdkVersionOption: '1.11'
    jdkArchitectureOption: 'x64'
    publishJUnitResults: false
    testResultsFiles: '**/TEST-*.xml'
    goals: 'package'

Customize the build path


Adjust the mavenPomFile value if your pom.xml file isn't in the root of the repository. The file path value should
be relative to the root of the repository, such as IdentityService/pom.xml or
$(system.defaultWorkingDirectory)/IdentityService/pom.xml .

Customize Maven goals


Set the goals value to a space-separated list of goals for Maven to execute, such as clean package .
For details about common Java phases and goals, see Apache's Maven documentation.
Gradle
To build with Gradle, add the following snippet to your azure-pipelines.yml file. See the Gradle task for more
about these options.

steps:
- task: Gradle@2
  inputs:
    workingDirectory: ''
    gradleWrapperFile: 'gradlew'
    gradleOptions: '-Xmx3072m'
    javaHomeOption: 'JDKVersion'
    jdkVersionOption: '1.11'
    jdkArchitectureOption: 'x64'
    publishJUnitResults: false
    testResultsFiles: '**/TEST-*.xml'
    tasks: 'build'

Choose the version of Gradle


The version of Gradle installed on the agent machine will be used unless your repository's
gradle/wrapper/gradle-wrapper.properties file has a distributionUrl property that specifies a different Gradle
version to download and use during the build.
Adjust the build path
Adjust the workingDirectory value if your gradlew file isn't in the root of the repository. The directory value
should be relative to the root of the repository, such as IdentityService or
$(system.defaultWorkingDirectory)/IdentityService .

Adjust the gradleWrapperFile value if your gradlew file isn't in the root of the repository. The file path value
should be relative to the root of the repository, such as IdentityService/gradlew or
$(system.defaultWorkingDirectory)/IdentityService/gradlew .
Adjust Gradle tasks
Adjust the tasks value for the tasks that Gradle should execute, such as build or check .
For details about common Java Plugin tasks for Gradle, see Gradle's documentation.
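For example, assuming your project lives in a hypothetical IdentityService subfolder and you want Gradle to run check instead of build, the task might be adjusted as in this sketch:

steps:
- task: Gradle@2
  inputs:
    workingDirectory: 'IdentityService'          # assumed subfolder; adjust to your layout
    gradleWrapperFile: 'IdentityService/gradlew'
    tasks: 'check'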
Ant
To build with Ant, add the following snippet to your azure-pipelines.yml file. Change values, such as the path to
your build.xml file, to match your project configuration. See the Ant task for more about these options.

steps:
- task: Ant@1
  inputs:
    workingDirectory: ''
    buildFile: 'build.xml'
    javaHomeOption: 'JDKVersion'
    jdkVersionOption: '1.11'
    jdkArchitectureOption: 'x64'
    publishJUnitResults: false
    testResultsFiles: '**/TEST-*.xml'

Script
To build with a command line or script, add one of the following snippets to your azure-pipelines.yml file.
Inline script
The script: step runs an inline script using Bash on Linux and macOS and Command Prompt on Windows. For
details, see the Bash or Command line task.

steps:
- script: |
    echo Starting the build
    mvn package
  displayName: 'Build with Maven'

Script file
This snippet runs a script file that is in your repository. For details, see the Shell Script, Batch script, or
PowerShell task.

steps:
- task: ShellScript@2
  inputs:
    scriptPath: 'build.sh'

Next Steps
After you've built and tested your app, you can upload the build output to Azure Pipelines or TFS, create and
publish a Maven package, or package the build output into a .war/jar file to be deployed to a web application.
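For example, a minimal sketch for keeping the packaged .war or .jar with the build record might look like the following; the target/ patterns assume the default Maven output folder and may need adjusting for your project:

- task: CopyFiles@2
  inputs:
    contents: |
      **/target/*.war
      **/target/*.jar
    targetFolder: '$(build.artifactStagingDirectory)'
- task: PublishBuildArtifacts@1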
Next we recommend that you learn more about creating a CI/CD pipeline for the deployment target you choose:
Build and deploy to a Java web app
Build and deploy Java to Azure Functions
Build and deploy Java to Azure Kubernetes service
Build and deploy to a Java web app

Azure Pipelines
A web app is a lightweight way to host a web application. In this step-by-step guide you'll learn how to create a
pipeline that continuously builds and deploys your Java app. Your team can then automatically build each commit
in GitHub, and if you want, automatically deploy the change to an Azure App Service. You can use whichever
runtime you prefer: Tomcat or Java SE.

Prerequisites
A GitHub account, where you can create a repository. If you don't have one, you can create one for free.
An Azure DevOps organization. If you don't have one, you can create one for free. (An Azure DevOps
organization is different from your GitHub organization. Give them the same name if you want alignment
between them.)
If your team already has one, then make sure you're an administrator of the Azure DevOps project that you
want to use.
An Azure account. If you don't have one, you can create one for free.

TIP
If you're new at this, the easiest way to get started is to use the same email address as the owner of both the Azure
Pipelines organization and the Azure subscription.

Get the code


Select the runtime you want to use.
Tomcat
Java SE
If you already have an app in GitHub that you want to deploy, you can create a pipeline for that code.
If you are a new user, fork this repo in GitHub:

https://github.com/spring-petclinic/spring-framework-petclinic

Create an Azure App Service


Sign in to the Azure Portal, and then select the Cloud Shell button in the upper-right corner.
Create an Azure App Service on Linux.
Tomcat
Java SE
# Create a resource group
az group create --location westus --name myapp-rg

# Create an app service plan of type Linux


az appservice plan create -g myapp-rg -n myapp-service-plan --is-linux

# Create an App Service from the plan with Tomcat and JRE 8 as the runtime
az webapp create -g myapp-rg -p myapp-service-plan -n my-app-name --runtime "TOMCAT|8.5-jre8"

Sign in to Azure Pipelines and connect to Azure


Sign in to Azure Pipelines. After you sign in, your browser goes to https://dev.azure.com/my-organization-name and
displays your Azure DevOps dashboard.
Within your selected organization, create a project. If you don't have any projects in your organization, you see a
Create a project to get started screen. Otherwise, select the Create Project button in the upper-right corner of
the dashboard.
Now create the service connection:
1. From your project dashboard, select Project settings on the bottom left.
2. On the settings page, select Pipelines > Service connections, select New service connection, and then
select Azure Resource Manager.
3. The Add an Azure Resource Manager service connection dialog box appears.
Name: Type a name and then copy and paste it into a text file so you can use it later.
Scope: Select Subscription.
Subscription: Select the subscription in which you created the App Service.
Resource Group: Select the resource group you created earlier.
Select Allow all pipelines to use this connection.
TIP
If you need to create a connection to an Azure subscription that's owned by someone else, see Create an Azure Resource
Manager service connection with an existing service principal.

Create the pipeline


1. Sign in to your Azure DevOps organization and navigate to your project.
2. Go to Pipelines , and then select Create Pipeline .
3. Walk through the steps of the wizard by first selecting GitHub as the location of your source code.
4. You might be redirected to GitHub to sign in. If so, enter your GitHub credentials.
5. When the list of repositories appears, select your repository.
6. You might be redirected to GitHub to install the Azure Pipelines app. If so, select Approve & install .
When the Configure tab appears, select Show more , and then select Maven package Java project Web App
to Linux on Azure . Your new pipeline appears.
1. When prompted, select the Azure subscription in which you created your Web App.
2. Select the Web App.
3. Select Validate and configure .
As Azure Pipelines creates an azure-pipelines.yml file, which defines your CI/CD pipeline, it:
Includes a Build stage, which builds your project, and a Deploy stage, which deploys it to Azure as a Linux
web app.
As part of the Deploy stage, it also creates an environment whose default name is the same as the Web App. You
can choose to modify the environment name.
4. Take a look at the pipeline to see what it does. Make sure that all the default inputs are appropriate for your
code.
5. After you've looked at what the pipeline does, select Save and run , after which you're prompted for a
commit message because Azure Pipelines adds the azure-pipelines.yml file to your repository. After editing
the message, select Save and run again to see your pipeline in action.

See the pipeline run, and your app deployed


As your pipeline runs, watch as your build stage, and then your deployment stage, go from blue (running) to green
(completed). You can select the stages and jobs to watch your pipeline in action.
After the pipeline has run, check out your site!
https://my-app-name.azurewebsites.net

Also explore deployment history for the App by navigating to the "Environment". From the pipeline summary:
1. Select the Environments tab.
2. Select View environment .

Clean up resources
Whenever you're done with the resources you created above, you can use the following command to delete them:

az group delete --name myapp-rg

Type y when prompted.


Build and deploy Java to Azure Functions

Azure Pipelines
You can use Azure Functions to run small pieces of code in the cloud without the overhead of running a server. In
this step-by-step guide you'll learn how to create a pipeline that continuously builds and deploys your Java
function app. Your team can then automatically build each commit in GitHub, and if you want, automatically deploy
the change to Azure Functions.

Prerequisites
A GitHub account, where you can create a repository. If you don't have one, you can create one for free.
An Azure DevOps organization. If you don't have one, you can create one for free. (An Azure DevOps
organization is different from your GitHub organization. Give them the same name if you want alignment
between them.)
If your team already has one, then make sure you're an administrator of the Azure DevOps project that you
want to use.
An Azure account. If you don't have one, you can create one for free.

TIP
If you're new at this, the easiest way to get started is to use the same email address as the owner of both the Azure
Pipelines organization and the Azure subscription.

Get the code


If you already have an app in GitHub that you want to deploy, you can create a pipeline for that code.
If you are a new user, fork this repo in GitHub:

https://github.com/MicrosoftDocs/pipelines-java-function

Create an Azure Functions app


Sign in to the Azure Portal, and then select the Cloud Shell button in the upper-right corner.
Create an Azure Functions app. Select the runtime you want to use.

# Create a resource group


az group create --location westus --name myapp-rg

# Create a storage account


az storage account create --name mystorage --location westeurope --resource-group myapp-rg --sku Standard_LRS

# Create an Azure Functions app


az functionapp create --resource-group myapp-rg --consumption-plan-location westeurope \
--name my-app-name --storage-account mystorage --runtime java
Sign in to Azure Pipelines and connect to Azure
Sign in to Azure Pipelines. After you sign in, your browser goes to https://dev.azure.com/my-organization-name and
displays your Azure DevOps dashboard.
Within your selected organization, create a project. If you don't have any projects in your organization, you see a
Create a project to get started screen. Otherwise, select the Create Project button in the upper-right corner
of the dashboard.
Now create the service connection:
1. From your project dashboard, select Project settings on the bottom left.
2. On the settings page, select Pipelines > Service connections, select New service connection, and then
select Azure Resource Manager.
3. The Add an Azure Resource Manager service connection dialog box appears.
Name: Type a name and then copy and paste it into a text file so you can use it later.
Scope: Select Subscription.
Subscription: Select the subscription in which you created the App Service.
Resource Group: Select the resource group you created earlier.
Select Allow all pipelines to use this connection.
TIP
If you need to create a connection to an Azure subscription that's owned by someone else, see Create an Azure Resource
Manager service connection with an existing service principal.

Create the pipeline


1. Sign in to your Azure DevOps organization and navigate to your project.
2. Go to Pipelines , and then select Create Pipeline .
3. Walk through the steps of the wizard by first selecting GitHub as the location of your source code.
4. You might be redirected to GitHub to sign in. If so, enter your GitHub credentials.
5. When the list of repositories appears, select your repository.
6. You might be redirected to GitHub to install the Azure Pipelines app. If so, select Approve & install .
When the Configure tab appears, select Maven . Your new pipeline appears.
1. When prompted, select the Azure subscription in which you created your Web App.
2. Select the Web App.
3. Select Validate and configure .
As Azure Pipelines creates an azure-pipelines.yml file, which defines your CI/CD pipeline, it:
Includes a Build stage, which builds your project, and a Deploy stage, which deploys it to Azure as a Linux
web app.
As part of the Deploy stage, it also creates an environment whose default name is the same as the Web App. You
can choose to modify the environment name.
4. Take a look at the pipeline to see what it does. Make sure that all the default inputs are appropriate for your
code.
5. After you've looked at what the pipeline does, select Save and run , after which you're prompted for a
commit message because Azure Pipelines adds the azure-pipelines.yml file to your repository. After editing
the message, select Save and run again to see your pipeline in action.

You just created and ran a pipeline that we automatically created for you, because your code appeared to be a good
match for the Maven Azure Pipelines template.

Edit the pipeline


After the pipeline has run, select the vertical ellipses in the upper-right corner of the window and then select Edit
pipeline .
Set some variables for your deployment
# at the top of your YAML file
# set some variables that you'll need when you deploy
variables:
  # the name of the service connection that you created above
  serviceConnectionToAzure: name-of-your-service-connection
  # the name of your web app here is the same one you used above
  # when you created the web app using the Azure CLI
  appName: my-app-name

# ...

Deploy to Azure Functions

# ...
# add these as the last steps
# to deploy to your app service
- task: CopyFiles@2
  displayName: Copy Files
  inputs:
    SourceFolder: $(system.defaultworkingdirectory)/target/azure-functions/
    Contents: '**'
    TargetFolder: $(build.artifactstagingdirectory)

- task: PublishBuildArtifacts@1
  displayName: Publish Artifact
  inputs:
    PathtoPublish: $(build.artifactstagingdirectory)

- task: AzureFunctionApp@1
  displayName: Azure Function App deploy
  inputs:
    azureSubscription: $(serviceConnectionToAzure)
    appType: functionApp
    appName: $(appName)
    package: $(build.artifactstagingdirectory)

Run the pipeline and check out your site


You're now ready to save your changes and try it out!
1. Select Save in the upper-right corner of the editor.
2. In the dialog box that appears, add a Commit message such as add deployment to our pipeline , and then
select Save .
3. In the pipeline editor, select Run .
When the Build #nnnnnnnn.n has been queued message appears, select the number link to see your pipeline
in action.
After the pipeline has run, test the function app running on Azure. For example, in bash or from a command
prompt enter:
curl -w '\n' https://my-app-name-00000000000000000.azurewebsites.net/api/HttpTrigger-Java -d fromYourPipeline

Your function then returns:


Hello PipelineCreator

Clean up resources
Whenever you're done with the resources you created above, you can use the following command to delete them:

az group delete --name myapp-rg

Type y when prompted.


Build, test, and deploy Android apps

This guidance explains how to automatically build, test, and deploy Android apps.

Get started
Follow these instructions to set up a pipeline for a sample Android app.
1. The code in the following repository is a simple Android app. To get started, fork this repo to your GitHub
account.

https://github.com/MicrosoftDocs/pipelines-android

2. Sign in to your Azure DevOps organization and navigate to your project.


3. In your project, navigate to the Pipelines page. Then choose the action to create a new pipeline.
4. Walk through the steps of the wizard by first selecting GitHub as the location of your source code.
5. You might be redirected to GitHub to sign in. If so, enter your GitHub credentials.
6. When the list of repositories appears, select your Java sample repository.
7. Azure Pipelines will analyze the code in your repository and recommend starter templates for your pipeline.
Select the Android template.
8. Azure Pipelines will generate a YAML file for your pipeline. Select Save and run , then select Commit
directly to the master branch , and then choose Save and run again.
9. A new run is started. Wait for the run to finish.
When you're done, you'll have a working YAML file ( azure-pipelines.yml ) in your repository that's ready for you
to customize.

TIP
To make changes to the YAML file as described in this topic, select the pipeline in the Pipelines page, and then Edit the
azure-pipelines.yml file.

Gradle
Gradle is a common build tool used for building Android projects. See the Gradle task for more about these
options.
# https://docs.microsoft.com/azure/devops/pipelines/ecosystems/android
pool:
  vmImage: 'macOS-10.14'

steps:
- task: Gradle@2
  inputs:
    workingDirectory: ''
    gradleWrapperFile: 'gradlew'
    gradleOptions: '-Xmx3072m'
    publishJUnitResults: false
    testResultsFiles: '**/TEST-*.xml'
    tasks: 'assembleDebug'

Adjust the build path


Adjust the workingDirectory value if your gradlew file isn't in the root of the repository. The directory value
should be relative to the root of the repository, such as AndroidApps/MyApp or
$(system.defaultWorkingDirectory)/AndroidApps/MyApp .

Adjust the gradleWrapperFile value if your gradlew file isn't in the root of the repository. The file path value
should be relative to the root of the repository, such as AndroidApps/MyApp/gradlew or
$(system.defaultWorkingDirectory)/AndroidApps/MyApp/gradlew .

Adjust Gradle tasks


Adjust the tasks value for the build variant you prefer, such as assembleDebug or assembleRelease . For details, see
Google's Android development documentation: Build a debug APK and Configure build variants.
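For example, a sketch of the task for a release build of an app in a hypothetical AndroidApps/MyApp subfolder (both the folder and the variant are illustrative) might look like this:

- task: Gradle@2
  inputs:
    workingDirectory: 'AndroidApps/MyApp'           # assumed subfolder
    gradleWrapperFile: 'AndroidApps/MyApp/gradlew'
    tasks: 'assembleRelease'                        # release variant instead of assembleDebug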

Sign and align an Android APK


If your build does not already sign and zipalign the APK, add the Android Signing task to the YAML. An APK must
be signed to run on a device instead of an emulator. Zipaligning reduces the RAM consumed by the app.

Important: We recommend storing each of the following passwords in a secret variable.

- task: AndroidSigning@2
  inputs:
    apkFiles: '**/*.apk'
    jarsign: true
    jarsignerKeystoreFile: 'pathToYourKeystoreFile'
    jarsignerKeystorePassword: '$(jarsignerKeystorePassword)'
    jarsignerKeystoreAlias: 'yourKeystoreAlias'
    jarsignerKeyPassword: '$(jarsignerKeyPassword)'
    zipalign: true

Test on the Android Emulator


Note: The Android Emulator is currently available only on the Hosted macOS agent.

Create the Bash task and copy and paste the code below to install and run the emulator. Don't forget to
arrange the emulator parameters to fit your testing environment. The emulator will be started as a background
process and available in subsequent tasks.
#!/usr/bin/env bash

# Install AVD files


echo "y" | $ANDROID_HOME/tools/bin/sdkmanager --install 'system-images;android-27;google_apis;x86'

# Create emulator
echo "no" | $ANDROID_HOME/tools/bin/avdmanager create avd -n xamarin_android_emulator -k 'system-
images;android-27;google_apis;x86' --force

$ANDROID_HOME/emulator/emulator -list-avds

echo "Starting emulator"

# Start emulator in background


nohup $ANDROID_HOME/emulator/emulator -avd xamarin_android_emulator -no-snapshot > /dev/null 2>&1 &
$ANDROID_HOME/platform-tools/adb wait-for-device shell 'while [[ -z $(getprop sys.boot_completed | tr -d '\r') ]]; do sleep 1; done; input keyevent 82'

$ANDROID_HOME/platform-tools/adb devices

echo "Emulator started"

Test on Azure-hosted devices


Add the App Center Test task to test the app in a hosted lab of iOS and Android devices. An App Center free trial is
required which must later be converted to paid.
Sign up with App Center first.
# App Center test
# Test app packages with Visual Studio App Center
- task: AppCenterTest@1
  inputs:
    appFile:
#artifactsDirectory: '$(Build.ArtifactStagingDirectory)/AppCenterTest'
#prepareTests: true # Optional
    #frameworkOption: 'appium' # Required when prepareTests == True # Options: appium, espresso, calabash, uitest, xcuitest
#appiumBuildDirectory: # Required when prepareTests == True && Framework == Appium
#espressoBuildDirectory: # Optional
#espressoTestApkFile: # Optional
#calabashProjectDirectory: # Required when prepareTests == True && Framework == Calabash
#calabashConfigFile: # Optional
#calabashProfile: # Optional
#calabashSkipConfigCheck: # Optional
#uiTestBuildDirectory: # Required when prepareTests == True && Framework == Uitest
#uitestStorePath: # Optional
#uiTestStorePassword: # Optional
#uitestKeyAlias: # Optional
#uiTestKeyPassword: # Optional
#uiTestToolsDirectory: # Optional
#signInfo: # Optional
#xcUITestBuildDirectory: # Optional
#xcUITestIpaFile: # Optional
#prepareOptions: # Optional
#runTests: true # Optional
#credentialsOption: 'serviceEndpoint' # Required when runTests == True# Options: serviceEndpoint, inputs
#serverEndpoint: # Required when runTests == True && CredsType == ServiceEndpoint
#username: # Required when runTests == True && CredsType == Inputs
#password: # Required when runTests == True && CredsType == Inputs
#appSlug: # Required when runTests == True
#devices: # Required when runTests == True
#series: 'master' # Optional
#dsymDirectory: # Optional
    #localeOption: 'en_US' # Required when runTests == True # Options: da_DK, nl_NL, en_GB, en_US, fr_FR, de_DE, ja_JP, ru_RU, es_MX, es_ES, user
#userDefinedLocale: # Optional
#loginOptions: # Optional
#runOptions: # Optional
#skipWaitingForResults: # Optional
#cliFile: # Optional
#showDebugOutput: # Optional

Retain artifacts with the build record


Add the Copy Files and Publish Build Artifacts tasks to store your APK with the build record or test and deploy it in
subsequent pipelines. See Artifacts.

- task: CopyFiles@2
  inputs:
    contents: '**/*.apk'
    targetFolder: '$(build.artifactStagingDirectory)'
- task: PublishBuildArtifacts@1

Deploy
App Center
Add the App Center Distribute task to distribute an app to a group of testers or beta users, or promote the app to
Intune or Google Play. A free App Center account is required (no payment is necessary).
# App Center distribute
# Distribute app builds to testers and users via Visual Studio App Center
- task: AppCenterDistribute@1
  inputs:
    serverEndpoint:
    appSlug:
    appFile:
#symbolsOption: 'Apple' # Optional. Options: apple
#symbolsPath: # Optional
#symbolsPdbFiles: '**/*.pdb' # Optional
#symbolsDsymFiles: # Optional
#symbolsMappingTxtFile: # Optional
#symbolsIncludeParentDirectory: # Optional
#releaseNotesOption: 'input' # Options: input, file
#releaseNotesInput: # Required when releaseNotesOption == Input
#releaseNotesFile: # Required when releaseNotesOption == File
#isMandatory: false # Optional
#distributionGroupId: # Optional

Google Play
Install the Google Play extension and use the following tasks to automate interaction with Google Play. By default,
these tasks authenticate to Google Play using a service connection that you configure.
Release
Add the Google Play Release task to release a new Android app version to the Google Play store.

- task: GooglePlayRelease@2
  inputs:
    apkFile: '**/*.apk'
    serviceEndpoint: 'yourGooglePlayServiceConnectionName'
    track: 'internal'

Promote
Add the Google Play Promote task to promote a previously-released Android app update from one track to
another, such as alpha → beta .

- task: GooglePlayPromote@2
  inputs:
    packageName: 'com.yourCompany.appPackageName'
    serviceEndpoint: 'yourGooglePlayServiceConnectionName'
    sourceTrack: 'internal'
    destinationTrack: 'alpha'

Increase rollout
Add the Google Play Increase Rollout task to increase the rollout percentage of an app that was previously
released to the rollout track.

- task: GooglePlayIncreaseRollout@1
  inputs:
    packageName: 'com.yourCompany.appPackageName'
    serviceEndpoint: 'yourGooglePlayServiceConnectionName'
    userFraction: '0.5' # 0.0 to 1.0 (0% to 100%)

Related extensions
Codified Security (Codified Security)
Google Play (Microsoft)
Mobile App Tasks for iOS and Android (James Montemagno)
Mobile Testing Lab (Perfecto Mobile)
React Native (Microsoft)
Build and test Go projects

Azure Pipelines
Use a pipeline to automatically build and test your Go projects.

Create your first pipeline


Are you new to Azure Pipelines? If so, then we recommend you try this section before moving on to other
sections.

Import this repo into your Git repo:

https://github.com/MicrosoftDocs/pipelines-go

Sign in to Azure Pipelines


Sign in to Azure Pipelines. After you sign in, your browser goes to https://dev.azure.com/my-organization-name
and displays your Azure DevOps dashboard.
Within your selected organization, create a project. If you don't have any projects in your organization, you see a
Create a project to get started screen. Otherwise, select the Create Project button in the upper-right corner
of the dashboard.

Create the pipeline


1. Sign in to your Azure DevOps organization and navigate to your project.
2. Go to Pipelines , and then select Create Pipeline .
3. Walk through the steps of the wizard by first selecting GitHub as the location of your source code.
4. You might be redirected to GitHub to sign in. If so, enter your GitHub credentials.
5. When the list of repositories appears, select your repository.
6. You might be redirected to GitHub to install the Azure Pipelines app. If so, select Approve & install .

When the Configure tab appears, select Go .

7. When your new pipeline appears, take a look at the YAML to see what it does. When you're ready, select
Save and run .
8. You're prompted to commit a new azure-pipelines.yml file to your repository. After you're happy with the
message, select Save and run again.
If you want to watch your pipeline in action, select the build job.

You just created and ran a pipeline that we automatically created for you, because your code appeared
to be a good match for the Go template.

You now have a working YAML pipeline ( azure-pipelines.yml ) in your repository that's ready for you to
customize!
9. When you're ready to make changes to your pipeline, select it in the Pipelines page, and then Edit the
azure-pipelines.yml file.

See the sections below to learn some of the more common ways to customize your pipeline.

TIP
To make changes to the YAML file as described in this topic, select the pipeline in the Pipelines page, and then select Edit to
open an editor for the azure-pipelines.yml file.

Build environment
You can use Azure Pipelines to build your Go projects without needing to set up any infrastructure of your own.
You can use Linux, macOS, or Windows agents to run your builds.
Update the following snippet in your azure-pipelines.yml file to select the appropriate image.

pool:
  vmImage: 'ubuntu-latest'

Modern versions of Go are pre-installed on Microsoft-hosted agents in Azure Pipelines. For the exact versions of
Go that are pre-installed, refer to Microsoft-hosted agents.

Set up Go
Go 1.11+
Go < 1.11
Starting with Go 1.11, you no longer need to define a $GOPATH environment variable, set up a workspace layout, or use
the dep tool. Dependency management is now built in.
This YAML uses the go get command to download Go packages and their dependencies. It then uses
go build to generate the content that is published with the PublishBuildArtifacts@1 task.
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: GoTool@0
  inputs:
    version: '1.13.5'
- task: Go@0
  inputs:
    command: 'get'
    arguments: '-d'
    workingDirectory: '$(System.DefaultWorkingDirectory)'
- task: Go@0
  inputs:
    command: 'build'
    workingDirectory: '$(System.DefaultWorkingDirectory)'
- task: CopyFiles@2
  inputs:
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
- task: PublishBuildArtifacts@1
  inputs:
    artifactName: drop

Build
Use go build to build your Go project. Add the following snippet to your azure-pipelines.yml file:

- task: Go@0
  inputs:
    command: 'build'
    workingDirectory: '$(System.DefaultWorkingDirectory)'

Test
Use go test to test your Go module and its subdirectories ( ./... ). Add the following snippet to your
azure-pipelines.yml file:

- task: Go@0
  inputs:
    command: 'test'
    arguments: '-v'
    workingDirectory: '$(modulePath)'

Build an image and push to container registry


For your Go app, you can also build an image and push it to a container registry.
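One way to do this is with the Docker task; the following sketch assumes you have already created a Docker registry service connection and have a Dockerfile in your repository, and the connection and repository names below are placeholders:

- task: Docker@2
  inputs:
    containerRegistry: 'myRegistryServiceConnection'  # placeholder service connection name
    repository: 'my-go-app'                           # placeholder image repository
    command: 'buildAndPush'
    Dockerfile: '**/Dockerfile'
    tags: '$(Build.BuildId)'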

Related extensions
Go extension for Visual Studio Code (Microsoft)
Build and test PHP apps

Azure Pipelines
Use a pipeline to automatically build and test your PHP projects.

Create your first pipeline


Are you new to Azure Pipelines? If so, then we recommend you try this section before moving on to other
sections.

Fork this repo in GitHub:

https://github.com/MicrosoftDocs/pipelines-php

The sample code includes an azure-pipelines.yml file at the root of the repository. You can use this file to build the
project.
Follow all the instructions in Create your first pipeline to create a build pipeline for the sample project.
See the sections below to learn some of the more common ways to customize your pipeline.

Build environment
You can use Azure Pipelines to build your PHP projects without needing to set up any infrastructure of your own.
PHP is preinstalled on Microsoft-hosted agents in Azure Pipelines, along with many common libraries per PHP
version. You can use Linux, macOS, or Windows agents to run your builds.
For the exact versions of PHP that are preinstalled, refer to Microsoft-hosted agents.
Use a specific PHP version
On the Microsoft-hosted Ubuntu agent, multiple versions of PHP are installed. A symlink at /usr/bin/php points to
the currently set PHP version, so that when you run php , the set version executes. To use a PHP version other than
the default, the symlink can be pointed to that version using the update-alternatives tool. Set the PHP version
that you prefer by adding the following snippet to your azure-pipelines.yml file and changing the value of the
phpVersion variable accordingly.
# https://docs.microsoft.com/azure/devops/pipelines/ecosystems/php
pool:
  vmImage: 'ubuntu-16.04'

variables:
  phpVersion: 7.2

steps:
- script: |
    sudo update-alternatives --set php /usr/bin/php$(phpVersion)
    sudo update-alternatives --set phar /usr/bin/phar$(phpVersion)
    sudo update-alternatives --set phpdbg /usr/bin/phpdbg$(phpVersion)
    sudo update-alternatives --set php-cgi /usr/bin/php-cgi$(phpVersion)
    sudo update-alternatives --set phar.phar /usr/bin/phar.phar$(phpVersion)
    php -version
  displayName: 'Use PHP version $(phpVersion)'

Install dependencies
To use Composer to install dependencies, add the following snippet to your azure-pipelines.yml file.

- script: composer install --no-interaction --prefer-dist
  displayName: 'composer install'

Test with phpunit


To run tests with phpunit, add the following snippet to your azure-pipelines.yml file.

- script: ./phpunit
  displayName: 'Run tests with phpunit'

Retain the PHP app with the build record


To save the artifacts of this build with the build record, add the following snippet to your azure-pipelines.yml file.
Optionally, customize the value of rootFolderOrFile to alter what is included in the archive.

- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: '$(system.defaultWorkingDirectory)'
    includeRootFolder: false
- task: PublishBuildArtifacts@1

Using a custom composer location


If your composer.json is in a subfolder instead of the root directory, you can leverage the --working-dir argument
to tell composer what directory to use. For example, if your composer.json is inside the subfolder pkgs
composer install --no-interaction --working-dir=pkgs

You can also specify the absolute path, using the built-in system variables:
composer install --no-interaction --working-dir='$(system.defaultWorkingDirectory)/pkgs'
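In a pipeline, either command can be wrapped in a script step. The following sketch assumes the hypothetical pkgs subfolder used above:

- script: composer install --no-interaction --working-dir='$(system.defaultWorkingDirectory)/pkgs'
  displayName: 'composer install (pkgs subfolder)'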

Build image and push to container registry


For your PHP app, you can also build an image and push it to a container registry.
Build and deploy to a PHP web app

Azure Pipelines
A web app is a lightweight way to host a web application. In this step-by-step guide you'll learn how to create a
pipeline that continuously builds and deploys your PHP app. Your team can then automatically build each commit
in GitHub, and if you want, automatically deploy the change to an Azure App Service. You can use whichever
runtime you prefer: PHP|5.6 or PHP|7.0.

Prerequisites
A GitHub account, where you can create a repository. If you don't have one, you can create one for free.
An Azure DevOps organization. If you don't have one, you can create one for free. (An Azure DevOps
organization is different from your GitHub organization. Give them the same name if you want alignment
between them.)
If your team already has one, then make sure you're an administrator of the Azure DevOps project that you
want to use.
An Azure account. If you don't have one, you can create one for free.

TIP
If you're new at this, the easiest way to get started is to use the same email address as the owner of both the Azure
Pipelines organization and the Azure subscription.

Get the code


If you already have an app in GitHub that you want to deploy, you can try creating a pipeline for that code.
However, if you are a new user, then you might get a better start by using our sample code. In that case, fork this
repo in GitHub:

https://github.com/Azure-Samples/php-docs-hello-world

Create an Azure App Service


Sign in to the Azure Portal, and then select the Cloud Shell button in the upper-right corner.
Create an Azure App Service on Linux.

# Create a resource group


az group create --location westus --name myapp-rg

# Create an app service plan of type Linux


az appservice plan create -g myapp-rg -n myapp-service-plan --is-linux

# Create an App Service from the plan with PHP as the runtime
az webapp create -g myapp-rg -p myapp-service-plan -n my-app-name --runtime "PHP|7.0"
Sign in to Azure Pipelines and connect to Azure
Sign in to Azure Pipelines. After you sign in, your browser goes to https://dev.azure.com/my-organization-name and
displays your Azure DevOps dashboard.
Within your selected organization, create a project. If you don't have any projects in your organization, you see a
Create a project to get started screen. Otherwise, select the Create Project button in the upper-right corner of
the dashboard.
Now create the service connection:
1. From your project dashboard, select Project settings on the bottom left.
2. On the settings page, select Pipelines > Service connections, select New service connection, and then
select Azure Resource Manager.
3. The Add an Azure Resource Manager service connection dialog box appears.
Name: Type a name and then copy and paste it into a text file so you can use it later.
Scope: Select Subscription.
Subscription: Select the subscription in which you created the App Service.
Resource Group: Select the resource group you created earlier.
Select Allow all pipelines to use this connection.
TIP
If you need to create a connection to an Azure subscription that's owned by someone else, see Create an Azure Resource
Manager service connection with an existing service principal.

Create the pipeline


1. Sign in to your Azure DevOps organization and navigate to your project.
2. Go to Pipelines , and then select Create Pipeline .
3. Walk through the steps of the wizard by first selecting GitHub as the location of your source code.
4. You might be redirected to GitHub to sign in. If so, enter your GitHub credentials.
5. When the list of repositories appears, select your repository.
6. You might be redirected to GitHub to install the Azure Pipelines app. If so, select Approve & install .
When the Configure tab appears, select PHP . Your new pipeline appears.
1. Take a look at the pipeline to see what it does.
2. After you've looked at what the pipeline does, select Save and run to see the pipeline in action.
3. Select Save and run , after which you're prompted for a commit message because Azure Pipelines adds the
azure-pipelines.yml file to your repository. After editing the message, select Save and run again.

You just created and ran a pipeline that we automatically created for you, because your code appeared to be a
good match for the PHP Azure Pipelines template.

Edit the pipeline


After the pipeline has run, select the vertical ellipses in the upper-right corner of the window and then select Edit
pipeline .
Set some variables for your deployment

# at the top of your YAML file
# set some variables that you'll need when you deploy
variables:
  # the name of the service connection that you created above
  serviceConnectionToAzure: name-of-your-service-connection
  # the name of your web app here is the same one you used above
  # when you created the web app using the Azure CLI
  appName: my-app-name

# ...

Deploy to your app service


# add these as the last steps (below all the other `task` items under `steps`)
# to deploy to your app service
- task: ArchiveFiles@1
  displayName: Archive files
  inputs:
    rootFolder: $(System.DefaultWorkingDirectory)
    includeRootFolder: false
    archiveType: zip

- task: PublishBuildArtifacts@1
  displayName: Publish Artifact
  inputs:
    PathtoPublish: $(build.artifactstagingdirectory)

- task: AzureWebApp@1
  displayName: Azure Web App Deploy
  inputs:
    azureSubscription: $(serviceConnectionToAzure)
    appType: webAppLinux
    appName: $(appName)
    package: $(build.artifactstagingdirectory)/**/*.zip

Run the pipeline and check out your site


You're now ready to save your changes and try it out!
1. Select Save in the upper-right corner of the editor.
2. In the dialog box that appears, add a Commit message such as add deployment to our pipeline , and then
select Save .
3. In the pipeline editor, select Run .
When the Build #nnnnnnnn.n has been queued message appears, select the number link to see your pipeline
in action.
After the pipeline has run, check out your site!
https://my-app-name.azurewebsites.net/

Clean up resources
Whenever you're done with the resources you created above, you can use the following command to delete them:

az group delete --name myapp-rg

Type y when prompted.


Build and test Ruby apps

Azure Pipelines
This guidance explains how to automatically build Ruby projects.

Get started
Follow these instructions to set up a pipeline for a Ruby app.
1. The code in the following repository is a simple Ruby app. To get started, fork this repo to your GitHub
account.

https://github.com/MicrosoftDocs/pipelines-ruby

2. Sign in to your Azure DevOps organization and navigate to your project.


3. In your project, navigate to the Pipelines page. Then choose the action to create a new pipeline.
4. Walk through the steps of the wizard by first selecting GitHub as the location of your source code.
5. You might be redirected to GitHub to sign in. If so, enter your GitHub credentials.
6. When the list of repositories appears, select your Ruby sample repository.
7. Azure Pipelines will analyze the code in your repository and recommend the Ruby template for your pipeline.
Select that template.
8. Azure Pipelines will generate a YAML file for your pipeline. Select Save and run , then select Commit
directly to the master branch , and then choose Save and run again.
9. A new run is started. Wait for the run to finish.
When you're done, you'll have a working YAML file ( azure-pipelines.yml ) in your repository that's ready for you to
customize.

TIP
To make changes to the YAML file as described in this topic, select the pipeline in the Pipelines page, and then Edit the
azure-pipelines.yml file.

Build environment
You can use Azure Pipelines to build your Ruby projects without needing to set up any infrastructure of your own.
Ruby is preinstalled on Microsoft-hosted agents in Azure Pipelines. You can use Linux, macOS, or Windows agents
to run your builds.
For the exact versions of Ruby that are preinstalled, refer to Microsoft-hosted agents. To install a specific version of
Ruby on Microsoft-hosted agents, add the Use Ruby Version task to the beginning of your pipeline.
Use a specific Ruby version
Add the Use Ruby Version task to set the version of Ruby used in your pipeline. This snippet adds Ruby 2.4 or later
to the path and sets subsequent pipeline tasks to use it.

# https://docs.microsoft.com/azure/devops/pipelines/ecosystems/ruby
pool:
  vmImage: 'ubuntu-16.04' # other options: 'macOS-10.14', 'vs2017-win2016'

steps:
- task: UseRubyVersion@0
  inputs:
    versionSpec: '>= 2.4'
    addToPath: true

Install Rails
To install Rails, add the following snippet to your azure-pipelines.yml file.

- script: gem install rails && rails -v
  displayName: 'gem install rails'

Install dependencies
To use Bundler to install dependencies, add the following snippet to your azure-pipelines.yml file.

- script: |
    gem install bundler
    bundle install --retry=3 --jobs=4
  displayName: 'bundle install'

Run Rake
To execute Rake in the context of the current bundle (as defined in your Gemfile), add the following snippet to your
azure-pipelines.yml file.

- script: bundle exec rake
  displayName: 'bundle exec rake'

Publish test results


The sample code includes unit tests written using RSpec. When Rake is run by the previous step, it runs the RSpec
tests. The RSpec RakeTask in the Rakefile has been configured to produce JUnit style results using the
RspecJUnitFormatter.
Add the Publish Test Results task to publish JUnit style test results to the server. When you do this, you get a rich
test reporting experience that can be used for easily troubleshooting any failed tests and for test timing analysis.

- task: PublishTestResults@2
  condition: succeededOrFailed()
  inputs:
    testResultsFiles: '**/test-*.xml'
    testRunTitle: 'Ruby tests'

Publish code coverage results


The sample code uses SimpleCov to collect code coverage data when unit tests are run. SimpleCov is configured to
use Cobertura and HTML report formatters.
Add the Publish Code Coverage Results task to publish code coverage results to the server. When you do this,
coverage metrics can be seen in the build summary and HTML reports can be downloaded for further analysis.
- task: PublishCodeCoverageResults@1
  inputs:
    codeCoverageTool: Cobertura
    summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/coverage.xml'
    reportDirectory: '$(System.DefaultWorkingDirectory)/**/coverage'

Build an image and push to container registry


For your Ruby app, you can also build an image and push it to a container registry.
Quickstart: Build and deploy Xamarin apps with a
pipeline

Azure Pipelines
Get started with Xamarin and Azure Pipelines by building a pipeline to deploy a Xamarin app. You can
deploy Android and iOS apps in the same or separate pipelines.

Prerequisites
Before you begin, you need:
An Azure account with an active subscription. Create an account for free.
An active Azure DevOps organization. Sign up for Azure Pipelines.
Get code
Fork this repo in GitHub:

https://github.com/MicrosoftDocs/pipelines-xamarin

Sign in to Azure Pipelines


Sign in to Azure Pipelines. After you sign in, your browser goes to https://dev.azure.com/my-organization-name
and displays your Azure DevOps dashboard.
Within your selected organization, create a project. If you don't have any projects in your organization, you see a
Create a project to get started screen. Otherwise, select the Create Project button in the upper-right corner
of the dashboard.
Create the pipeline
1. Sign in to your Azure DevOps organization and navigate to your project.
2. Go to Pipelines , and then select Create Pipeline .
3. Walk through the steps of the wizard by first selecting GitHub as the location of your source code.
4. You might be redirected to GitHub to sign in. If so, enter your GitHub credentials.
5. When the list of repositories appears, select your repository.
6. You might be redirected to GitHub to install the Azure Pipelines app. If so, select Approve & install .

When the Configure tab appears, select Xamarin.Android to build an Android project or Xamarin.iOS to
build an iOS project.

7. When your new pipeline appears, take a look at the YAML to see what it does. When you're ready, select
Save and run .
8. You're prompted to commit a new azure-pipelines.yml file to your repository. After you're happy with the
message, select Save and run again.
If you want to watch your pipeline in action, select the build job. You now have a working YAML pipeline (
azure-pipelines.yml ) in your repository that's ready for you to customize!

9. When you're ready to make changes to your pipeline, select it in the Pipelines page, and then Edit the
azure-pipelines.yml file.

10. See the sections below to learn some of the more common ways to customize your pipeline.

Set up Xamarin tools


You can use Azure Pipelines to build your Xamarin apps without needing to set up any infrastructure of your own.
Xamarin tools are preinstalled on Microsoft-hosted agents in Azure Pipelines. You can use macOS or Windows
agents to run Xamarin.Android builds, and macOS agents to run Xamarin.iOS builds. If you are using a self-hosted
agent, you must install Visual Studio Tools for Xamarin for Windows agents or Visual Studio for Mac for macOS
agents.
For the exact versions of Xamarin that are preinstalled, refer to Microsoft-hosted agents.
Create a file named azure-pipelines.yml in the root of your repository. Then, add the following snippet to your
azure-pipelines.yml file to select the appropriate agent pool:

# https://docs.microsoft.com/azure/devops/pipelines/ecosystems/xamarin
pool:
  vmImage: 'macOS-10.15' # For Windows, use 'windows-2019'

Build a Xamarin.Android app


To build a Xamarin.Android app, add the following snippet to your azure-pipelines.yml file. Change values to
match your project configuration. See the Xamarin.Android task for more about these options.
variables:
  buildConfiguration: 'Release'
  outputDirectory: '$(build.binariesDirectory)/$(buildConfiguration)'

steps:
- task: NuGetToolInstaller@0

- task: NuGetCommand@2
  inputs:
    restoreSolution: '**/*.sln'

- task: XamarinAndroid@1
  inputs:
    projectFile: '**/*Droid*.csproj'
    outputDirectory: '$(outputDirectory)'
    configuration: '$(buildConfiguration)'

Sign a Xamarin.Android app


See Sign your mobile Android app during CI for information about signing your app.

Build a Xamarin.iOS app


To build a Xamarin.iOS app, add the following snippet to your azure-pipelines.yml file. Change values to match
your project configuration. See the Xamarin.iOS task for more about these options.

variables:
  buildConfiguration: 'Release'

steps:
- task: XamariniOS@2
  inputs:
    solutionFile: '**/*iOS.csproj'
    configuration: '$(buildConfiguration)'
    packageApp: false
    buildForSimulator: true

Sign and provision a Xamarin.iOS app - The PackageApp option


To generate a signed and provisioned Xamarin.iOS .ipa package, set packageApp to true and make sure that, before
this task runs, you have installed the right Apple provisioning profile and Apple certificates that match your app bundle ID
on the agent running the job.
To fulfill these mandatory requirements, use the Microsoft-provided tasks for installing an Apple provisioning profile
and installing Apple certificates.

- task: XamariniOS@2
  inputs:
    solutionFile: '**/*iOS.csproj'
    configuration: 'AppStore'
    packageApp: true

TIP
The Xamarin.iOS build task only generates an .ipa package if the agent running the job has the appropriate provisioning
profile and Apple certificate installed. If you enable the packageApp option and the agent does not have the appropriate
Apple provisioning profile (.mobileprovision) and Apple certificate (.p12), the build may report success but no .ipa
will be generated.
For Microsoft-hosted agents, the .ipa package is by default located under the path:
{iOS.csproj root}/bin/{Configuration}/{iPhone/iPhoneSimulator}/

You can configure the output path by adding an argument to the Xamarin.iOS task:
YAML
Classic

- task: XamariniOS@2
  inputs:
    solutionFile: '**/*iOS.csproj'
    configuration: 'AppStore'
    packageApp: true
    args: /p:IpaPackageDir="/Users/vsts/agent/2.153.2/work/1/a"

This example locates the .ipa in the Build Artifact Staging Directory, ready to be pushed into Azure DevOps as an
artifact with each build run. To push it into Azure DevOps, simply add a Publish Build Artifacts task to the end of your
pipeline.
For more information about signing and provisioning your iOS app, see Sign your mobile iOS app during CI.
Set the Xamarin SDK version on macOS
To set a specific Xamarin SDK version to use on the Microsoft-hosted macOS agent pool, add the following snippet
before the XamariniOS task in your azure-pipelines.yml file. For details on properly formatting the version
number (shown as 5_4_1 below), see How can I manually select versions of tools on the Hosted macOS agent?.

- script: sudo $AGENT_HOMEDIRECTORY/scripts/select-xamarin-sdk.sh 5_4_1
  displayName: 'Select Xamarin SDK version'

Build Xamarin.Android and Xamarin.iOS apps with one pipeline


You can build and test your Xamarin.Android app, Xamarin.iOS app, and related apps in the same pipeline by
defining multiple jobs in azure-pipelines.yml . These jobs can run in parallel to save time. The following complete
example builds a Xamarin.Android app on Windows, and a Xamarin.iOS app on macOS, using two jobs.
# https://docs.microsoft.com/vsts/pipelines/ecosystems/xamarin
jobs:
- job: Android
  pool:
    vmImage: 'vs2017-win2016'
  variables:
    buildConfiguration: 'Release'
    outputDirectory: '$(build.binariesDirectory)/$(buildConfiguration)'
  steps:
  - task: NuGetToolInstaller@0
  - task: NuGetCommand@2
    inputs:
      restoreSolution: '**/*.sln'
  - task: XamarinAndroid@1
    inputs:
      projectFile: '**/*droid*.csproj'
      outputDirectory: '$(outputDirectory)'
      configuration: '$(buildConfiguration)'

- job: iOS
  pool:
    vmImage: 'macOS-10.14'
  variables:
    buildConfiguration: 'Release'
  steps:
  - task: NuGetToolInstaller@0
  - task: NuGetCommand@2
    inputs:
      restoreSolution: '**/*.sln'
  - task: XamariniOS@2
    inputs:
      solutionFile: '**/*iOS.csproj'
      configuration: '$(buildConfiguration)'
      buildForSimulator: true
      packageApp: false

Clean up resources
If you don't need the example code, delete your GitHub repository and Azure Pipelines project.

Next steps
Learn more about using Xcode in pipelines
Learn more about using Android in pipelines
Build, test, and deploy Xcode apps

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
This guidance explains how to automatically build Xcode projects.

Example
For a working example of how to build an app with Xcode, import (into Azure Repos or TFS) or fork (into GitHub)
this repo:

https://github.com/MicrosoftDocs/pipelines-xcode

The sample code includes an azure-pipelines.yml file at the root of the repository. You can use this file to build
the app.
Follow all the instructions in Create your first pipeline to create a build pipeline for the sample app.

Build environment
You can use Azure Pipelines to build your apps with Xcode without needing to set up any infrastructure of your
own. Xcode is preinstalled on Microsoft-hosted macOS agents in Azure Pipelines. You can use the macOS agents
to run your builds.
For the exact versions of Xcode that are preinstalled, refer to Microsoft-hosted agents.
Create a file named azure-pipelines.yml in the root of your repository. Then, add the following snippet to your
azure-pipelines.yml file to select the appropriate agent pool:

# https://docs.microsoft.com/azure/devops/pipelines/ecosystems/xcode
pool:
  vmImage: 'macOS-10.14'

Build an app with Xcode


To build an app with Xcode, add the following snippet to your azure-pipelines.yml file. This is a minimal snippet
for building an iOS project using its default scheme, for the Simulator, and without packaging. Change values to
match your project configuration. See the Xcode task for more about these options.
variables:
  scheme: ''
  sdk: 'iphoneos'
  configuration: 'Release'

steps:
- task: Xcode@5
  inputs:
    sdk: '$(sdk)'
    scheme: '$(scheme)'
    configuration: '$(configuration)'
    xcodeVersion: 'default' # Options: default, 10, 9, 8, specifyPath
    exportPath: '$(agent.buildDirectory)/output/$(sdk)/$(configuration)'
    packageApp: false

Signing and provisioning


An Xcode app must be signed and provisioned to run on a device or be published to the App Store. The signing
and provisioning process needs access to your P12 signing certificate and one or more provisioning profiles. The
Install Apple Certificate and Install Apple Provisioning Profile tasks make these available to Xcode during a build.
The following snippet installs an Apple P12 certificate and provisioning profile in the build agent's Keychain. Then,
it builds, signs, and provisions the app with Xcode. Finally, the certificate and provisioning profile are automatically
removed from the Keychain at the end of the build, regardless of whether the build succeeded or failed. For more
details, see Sign your mobile app during CI.

# The `certSecureFile` and `provProfileSecureFile` files are uploaded to the Azure Pipelines secure files library where they are encrypted.
# The `P12Password` variable is set in the Azure Pipelines pipeline editor and marked 'secret' to be encrypted.
steps:
- task: InstallAppleCertificate@2
  inputs:
    certSecureFile: 'chrisid_iOSDev_Nov2018.p12'
    certPwd: $(P12Password)

- task: InstallAppleProvisioningProfile@1
  inputs:
    provProfileSecureFile: '6ffac825-ed27-47d0-8134-95fcf37a666c.mobileprovision'

- task: Xcode@5
  inputs:
    actions: 'build'
    scheme: ''
    sdk: 'iphoneos'
    configuration: 'Release'
    xcWorkspacePath: '**/*.xcodeproj/project.xcworkspace'
    xcodeVersion: 'default' # Options: 8, 9, 10, default, specifyPath
    signingOption: 'default' # Options: nosign, default, manual, auto
    useXcpretty: 'false' # Makes it easier to diagnose build failures

CocoaPods
If your project uses CocoaPods, you can run CocoaPods commands in your pipeline using a script, or with the
CocoaPods task. The task optionally runs pod repo update , then runs pod install , and allows you to set a custom
project directory. Following are common examples of using both.
- script: /usr/local/bin/pod install
  displayName: 'pod install using a script'

- task: CocoaPods@0
  displayName: 'pod install using the CocoaPods task with defaults'

- task: CocoaPods@0
  inputs:
    forceRepoUpdate: true
    projectDirectory: '$(system.defaultWorkingDirectory)'
  displayName: 'pod install using the CocoaPods task with a forced repo update and a custom project directory'

Carthage
If your project uses Carthage with a private Carthage repository, you can set up authentication by setting an
environment variable named GITHUB_ACCESS_TOKEN with a value of a token that has access to the repository.
Carthage will automatically detect and use this environment variable.
Do not add the secret token directly to your pipeline YAML. Instead, create a new pipeline variable with its lock
enabled on the Variables pane to encrypt this value. See secret variables.
Here is an example that uses a secret variable named myGitHubAccessToken for the value of the
GITHUB_ACCESS_TOKEN environment variable.

- script: carthage update --platform iOS
  env:
    GITHUB_ACCESS_TOKEN: $(myGitHubAccessToken)

Testing on Azure-hosted devices


Add the App Center Test task to test the app in a hosted lab of iOS and Android devices. An App Center free trial is
required which must later be converted to paid.
Sign up with App Center first.
# App Center test
# Test app packages with Visual Studio App Center
- task: AppCenterTest@1
  inputs:
    appFile:
#artifactsDirectory: '$(Build.ArtifactStagingDirectory)/AppCenterTest'
#prepareTests: true # Optional
    #frameworkOption: 'appium' # Required when prepareTests == True # Options: appium, espresso, calabash, uitest, xcuitest
#appiumBuildDirectory: # Required when prepareTests == True && Framework == Appium
#espressoBuildDirectory: # Optional
#espressoTestApkFile: # Optional
#calabashProjectDirectory: # Required when prepareTests == True && Framework == Calabash
#calabashConfigFile: # Optional
#calabashProfile: # Optional
#calabashSkipConfigCheck: # Optional
#uiTestBuildDirectory: # Required when prepareTests == True && Framework == Uitest
#uitestStorePath: # Optional
#uiTestStorePassword: # Optional
#uitestKeyAlias: # Optional
#uiTestKeyPassword: # Optional
#uiTestToolsDirectory: # Optional
#signInfo: # Optional
#xcUITestBuildDirectory: # Optional
#xcUITestIpaFile: # Optional
#prepareOptions: # Optional
#runTests: true # Optional
#credentialsOption: 'serviceEndpoint' # Required when runTests == True# Options: serviceEndpoint, inputs
#serverEndpoint: # Required when runTests == True && CredsType == ServiceEndpoint
#username: # Required when runTests == True && CredsType == Inputs
#password: # Required when runTests == True && CredsType == Inputs
#appSlug: # Required when runTests == True
#devices: # Required when runTests == True
#series: 'master' # Optional
#dsymDirectory: # Optional
    #localeOption: 'en_US' # Required when runTests == True # Options: da_DK, nl_NL, en_GB, en_US, fr_FR, de_DE, ja_JP, ru_RU, es_MX, es_ES, user
#userDefinedLocale: # Optional
#loginOptions: # Optional
#runOptions: # Optional
#skipWaitingForResults: # Optional
#cliFile: # Optional
#showDebugOutput: # Optional

Retain artifacts with the build record


Add the Copy Files and Publish Build Artifacts tasks to store your IPA with the build record or test and deploy it in
subsequent pipelines. See Artifacts.

- task: CopyFiles@2
inputs:
contents: '**/*.ipa'
targetFolder: '$(build.artifactStagingDirectory)'
- task: PublishBuildArtifacts@1

Deploy
App Center
Add the App Center Distribute task to distribute an app to a group of testers or beta users, or promote the app to
Intune or the Apple App Store. A free App Center account is required (no payment is necessary).
# App Center distribute
# Distribute app builds to testers and users via Visual Studio App Center
- task: AppCenterDistribute@1
inputs:
serverEndpoint:
appSlug:
appFile:
#symbolsOption: 'Apple' # Optional. Options: apple
#symbolsPath: # Optional
#symbolsPdbFiles: '**/*.pdb' # Optional
#symbolsDsymFiles: # Optional
#symbolsMappingTxtFile: # Optional
#symbolsIncludeParentDirectory: # Optional
#releaseNotesOption: 'input' # Options: input, file
#releaseNotesInput: # Required when releaseNotesOption == Input
#releaseNotesFile: # Required when releaseNotesOption == File
#isMandatory: false # Optional
#distributionGroupId: # Optional

Apple App Store


Install the Apple App Store extension and use the following tasks to automate interaction with the App Store. By
default, these tasks authenticate to Apple using a service connection that you configure.
Release
Add the App Store Release task to automate the release of updates to existing iOS TestFlight beta apps or
production apps in the App Store.
See the limitations of using this task with Apple two-factor authentication: Apple authentication is region-specific,
and fastlane session tokens expire quickly and must be recreated and reconfigured.

- task: AppStoreRelease@1
displayName: 'Publish to the App Store TestFlight track'
inputs:
serviceEndpoint: 'My Apple App Store service connection' # This service connection must be added by you
appIdentifier: com.yourorganization.testapplication.etc
ipaPath: '$(build.artifactstagingdirectory)/**/*.ipa'
shouldSkipWaitingForProcessing: true
shouldSkipSubmission: true

Promote
Add the App Store Promote task to automate the promotion of a previously submitted app from iTunes Connect to
the App Store.

- task: AppStorePromote@1
displayName: 'Submit to the App Store for review'
inputs:
serviceEndpoint: 'My Apple App Store service connection' # This service connection must be added by you
appIdentifier: com.yourorganization.testapplication.etc
shouldAutoRelease: false

Related extensions
Apple App Store (Microsoft)
Codified Security (Codified Security)
MacinCloud (Moboware Inc.)
Mobile App Tasks for iOS and Android (James Montemagno)
Mobile Testing Lab (Perfecto Mobile)
Raygun (Raygun)
React Native (Microsoft)
Version Setter (Tom Gilder)
Quickstart: trigger a pipeline run from GitHub Actions
11/2/2020 • 2 minutes to read

Get started using GitHub Actions and Azure Pipelines together.


If you have both Azure Pipelines and GitHub Actions workflows, you may want to trigger a pipeline run from within
a GitHub action. You can do so with the Azure Pipelines Action.

Prerequisites
A working Azure pipeline. Create your first pipeline.
A GitHub account with a repository. Join GitHub and create a repository.
An Azure DevOps personal access token (PAT) to use with your GitHub action. Create a PAT.

Create a GitHub secret


1. Open your GitHub repository and go to Settings.

2. Select Secrets and then New Secret.

3. Paste in your PAT and give it the name AZURE_DEVOPS_TOKEN.

4. Save by selecting Add secret.

Add a GitHub workflow


1. Open your GitHub repository and select Actions.

2. Select Set up your workflow yourself.

3. Delete everything after branches: [ master ]. Your remaining workflow should look like this.
name: CI

on:
push:
branches: [ master ]
pull_request:
branches: [ master ]

4. Select the Azure Pipelines Action in the Marketplace.

5. Copy this workflow and replace the contents of your GitHub Actions workflow file. Customize the
azure-devops-project-url and azure-pipeline-name values. Your complete workflow should look like this.

name: CI

on:
push:
branches: [ master ]
pull_request:
branches: [ master ]

jobs:
build:
name: Call Azure Pipeline
runs-on: ubuntu-latest
steps:
- name: Azure Pipelines Action
uses: Azure/pipelines@v1
with:
azure-devops-project-url: https://dev.azure.com/organization/project-name
azure-pipeline-name: 'My Pipeline'
azure-devops-token: ${{ secrets.AZURE_DEVOPS_TOKEN }}

6. On the Actions page, verify that your workflow ran. Select the workflow title to see more information about
the run. You should see a green check mark for the Azure Pipelines Action. Open the Action to see a direct
link to the pipeline run.

Clean up resources
If you're not going to continue to use the GitHub Action, delete the workflow with the following steps:
1. Open .github/workflows in your GitHub repository.
2. Open the workflow file you created and select Delete.

Next steps
Learn how to connect to the Azure environment and deploy to Azure with GitHub.
Deploy to Azure using GitHub Actions
Build multiple branches
11/2/2020 • 8 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

You can build every commit and pull request to your Git repository using Azure Pipelines or TFS. In this tutorial, we
will discuss additional considerations when building multiple branches in your Git repository. You will learn how to:
Set up a CI trigger for topic branches
Automatically build a change in topic branch
Exclude or include tasks for builds based on the branch being built
Keep code quality high by building pull requests
Use retention policies to clean up completed builds

Prerequisites
You need a Git repository in Azure Pipelines, TFS, or GitHub with your app. If you do not have one, we
recommend importing the sample .NET Core app into your Azure Pipelines or TFS project, or forking it into
your GitHub repository. Note that you must use Azure Pipelines to build a GitHub repository. You cannot use
TFS.
You also need a working build for your repository.

Set up a CI trigger for a topic branch


A common workflow with Git is to create temporary branches from your master branch. These branches are called
topic or feature branches and help you isolate your work. In this workflow, you create a branch for a particular
feature or bug fix. Eventually, you merge the code back to the master branch and delete the topic branch.
YAML
Classic
Unless you specify a trigger in your YAML file, a change in any of the branches will trigger a build. Add the
following snippet to your YAML file in the master branch. This will cause any changes to master and feature/*
branches to be automatically built.

trigger:
- master
- feature/*

YAML builds are not yet available on TFS.

Automatically build a change in topic branch


You're now ready for CI for both the master branch and future feature branches that match the branch pattern.
Every code change for the branch will use an automated build pipeline to ensure the quality of your code remains
high.
Follow the steps below to edit a file and create a new topic branch.
1. Navigate to your code in Azure Repos, TFS, or GitHub.
2. Create a new branch for your code that starts with feature/ , e.g., feature/feature-123 .
3. Make a change to your code in the feature branch and commit the change.
4. Navigate to the Pipelines menu in Azure Pipelines or TFS and select Builds .
5. Select the build pipeline for this repo. You should now see a new build executing for the topic branch. This build
was initiated by the trigger you created earlier. Wait for the build to finish.
Your typical development process includes developing code locally and periodically pushing to your remote topic
branch. Each push you make results in a build pipeline executing in the background. The build pipeline helps you
catch errors earlier and helps you to maintain a quality topic branch that can be safely merged to master. Practicing
CI for your topic branches helps to minimize risk when merging back to master.

Exclude or include tasks for builds based on the branch being built
The master branch typically produces deployable artifacts such as binaries. You do not need to spend time creating
and storing those artifacts for short-lived feature branches. You implement custom conditions in Azure Pipelines or
TFS so that certain tasks only execute on your master branch during a build run. You can use a single build with
multiple branches and skip or perform certain tasks based on conditions.
YAML
Classic
Edit the azure-pipelines.yml file in your master branch, locate a task in your YAML file, and add a condition to it.
For example, the following snippet adds a condition to the publish build artifacts task.

- task: PublishBuildArtifacts@1
condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))

YAML builds are not yet available on TFS.

Validate pull requests


Use policies to protect your branches by requiring successful builds before merging pull requests. You have options
to always require a new successful build before merging changes to important branches such as the master
branch. There are other branch policy settings to build less frequently. You can also require a certain number of
code reviewers to help ensure your pull requests are high quality and don't result in broken builds for your
branches.
GitHub repository
YAML
Classic
Unless you specify pr triggers in your YAML file, pull request builds are automatically enabled for all branches.
You can specify the target branches for your pull request builds. For example, to run the build only for pull requests
that target: master and feature/* :
pr:
- master
- feature/*

For more details, see Triggers.


YAML builds are not yet available on TFS.
Azure Pipelines or TFS repository
1. Navigate to the Repos hub in Azure Repos or TFS.
2. Choose your repository and select Branches. Choose the master branch.
3. You will implement a branch policy to protect the master branch. Select the ellipsis to the right of your branch name and select Branch policies.
4. Choose the checkbox for Protect this branch. There are several options for protecting the branch.
5. Under the Build validation menu, choose Add build policy.
6. Choose the appropriate build pipeline.
7. Ensure Trigger is set to automatic and the Policy requirement is set to required.
8. Enter a descriptive Display name to describe the policy.
9. Select Save to create and enable the policy. Select Save changes at the top left of your screen.
10. To test the policy, navigate to the Pull request menu in Azure Pipelines or TFS.
11. Select New pull request. Ensure your topic branch is set to merge into your master branch. Select Create.
12. Your screen displays the policy being executed.
13. Select the policy name to examine the build. If the build succeeds, your code is merged to master. If the build fails, the merge is blocked.
Once the work is completed in the topic branch and merged to master, you can delete your topic branch. You can
then create additional feature or bug fix branches as necessary.

Use retention policies to clean up your completed builds


Retention policies allow you to control and automate the cleanup of your various builds. For shorter-lived branches
like topic branches, you may want to retain less history to reduce clutter and storage costs. If you create CI builds
on multiple related branches, it will become less important to keep builds for all of your branches.
1. Navigate to the Pipelines menu in Azure Pipelines or TFS.
2. Locate the build pipeline that you set up for your repo.
3. Select Edit at the top right of your screen.
4. Under the build pipeline name, select the Retention tab. Select Add to add a new retention policy.
5. Type feature/* in the Branch specification dropdown. This ensures that any feature branch matching the wildcard will use the policy.
6. Set Days to keep to 1 and Minimum to keep to 1.
7. Select the Save & queue menu and then select Save.
Policies are evaluated in order, applying the first matching policy to each build. The default rule at the bottom
matches all builds. The retention policy will clean up build resources each day. You retain at least one build at all
times. You can also choose to keep any particular build for an indefinite amount of time.

Next steps
In this tutorial, you learned how to manage CI for multiple branches in your Git repositories using Azure Pipelines
or TFS.
You learned how to:
Set up a CI trigger for topic branches
Automatically build a change in topic branch
Exclude or include tasks for builds based on the branch being built
Keep code quality high by building pull requests
Use retention policies to clean up completed builds
Create a multi-platform pipeline
11/2/2020 • 3 minutes to read

Azure Pipelines
This is a step-by-step guide to using Azure Pipelines to build on macOS, Linux, and Windows.

Prerequisites
A GitHub account, where you can create a repository. If you don't have one, you can create one for free.
An Azure DevOps organization. If you don't have one, you can create one for free. (An Azure DevOps
organization is different from your GitHub organization. Give them the same name if you want alignment
between them.)
If your team already has one, then make sure you're an administrator of the Azure DevOps project that you
want to use.

Get the sample code


You can use Azure Pipelines to build an app written in any language, on multiple platforms at the same time.
1. Go to https://github.com/MicrosoftDocs/pipelines-javascript.
2. Fork the repo into your own GitHub account.
You should now have a sample app in your GitHub account.

Add a pipeline
In the sample repo, there's no pipeline yet. You're going to add jobs that run on three platforms.
1. Go to your fork of the sample code on GitHub.
2. Choose 'Create new file'. Name the file azure-pipelines.yml, and give it the contents below.
# Build NodeJS Express app using Azure Pipelines
# https://docs.microsoft.com/azure/devops/pipelines/ecosystems/javascript?view=azure-devops
strategy:
matrix:
linux:
imageName: 'ubuntu-16.04'
mac:
imageName: 'macos-10.14'
windows:
imageName: 'vs2017-win2016'

pool:
vmImage: $(imageName)

steps:
- task: NodeTool@0
inputs:
versionSpec: '8.x'

- script: |
npm install
npm test

- task: PublishTestResults@2
inputs:
testResultsFiles: '**/TEST-RESULTS.xml'
testRunTitle: 'Test results for JavaScript'

- task: PublishCodeCoverageResults@1
inputs:
codeCoverageTool: Cobertura
summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/*coverage.xml'
reportDirectory: '$(System.DefaultWorkingDirectory)/**/coverage'

- task: ArchiveFiles@2
inputs:
rootFolderOrFile: '$(System.DefaultWorkingDirectory)'
includeRootFolder: false

- task: PublishBuildArtifacts@1

At the bottom of the GitHub editor, select Commit changes.


Each job in this example runs on a different VM image. By default, the jobs run at the same time in parallel.
Note: script runs in each platform's native script interpreter: Bash on macOS and Linux, CMD on Windows. See
multi-platform scripts to learn more.

Create the pipeline


Now that you've configured your GitHub repo with a pipeline, you're ready to build it.
1. Sign in to your Azure DevOps organization and navigate to your project.
2. In your project, go to the Pipelines page, and then select New pipeline.
3. Select GitHub as the location of your source code.
4. For Repository, select Authorize and then Authorize with OAuth.
5. You might be redirected to GitHub to sign in. If this happens, then enter your GitHub credentials. After you're redirected back to Azure Pipelines, select the sample app repository.
6. For the Template, Azure Pipelines analyzes the code in your repository. If your repository already contains an azure-pipelines.yml file (as in this case), this step is skipped. Otherwise, Azure Pipelines recommends a starter template based on the code in your repository.
7. Azure Pipelines shows you the YAML file that it will use to create your pipeline.
8. Select Save and run, and then select the option to Commit directly to the master branch.
9. The YAML file is pushed to your GitHub repository, and a new build starts automatically. Wait for the build to finish.

FAQ
Can I build my multi-platform pipeline on both self-hosted and Microsoft-hosted agents?
Yes. You need to specify both a vmImage and a pool name variable, as in the following example. For the hosted
agent, specify Azure Pipelines as the pool name; for self-hosted agents, leave the vmImage blank. The blank
vmImage for the self-hosted agent may result in some unusual entries in the logs, but they won't affect the pipeline.

strategy:
matrix:
microsofthosted:
poolName: Azure Pipelines
vmImage: ubuntu-latest

selfhosted:
poolName: FabrikamPool
vmImage:

pool:
name: $(poolName)
vmImage: $(vmImage)

steps:
- checkout: none
- script: echo test
Next steps
You've just learned the basics of using multiple platforms with Azure Pipelines. From here, you can learn more
about:
Jobs
Cross-platform scripting
Templates to remove the duplication
Building Node.js apps
Building .NET Core, Go, Java, or Python apps
For details about building GitHub repositories, see Build GitHub repositories.
Service containers
11/2/2020 • 4 minutes to read

Azure Pipelines
If your pipeline requires the support of one or more services, in many cases you'll want to create, connect to, and
clean up each service on a per-job basis. For instance, a pipeline may run integration tests that require access to a
database and a memory cache. The database and memory cache need to be freshly created for each job in the
pipeline.
A container provides a simple and portable way to run a service that your pipeline depends on. A service container
enables you to automatically create, network, and manage the lifecycle of your containerized service. Each service
container is accessible by only the job that requires it. Service containers work with any kind of job, but they're
most commonly used with container jobs.

Requirements
Service containers must define a CMD or ENTRYPOINT . The pipeline will docker run the provided container without
additional arguments.
Azure Pipelines can run Linux or Windows Containers. Use either hosted Ubuntu for Linux containers, or the
Hosted Windows Container pool for Windows containers. (The Hosted macOS pool does not support running
containers.)
YAML
Classic

Single container job


A simple example of using container jobs:

resources:
containers:
- container: my_container
image: ubuntu:16.04
- container: nginx
image: nginx
- container: redis
image: redis

pool:
vmImage: 'ubuntu-16.04'

container: my_container

services:
nginx: nginx
redis: redis

steps:
- script: |
apt install -y curl
curl nginx
apt install redis-tools
redis-cli -h redis ping
This pipeline fetches the latest nginx and redis containers from Docker Hub and then starts the containers. The
containers are networked together so that they can reach each other by their service name. The pipeline then
runs the apt, curl, and redis-cli commands inside the ubuntu:16.04 container. From inside this job container,
the nginx and redis host names resolve to the correct services using Docker networking. All containers on the
network automatically expose all ports to each other.

Single job
You can also use service containers without a job container. A simple example:

resources:
containers:
- container: nginx
image: nginx
ports:
- 8080:80
env:
NGINX_PORT: 80
- container: redis
image: redis
ports:
- 6379

pool:
vmImage: 'ubuntu-16.04'

services:
nginx: nginx
redis: redis

steps:
- script: |
curl localhost:8080
redis-cli -p "${AGENT_SERVICES_REDIS_PORTS_6379}" ping

This pipeline starts the latest nginx and redis containers, and then publishes the specified ports to the host.
Since the job is not running in a container, there's no automatic name resolution. This example shows how you can
instead reach services by using localhost. In the above example, we provide the port explicitly (for example, 8080:80).

An alternative approach is to let a random port get assigned dynamically at runtime. You can then access these
dynamic ports by using variables. In a Bash script, you can access a variable by using the process environment.
These variables take the form: agent.services.<serviceName>.ports.<port> . In the above example, redis is
assigned a random available port on the host. The agent.services.redis.ports.6379 variable contains the port
number.

Multiple jobs
Service containers are also useful for running the same steps against multiple versions of the same service. In the
following example, the same steps run against multiple versions of PostgreSQL.
resources:
containers:
- container: my_container
image: ubuntu:16.04
- container: pg11
image: postgres:11
- container: pg10
image: postgres:10

pool:
vmImage: 'ubuntu-16.04'

strategy:
matrix:
postgres11:
postgresService: pg11
postgres10:
postgresService: pg10

container: my_container

services:
postgres: $[ variables['postgresService'] ]
steps:
- script: |
apt install -y postgresql-client
psql --host=postgres --username=postgres --command="SELECT 1;"

Ports
When specifying a container resource or an inline container, you can specify an array of ports to expose on the
container.

resources:
containers:
- container: my_service
image: my_service:latest
ports:
- 8080:80
- 5432

services:
redis:
image: redis
ports:
- 6379/tcp

Specifying ports is not required if your job is running in a container because containers on the same Docker
network automatically expose all ports to each other by default.
If your job is running on the host, then ports are required to access the service. A port takes the form
<hostPort>:<containerPort> or just <containerPort>, with an optional /<protocol> at the end, for example
6379/tcp to expose TCP over port 6379, bound to a random port on the host machine.

For ports bound to a random port on the host machine, the pipeline creates a variable of the form
agent.services.<serviceName>.ports.<port> so that it can be accessed by the job. For example,
agent.services.redis.ports.6379 resolves to the randomly assigned port on the host machine.

Volumes
Volumes are useful for sharing data between services, or for persisting data between multiple runs of a job.
You can specify volume mounts as an array of volumes . Volumes can either be named Docker volumes,
anonymous Docker volumes, or bind mounts on the host.

services:
my_service:
image: myservice:latest
volumes:
- mydockervolume:/data/dir
- /data/dir
- /src/dir:/dst/dir

Volumes take the form <source>:<destinationPath> , where <source> can be a named volume or an absolute path
on the host machine, and <destinationPath> is an absolute path in the container.

NOTE
If you use our hosted pools, then your volumes will not be persisted between jobs because the host machine is cleaned up
after the job is completed.

Other options
Service containers share the same container resources as container jobs. This means that you can use the same
additional options.

Healthcheck
Optionally, if any service container specifies a HEALTHCHECK, the agent waits until the container is healthy before
running the job.
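As a minimal sketch of how this plays out (the resource and image names below are placeholders for any image whose Dockerfile declares a HEALTHCHECK), the job's steps start only once Docker reports the service container as healthy:

resources:
  containers:
  - container: healthy_db                     # hypothetical resource name
    image: contoso/postgres-with-healthcheck  # assumed image whose Dockerfile defines HEALTHCHECK

pool:
  vmImage: 'ubuntu-16.04'

services:
  db: healthy_db

steps:
- script: echo "This step runs only after the db service container reports healthy"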
Run cross-platform scripts
11/2/2020 • 4 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
With Azure Pipelines and Team Foundation Server (TFS), you can run your builds on macOS, Linux, and Windows.
If you develop on cross-platform technologies such as Node.js and Python, these capabilities bring benefits, and
also some challenges. For example, most pipelines include one or more scripts that you want to run during the
build process. But scripts often don't run the same way on different platforms. Below are some tips on how to
handle this kind of challenge.

Run cross-platform tools with a script step


Some scripts just pass arguments to a cross-platform tool. For instance, calling npm with a set of arguments can
be easily accomplished with a script step. script runs in each platform's native script interpreter: Bash on
macOS and Linux, CMD on Windows.
YAML
Classic

steps:
- script: |
npm install
npm test

Handle environment variables


Environment variables throw the first wrinkle into writing cross-platform scripts. Command line, PowerShell, and
Bash each have different ways of reading environment variables. If you need to access an operating system-
provided value like PATH, you'll need different techniques per platform.
However, Azure Pipelines offers a cross-platform way to refer to variables that it knows about. By surrounding a
variable name in $( ) , it will be expanded before the platform's shell ever sees it. For instance, if you want to
echo out the ID of the pipeline, the following script is cross-platform friendly:
YAML
Classic

steps:
- script: echo This is pipeline $(System.DefinitionId)

This also works for variables you specify in the pipeline.

variables:
Example: 'myValue'

steps:
- script: echo The value passed in is $(Example)
Consider Bash or pwsh
If you have more complex scripting needs than the examples shown above, then consider writing them in Bash.
Most macOS and Linux agents have Bash as an available shell, and Windows agents include Git Bash or Windows
Subsystem for Linux Bash.
For Azure Pipelines, the Microsoft-hosted agents always have Bash available.
For example, if you need to make a decision based on whether this is a pull request build:
YAML
Classic

trigger:
batch: true
branches:
include:
- master
steps:
- bash: |
echo "Hello world from $AGENT_NAME running on $AGENT_OS"
case $BUILD_REASON in
"Manual") echo "$BUILD_REQUESTEDFOR manually queued the build." ;;
"IndividualCI") echo "This is a CI build for $BUILD_REQUESTEDFOR." ;;
"BatchedCI") echo "This is a batched CI build for $BUILD_REQUESTEDFOR." ;;
*) echo "$BUILD_REASON" ;;
esac
displayName: Hello world

PowerShell Core ( pwsh ) is also an option. It requires each agent to have PowerShell Core installed.
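As a minimal sketch, the following step runs the same PowerShell Core snippet on Windows, macOS, and Linux agents:

steps:
- pwsh: |
    # Runs in PowerShell Core (pwsh) on any platform where it is installed
    Write-Host "Hello from $env:AGENT_NAME running on $env:AGENT_OS"
  displayName: Hello from pwsh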

Switch based on platform


In general we recommend that you avoid platform-specific scripts to avoid problems such as duplication of your
pipeline logic. Duplication causes extra work and extra risk of bugs. However, if there's no way to avoid platform-
specific scripting, then you can use a condition to detect what platform you're on.
For example, suppose that for some reason you need the IP address of the build agent. On Windows, ipconfig
gets that information. On macOS, it's ifconfig. And on Ubuntu Linux, it's ip addr.
Set up the below pipeline, then try running it against agents on different platforms.
YAML
Classic
steps:
# Linux
- bash: |
export IPADDR=$(ip addr | grep 'state UP' -A2 | tail -n1 | awk '{print $2}' | cut -f1 -d'/')
echo "##vso[task.setvariable variable=IP_ADDR]$IPADDR"
condition: eq( variables['Agent.OS'], 'Linux' )
displayName: Get IP on Linux
# macOS
- bash: |
export IPADDR=$(ifconfig | grep 'en0' -A3 | grep inet | tail -n1 | awk '{print $2}')
echo "##vso[task.setvariable variable=IP_ADDR]$IPADDR"
condition: eq( variables['Agent.OS'], 'Darwin' )
displayName: Get IP on macOS
# Windows
- powershell: |
Set-Variable -Name IPADDR -Value ((Get-NetIPAddress | ?{ $_.AddressFamily -eq "IPv4" -and !($_.IPAddress
-match "169") -and !($_.IPaddress -match "127") } | Select-Object -First 1).IPAddress)
Write-Host "##vso[task.setvariable variable=IP_ADDR]$IPADDR"
condition: eq( variables['Agent.OS'], 'Windows_NT' )
displayName: Get IP on Windows

# now we use the value, no matter where we got it


- script: |
echo The IP address is $(IP_ADDR)
Use a PowerShell script to customize your build
pipeline
11/2/2020 • 4 minutes to read

Azure Pipelines | TFS 2018 | TFS 2017 | TFS 2015 | Previous versions (XAML builds)

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

When you are ready to move beyond the basics of compiling and testing your code, use a PowerShell script to add
your team's business logic to your build pipeline.
You can run Windows PowerShell on a Windows build agent. PowerShell Core runs on any platform.
1. Push your script into your repo.
2. Add a pwsh or powershell step:

# for PowerShell Core


steps:
- pwsh: ./my-script.ps1

# for Windows PowerShell


steps:
- powershell: .\my-script.ps1

You can run Windows PowerShell Script on a Windows build agent.


1. Push your script into your repo.
2. Add a PowerShell build task.
3. Drag the build task where you want it to run.
4. Specify the name of the script.

Example: Version your assemblies


For example, to version your assemblies, copy and upload this script to your project:

# Look for a 0.0.0.0 pattern in the build number.


# If found use it to version the assemblies.
#
# For example, if the 'Build number format' build pipeline parameter
# $(BuildDefinitionName)_$(Year:yyyy).$(Month).$(DayOfMonth)$(Rev:.r)
# then your build numbers come out like this:
# "Build HelloWorld_2013.07.19.1"
# This script would then apply version 2013.07.19.1 to your assemblies.

# Enable -Verbose option


[CmdletBinding()]
param() # empty param block so the CmdletBinding attribute (and -Verbose) is honored

# Regular expression pattern to find the version in the build number


# and then apply it to the assemblies
$VersionRegex = "\d+\.\d+\.\d+\.\d+"

# If this script is not running on a build server, remind user to


# set environment variables so that this script can be debugged
if(-not ($Env:BUILD_SOURCESDIRECTORY -and $Env:BUILD_BUILDNUMBER))
{
Write-Error "You must set the following environment variables"
Write-Error "to test this script interactively."
Write-Host '$Env:BUILD_SOURCESDIRECTORY - For example, enter something like:'
Write-Host '$Env:BUILD_SOURCESDIRECTORY = "C:\code\FabrikamTFVC\HelloWorld"'
Write-Host '$Env:BUILD_BUILDNUMBER - For example, enter something like:'
Write-Host '$Env:BUILD_BUILDNUMBER = "Build HelloWorld_0000.00.00.0"'
exit 1
}

# Make sure path to source code directory is available


if (-not $Env:BUILD_SOURCESDIRECTORY)
{
Write-Error ("BUILD_SOURCESDIRECTORY environment variable is missing.")
exit 1
}
elseif (-not (Test-Path $Env:BUILD_SOURCESDIRECTORY))
{
Write-Error "BUILD_SOURCESDIRECTORY does not exist: $Env:BUILD_SOURCESDIRECTORY"
exit 1
}
Write-Verbose "BUILD_SOURCESDIRECTORY: $Env:BUILD_SOURCESDIRECTORY"
Write-Verbose "BUILD_SOURCESDIRECTORY: $Env:BUILD_SOURCESDIRECTORY"

# Make sure there is a build number


if (-not $Env:BUILD_BUILDNUMBER)
{
Write-Error ("BUILD_BUILDNUMBER environment variable is missing.")
exit 1
}
Write-Verbose "BUILD_BUILDNUMBER: $Env:BUILD_BUILDNUMBER"

# Get and validate the version data


$VersionData = [regex]::matches($Env:BUILD_BUILDNUMBER,$VersionRegex)
switch($VersionData.Count)
{
0
{
Write-Error "Could not find version number data in BUILD_BUILDNUMBER."
exit 1
}
1 {}
default
{
Write-Warning "Found more than instance of version data in BUILD_BUILDNUMBER."
Write-Warning "Will assume first instance is version."
}
}
$NewVersion = $VersionData[0]
Write-Verbose "Version: $NewVersion"

# Apply the version to the assembly property files


$files = gci $Env:BUILD_SOURCESDIRECTORY -recurse -include "*Properties*","My Project" |
?{ $_.PSIsContainer } |
foreach { gci -Path $_.FullName -Recurse -include AssemblyInfo.* }
if($files)
{
Write-Verbose "Will apply $NewVersion to $($files.count) files."

foreach ($file in $files) {


$filecontent = Get-Content($file)
attrib $file -r
$filecontent -replace $VersionRegex, $NewVersion | Out-File $file
Write-Verbose "$file.FullName - version applied"
}
}
else
{
Write-Warning "Found no files."
}

Add the build task to your build pipeline.


Specify your build number with something like this:

$(BuildDefinitionName)_$(Year:yyyy).$(Month).$(DayOfMonth)$(Rev:.r)
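In a YAML pipeline, a minimal sketch of wiring this together might look like the following; the script file name ApplyVersionToAssemblies.ps1 is a placeholder for wherever you committed the script above:

name: $(Build.DefinitionName)_$(Year:yyyy).$(Month).$(DayOfMonth)$(Rev:.r)

steps:
- powershell: .\ApplyVersionToAssemblies.ps1   # hypothetical path to the script shown above
  displayName: Apply version to assemblies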

Use the OAuth token to access the REST API


YAML
Classic
You can use $env:SYSTEM_ACCESSTOKEN in your script in a YAML pipeline to access the OAuth token.

- task: PowerShell@2
inputs:
targetType: inline
script: |
$url = "$($env:SYSTEM_TEAMFOUNDATIONCOLLECTIONURI)$env:SYSTEM_TEAMPROJECTID/_apis/build/definitions/$($env:SYSTEM_DEFINITIONID)?api-version=5.0"
Write-Host "URL: $url"
$pipeline = Invoke-RestMethod -Uri $url -Headers @{
Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN"
}
Write-Host "Pipeline = $($pipeline | ConvertTo-Json -Depth 100)"

FAQ
What variables are available for me to use in my scripts?
Use variables
How do I set a variable so that it can be read by subsequent scripts and tasks?
Define and modify your build variables in a script
Define and modify your release variables in a script
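As a minimal sketch of the approach those articles describe, a script writes the ##vso[task.setvariable] logging command to standard output, and later steps read the variable; myBuildConfig is a hypothetical variable name:

steps:
- powershell: Write-Host "##vso[task.setvariable variable=myBuildConfig]Release"
  displayName: Set a variable from a script
- script: echo The value set by the previous step is $(myBuildConfig)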
Which branch of the script does the build run?
The build runs the script from the same branch of the code you are building.
What kinds of parameters can I use?
You can use named parameters. Other kinds of parameters, such as switch parameters, are not yet supported and
will cause errors.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Run Git commands in a script
11/2/2020 • 4 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

For some workflows you need your build pipeline to run Git commands. For example, after a CI build on a feature
branch is done, the team might want to merge the branch to master.
Git is available on Microsoft-hosted agents and on on-premises agents.

Enable scripts to run Git commands


NOTE
Before you begin, be sure your account's default identity is set with:

git config --global user.email "[email protected]"


git config --global user.name "Your Name"

Grant version control permissions to the build service


Go to the Version Control control panel tab ▼
Azure Repos: https://dev.azure.com/{your-organization}/{your-project}/_admin/_versioncontrol
On-premises: https://{your-server}:8080/tfs/DefaultCollection/{your-project}/_admin/_versioncontrol

If you see this page, select the repo, and then click the link:
On the Version Control tab, select the repository in which you want to run Git commands, and then select Project
Collection Build Service. By default, this identity can read from the repo but cannot push any changes back to it.

Grant permissions needed for the Git commands you want to run. Typically you'll want to grant:
Create branch: Allow
Contribute: Allow
Read: Allow
Create tag: Allow
When you're done granting the permissions, make sure to click Save changes .
Enable your pipeline to run command-line Git
On the variables tab, set this variable:

Name: system.prefergit
Value: true
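If your pipeline is defined in YAML, the same setting can be supplied as a pipeline variable; this is a minimal sketch, assuming the variable behaves the same way when set in YAML as it does on the Variables tab:

variables:
  system.prefergit: 'true'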

Allow scripts to access the system token


YAML
Classic
Add a checkout section with persistCredentials set to true .

steps:
- checkout: self
persistCredentials: true

Learn more about checkout.


On the options tab, select Allow scripts to access OAuth token.

Make sure to clean up the local repo


Certain kinds of changes to the local repository are not automatically cleaned up by the build pipeline. So make
sure to:
Delete local branches you create.
Undo git config changes.
If you run into problems using an on-premises agent, make sure the repo is clean:
YAML
Classic
Make sure checkout has clean set to true .

steps:
- checkout: self
clean: true

On the repository tab, set Clean to true.


On the variables tab, create or modify the Build.Clean variable and set it to source.

Examples
List the files in your repo
Make sure to follow the above steps to enable Git.
On the build tab add this task:

Task: Utility: Command Line
  Tool: git
  Arguments: ls-files

This task lists the files in the Git repo.
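In a YAML pipeline, an equivalent sketch is a plain script step that calls Git directly:

steps:
- script: git ls-files
  displayName: List the files in the repo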
Merge a feature branch to master
You want a CI build to merge to master if the build succeeds.
Make sure to follow the above steps to enable Git.
On the Triggers tab select Continuous integration (CI) and include the branches you want to build.
Create merge.bat at the root of your repo:

@echo off
ECHO SOURCE BRANCH IS %BUILD_SOURCEBRANCH%
IF %BUILD_SOURCEBRANCH% == refs/heads/master (
ECHO Building master branch so no merge is needed.
EXIT
)
SET sourceBranch=origin/%BUILD_SOURCEBRANCH:refs/heads/=%
ECHO GIT CHECKOUT MASTER
git checkout master
ECHO GIT STATUS
git status
ECHO GIT MERGE
git merge %sourceBranch% -m "Merge to master"
ECHO GIT STATUS
git status
ECHO GIT PUSH
git push origin
ECHO GIT STATUS
git status

On the build tab add this as the last task:

Task: Utility: Batch Script
  Path: merge.bat

This task runs merge.bat.
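In a YAML pipeline, a minimal sketch of the same flow persists credentials during checkout and runs the batch file as the last step; it assumes merge.bat sits at the repository root as above and that the job runs on a Windows agent:

steps:
- checkout: self
  persistCredentials: true
# ... your build and test steps go here ...
- script: merge.bat
  displayName: Merge the feature branch to master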

FAQ
Can I run Git commands if my remote repo is in GitHub or another Git service such as Bitbucket Cloud?
Yes
Which tasks can I use to run Git commands?
Batch Script
Command Line
PowerShell
Shell Script
How do I avoid triggering a CI build when the script pushes?
Add ***NO_CI*** to your commit message. Here are examples:
git commit -m "This is a commit message ***NO_CI***"
git merge origin/features/hello-world -m "Merge to master ***NO_CI***"

Add [skip ci] to your commit message or description. Here are examples:
git commit -m "This is a commit message [skip ci]"
git merge origin/features/hello-world -m "Merge to master [skip ci]"

You can also use any of the variations below. This is supported for commits to Azure Repos Git, Bitbucket Cloud,
GitHub, and GitHub Enterprise Server.
[skip ci] or [ci skip]
skip-checks: true or skip-checks:true
[skip azurepipelines] or [azurepipelines skip]
[skip azpipelines] or [azpipelines skip]
[skip azp] or [azp skip]
***NO_CI***

How does enabling scripts to run Git commands affect how the build pipeline gets build sources?
When you set system.prefergit to true , the build pipeline uses command-line Git instead of LibGit2Sharp to
clone or fetch the source files.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Pipeline caching
11/2/2020 • 14 minutes to read

Pipeline caching can help reduce build time by allowing the outputs or downloaded dependencies from one run to
be reused in later runs, thereby reducing or avoiding the cost to recreate or redownload the same files again.
Caching is especially useful in scenarios where the same dependencies are downloaded over and over at the start
of each run. This is often a time consuming process involving hundreds or thousands of network calls.
Caching can be effective at improving build time provided the time to restore and save the cache is less than the
time to produce the output again from scratch. Because of this, caching may not be effective in all scenarios and
may actually have a negative impact on build time.
Caching is currently supported in CI and deployment jobs, but not classic release jobs.
When to use artifacts versus caching
Pipeline caching and pipeline artifacts perform similar functions but are designed for different scenarios and
should not be used interchangeably. In general:
Use pipeline ar tifacts when you need to take specific files produced in one job and share them with other
jobs (and these other jobs will likely fail without them).
Use pipeline caching when you want to improve build time by reusing files from previous runs (and not
having these files will not impact the job's ability to run).

Using the Cache task


Caching is added to a pipeline using the Cache pipeline task. This task works like any other task and is added to
the steps section of a job.
When a cache step is encountered during a run, the task will restore the cache based on the provided inputs. If no
cache is found, the step completes and the next step in the job is run. After all steps in the job have run and
assuming a successful job status, a special "save cache" step is run for each "restore cache" step that was not
skipped. This step is responsible for saving the cache.

NOTE
Caches are immutable, meaning that once a cache is created, its contents cannot be changed. See Can I clear a cache? in the
FAQ section for additional details.

Configuring the task


The Cache task has two required inputs: key and path .
Path input
path should be set to the directory to populate the cache from (on save) and to store files in (on restore). It can be
absolute or relative. Relative paths are resolved against $(System.DefaultWorkingDirectory) .
Key input
key should be set to the identifier for the cache you want to restore or save. Keys are composed of a combination
of string values, file paths, or file patterns, where each segment is separated by a | character.
Strings :
fixed value (like the name of the cache or a tool name) or taken from an environment variable (like the
current OS or current job name)
File paths :
path to a specific file whose contents will be hashed. This file must exist at the time the task is run. Keep in
mind that any key segment that "looks like a file path" will be treated like a file path. In particular, this
includes segments containing a "." character. This could result in the task failing when this "file" does not exist.

TIP
To avoid a path-like string segment from being treated like a file path, wrap it with double quotes, for example:
"my.key" | $(Agent.OS) | key.file

File patterns :
comma-separated list of glob-style wildcard pattern that must match at least one file. For example:
**/yarn.lock : all yarn.lock files under the sources directory
*/asset.json, !bin/** : all asset.json files located in a directory under the sources directory, except under
the bin directory
The contents of any file identified by a file path or file pattern is hashed to produce a dynamic cache key. This is
useful when your project has file(s) that uniquely identify what is being cached. For example, files like
package-lock.json , yarn.lock , Gemfile.lock , or Pipfile.lock are commonly referenced in a cache key since they
all represent a unique set of dependencies.
Relative file paths or file patterns are resolved against $(System.DefaultWorkingDirectory) .
Example :
Here is an example showing how to cache dependencies installed by Yarn:

variables:
YARN_CACHE_FOLDER: $(Pipeline.Workspace)/.yarn

steps:
- task: Cache@2
inputs:
key: 'yarn | "$(Agent.OS)" | yarn.lock'
restoreKeys: |
yarn | "$(Agent.OS)"
yarn
path: $(YARN_CACHE_FOLDER)
displayName: Cache Yarn packages

- script: yarn --frozen-lockfile

In this example, the cache key contains three parts: a static string ("yarn"), the OS the job is running on since this
cache is unique per operating system, and the hash of the yarn.lock file that uniquely identifies the set of
dependencies in the cache.
On the first run after the task is added, the cache step will report a "cache miss" since the cache identified by this
key does not exist. After the last step, a cache will be created from the files in $(Pipeline.Workspace)/.yarn and
uploaded. On the next run, the cache step will report a "cache hit" and the contents of the cache will be downloaded
and restored.
Restore keys
restoreKeys can be used if you want to query against multiple exact keys or key prefixes. It is used to fall back
to another key when the primary key does not yield a hit. A restore key searches for a key by prefix and yields the
latest created cache entry as a result. This is useful if the pipeline is unable to find an exact match but wants to use
a partial cache hit instead. To insert multiple restore keys, delimit them with a new line (see the example for more
details). Restore keys are tried in order, from top to bottom.
Required software on self-hosted agent

Archive software / Platform | Windows     | Linux    | Mac

GNU Tar                     | Required    | Required | No

BSD Tar                     | No          | No       | Required

7-Zip                       | Recommended | No       | No

The above executables need to be in a folder listed in the PATH environment variable. Please note that the hosted
agents come with the software included, this is only applicable for self-hosted agents.
Example :
Here is an example on how to use restore keys by Yarn:

variables:
YARN_CACHE_FOLDER: $(Pipeline.Workspace)/.yarn

steps:
- task: Cache@2
inputs:
key: yarn | $(Agent.OS) | yarn.lock
path: $(YARN_CACHE_FOLDER)
restoreKeys: |
yarn | $(Agent.OS)
yarn
displayName: Cache Yarn packages

- script: yarn --frozen-lockfile

In this example, the cache task attempts to find the key in the cache. If the key does not exist in the cache,
it tries the first restore key, yarn | $(Agent.OS). This searches for all keys that either exactly match that key or
have that key as a prefix. A prefix hit can happen if there was a different yarn.lock hash segment. For example, if
the key yarn | $(Agent.OS) | old-yarn.lock was in the cache, where the old yarn.lock yielded a different hash than
yarn.lock, the restore key yields a partial hit. If there is a miss on the first restore key, the task then uses the next
restore key, yarn, which tries to find any key that starts with yarn. For prefix hits, the most recently created cache
entry is returned.

NOTE
A pipeline can have one or more caching task(s). There is no limit on the caching storage capacity, and jobs and tasks from
the same pipeline can access and share the same cache.

Cache isolation and security


To ensure isolation between caches from different pipelines and different branches, every cache belongs to a
logical container called a scope. Scopes provide a security boundary that ensures a job from one pipeline cannot
access the caches from a different pipeline, and a job building a PR has read access to the caches for the PR's target
branch (for the same pipeline), but cannot write (create) caches in the target branch's scope.
When a cache step is encountered during a run, the cache identified by the key is requested from the server. The
server then looks for a cache with this key from the scopes visible to the job, and returns the cache (if available). On
cache save (at the end of the job), a cache is written to the scope representing the pipeline and branch. See below
for more details.
CI, manual, and scheduled runs
Scope                                        | Read | Write

Source branch                                | Yes  | Yes

main branch                                  | Yes  | No

Pull request runs

Scope                                        | Read | Write

Source branch                                | Yes  | No

Target branch                                | Yes  | No

Intermediate branch (e.g. refs/pull/1/merge) | Yes  | Yes

main branch                                  | Yes  | No

Pull request fork runs

Branch                                       | Read | Write

Target branch                                | Yes  | No

Intermediate branch (e.g. refs/pull/1/merge) | Yes  | Yes

main branch                                  | Yes  | No

TIP
Because caches are already scoped to a project, pipeline, and branch, there is no need to include any project, pipeline, or
branch identifiers in the cache key.

Conditioning on cache restoration


In some scenarios, the successful restoration of the cache should cause a different set of steps to be run. For
example, a step that installs dependencies can be skipped if the cache was restored. This is possible using the
cacheHitVar task input. Setting this input to the name of an environment variable causes the variable to be set
to true on an exact cache hit, inexact on a restore key (partial) cache hit, and false otherwise. This
variable can then be referenced in a step condition or from within a script.
In the following example, the install-deps.sh step is skipped when the cache is restored:
steps:
- task: Cache@2
inputs:
key: mykey | mylockfile
restoreKeys: mykey
path: $(Pipeline.Workspace)/mycache
cacheHitVar: CACHE_RESTORED

- script: install-deps.sh
condition: ne(variables.CACHE_RESTORED, 'true')

- script: build.sh

Bundler
For Ruby projects using Bundler, override the BUNDLE_PATH environment variable used by Bundler to set the path
Bundler will look for Gems in.
Example :

variables:
BUNDLE_PATH: $(Pipeline.Workspace)/.bundle

steps:
- task: Cache@2
inputs:
key: 'gems | "$(Agent.OS)" | my.gemspec'
restoreKeys: |
gems | "$(Agent.OS)"
gems
path: $(BUNDLE_PATH)
displayName: Cache gems

- script: bundle install

ccache (C/C++)
ccache is a compiler cache for C/C++. To use ccache in your pipeline make sure ccache is installed, and optionally
added to your PATH (see ccache run modes). Set the CCACHE_DIR environment variable to a path under
$(Pipeline.Workspace) and cache this directory.

Example :

variables:
CCACHE_DIR: $(Pipeline.Workspace)/ccache

steps:
- bash: |
sudo apt-get install ccache -y
echo "##vso[task.prependpath]/usr/lib/ccache"
displayName: Install ccache and update PATH to use linked versions of gcc, cc, etc

- task: Cache@2
inputs:
key: 'ccache | "$(Agent.OS)"'
path: $(CCACHE_DIR)
displayName: ccache
NOTE
In this example, the key is a fixed value (the OS name) and because caches are immutable, once a cache with this key is
created for a particular scope (branch), the cache cannot be updated. This means subsequent builds for the same branch will
not be able to update the cache even if the cache's contents have changed. This problem will be addressed in an upcoming
feature: 10842: Enable fallback keys in Pipeline Caching

See ccache configuration settings for more options, including settings to control compression level.

Gradle
Using Gradle's built-in caching support can have a significant impact on build time. To enable, set the
GRADLE_USER_HOME environment variable to a path under $(Pipeline.Workspace) and either pass --build-cache on
the command line or set org.gradle.caching=true in your gradle.properties file.
Example :

variables:
GRADLE_USER_HOME: $(Pipeline.Workspace)/.gradle

steps:
- task: Cache@2
inputs:
key: 'gradle | "$(Agent.OS)"'
restoreKeys: gradle
path: $(GRADLE_USER_HOME)
displayName: Gradle build cache

- script: |
./gradlew --build-cache build
# stop the Gradle daemon to ensure no files are left open (impacting the save cache operation later)
./gradlew --stop
displayName: Build

NOTE
In this example, the key is a fixed value (the OS name) and because caches are immutable, once a cache with this key is
created for a particular scope (branch), the cache cannot be updated. This means subsequent builds for the same branch will
not be able to update the cache even if the cache's contents have changed. This problem will be addressed in an upcoming
feature: 10842: Enable fallback keys in Pipeline Caching.

Maven
Maven has a local repository where it stores downloads and built artifacts. To enable, set the maven.repo.local
option to a path under $(Pipeline.Workspace) and cache this folder.
Example :
variables:
MAVEN_CACHE_FOLDER: $(Pipeline.Workspace)/.m2/repository
MAVEN_OPTS: '-Dmaven.repo.local=$(MAVEN_CACHE_FOLDER)'

steps:
- task: Cache@2
inputs:
key: 'maven | "$(Agent.OS)" | **/pom.xml'
restoreKeys: |
maven | "$(Agent.OS)"
maven
path: $(MAVEN_CACHE_FOLDER)
displayName: Cache Maven local repo

- script: mvn install -B -e

If you are using a Maven task, make sure to also pass the MAVEN_OPTS variable because it gets overwritten
otherwise:

- task: Maven@3
inputs:
mavenPomFile: 'pom.xml'
mavenOptions: '-Xmx3072m $(MAVEN_OPTS)'

.NET/NuGet
If you use PackageReferences to manage NuGet dependencies directly within your project file and have
packages.lock.json file(s), you can enable caching by setting the NUGET_PACKAGES environment variable to a path
under $(Pipeline.Workspace) and caching this directory.
Example :

variables:
NUGET_PACKAGES: $(Pipeline.Workspace)/.nuget/packages

steps:
- task: Cache@2
inputs:
key: 'nuget | "$(Agent.OS)" | **/packages.lock.json,!**/bin/**'
restoreKeys: |
nuget | "$(Agent.OS)"
path: $(NUGET_PACKAGES)
displayName: Cache NuGet packages

TIP
Environment variables always override any settings in the NuGet.Config file. If your pipeline fails with the error
Information, There is a cache miss., create a pipeline variable for NUGET_PACKAGES that points to the new local
path on the agent (for example, d:\a\1). Your pipeline should then pick up the changes and complete the task successfully.

See Package reference in project files for more details.

Node.js/npm
There are different ways to enable caching in a Node.js project, but the recommended way is to cache npm's shared
cache directory. This directory is managed by npm and contains a cached version of all downloaded modules.
During install, npm checks this directory first (by default) for modules which can reduce or eliminate network calls
to the public npm registry or to a private registry.
Because the default path to npm's shared cache directory is not the same across all platforms, it is recommended
to override the npm_config_cache environment variable to a path under $(Pipeline.Workspace) . This also ensures
the cache is accessible from container and non-container jobs.
Example :

variables:
npm_config_cache: $(Pipeline.Workspace)/.npm

steps:
- task: Cache@2
inputs:
key: 'npm | "$(Agent.OS)" | package-lock.json'
restoreKeys: |
npm | "$(Agent.OS)"
path: $(npm_config_cache)
displayName: Cache npm

- script: npm ci

If your project does not have a package-lock.json file, reference the package.json file in the cache key input
instead.

TIP
Because npm ci deletes the node_modules folder to ensure that a consistent, repeatable set of modules is used, you
should avoid caching node_modules when calling npm ci .

Node.js/Yarn
Like with npm, there are different ways to cache packages installed with Yarn. The recommended way is to cache
Yarn's shared cache folder. This directory is managed by Yarn and contains a cached version of all downloaded
packages. During install, Yarn checks this directory first (by default) for modules, which can reduce or eliminate
network calls to public or private registries.
Example :

variables:
YARN_CACHE_FOLDER: $(Pipeline.Workspace)/.yarn

steps:
- task: Cache@2
inputs:
key: 'yarn | "$(Agent.OS)" | yarn.lock'
restoreKeys: |
yarn | "$(Agent.OS)"
path: $(YARN_CACHE_FOLDER)
displayName: Cache Yarn packages

- script: yarn --frozen-lockfile

Python/pip
For Python projects that use pip or Poetry, override the PIP_CACHE_DIR environment variable. If you use Poetry, in
the key field, replace requirements.txt with poetry.lock .
Example

variables:
PIP_CACHE_DIR: $(Pipeline.Workspace)/.pip

steps:
- task: Cache@2
inputs:
key: 'python | "$(Agent.OS)" | requirements.txt'
restoreKeys: |
python | "$(Agent.OS)"
python
path: $(PIP_CACHE_DIR)
displayName: Cache pip packages

- script: pip install -r requirements.txt
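For a Poetry-based project, a minimal sketch of the same task swaps the lock file into the key, as noted above:

- task: Cache@2
  inputs:
    key: 'python | "$(Agent.OS)" | poetry.lock'
    restoreKeys: |
      python | "$(Agent.OS)"
      python
    path: $(PIP_CACHE_DIR)
  displayName: Cache pip packages (Poetry)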

Python/Pipenv
For Python projects that use Pipenv, override the PIPENV_CACHE_DIR environment variable.
Example

variables:
PIPENV_CACHE_DIR: $(Pipeline.Workspace)/.pipenv

steps:
- task: Cache@2
inputs:
key: 'python | "$(Agent.OS)" | Pipfile.lock'
restoreKeys: |
python | "$(Agent.OS)"
python
path: $(PIPENV_CACHE_DIR)
displayName: Cache pipenv packages

- script: pipenv install

PHP/Composer
For PHP projects using Composer, override the COMPOSER_CACHE_DIR environment variable used by Composer.
Example :

variables:
COMPOSER_CACHE_DIR: $(Pipeline.Workspace)/.composer

steps:
- task: Cache@2
inputs:
key: 'composer | "$(Agent.OS)" | composer.lock'
restoreKeys: |
composer | "$(Agent.OS)"
composer
path: $(COMPOSER_CACHE_DIR)
displayName: Cache composer

- script: composer install


Docker images
Caching docker images will dramatically reduce the time it takes to run your pipeline.

pool:
  vmImage: ubuntu-16.04

steps:
- task: Cache@2
  inputs:
    key: 'docker | "$(Agent.OS)" | caching-docker.yml'
    path: $(Pipeline.Workspace)/docker
    cacheHitVar: DOCKER_CACHE_RESTORED
  displayName: Caching Docker image

- script: |
    docker load -i $(Pipeline.Workspace)/docker/cache.tar
  condition: and(not(canceled()), eq(variables.DOCKER_CACHE_RESTORED, 'true'))

- script: |
    mkdir -p $(Pipeline.Workspace)/docker
    docker pull ubuntu
    docker save ubuntu > $(Pipeline.Workspace)/docker/cache.tar
  condition: and(not(canceled()), or(failed(), ne(variables.DOCKER_CACHE_RESTORED, 'true')))

Known issues and feedback


If you experience problems enabling caching for your project, first check the list of pipeline caching issues in the
microsoft/azure-pipelines-tasks repo. If you don't see your issue listed, create a new issue.

FAQ
Can I clear a cache?
Clearing a cache is currently not supported. However, you can add a string literal (for example, version2) to your
existing cache key to change the key in a way that avoids any hits on existing caches. For example, change the
following cache key from this:

key: 'yarn | "$(Agent.OS)" | yarn.lock'

to this:

key: 'version2 | yarn | "$(Agent.OS)" | yarn.lock'

When does a cache expire?


A cache will expire after 7 days of no activity.
Is there a limit on the size of a cache?
There is no enforced limit on the size of individual caches or the total size of all caches in an organization.
Configure run or build numbers

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

You can customize how your pipeline runs are numbered. The default value for run number is
$(Date:yyyyMMdd).$(Rev:r) .

YAML
Classic
In YAML, this property is called name and is at the root level of a pipeline. If not specified, your run is given a
unique integer as its name. You can give runs much more useful names that are meaningful to your team. You can
use a combination of tokens, variables, and underscore characters.

name: $(TeamProject)_$(Build.DefinitionName)_$(SourceBranchName)_$(Date:yyyyMMdd)$(Rev:.r)

steps:
- script: echo $(Build.BuildNumber) # outputs customized build number like project_def_master_20200828.1

YAML builds are not yet available on TFS.

Example
At the time a run is started:
Project name: Fabrikam
Pipeline name: CIBuild
Branch: master
Build ID/Run ID: 752
Date: May 5, 2019.
Time: 9:07:03 PM.
One run completed earlier today.
If you specify this build number format:

$(TeamProject)_$(Build.DefinitionName)_$(SourceBranchName)_$(Date:yyyyMMdd)$(Rev:.r)

Then the second run on this day would be named: Fabrikam_CIBuild_master_20190505.2

Tokens
The following table shows how each token is resolved based on the previous example. You can use these tokens
only to define a run number; they don't work anywhere else in your pipeline.

TOKEN - EXAMPLE REPLACEMENT VALUE

$(Build.DefinitionName) - CIBuild
Note: The pipeline name must not contain invalid characters or whitespace.

$(BuildID) - 752
$(BuildID) is an internal immutable ID that is also referred to as the Run ID. It is unique across the organization.

$(DayOfMonth) - 5

$(DayOfYear) - 125

$(Hours) - 21

$(Minutes) - 7

$(Month) - 5

$(Rev:r) - 2 (The third run on this day will be 3, and so on.)
Use $(Rev:r) to ensure that every completed build has a unique name. When a build is completed, if nothing else in
the build number has changed, the Rev integer value is incremented by one.
If you want to show prefix zeros in the number, you can add additional 'r' characters. For example, specify $(Rev:rr)
if you want the Rev number to begin with 01, 02, and so on.

$(Date:yyyyMMdd) - 20190505
You can specify other date formats such as $(Date:MMddyy).

$(Seconds) - 3

$(SourceBranchName) - master

$(TeamProject) - Fabrikam

$(Year:yy) - 19

$(Year:yyyy) - 2019

Variables
You can also use user-defined and predefined variables that have a scope of "All" in your number. For example, if
you've defined My.Variable , you could specify the following number format:
$(Build.DefinitionName)_$(Build.DefinitionVersion)_$(Build.RequestedFor)_$(Build.BuildId)_$(My.Variable)

The first four variables are predefined. My.Variable is defined by you on the variables tab.
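
As a minimal sketch (assuming the variable is defined inline in YAML; in the classic editor you would set it on the Variables tab), that number format could be used like this:

variables:
  My.Variable: 'alpha'   # user-defined; use any value that is meaningful to your team

name: $(Build.DefinitionName)_$(Build.DefinitionVersion)_$(Build.RequestedFor)_$(Build.BuildId)_$(My.Variable)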

FAQ
How large can a run number be?
Runs may be up to 255 characters.
In what time zone are the build number time values expressed?
For Azure Pipelines, the time zone is UTC.
For Azure DevOps Server and TFS, the time zone is the same as the time zone of the operating system of the
machine where you are running your application tier server.
How can you reference the run number variable within a script?
The run number variable can be called with $(Build.BuildNumber) . You can define a new variable that includes the
run number or call the run number directly. In this example, $(MyRunNumber) is a new variable that includes the run
number.

# Set MyRunNumber
variables:
  MyRunNumber: '1.0.0-CI-$(Build.BuildNumber)'

steps:
- script: echo $(MyRunNumber) # display MyRunNumber
- script: echo $(Build.BuildNumber) # display the run number
Build options

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

Create a work item on failure


If the build pipeline fails, you can automatically create a work item to track getting the problem fixed. You can
specify the work item type.
You can also select if you want to assign the work item to the requestor. For example, if this is a CI build, and a
team member checks in some code that breaks the build, then the work item is assigned to that person.
Additional Fields: You can set the value of work item fields. For example:

FIELD - VALUE

System.Title - Build $(Build.BuildNumber) failed

System.Reason - Build failure

Q: What other work item fields can I set? A: Work item field index

Allow scripts to access the OAuth token


Select this check box if you want to enable your script to use the build pipeline OAuth token.
For an example, see Use a script to customize your build pipeline.
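
As a rough sketch of what a script can do once it has access to the token (shown here in YAML form; the REST call is illustrative only), the token is typically passed through an environment variable and used as a bearer token:

steps:
- script: |
    # Call the Azure DevOps REST API using the job's OAuth token (illustrative example).
    curl -s -H "Authorization: Bearer ${SYSTEM_ACCESSTOKEN}" \
      "$(System.CollectionUri)_apis/projects?api-version=5.1"
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)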

Default agent pool


TFS 2017.1 and older
This section is available under General tab.

Select the agent pool that contains the agents you want to run this pipeline.

TIP
If your code is in Azure Pipelines and you run your builds on Windows, in many cases the simplest option is to use the
Hosted pool.

Build job authorization scope


TFS 2017.1 and older
This section is available under General tab.

Specify the authorization scope for a build job. Select:


Project Collection if the build needs access to multiple projects.
Current Project if you want to restrict this build to have access to only the resources in the current project.
For more information, see Understand job access tokens.

Build (run) number


This documentation has moved to Build (run) number.
About pipeline tests

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
This article describes the terms commonly used in pipeline test reports and test analytics.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

Duration: Time elapsed in the execution of a test, test run, or entire test execution in a build or release pipeline.

Owner: Owner of a test or test run. The test owner is typically specified as an attribute in the test code. See the
Publish Test Results task to view the mapping of the Owner attribute for supported test result formats.

Failing build: Reference to the build having the first occurrence of consecutive failures of a test case.

Failing release: Reference to the release having the first occurrence of consecutive failures of a test case.

Outcome: There are 15 possible outcomes for a test result: Aborted, Blocked, Error, Failed, Inconclusive, In progress,
None, Not applicable, Not executed, Not impacted, Passed, Paused, Timeout, Unspecified, and Warning.
Some of the commonly used outcomes are:
- Aborted: Test execution terminated abruptly due to internal or external factors, e.g., bad code or environment issues.
- Failed: Test not meeting the desired outcome.
- Inconclusive: Test without a definitive outcome.
- Not executed: Test marked as skipped for execution.
- Not impacted: Test not impacted by the code change that triggered the pipeline.
- Passed: Test executed successfully.
- Timeout: Test execution duration exceeding the specified threshold.

Flaky test: A test with non-deterministic behavior. For example, the test may result in different outcomes for the
same configuration, code, or inputs.

Filter: Mechanism to search for the test results within the result set, using the available attributes. Learn more.

Grouping: An aid to organizing the test results view based on available attributes such as Requirement, Test files,
Priority, and more. Both test report and test analytics provide support for grouping test results.

Pass percentage: Measure of the success of test outcomes for a single instance of execution or over a period of time.

Priority: Specifies the degree of importance or criticality of a test. Priority is typically specified as an attribute in the
test code. See the Publish Test Results task to view the mapping of the Priority attribute for supported test result
formats.

Test analytics: A view of the historical test data to provide meaningful insights.

Test case: Uniquely identifies a single test within the specified branch.

Test files: Group tests based on the way they are packaged, such as files, DLLs, or other formats.

Test report: A view of a single instance of test execution in the pipeline that contains details of status and help for
troubleshooting, traceability, and more.

Test result: Single instance of execution of a test case with a specific outcome and details.

Test run: Logical grouping of test results based on:
- Tests executed using built-in tasks: All tests executed using a single task such as Visual Studio Test, Ant, Maven,
Gulp, Grunt, or Xcode are reported under a single test run.
- Results published using the Publish Test Results task: Provides an option to group all test results from one or
more test result files into a single run, or individual runs per file.
- Test results published using API(s): The APIs provide the flexibility to create test runs and organize test results for
each run as required.

Traceability: Ability to trace forward or backward to a requirement, bug, or source code from a test result.

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and get support via our Support page
Run tests in parallel using the Visual Studio Test task

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017

NOTE
For TFS, this topic applies to only TFS 2017 Update 1 and later.

Running tests to validate changes to code is key to maintaining quality. For continuous integration practice to be
successful, it is essential you have a good test suite that is run with every build. However, as the codebase grows,
the regression test suite tends to grow as well and running a full regression test can take a long time. Sometimes,
tests themselves may be long running - this is typically the case if you write end-to-end tests. This reduces the
speed with which customer value can be delivered as pipelines cannot process builds quickly enough.
Running tests in parallel is a great way to improve the efficiency of CI/CD pipelines. This can be done easily by
employing the additional capacity offered by the cloud. This article discusses how you can configure the Visual
Studio Test task to run tests in parallel by using multiple agents.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Pre-requisite
Familiarize yourself with the concepts of agents and jobs. To run multiple jobs in parallel, you must configure
multiple agents. You also need sufficient parallel jobs.

Test slicing
The Visual Studio Test task (version 2) is designed to work seamlessly with parallel job settings. When a pipeline job
that contains the Visual Studio Test task (referred to as the "VSTest task" for simplicity) is configured to run on
multiple agents in parallel, it automatically detects that multiple agents are involved and creates test slices that can
be run in parallel across these agents.
The task can be configured to create test slices to suit different requirements such as batching based on the
number of tests and agents, the previous test running times, or the location of tests in assemblies.

These options are explained in the following sections.


Simple slicing based on the number of tests and agents
This setting uses a simple slicing algorithm to divide up the number of tests 'T' across 'N' agents so that each agent
runs T/N tests. For example, if your test suite contains 1000 tests, and you use two agents for parallel jobs, each
agent will run 500 tests. Or you can reduce the amount of time taken to run the tests even further by using eight
agents, in which case each agent runs 125 tests in parallel.
This option is typically used when all tests have similar running times. If test running times are not similar, agents
may not be utilized effectively because some agents may receive slices with several long-running tests, while other
agents may receive slices with short-running tests and finish much earlier than the rest of the agents.
Slicing based on the past running time of tests
This setting considers past running times to create slices of tests so that each slice has approximately the same
running time. Short-running tests will be batched together, while long-running tests will be allocated to separate
slices.
This option should be used when tests within an assembly do not have dependencies, and do not need to run on
the same agent. This option results in the most efficient utilization of agents because every agent gets the same
amount of 'work' and all finish at approximately the same time.
Slicing based on test assemblies
This setting uses a simple slicing algorithm that divides up the number of test assemblies (or files) 'A' across 'N'
agents, so that each agent runs tests from A/N assemblies. The number of tests within an assembly is not taken
into account when using this option. For example, if your test suite contains ten test assemblies and you use two
agents for parallel jobs, each agent will receive five test assemblies to run. You can reduce the amount of time taken
to run the tests even further by using five agents, in which case each agent gets two test assemblies to run.
This option should be used when tests within an assembly have dependencies or utilize AssemblyInitialize and
AssemblyCleanup , or ClassInitialize and ClassCleanup methods, to manage state in your test code.

Run tests in parallel in build pipelines


If you have a large test suite or long-running integration tests to run in your build pipeline, use the following steps.

NOTE
To use the multi-agent capability in build pipelines with on-premises TFS server, you must use TFS 2018 Update 2 or a later
version.

1. Build job using a single agent . Build Visual Studio projects and publish build artifacts using the tasks
shown in the following image. This uses the default job settings (single agent, no parallel jobs).
2. Run tests in parallel using multiple agents :
Add an agent job

Configure the job to use multiple agents in parallel . The example here uses three agents.
TIP
For massively parallel testing, you can specify as many as 99 agents.

Add a Download Build Ar tifacts task to the job. This step is the link between the build job and the
test job, and is necessary to ensure that the binaries generated in the build job are available on the
agents used by the test job to run tests. Ensure that the task is set to download artifacts produced by
the 'Current build' and the artifact name is the same as the artifact name used in the Publish Build
Ar tifacts task in the build job.
Add the Visual Studio Test task and configure it to use the required slicing strategy.

Setting up jobs for parallel testing in YAML pipelines


Specify the parallel strategy in the job and indicate how many jobs should be dispatched. You can specify as
many as 99 agents to scale up testing for large test suites.

jobs:
- job: ParallelTesting
  strategy:
    parallel: 2

For more information, see YAML schema - Job.

Run tests in parallel in release pipelines


Use the following steps if you have a large test suite or long-running functional tests to run after deploying your
application. For example, you may want to deploy a web-application and run Selenium tests in a browser to
validate the app functionality.

NOTE
To use the multi-agent capability in release pipelines with on-premises TFS server, you must use TFS 2017 Update 1 or a later
version.
1. Deploy app using a single agent . Use the tasks shown in the image below to deploy a web app to Azure
App Services. This uses the default job settings (single agent, no parallel jobs).

2. Run tests in parallel using multiple agents :


Add an agent job

Configure the job to use multiple agents in parallel . The example here uses three agents.
TIP
For massively parallel testing, you can specify as many as 99 agents.

Add any additional tasks that must run before the Visual Studio test task is run. For example, run a
PowerShell script to set up any data required by your tests.

TIP
Jobs in release pipelines download all artifacts linked to the release pipeline by default. To save time, you can
configure the job to download only the test artifacts required by the job. For example, web app binaries are
not required to run Selenium tests and downloading these can be skipped if the app and test artifacts are
published separately by your build pipeline.

Add the Visual Studio Test task and configure it to use the required slicing strategy.

TIP
If the test machines do not have Visual Studio installed, you can use the Visual Studio Test Platform Installer
task to acquire the required version of the test platform.

Massively parallel testing by combining parallel pipeline jobs with


parallel test execution
When parallel jobs are used in a pipeline, it employs multiple machines (agents) to run each job in parallel. Test
frameworks and runners also provide the capability to run tests in parallel on a single machine, typically by
creating multiple processes or threads that are run in parallel. Parallelism features can be combined in a layered
fashion to achieve massively parallel testing. In the context of the Visual Studio Test task, parallelism can be
combined in the following ways:
1. Parallelism offered by test frameworks . All modern test frameworks such as MSTest v2, NUnit, xUnit,
and others provide the ability to run tests in parallel. Typically, tests in an assembly are run in parallel. These
test frameworks interface with the Visual Studio Test platform using a test adapter, and the test framework,
together with the corresponding adapter, works within a test host process that the Visual Studio Test
Platform creates when tests are run. Therefore, parallelization at this layer is within a process for all
frameworks and adapters.
2. Parallelism offered by the Visual Studio Test Platform (vstest.console.exe) . Visual Studio Test
Platform can run test assemblies in parallel. Users of vstest.console.exe will recognize this as the /parallel
switch. It does so by launching a test host process on each available core, and handing it tests in an
assembly to execute. This works for any framework that has a test adapter for the Visual Studio test platform
because the unit of parallelization is a test assembly or test file. This, when combined with the parallelism
offered by test frameworks (described above), provides the maximum degree of parallelization when tests
run on a single agent in the pipeline.
3. Parallelism offered by the Visual Studio Test (VSTest) task . The VSTest task supports running tests in
parallel across multiple agents (or machines). Test slices are created, and each agent executes one slice at a
time. The three different slicing strategies, when combined with the parallelism offered by the test platform
and test framework (as described above), result in the following:
Slicing based on the number of tests and agents. Simple slicing where tests are grouped in equally
sized slices. A slice contains tests from one or more assemblies. Test execution on the agent then
conforms to the parallelism described in 1 and 2 above.
Slicing based on past running time. Based on the previous timings for running tests, and the number
of available agents, tests are grouped into slices such that each slice requires approximately equal
execution time. A slice contains tests from one or more assemblies. Test execution on the agent then
conforms to the parallelism described in 1 and 2 above.
Slicing based on assemblies. A slice is a test assembly, and so contains tests that all belong to the
same assembly. Execution on the agent then conforms to the parallelism described in 1 and 2 above.
However, 2 may not occur if an agent receives only one assembly to run.

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and get support via our Support page
Run tests in parallel for any test runner

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019
Running tests to validate changes to code is key to maintaining quality. For continuous integration practice to be
successful, it is essential you have a good test suite that is run with every build. However, as the codebase grows,
the regression test suite tends to grow as well and running a full regression test can take a long time. Sometimes,
tests themselves may be long running - this is typically the case if you write end-to-end tests. This reduces the
speed with which customer value can be delivered as pipelines cannot process builds quickly enough.
Running tests in parallel is a great way to improve the efficiency of CI/CD pipelines. This can be done easily by
employing the additional capacity offered by the cloud. This article discusses how you can parallelize tests by using
multiple agents to process jobs.

Pre-requisite
Familiarize yourself with the concepts of agents and jobs. Each agent can run only one job at a time. To run multiple
jobs in parallel, you must configure multiple agents. You also need sufficient parallel jobs.

Setting up parallel jobs


Specify 'parallel' strategy in the YAML and indicate how many jobs should be dispatched. The variables
System.JobPositionInPhase and System.TotalJobsInPhase are added to each job.

jobs:
- job: ParallelTesting
  strategy:
    parallel: 2

TIP
You can specify as many as 99 agents to scale up testing for large test suites.

Slicing the test suite


To run tests in parallel you must first slice (or partition) the test suite so that each slice can be run independently.
For example, instead of running a large suite of 1000 tests on a single agent, you can use two agents and run 500
tests in parallel on each agent. Or you can reduce the amount of time taken to run the tests even further by using 8
agents and running 125 tests in parallel on each agent.
The step that runs the tests in a job needs to know which test slice should be run. The variables
System.JobPositionInPhase and System.TotalJobsInPhase can be used for this purpose:

System.TotalJobsInPhase indicates the total number of slices (you can think of this as "totalSlices")
System.JobPositionInPhase identifies a particular slice (you can think of this as "sliceNum")

If you represent all test files as a single dimensional array, each job runs the test files at indexes sliceNum,
sliceNum + totalSlices, sliceNum + 2 * totalSlices, and so on, until all the test files are run. For example, if you have
six test files and two parallel jobs, the first job (slice0) will run test files numbered 0, 2, and 4, and the second job
(slice1) will run test files numbered 1, 3, and 5.
If you use three parallel jobs instead, the first job (slice0) will run test files numbered 0 and 3, the second job
(slice1) will run test files numbered 1 and 4, and the third job (slice2) will run test files numbered 2 and 5.
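
For illustration, here is a minimal sketch of a job that computes its own slice from these variables. The test file layout (a tests folder containing test_*.py files) and the pytest invocation are assumptions for the sketch only; the same pattern works for any test runner:

jobs:
- job: ParallelTesting
  strategy:
    parallel: 2
  steps:
  - bash: |
      # Collect all test files in a stable order so every agent computes the same list.
      mapfile -t tests < <(find tests -name "test_*.py" | sort)
      # System.JobPositionInPhase is 1-based; convert it to a 0-based slice number.
      sliceNum=$((SYSTEM_JOBPOSITIONINPHASE - 1))
      totalSlices=$SYSTEM_TOTALJOBSINPHASE
      # Take every totalSlices-th file, starting at this job's offset.
      myFiles=()
      for ((i = sliceNum; i < ${#tests[@]}; i += totalSlices)); do
        myFiles+=("${tests[$i]}")
      done
      echo "Slice $sliceNum runs: ${myFiles[*]}"
      python -m pytest "${myFiles[@]}"
    displayName: Run this agent's test slice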

Sample code
This .NET Core sample uses --list-tests and --filter parameters of dotnet test to slice the tests. The tests are
run using NUnit. Test results created by DotNetCoreCLI@2 test task are then published to the server. Import (into
Azure Repos or Azure DevOps Server) or fork (into GitHub) this repo:

https://github.com/idubnori/ParallelTestingSample-dotnet-core

This Python sample uses a PowerShell script to slice the tests. The tests are run using pytest. JUnit-style test results
created by pytest are then published to the server. Import (into Azure Repos or Azure DevOps Server) or fork (into
GitHub) this repo:

https://github.com/PBoraMSFT/ParallelTestingSample-Python

This JavaScript sample uses a bash script to slice the tests. The tests are run using the mocha runner. JUnit-style test
results created by mocha are then published to the server. Import (into Azure Repos or Azure DevOps Server) or
fork (into GitHub) this repo:

https://github.com/PBoraMSFT/ParallelTestingSample-Mocha

The sample code includes a file azure-pipelines.yml at the root of the repository that you can use to create a
pipeline. Follow all the instructions in Create your first pipeline to create a pipeline and see test slicing in action.

Combine parallelism for massively parallel testing


When parallel jobs are used in a pipeline, the pipeline employs multiple machines to run each job in parallel. Most
test runners provide the capability to run tests in parallel on a single machine (typically by creating multiple
processes or threads that are run in parallel). The two types of parallelism can be combined for massively parallel
testing, which makes testing in pipelines extremely efficient.

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and get support via our Support page
Speed up testing by using Test Impact Analysis (TIA)

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015 |
Visual Studio 2017 | Visual Studio 2015

NOTE
Applies only to TFS 2017 Update 1 and later, and Visual Studio 2015 Update 3 and later.

Continuous Integration (CI) is a key practice in the industry. Integrations are frequent, and verified with an
automated build that runs regression tests to detect integration errors as soon as possible. However, as the
codebase grows and matures, its regression test suite tends to grow as well - to the extent that running a full
regression test might require hours. This slows down the frequency of integrations, and ultimately defeats the
purpose of continuous integration. In order to have a CI pipeline that completes quickly, some teams defer the
execution of their longer running tests to a separate stage in the pipeline. However, this only serves to further
defeat continuous integration.
Instead, enable Test Impact Analysis (TIA) when using the Visual Studio Test task in a build pipeline. TIA performs
incremental validation by automatic test selection. It will automatically select only the subset of tests required to
validate the code being committed. For a given code commit entering the CI/CD pipeline, TIA will select and run
only the relevant tests required to validate that commit. Therefore, that test run will complete more quickly, if there
is a failure you will get to know about it sooner, and because it is all scoped by relevance, analysis will be faster as
well.

Test Impact Analysis has:


A robust test selection mechanism . It includes existing impacted tests, previously failing tests, and newly
added tests.
Safe fallback . For commits and scenarios that TIA cannot understand, it will fall back to running all tests. TIA is
currently scoped to only managed code, and single machine topology. So, for example, if the code commit
contains changes to HTML or CSS files, it cannot reason about them and will fall back to running all tests.
Configurable overrides . You can run all tests at a configured periodicity.
However, be aware of the following caveats when using TIA with Visual Studio 2015:
Running tests in parallel . In this case, tests will run serially.
Running tests with code coverage enabled . In this case, code coverage data will not get collected.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Test Impact Analysis supported scenarios


At present, TIA is supported for:
TFS 2017 Update 1 onwards, and Azure Pipelines
Version 2.* of the Visual Studio Test task in the build pipeline
Build vNext, with multiple VSTest Tasks
VS2015 Update 3 onwards on the build agent
Local and hosted build agents
CI and in PR workflows
Git, GitHub, Other Git, TFVC repos (including partially mapped TFVC repositories with a workaround)
IIS interactions (over REST, SOAP APIs), using HTTP/HTTPS protocols
Automated Tests
Single machine topology. Tests and app (SUT) must be running on the same machine.
Managed code (any .NET Framework app, any .NET service)
At present, TIA is not supported for:
Multi-machine topology (where the test is exercising an app deployed to a different machine)
Data driven tests
Test Adapter-specific parallel test execution
.NET Core
UWP
More information about TIA scope and applications

Enable Test Impact Analysis


TIA is supported through Version 2.* of the Visual Studio Test task. If your app is a single tier application, all you
need to do is to check Run only impacted tests in the task UI. The Test Impact data collector is automatically
configured. No additional steps are required.
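
For YAML pipelines, a minimal sketch of the equivalent configuration might look like the following; the test assembly pattern is illustrative, and runOnlyImpactedTests and runAllTestsAfterXBuilds are the task inputs that correspond to the options described here and in the sections below:

- task: VSTest@2
  inputs:
    testSelector: 'testAssemblies'
    testAssemblyVer2: |
      **\*Tests.dll
      !**\obj\**
    runOnlyImpactedTests: true        # enable Test Impact Analysis
    runAllTestsAfterXBuilds: 50       # optionally run all tests at a configured periodicity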
If your application interacts with a service in the context of IIS, you must also configure the Test Impact data
collector to run in the context of IIS by using a .runsettings file. Here is a sample that creates this configuration:

<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <DataCollectionRunSettings>
    <DataCollectors>
      <!-- This is the TestImpact data collector. -->
      <DataCollector uri="datacollector://microsoft/TestImpact/1.0"
                     assemblyQualifiedName="Microsoft.VisualStudio.TraceCollector.TestImpactDataCollector, Microsoft.VisualStudio.TraceCollector, Version=15.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
                     friendlyName="Test Impact">
        <Configuration>
          <!-- enable IIS data collection -->
          <InstrumentIIS>True</InstrumentIIS>
          <!-- file level data collection -->
          <ImpactLevel>file</ImpactLevel>
          <!-- any job agent related executable or any other service that the test is using needs to be profiled. -->
          <ServicesToInstrument>
            <Name>TeamFoundationSshService</Name>
          </ServicesToInstrument>
        </Configuration>
      </DataCollector>
    </DataCollectors>
  </DataCollectionRunSettings>
</RunSettings>
View Test Impact Analysis outcome
TIA is integrated into existing test reporting at both the summary and details levels, including notification emails.

More information about TIA and Azure Pipelines integration

Manage Test Impact Analysis behavior


You can influence the way that tests are either included or ignored during a test run:
Through the VSTest task UI . TIA can be conditioned to run all tests at a configured periodicity. Setting this
option is recommended, and is the means to regulate test selection.
By setting a build variable . Even after TIA has been enabled in the VSTest task, it can be disabled for a
specific build by setting the variable DisableTestImpactAnalysis to true . This override will force TIA to run all
tests for that build. In subsequent builds, TIA will go back to optimized test selection.
When TIA opens a commit and sees an unknown file type, it falls back to running all tests. While this is good from a
safety perspective, tuning this behavior might be useful in some cases. For example:
Set the TI_IncludePathFilters variable to specific paths to include only these paths in a repository for which
you want TIA to apply. This is useful when teams use a shared repository. Setting this variable disables TIA for all
other paths not included in the setting.
Set the TIA_IncludePathFilters variable to specify file types that do not influence the outcome of tests and for
which changes should be ignored. For example, to ignore changes to .csproj files set the variable to the value
!**\*.csproj .

Use the minimatch pattern when setting variables, and separate multiple items with a semicolon.
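
As a minimal sketch (the pattern value is illustrative only), these overrides might be set as pipeline variables in YAML like this:

variables:
  # Ignore changes to .csproj files when selecting tests, per the example above.
  TIA_IncludePathFilters: '!**\*.csproj'
  # Uncomment to force TIA to run all tests for a specific build:
  # DisableTestImpactAnalysis: true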

To evaluate whether TIA is selecting the appropriate tests:


Manually validate the selection. A developer who knows how the SUT and tests are architected could manually
validate the test selection using the TIA reporting capabilities.
Run TIA selected tests and then all tests in sequence. In a build pipeline, use two test tasks - one that runs only
impacted Tests (T1) and one that runs all tests (T2). If T1 passes, check that T2 passes as well. If there was a
failing test in T1, check that T2 reports the same set of failures.
More information about TIA advanced configuration

Provide custom dependency mappings


TIA uses dependency maps of the following form.

TestMethod1
    dependency1
    dependency2
TestMethod2
    dependency1
    dependency3

TIA can generate such a dependencies map for managed code execution. Where such dependencies reside in .cs
and .vb files, TIA can automatically watch for commits into such files and then run tests that had these source files
in their list of dependencies.
You can extend the scope of TIA by explicitly providing the dependencies map as an XML file. For example, you
might want to support code in other languages such as JavaScript or C++, or support the scenario where tests and
product code are running on different machines. The mapping can even be approximate, and the set of tests you
want to run can be specified in terms of a test case filter such as you would typically provide in the VSTest task
parameters.
The XML file should be checked into your repository, typically at the root level. Then set the build variable
TIA.UserMapFile to point to it. For example, if the file is named TIAmap.xml, set the variable to
$(System.DefaultWorkingDirectory)/TIAmap.xml.
For an example of the XML file format, see TIA custom dependency mapping.

See Also
TIA overview and VSTS integration
TIA scope and applications
TIA advanced configuration
TIA custom dependency mapping

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and get support via our Support page
Manage flaky tests

Azure Pipelines
Productivity for developers relies on the ability of tests to find real problems with the code under development or
update in a timely and reliable fashion. Flaky tests present a barrier to finding real problems, since the failures often
don't relate to the changes being tested. A flaky test is a test that provides different outcomes, such as pass or fail,
even when there are no changes in the source code or execution environment. Flaky tests also impact the quality of
shipped code.

NOTE
This feature is only available on Azure DevOps Services. Typically, new features are introduced in the cloud service first, and
then made available on-premises in the next major version or update of Azure DevOps Server. To learn more, see Azure
DevOps Feature Timeline.

The goal of bringing flaky test management in-product is to reduce the developer pain caused by flaky tests and to
cater to the whole workflow. Flaky test management provides the following benefits.
Detection - Auto detection of flaky tests with rerun, or extensibility to plug in your own custom detection
method
Management of flakiness - Once a test is marked as flaky, the data is available for all pipelines for that
branch
Report on flaky tests - Ability to choose whether to prevent build failures caused by flaky tests, or to use
the flaky tag only for troubleshooting
Resolution - Manual bug creation, or manually marking and unmarking a test as flaky based on your analysis
Close the loop - Reset a flaky test as a result of bug resolution / manual input
Enable flaky test management
To configure flaky test management, choose Project settings , and select Test management in the Pipelines
section.
Slide the On/Off button to On .
The default setting for all projects is to use flaky tests for troubleshooting.
Flaky test detection
Flaky test management supports system and custom detection.
System detection : The in-product flaky test detection uses test rerun data. Detection happens through the
VSTest task's capability to rerun failed tests, or through the retry of a stage in the pipeline. You can select specific
pipelines in the project for which you would like to detect flaky tests.

NOTE
Once a test is marked as flaky, the data is available for all pipelines for that branch to assist with troubleshooting in
every pipeline.

Custom detection : You can integrate your own flaky detection mechanism with Azure Pipelines and use
the reporting capability. With custom detection, you need to update the test results metadata for flaky tests.
For details, see Test Results, Result Meta Data - Update REST API.
Flaky test options
The Flaky test options specify how flaky tests are available in test reporting as well as resolution capabilities, as
described in the following sections.

Flaky test management and reporting


On the Test management page under Flaky test options , you can set options for how flaky tests are included in
the Test Summary report. Flaky test data for both passed and failed tests is available in Test results. The Flaky tag
helps you identify flaky tests. By default, flaky tests are included in the Test Summary. However, if you want to
ensure flaky test failures don't fail your pipeline, you can choose to not include them in your test summary and
suppress the test failure. This option ensures flaky tests (both passed and failed) are removed from the pass
percentage and shown under Tests not reported .
NOTE
The Test summary report is updated only for Visual Studio Test task and Publish Test Results task. You may need to add a
custom script to suppress flaky test failure for other scenarios.

Tests marked as flaky


You can mark or unmark a test as flaky based on analysis or context, by choosing Flaky (or UnFlaky, depending on
whether the test is already marked as flaky).

When a test is marked flaky or unflaky in a pipeline, no changes are made in the current pipeline. Only on future
executions of that test is the changed flaky setting evaluated. Tests marked as flaky have the Marked flaky tag in the
user interface.
Help and support
See our troubleshooting page
Get advice on Stack Overflow, and get support via our Support page

Related articles
Review test results
Visual Studio Test task
Publish Test Results task
Test Results, Result Meta Data - Update REST API
UI testing considerations

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
When running automated tests in the CI/CD pipeline, you may need a special configuration in order to run UI tests
such as Selenium, Appium or Coded UI tests. This topic describes the typical considerations for running UI tests.

NOTE
Applies only to TFS 2017 Update 1 and later.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Prerequisites
Familiarize yourself with agents and deploying an agent on Windows.

Headless mode or visible UI mode?


When running Selenium tests for a web app, you can launch the browser in two ways:
1. Headless mode . In this mode, the browser runs as normal but without any UI components being visible.
While this mode is obviously not useful for browsing the web, it is useful for running automated tests in an
unattended manner in a CI/CD pipeline. Chrome and Firefox browsers can be run in headless mode.
This mode generally consumes fewer resources on the machine because the UI is not rendered and tests run
faster. As a result, potentially more tests can be run in parallel on the same machine to reduce the total test
execution time.
Screenshots can be captured in this mode and used for troubleshooting failures.

NOTE
Microsoft Edge browser currently cannot be run in the headless mode.

2. Visible UI mode . In this mode, the browser runs normally and the UI components are visible. When
running tests in this mode on Windows, special configuration of the agents is required.
If you are running UI tests for a desktop application, such as Appium tests using WinAppDriver or Coded UI tests, a
special configuration of the agents is required.
TIP
End-to-end UI tests generally tend to be long-running. When using the visible UI mode, depending on the test framework,
you may not be able to run tests in parallel on the same machine because the app must be in focus to receive keyboard and
mouse events. In this scenario, you can speed up testing cycles by running tests in parallel on different machines. See run
tests in parallel for any test runner and run tests in parallel using Visual Studio Test task.
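
To make the headless option from point 1 concrete, here is a minimal C# sketch of launching Chrome in headless mode from a Selenium test. It assumes the Selenium.WebDriver and Selenium.WebDriver.ChromeDriver packages that are used later in this document:

using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

// Run Chrome without visible UI components, suitable for unattended CI agents.
var options = new ChromeOptions();
options.AddArgument("--headless");
options.AddArgument("--window-size=1920,1080"); // fixed size so pages render consistently

using (IWebDriver driver = new ChromeDriver(options))
{
    driver.Navigate().GoToUrl("https://www.bing.com/");
    // Locate elements and assert as usual; screenshots can still be captured in this mode.
}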

UI testing in visible UI mode


A special configuration is required for agents to run UI tests in visible UI mode.
Visible UI testing using Microsoft-hosted agents
Microsoft-hosted agents are pre-configured for UI testing and UI tests for both web apps and desktop apps.
Microsoft-hosted agents are also pre-configured with popular browsers and matching web-driver versions that can
be used for running Selenium tests. The browsers and corresponding web-drivers are updated on a periodic basis.
To learn more about running Selenium tests, see UI test with Selenium
Visible UI testing using self-hosted Windows agents
Agents that are configured to run as a service can run Selenium tests only with headless browsers. If you are not
using a headless browser, or if you are running UI tests for desktop apps, Windows agents must be configured to
run as an interactive process with auto-logon enabled.
When configuring agents, select 'No' when prompted to run as a service. Subsequent steps then allow you to
configure the agent with auto-logon. When your UI tests run, applications and browsers are launched in the
context of the user specified in the auto-logon settings.
If you use Remote Desktop to access the computer on which an agent is running with auto-logon, simply
disconnecting the Remote Desktop causes the computer to be locked and any UI tests that run on this agent may
fail. To avoid this, use the tscon command on the remote computer to disconnect from Remote Desktop. For
example:
%windir%\System32\tscon.exe 1 /dest:console

In this example, the number '1' is the ID of the remote desktop session. This number may change between remote
sessions, but can be viewed in Task Manager. Alternatively, to automate finding the current session ID, create a
batch file containing the following code:

for /f "skip=1 tokens=3" %%s in ('query user %USERNAME%') do (


%windir%\System32\tscon.exe %%s /dest:console
)

Save the batch file and create a desktop shortcut to it, then change the shortcut properties to 'Run as
administrator'. Running the batch file from this shortcut disconnects from the remote desktop but preserves the UI
session and allows UI tests to run.

Provisioning agents in Azure VMs for UI testing


If you are provisioning virtual machines (VMs) on Azure, agent configuration for UI testing is available through the
Agent artifact for DevTest Labs.
Setting screen resolution
Before running UI tests you may need to adjust the screen resolution so that apps render correctly. For this, a
screen resolution utility task is available from Marketplace. Use this task in your pipeline to set the screen
resolution to a value that is supported by the agent machine. By default, this utility sets the resolution to the
optimal value supported by the agent machine.
If you encounter failures using the screen resolution task, ensure that the agent is configured to run with auto-
logon enabled and that all remote desktop sessions are safely disconnected using the tscon command as
described above.

NOTE
The screen resolution utility task runs on the unified build/release/test agent, and cannot be used with the deprecated Run
Functional Tests task.

Troubleshooting failures in UI tests


When you run UI tests in an unattended manner, capturing diagnostic data such as screenshots or video is useful
for discovering the state of the application when the failure was encountered.
Capture screenshots
Most UI testing frameworks provide the ability to capture screenshots. The screenshots collected are available as
an attachment to the test results when these results are published to the server.
If you use the Visual Studio test task to run tests, captured screenshots must be added as a result file in order to be
available in the test report. For this, use the following code:
MSTest
NUnit
First, ensure that TestContext is defined in your test class. For example:
public TestContext TestContext { get; set; }

Add the screenshot file using TestContext.AddResultFile(fileName); //Where fileName is the name of the file.

If you use the Publish Test Results task to publish results, test result attachments can only be published if you are
using the VSTest (TRX) results format or the NUnit 3.0 results format.
Result attachments cannot be published if you use JUnit or xUnit test results. This is because these test result
formats do not have a formal definition for attachments in the results schema. You can use one of the below
approaches to publish test attachments instead.
If you are running tests in the build (CI) pipeline, you can use the Copy and Publish Build Artifacts task to
publish any additional files created in your tests. These will appear in the Ar tifacts page of your build
summary.
Use the REST APIs to publish the necessary attachments. Code samples can be found in this GitHub
repository.
Capture video
If you use the Visual Studio test task to run tests, video of the test can be captured and is automatically available as
an attachment to the test result. For this, you must configure the video data collector in a .runsettings file and this
file must be specified in the task settings.
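
A minimal sketch of such a .runsettings entry is shown below; the collector URI is the commonly documented one, but verify the exact collector settings against the Visual Studio documentation for your version:

<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <DataCollectionRunSettings>
    <DataCollectors>
      <!-- Illustrative: enables the screen/video recorder data collector for test runs. -->
      <DataCollector uri="datacollector://microsoft/VideoRecorder/1.0" friendlyName="Screen and Voice Recorder" />
    </DataCollectors>
  </DataCollectionRunSettings>
</RunSettings>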

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and get support via our Support page
UI test with Selenium

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015 |
Visual Studio 2017 | Visual Studio 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Performing user interface (UI) testing as part of the release pipeline is a great way of detecting unexpected
changes, and need not be difficult. This topic describes using Selenium to test your website during a continuous
deployment release and test automation. Special considerations that apply when running UI tests are discussed in
UI testing considerations.

Typically you will run unit tests in your build workflow, and functional (UI) tests in your release workflow after
your app is deployed (usually to a QA environment).

For more information about Selenium browser automation, see:


Selenium
Selenium documentation

Create your test project


As there is no template for Selenium testing, the easiest way to get started is to use the Unit Test template. This
automatically adds the test framework references and enables you to run the tests and view the results from
Visual Studio Test Explorer.
1. In Visual Studio, open the File menu and choose New Project , then choose Test and select Unit Test
Project . Alternatively, open the shortcut menu for the solution and choose Add then New Project and
then Unit Test Project .
2. After the project is created, add the Selenium and browser driver references used by the browser to execute
the tests. Open the shortcut menu for the Unit Test project and choose Manage NuGet Packages . Add the
following packages to your project:
Selenium.WebDriver
Selenium.Firefox.WebDriver
Selenium.WebDriver.ChromeDriver
Selenium.WebDriver.IEDriver
3. Create your tests. For example, the following code creates a default class named MySeleniumTests that
performs a simple test on the Bing.com website. Replace the contents of the TheBingSearchTest function
with the Selenium code required to test your web app or website. Change the browser assignment in the
SetupTest function to the browser you want to use for the test.

using System;
using System.Text;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.IE;

namespace SeleniumBingTests
{
    /// <summary>
    /// Summary description for MySeleniumTests
    /// </summary>
    [TestClass]
    public class MySeleniumTests
    {
        private TestContext testContextInstance;
        private IWebDriver driver;
        private string appURL;

        public MySeleniumTests()
        {
        }

        [TestMethod]
        [TestCategory("Chrome")]
        public void TheBingSearchTest()
        {
            driver.Navigate().GoToUrl(appURL + "/");
            driver.FindElement(By.Id("sb_form_q")).SendKeys("Azure Pipelines");
            driver.FindElement(By.Id("sb_form_go")).Click();
            driver.FindElement(By.XPath("//ol[@id='b_results']/li/h2/a/strong[3]")).Click();
            Assert.IsTrue(driver.Title.Contains("Azure Pipelines"), "Verified title of the page");
        }

        /// <summary>
        /// Gets or sets the test context which provides
        /// information about and functionality for the current test run.
        /// </summary>
        public TestContext TestContext
        {
            get
            {
                return testContextInstance;
            }
            set
            {
                testContextInstance = value;
            }
        }

        [TestInitialize()]
        public void SetupTest()
        {
            appURL = "http://www.bing.com/";

            string browser = "Chrome";
            switch (browser)
            {
                case "Chrome":
                    driver = new ChromeDriver();
                    break;
                case "Firefox":
                    driver = new FirefoxDriver();
                    break;
                case "IE":
                    driver = new InternetExplorerDriver();
                    break;
                default:
                    driver = new ChromeDriver();
                    break;
            }
        }

        [TestCleanup()]
        public void MyTestCleanup()
        {
            driver.Quit();
        }
    }
}

4. Run the Selenium test locally using Test Explorer and check that it works.

Define your build pipeline


You'll need a continuous integration (CI) build pipeline that builds your Selenium tests. For more details, see Build
your .NET desktop app for Windows.
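
A minimal sketch of such a CI pipeline is shown below; it assumes a single solution at the repository root and a Release build, so adjust the paths and configuration for your project. The published artifact is what the release pipeline described later downloads before running the tests.

trigger:
- master

pool:
  vmImage: 'windows-latest'

steps:
- task: NuGetToolInstaller@1

- task: NuGetCommand@2
  inputs:
    command: 'restore'
    restoreSolution: '**/*.sln'

- task: VSBuild@1
  inputs:
    solution: '**/*.sln'
    platform: 'Any CPU'
    configuration: 'Release'

# Copy the built test assemblies so the release pipeline can download them as an artifact.
- task: CopyFiles@2
  inputs:
    Contents: '**\bin\Release\**'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'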

Create your web app


You'll need a web app to test. You can use an existing app, or deploy one in your continuous deployment (CD)
release pipeline. The example code above runs tests against Bing.com. For details of how to set up your own
release pipeline to deploy a web app, see Deploy to Azure Web Apps.

Decide how you will deploy and test your app


You can deploy and test your app using either the Microsoft-hosted agent in Azure, or a self-hosted agent that you
install on the target servers.
When using the Microsoft-hosted agent , you should use the Selenium web drivers that are pre-installed
on the Windows agents (agents named Hosted VS 20xx ) because they are compatible with the browser
versions installed on the Microsoft-hosted agent images. The paths to the folders containing these drivers
can be obtained from the environment variables named IEWebDriver (Internet Explorer), ChromeWebDriver
(Google Chrome), and GeckoWebDriver (Firefox). The drivers are not pre-installed on other agents such as
Linux, Ubuntu, and macOS agents. Also see UI testing considerations.
When using a self-hosted agent that you deploy on your target servers, agents must be configured to run
interactively with auto-logon enabled. See Build and release agents and UI testing considerations.
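
For the Microsoft-hosted agent case described above, here is a minimal C# sketch of how a test might consume one of these variables (ChromeWebDriver in this case) when constructing the driver, falling back to the NuGet-delivered driver when the variable is not set:

using System;
using OpenQA.Selenium.Chrome;

// On Microsoft-hosted Windows agents, ChromeWebDriver points to the folder containing
// the chromedriver.exe that matches the installed Chrome version.
string driverFolder = Environment.GetEnvironmentVariable("ChromeWebDriver")
                      ?? AppDomain.CurrentDomain.BaseDirectory; // local runs use the NuGet package's driver
var driver = new ChromeDriver(driverFolder);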

Include the test in a release


NOTE: This example uses the Visual Studio Test Platform Installer task and the latest version of the Visual
Studio Test task. These tasks are not available in TFS 2015 or TFS 2017. To run Selenium tests in these versions of
TFS, you must use the Visual Studio Test Agent Deployment and Run Functional Tests tasks instead.
1. If you don't have an existing release pipeline that deploys your web app:
Open the Releases page in the Azure Pipelines section in Azure DevOps or the Build & Release
hub in TFS (see Web portal navigation) and choose the + icon, then choose Create release
pipeline .

Select the Azure App Ser vice Deployment template and choose Apply .
In the Ar tifacts section of the Pipeline tab, choose + Add . Select your build artifacts and choose
Add .
Choose the Continuous deployment trigger icon in the Ar tifacts section of the Pipeline tab. In
the Continuous deployment trigger pane, enable the trigger so that a new release is created from
every build. Add a filter for the default branch.

Open the Tasks tab, select the Stage 1 section, and enter your subscription information and the
name of the web app where you want to deploy the app and tests. These settings are applied to the
Deploy Azure App Ser vice task.
2. If you are deploying your app and tests to environments where the target machines that host the agents do
not have Visual Studio installed:
In the Tasks tab of the release pipeline, choose the + icon in the Run on agent section. Select the
Visual Studio Test Platform Installer task and choose Add . Leave all the settings at the default
values.

You can find a task more easily by using the search textbox.
3. In the Tasks tab of the release pipeline, choose the + icon in the Run on agent section. Select the Visual
Studio Test task and choose Add .
4. If you added the Visual Studio Test Platform Installer task to your pipeline, change the Test platform
version setting in the Execution options section of the Visual Studio Test task to Installed by Tools
Installer .

How do I pass parameters to my test code from a build pipeline?


5. Save the release pipeline and start a new release. You can do this by queuing a new CI build, or by choosing
Create release from the Release drop-down list in the release pipeline.

6. To view the test results, open the release summary from the Releases page and choose the Tests link.

Next steps
Review your test results
Requirements traceability

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Requirements traceability is the ability to relate and document two or more phases of a development process,
which can then be traced both forward and backward from its origin. Requirements traceability helps teams get
insights into indicators such as the quality of requirements or readiness to ship a requirement . A
fundamental aspect of requirements traceability is the association of requirements to test cases, bugs, and code
changes.

Read the glossary to understand test report terminology.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Agile teams running automated tests


Agile teams have characteristics including, but not limited to, the following:
Faster release cycles
Continuous testing in a pipeline
Negligible manual testing footprint; limited to exploratory testing
High degree of automation
The following sections explore traceability from Quality , Bug and Source standpoints for Agile teams.
Quality traceability
To ensure user requirements meet the quality goals, the requirements in a project can be linked to test results,
which can then be viewed on the team's dashboard. This enables end-to-end traceability with a simple way to
monitor test results. To link automated tests with requirements, visit test report in build or release.
1. In the results section under Tests tab of a build or release summary, select the test(s) to be linked to
requirements and choose Link .
2. Choose a work item to be linked to the selected test(s) in one of the following ways:
Choose an applicable work item from the list of suggested work items. The list is based on the most
recently viewed and updated work items.
Specify a work item ID.
Search for a work item based on the title text.

The list shows only work items belonging to the Requirements category.

3. After the requirements have been linked to the test results you can view the test results grouped by
requirement. Requirement is one of the many "Group by" options provided to make it easy to navigate the
test results.
4. Teams often want to pin the summarized view of requirements traceability to a dashboard. Use the
Requirements quality widget for this.

5. Configure the Requirements quality widget with the required options and save it.
Requirements query: Select a work item query that captures the requirements, such as the user stories
in the current iteration.
Quality data: Specify the stage of the pipeline for which the requirements quality should be traced.
6. View the widget in the team's dashboard. It lists all the Requirements in scope, along with the Pass Rate
for the tests and count of Failed tests. Selecting a Failed test count opens the Tests tab for the selected build
or release. The widget also helps to track the requirements without any associated test(s).

Bug traceability
Testing gives a measure of confidence to ship a change to users. A test failure signals an issue with the change.
Failures can happen for many reasons such as errors in the source under test, bad test code, environmental issues,
flaky tests, and more. Bugs provide a robust way to track test failures and drive accountability in the team to take
the required remedial actions. To associate bugs with test results, visit test report in build or release.
1. In the results section of the Tests tab select the tests against which the bug should be created and choose
Bug . Multiple test results can be mapped to a single bug. This is typically done when the reason for the
failures is attributable to a single cause such as the unavailability of a dependent service, a database
connection failure, or similar issues.

2. Open the work item to see the bug. It captures the complete context of the test results including key
information such as the error message, stack trace, comments, and more.
3. View the bug with the test result, directly in context, within the Tests tab. The Work Items tab also lists any
linked requirements for the test result.

4. From a work item, navigate directly to the associated test results. Both the test case and the specific test
result are linked to the bug.
5. In the work item, select Test case or Test result to go directly to the Tests page for the selected build or
release. You can troubleshoot the failure, update your analysis in the bug, and make the changes required to
fix the issue as applicable. While both links take you to the Tests tab, the default sections shown are
History and Debug respectively.

Source traceability
When troubleshooting test failures that occur consistently over a period of time, it is important to trace back to the
initial set of changes - where the failure originated. This can help significantly to narrow down the scope for
identifying the problematic test or source under test. To discover the first instance of test failures and trace it back
to the associated code changes, visit Tests tab in build or release.
1. In the Tests tab, select a test failure to be analyzed. Based on whether it's a build or release, choose the
Failing build or Failing release column for the test.

2. This opens another instance of the Tests tab in a new window, showing the first instance of consecutive
failures for the test.

3. Based on the build or release pipeline, you can choose the timeline or pipeline view to see what code
changes were committed. You can analyze the code changes to identify the potential root cause of the test
failure.
Traditional teams using planned testing
Teams that are moving from manual testing to continuous (automated) testing, and have a subset of tests already
automated, can execute them as part of the pipeline or on demand (see test report). Referred to as Planned
testing , automated tests can be associated to the test cases in a test plan and executed from Azure Test Plans .
Once associated, these tests contribute towards the quality metrics of the corresponding requirements.

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and get support via our Support page
Review test results

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Automated tests can be configured to run as part of a build or release for various languages. Test reports
provide an effective and consistent way to view the tests results executed using different test frameworks, in
order to measure pipeline quality, review traceability, troubleshoot failures and drive failure ownership. In
addition, it provides many advanced reporting capabilities explored in the following sections.
You can also perform deeper analysis of test results by using the Analytics Service. For an example of using this
with your build and deploy pipelines, see Analyze test results.
Read the glossary to understand test report terminology.

NOTE
Test report is available in TFS 2015 and above, however the new experience described in this topic is currently available
only in Azure Pipelines.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are
called phases.

Published test results can be viewed in the Tests tab in a build or release summary.

Surface test results in the Tests tab


Test results can be surfaced in the Tests tab using one of the following options:
Automatically inferred test results . By default, your pipeline can automatically infer the test output
for a few popular test runners. This is done by parsing the error logs generated during the build
operation and then checking for signatures of test failures. Currently, Azure DevOps supports the
following languages and test runners for automatically inferring the test results:
JavaScript - Mocha, Jest, and Jasmine
Python - Unittest

NOTE
This inferred test report is a limited experience. Some features available in fully-formed test reports are
not present here (more details). We recommend that you publish a fully-formed test report to get the full
Test and Insights experience in Pipelines. Also see:

Publishing fully-formed test reports for JavaScript test runners


Publishing fully-formed test reports for Python test runners
Test execution tasks . Built-in test execution tasks such as Visual Studio Test that automatically publish
test results to the pipeline, or others such as Ant, Maven, Gulp, Grunt, and Xcode that provide this
capability as an option within the task.
Publish Test Results task. Task that publishes test results to Azure Pipelines or TFS when tests are
executed using your choice of runner, and results are available in any of the supported test result
formats (a short sketch follows this list).
API(s) . Test results published directly by using the Test Management API(s).
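
As a hedged illustration of the Publish Test Results task mentioned above, the snippet below publishes
JUnit-format results; the file pattern is an assumption and depends on where your test runner writes its output.

- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'JUnit'         # other supported formats include NUnit, VSTest, XUnit, and CTest
    testResultsFiles: '**/TEST-*.xml'  # assumed output location of the test runner
    failTaskOnFailedTests: true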

Surface test information beyond the Tests tab


The Tests tab provides a detailed summary of the test execution. This is helpful in tracking the quality of the
pipeline, as well as for troubleshooting failures. Azure DevOps also provides other ways to surface the test
information:
The Dashboard provides visibility of your team's progress. Add one or more widgets that surface test
related information:
Requirements quality
Test results trend
Deployment status
Test analytics provides rich insights into test results measured over a period of time. It can help identify
problematic areas in your test by providing data such as the top failing tests, and more.

View test results in build


The build summary provides a timeline view of the key steps executed in the build. If tests were executed and
reported as part of the build, a test milestone appears in the timeline view. The test milestone provides a
summary of the test results as a measure of pass percentage along with indicators for failures and aborts if
these exist.
View test results in release
In the pipeline view you can see all the stages and associated tests. The view provides a summary of the test
results as a measure of pass percentage along with indicators for failures and aborts if these exist. These
indicators are same as in the build timeline view, giving a consistent experience across build and release.
Tests tab
Both the build and release summaries provide details of test execution. Choose Test summary to view the
details in the Tests tab. This page has the following sections:
Summary: provides key quantitative metrics for the test execution such as the total test count, failed
tests, pass percentage, and more. It also provides differential indicators of change compared to the
previous execution.
Results: lists all tests executed and reported as part of the current build or release. The default view
shows only the failed and aborted tests in order to focus on tests that require attention. However, you
can choose other outcomes using the filters provided.
Details: a list of tests that you can sort, group, search, and filter to find the test results you need.
Select any test run or result to view the details pane that displays additional information required for
troubleshooting such as the error message, stack trace, attachments, work items, historical trend, and more.

TIP
If you use the Visual Studio Test task to run tests, diagnostic output logged from tests (using any of Console.WriteLine,
Trace.WriteLine or TestContext.WriteLine methods), will appear as an attachment for a failed test.

The following capabilities of the Tests tab help to improve productivity and troubleshooting experience.
Filter large test results
Over time, tests accrue and, for large applications, can easily grow to tens of thousands of tests. For these
applications with very many tests, it can be hard to navigate through the results to identify test failures,
associate root causes, or get ownership of issues. Filters make it easy to quickly navigate to the test results of
your interest. You can filter on Test Name, Outcome (failed, passed, and more), Test Files (files holding tests)
and Owner (for test files). All of the filter criteria are cumulative in nature.

Additionally, with multiple grouping options such as Test run, Test file, Priority, Requirement, and more,
you can organize the Results view exactly as you require.
Test debt management with bugs
To manage your test debt for failing or long-running tests, you can create a bug or add data to an existing bug,
and view all associated work items in the Work Items tab.
Immersive troubleshooting experience
Error messages and stack traces are lengthy in nature and need enough real estate to view the details during
troubleshooting. To provide an immersive troubleshooting experience, the Details view can be expanded to full
page view while still being able to perform the required operations in context, such as bug creation or
requirement association for the selected test result.

Troubleshooting data for Test failure


For the test failures, the error messages and stack traces are available for troubleshooting. You can also view all
attachments associated with the test failure in the Attachments tab.
Test debt management
You can create or add to an existing bug to manage test debt for failures or long running tests. The Work Items
tab details all bugs and requirements associated with a Test to help you analyze the requirement impact as well
know status and who is working on the bug.
Test trends with historical data
History of test execution can provide meaningful insights into reliability or performance of tests. When
troubleshooting a failure, it is valuable to know how a test has performed in the past. The Tests tab provides
test history in context with the test results. The test history information is exposed in a progressive manner
starting with the current build pipeline to other branches, or the current stage to other stages, for build and
release respectively.

View execution of in-progress tests


Tests, such as integration and functional tests, can run for a long time. Therefore, it is important to see the
current or near real-time status of test execution at any given time. Even for cases where tests run quickly, it's
useful to know the status of the relevant test result(s) as early as possible; especially when failures occur. The
in-progress view eliminates the need to wait for test execution to finish. Results are available in near real-time
as execution progresses, helping you to take actions faster. You can debug a failure, file a bug, or abort the
pipeline.
NOTE
The feature is currently available for both build and release, using Visual Studio Test task in a Multi Agent job. It will be
available for Single Agent jobs in a future release.

The view below shows the in-progress test summary in a release, reporting the total test count and the
number of test failures at a given point in time. The test failures are available for troubleshooting, creating
bug(s), or to take any other appropriate action.
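
The note above mentions a multi-agent job; as a rough sketch (the assembly patterns and agent count are
assumptions), distributing Visual Studio tests across two agents might look like this:

jobs:
- job: RunTests
  strategy:
    parallel: 2                                  # run the job on two agents so tests are distributed
  steps:
  - task: VSTest@2
    inputs:
      testSelector: 'testAssemblies'
      testAssemblyVer2: |
        **\*Tests*.dll
        !**\obj\**
      distributionBatchType: 'basedOnTestCases'  # slice tests across the agents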

View summarized test results


During test execution, a test might spawn multiple instances or tests that contribute to the overall outcome.
Some examples are tests that are rerun, tests composed of an ordered combination of other tests (ordered
tests), or tests having different instances based on an input parameter (data-driven tests).
As these tests are related, they must be reported together with the overall outcome derived from the individual
instances or tests. These test results are reported as a summarized test result in the Tests tab:
Rerun failed tests: The ability to rerun failed tests is available in the latest version of the Visual Studio
Test task (a configuration sketch follows this list). During a rerun, multiple attempts can be made for a
failed test, and each failure could have a different root cause due to the non-deterministic behavior of the
test. Test reports provide a combined view for all the attempts of a rerun, along with the overall test
outcome as a summarized unit. Additionally, the Test Management API(s) now support the ability to publish
and query summarized test results.

Data driven tests : Similar to the rerun of failed tests, all iterations of data driven tests are reported
under that test. The summarized result view for data driven tests depends on the behavior of the test
framework. If the framework produces a hierarchy of results (for example, MSTest v1 and v2) they will be
reported in a summarized view. If the framework produces individual results for each iteration (for
example, xUnit) they will not be grouped together. The summarized view is also available for ordered
tests (.orderedtest in Visual Studio).
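
As referenced in the rerun bullet above, a hedged sketch of enabling reruns on the Visual Studio Test task
follows; the assembly pattern and attempt count are arbitrary examples:

- task: VSTest@2
  inputs:
    testSelector: 'testAssemblies'
    testAssemblyVer2: '**\*Tests*.dll'   # assumed assembly pattern
    rerunFailedTests: true               # attempts are reported under one summarized result
    rerunMaxAttempts: 3                  # arbitrary example value
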
NOTE
Metrics in the test summary section, such as the total number of tests, passed, failed, or other are computed using the
root level of the summarized test result.

View aborted tests


Test execution can abort for several reasons, such as bad test code, errors in the source under test, or
environmental issues. Irrespective of the reason for the abort, it is important to be able to diagnose the
behavior and identify the root cause. The aborted tests and test runs can be viewed alongside the completed
runs in the Tests tab.
NOTE
The feature is currently available for both build and release, using the Visual Studio Test task in a Multi Agent job or
publishing test results using the Test Management API(s). It will be available for Single Agent jobs in a future release.

Automatically inferred test results


Azure DevOps can automatically infer the output of tests that are running in your pipelines for a few supported
test frameworks. These automatically inferred test reports require no specific configuration of your pipelines,
and are a zero-effort way to get started using Test Reporting.

See the list of runners for which test results are automatically inferred.
As only limited test metadata is present in such inferred reports, they are limited in features and capabilities.
The following features are not available for inferred test reports:
Group the test results by test file, owner, priority, and other fields
Search and filter the test results
Check details of passed tests
Preview any attachments generated during the tests within the web UI itself
Associate a test failure with a new bug, or see list of associated work items for this failure
See build-on-build analytics for testing in Pipelines
NOTE
Some runners such as Mocha have multiple built-in console reporters such as dot-matrix and progress-bar. If you have
configured a non-default console output for your test runner, or you are using a custom reporter, Azure DevOps will not
be able to infer the test results. It can only infer the results from the default reporter.

Related articles
Analyze test results
Trace test requirements
Review code coverage results

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and get support via our Support page
Test Analytics

Azure Pipelines
Tracking test quality over time and improving test collateral is key to maintaining a healthy DevOps pipeline. Test
analytics provides near real-time visibility into your test data for builds and releases. It helps improve the
efficiency of your pipeline by identifying repetitive, high impact quality issues.

NOTE
Test analytics is currently available only with Azure Pipelines.

Read the glossary to understand test reports terminology.

Install the Analytics extension if required


For more information, see The Analytics Marketplace extension.

View test analytics for builds


To help teams find and fix tests that fail frequently or intermittently, use the top failing tests report. The build
summary includes the Analytics page that hosts this report. The top-level view provides a summary of the test
pass rate and results for the selected build pipeline, for the specified period. The default range is 14 days.

View test analytics for releases


For tests executing as part of release, access test analytics from the Analytics link at the top right corner. As with
build, the summary provides an aggregated view of the test pass rate and results for the specified period.
Test Failures
Open a build or release summary to view the top failing tests report. This report provides a granular view of the
top failing tests in the pipeline, along with the failure details.

The detailed view contains two sections:


Summary: Provides key quantitative metrics for the tests executed in build or release over the specified
period. The default view shows data for 14 days.
Pass rate and results: Shows the pass percentage, along with the distribution of tests across various
outcomes.

Failing tests: Provides a distinct count of tests that failed during the specified period. In the example
above, 986 test failures originated from 124 tests.

Chart view: A trend of the total test failures and average pass rate on each day of the specified
period.

Results: List of top failed tests based on the total number of failures. Helps to identify problematic tests
and lets you drill into a detailed summary of results.

Group test failures


The report view can be organized in several different ways using the group by option. Grouping test results can
provide deep insights into various aspects of the top failing tests. In the example below, the test results are
grouped based on the test files they belong to. It shows the test files and their respective contribution towards the
total of test failures, during the specified period to help you easily identify and prioritize your next steps.
Additionally, for each test file, it shows the tests that contribute to these failures.
Drill down to individual tests
After you have identified one or more tests in the Details section, select the individual test you want to analyze.
This provides a drill-down view of the selected test with a stacked chart of various outcomes such as passed or
failed instances of the test, for each day in the specified period. This view helps you infer hidden patterns and take
actions accordingly.
The corresponding grid view lists all instances of execution of the selected test during that period.
Failure analysis
To perform failure analysis for root causes, choose one or more instances of test execution in the drill-down view
to see failure details in context.
Infer hidden patterns
When looking at the test failures for a single instance of execution, it is often difficult to infer any pattern. In the
example below, the test failures occurred during a specific period, and knowing this can help narrow down the
scope of investigation.
Another example is tests that exhibit non-deterministic behavior (often referred to as flaky tests). Looking at an
individual instance of test execution may not provide any meaningful insights into the behavior. However,
observing test execution trends for a period can help infer hidden patterns, and help you resolve the failures.

Report information source


The source of information for test analytics is the set of published test results for the build or release pipeline.
These results are accrued over a period of time, and form the basis of the rich insights that test analytics provides.

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and get support via our Support page
Review code coverage results

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Code coverage helps you determine the proportion of your project's code that is actually being tested by tests such
as unit tests. To increase your confidence of the code changes, and guard effectively against bugs, your tests should
exercise - or cover - a large proportion of your code.
Reviewing the code coverage result helps to identify code path(s) that are not covered by the tests. This
information is important to improve the test collateral over time by reducing the test debt.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Example
To view an example of publishing code coverage results for your choice of language, see the Ecosystems section
of the Pipelines topics. For example, collect and publish code coverage for JavaScript using Istanbul.

View results
The code coverage summary can be viewed in the build timeline view. The summary shows the overall percentage
of line coverage.
NOTE
Merging code coverage results from multiple test runs is limited to .NET and .NET Core at present. This will be supported for
other formats in a future release.

The code coverage summary can be viewed on the Summary tab on the pipeline run summary.

The results can be viewed and downloaded on the Code coverage tab.

Artifacts
The code coverage artifacts published during the build can be viewed under the Build artifacts published
milestone in the timeline view.

The code coverage artifacts published during the build can be viewed under the Summary tab on the pipeline run
summary.

If you use the Visual Studio Test task to collect coverage for .NET and .NET Core apps, the artifact contains
.coverage files that can be downloaded and used for further analysis in Visual Studio.
If you publish code coverage using Cobertura or JaCoCo coverage formats, the code coverage artifact
contains an HTML file that can be viewed offline for further analysis.

NOTE
For .NET and .NET Core, the link to download the artifact is available by choosing the code coverage milestone in the build
summary.

Tasks
Publish Code Coverage Results publishes code coverage results to Azure Pipelines or TFS, which were produced
by a build in Cobertura or JaCoCo format.
Built-in tasks such as Visual Studio Test, .NET Core, Ant, Maven, Gulp, Grunt, and Gradle provide the option to
publish code coverage data to the pipeline.
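
As a minimal sketch of the Publish Code Coverage Results task described above (the summary file path is an
assumption and depends on your coverage tool's output):

- task: PublishCodeCoverageResults@1
  inputs:
    codeCoverageTool: 'Cobertura'        # or 'JaCoCo'
    summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/coverage.cobertura.xml'   # assumed output path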

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and get support via our Support page
Code coverage for pull requests

Azure Pipelines
Code coverage is an important quality metric and helps you measure the percentage of your project's code that is
being tested. To ensure that quality for your project improves over time (or at the least, does not regress), it is
essential that new code being brought into the system is well tested. This means that when developers raise pull
requests, knowing whether their changes are covered by tests would help plug any testing holes before the
changes are merged into the target branch. Repo owners may also want to set policies to prevent merging large
untested changes.
Full coverage, diff coverage
Typically, coverage gets measured for the entire codebase of a project. This is full coverage . However, in the
context of pull requests, developers are focused on the changes they are making and want to know whether the
specific lines of code they have added or changed are covered. This is diff coverage .

Prerequisites
In order to get coverage metrics for a pull request, first configure a pipeline that validates pull requests. In this
pipeline, configure the test tool you are using to collect code coverage metrics. Coverage results must then be
published to the server for reporting.
To learn more about collecting and publishing code coverage results for the language of your choice, see the
Ecosystems section. For example, collect and publish code coverage for .NET core apps.
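
As a rough sketch under assumptions (agent image and project layout), the test step of such a pipeline for a
.NET project collecting Visual Studio code coverage might look like this; the pipeline itself is wired to pull
requests through a build validation branch policy or PR trigger as appropriate:

pool:
  vmImage: 'windows-latest'

steps:
- task: DotNetCoreCLI@2
  displayName: 'Run tests and collect coverage'
  inputs:
    command: 'test'
    arguments: '--collect "Code coverage"'   # produces .coverage results in the Visual Studio coverage format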

NOTE
While you can collect and publish code coverage results for many different languages using Azure Pipelines, the code
coverage for pull requests feature discussed in this document is currently available only for .NET and .NET core projects
using the Visual Studio code coverage results format (file extension .coverage). Support for other languages and coverage
formats will be added in future milestones.

Coverage status, details and indicators


Once you have configured a pipeline that collects and publishes code coverage, it posts a code coverage status
when a pull request is raised. By default, the server checks for at least 70% of changed lines being covered by tests.
The diff coverage threshold target can be changed to a value of your choice. See the settings configuration section
below to learn more about this.
The status check evaluates the diff coverage value for all the code files in the pull request. If you would like to view
the % diff coverage value for each of the files, you can turn on details as mentioned in the configuration section.
Turning on details posts details as a comment in the pull request.

In the changed files view of a pull request, lines that are changed are also annotated with coverage indicators to
show whether those lines are covered.

NOTE
While you can build code from a wide variety of version control systems that Azure Pipelines supports, the code coverage
for pull requests feature discussed in this document is currently available only for Azure Repos.

Configuring coverage settings


If you would like to change the default settings of the code coverage experience for pull requests, you must include
a configuration YAML file named azurepipelines-coverage.yml at the root of your repo. Set the desired values in this
file and it will be used automatically the next time the pipeline runs.
The settings that can be changed are:
status: Indicates whether a code coverage status check should be posted on pull requests. Turning this off will
not post any coverage checks, and coverage annotations will not appear in the changed files view.
Default: on. Permissible values: on, off.

target: Target threshold value for diff coverage that must be met for a successful coverage status to be posted.
Default: 70%. Permissible values: desired % number.

comments: Indicates whether a comment containing coverage details for each code file should be posted in the
pull request. Default: off. Permissible values: on, off.

Sample YAML files for different coverage settings can be found in the code coverage YAML samples repo.
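
For illustration, a hypothetical azurepipelines-coverage.yml that turns on pull request comments and lowers the
diff coverage target might look like the following; check the samples repo mentioned above for the exact schema:

coverage:
  status:
    comments: on        # post per-file coverage details as a PR comment
    diff:
      target: 60%       # assumed example value; the default is 70%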

NOTE
Coverage indicators light up in the changed files view regardless of whether the pull request comment details are turned on.

TIP
The coverage settings YAML is different from a YAML pipeline. This is because the coverage settings apply to your repo and
will be used regardless of which pipeline builds your code. This separation also means that if you are using the classic
designer-based build pipelines, you will get the code coverage status check for pull requests.

Protect a branch using a code coverage policy


Code coverage status check for pull requests is only a suggestion for developers and it does not prevent pull
requests with low code coverage from being merged into the target branch. If you maintain a repo where you
would like to prevent developers from merging changes that do not meet a coverage threshold, you must
configure a branch policy using the coverage status check.

TIP
Code coverage status posted from a pipeline follows the naming convention {name-of-your-pipeline/codecoverage} .

NOTE
Branch policies in Azure Repos (even optional policies) prevent pull requests from completing automatically if they fail. This
behavior is not specific to code coverage policy.

FAQ
Which coverage tools and result formats can be used for validating code coverage in pull requests?
Code coverage for pull requests capability is currently only available for Visual Studio code coverage (.coverage)
formats. This can be used if you publish code coverage using the Visual Studio Test task, the test verb of the .NET Core
task, and the TRX option of the Publish Test Results task. Support for other coverage tools and result formats will be
added in future milestones.
If multiple pipelines are triggered when a pull request is raised, will coverage be merged across the pipelines?
If multiple pipelines are triggered when a pull request is raised, code coverage will not be merged. The capability is
currently designed for a single pipeline that collects and publishes code coverage for pull requests. If you need the
ability to merge coverage data across pipelines, please file a feature request on developer community.

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and get support via our Support page
Create and target an environment

Azure Pipelines | Azure DevOps Server 2020


An environment is a collection of resources, such as Kubernetes clusters and virtual machines, that can be
targeted by deployments from a pipeline. Typical examples of environment names are Dev, Test, QA, Staging,
and Production.
The advantages of using environments include the following.
Deployment history - Pipeline name and run details are recorded for deployments to an environment and
its resources. In the context of multiple pipelines targeting the same environment or resource, deployment
history of an environment is useful to identify the source of changes.
Traceability of commits and work items - View jobs within the pipeline run that target an environment.
You can also view the commits and work items that were newly deployed to the environment. Traceability
also allows one to track whether a code change (commit) or feature/bug-fix (work items) reached an
environment.
Diagnose resource health - Validate whether the application is functioning at its desired state.
Permissions - Secure environments by specifying which users and pipelines are allowed to target an
environment.

Resources
While an environment is, at its core, a grouping of resources, the resources themselves represent actual deployment
targets. The Kubernetes resource and virtual machine resource types are currently supported.

Create an environment
1. Sign in to your Azure DevOps organization and navigate to your project.
2. In your project, navigate to the Pipelines page. Then choose Environments and click on Create
Environment .

3. After adding the name of an environment (required) and the description (optional), you can create an
environment. Resources can be added to an existing environment later as well.
TIP
It is possible to create an empty environment and reference it from deployment jobs. This will let you record the
deployment history against the environment.

NOTE
You can use a Pipeline to create, and deploy to environments as well. To learn more, see the how to guide

Target an environment from a deployment job


A deployment job is a collection of steps to be run sequentially. A deployment job can be used to target an entire
environment (group of resources) as shown in the following YAML snippet.

- stage: deploy
  jobs:
  - deployment: DeployWeb
    displayName: deploy Web App
    pool:
      vmImage: 'Ubuntu-latest'
    # creates an environment if it doesn't exist
    environment: 'smarthotel-dev'
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo Hello world

NOTE
If the specified environment doesn't already exist, an empty environment is created using the environment name
provided.

Target a specific resource within an environment from a deployment job


You can scope the target of deployment to a particular resource within the environment. This allows you to
record deployment history on a specific resource within the environment. The steps of the deployment job
automatically inherit the service connection details from the resource targeted by the deployment job.

environment: 'smarthotel-dev.bookings'
strategy:
  runOnce:
    deploy:
      steps:
      - task: KubernetesManifest@0
        displayName: Deploy to Kubernetes cluster
        inputs:
          action: deploy
          namespace: $(k8sNamespace)
          manifests: $(System.ArtifactsDirectory)/manifests/*
          imagePullSecrets: $(imagePullSecret)
          containers: $(containerRegistry)/$(imageRepository):$(tag)
          # value for kubernetesServiceConnection input automatically passed down to task by environment.resource input
Environment in run details
All environments targeted by deployment jobs of a specific run of a pipeline can be found under the
Environments tab of pipeline run details.

If you're using an AKS private cluster, the Workloads tab isn't available.

Approvals
You can manually control when a stage should run using approval checks. You can use approval checks to
control deployments to production environments. Checks are a mechanism available to the resource owner to
control when a stage in a pipeline consumes a resource. As the owner of a resource, such as an environment, you
can define approvals and checks that must be satisfied before a stage consuming that resource starts.
Currently, manual approval checks are supported on environments. For more information, see Approvals.

Deployment history within environments


The deployment history view within environments provides the following advantages.
1. View jobs from all pipelines that are targeting a specific environment. Consider the scenario where two
microservices, each having its own pipeline, are deploying to the same environment. In that case, the
deployment history listing helps identify all pipelines that are impacting this environment and also helps
visualize the sequence of deployments by each pipeline.

2. Drilling down into the job details reveals the listing of commits and work items that were newly deployed to
the environment.
Security
User permissions
You can control who can create, view, use, and manage the environments with user permissions. There are four
roles - Creator (scope: all environments), Reader, User, and Administrator. In the specific environment's user
permissions panel, you can set the permissions that are inherited and you can override the roles for each
environment.
Navigate to the specific Environment that you would like to authorize.
Click on overflow menu button located at the top-right part of the page next to "Add resource" and choose
Security to view the settings.
In the User permissions blade, click on +Add to add a User or group and select a suitable Role .

Role on an environment and its purpose:

Creator: Global role, available from the environments hub security option. Members of this role can create the
environment in the project. Contributors are added as members by default. Not applicable for environments
auto-created from a YAML pipeline.

Reader: Members of this role can view the environment.

User: Members of this role can use the environment when authoring YAML pipelines.

Administrator: In addition to using the environment, members of this role can manage membership of all other
roles for the environment. Creators are added as members by default.

NOTE
If you create an environment within a YAML pipeline, contributors and project administrators will be granted the
Administrator role. This is typically used in provisioning Dev/Test environments.
If you create an environment through the UI, only the creator will be granted the Administrator role. You should use
the UI to create protected environments, such as a production environment.

Pipeline permissions
Pipeline permissions can be used to authorize all or selected pipelines for deployment to the environment.
To remove Open access on the environment or resource, click the Restrict permission in Pipeline
permissions .
To allow specific pipelines to deploy to an environment or a specific resource, click + and choose from the list
of pipelines.
Environment - Kubernetes resource

Azure Pipelines
Kubernetes resource view within environments provides a glimpse of the status of objects within the namespace
mapped to the resource. It also overlays pipeline traceability on top of these objects so that one can trace back
from a Kubernetes object to the pipeline and then back to the commit.

Overview
The advantages of using Kubernetes resource views within environments include -
Pipeline traceability - The Kubernetes manifest task used for deployments adds additional annotations to
portray pipeline traceability in resource views. This can help in identifying the originating Azure DevOps
organization, project and pipeline responsible for updates made to an object within the namespace.

Diagnose resource health - Workload status can be useful in quickly debugging potential mistakes or
regressions that could have been introduced by a new deployment. For example, in the case of
unconfigured imagePullSecrets resulting in ImagePullBackOff errors, pod status information can help
identify the root cause for this issue.

Review App - Review app works by deploying every pull request from Git repository to a dynamic
Kubernetes resource under the environment. Reviewers can see how those changes look as well as work
with other dependent services before they're merged into the target branch and deployed to production.
Kubernetes resource creation
Azure Kubernetes Service
A ServiceAccount is created in the chosen cluster and namespace. For an RBAC-enabled cluster, a RoleBinding is
created as well to limit the scope of the created service account to the chosen namespace. For an RBAC-disabled
cluster, the ServiceAccount created has cluster-wide privileges (across namespaces).
1. In the environment details page, click on Add resource and choose Kubernetes .
2. Select Azure Kubernetes Service in the Provider dropdown.
3. Choose the Azure subscription, cluster and namespace (new/existing).
4. Click on Validate and create to create the Kubernetes resource.
Using existing service account
While the Azure Provider option creates a new ServiceAccount, the generic provider allows for using an existing
ServiceAccount to allow a Kubernetes resource within environment to be mapped to a namespace.

TIP
Generic provider (existing service account) is useful for mapping a Kubernetes resource to a namespace from a non-AKS
cluster.

1. In the environment details page, click on Add resource and choose Kubernetes .
2. Select Generic provider (existing service account) in the Provider dropdown.
3. Input cluster name and namespace values.
4. For fetching Server URL, execute the following command on your shell:

kubectl config view --minify -o 'jsonpath={.clusters[0].cluster.server}'

5. To fetch the Secret object required to connect and authenticate with the cluster, the following sequence of
commands needs to be run:

kubectl get serviceAccounts <service-account-name> -n <namespace> -o 'jsonpath={.secrets[*].name}'

The above command fetches the name of the secret associated with a ServiceAccount. The output of the
above command is to be substituted into the following command to fetch the Secret object:

kubectl get secret <service-account-secret-name> -n <namespace> -o json

Copy and paste the Secret object fetched in JSON form into the Secret text-field.
6. Click on Validate and create to create the Kubernetes resource.

Setup Review App


Below is an example YAML snippet for adding a Review App. In this example, the first deployment job is run for
non-PR branches and performs deployments against regular Kubernetes resource under environments. The
second job runs only for PR branches and deploys against review app resources (namespaces inside Kubernetes
cluster) generated on the fly. These resources are marked with a 'Review' label in the resource listing view of the
environment.
- stage: Deploy
  displayName: Deploy stage
  dependsOn: Build

  jobs:
  - deployment: Deploy
    condition: and(succeeded(), not(startsWith(variables['Build.SourceBranch'], 'refs/pull/')))
    displayName: Deploy
    pool:
      vmImage: $(vmImageName)
    environment: $(envName).$(resourceName)
    strategy:
      runOnce:
        deploy:
          steps:
          - task: KubernetesManifest@0
            displayName: Create imagePullSecret
            inputs:
              action: createSecret
              secretName: $(imagePullSecret)
              dockerRegistryEndpoint: $(dockerRegistryServiceConnection)

          - task: KubernetesManifest@0
            displayName: Deploy to Kubernetes cluster
            inputs:
              action: deploy
              manifests: |
                $(Pipeline.Workspace)/manifests/deployment.yml
                $(Pipeline.Workspace)/manifests/service.yml
              imagePullSecrets: |
                $(imagePullSecret)
              containers: |
                $(containerRegistry)/$(imageRepository):$(tag)

  - deployment: DeployPullRequest
    displayName: Deploy Pull request
    condition: and(succeeded(), startsWith(variables['Build.SourceBranch'], 'refs/pull/'))
    pool:
      vmImage: $(vmImageName)

    environment: '$(envName).$(k8sNamespaceForPR)'
    strategy:
      runOnce:
        deploy:
          steps:
          - reviewApp: $(resourceName)

          - task: Kubernetes@1
            displayName: 'Create a new namespace for the pull request'
            inputs:
              command: apply
              useConfigurationFile: true
              inline: '{ "kind": "Namespace", "apiVersion": "v1", "metadata": { "name": "$(k8sNamespaceForPR)" }}'

          - task: KubernetesManifest@0
            displayName: Create imagePullSecret
            inputs:
              action: createSecret
              secretName: $(imagePullSecret)
              namespace: $(k8sNamespaceForPR)
              dockerRegistryEndpoint: $(dockerRegistryServiceConnection)

          - task: KubernetesManifest@0
            displayName: Deploy to the new namespace in the Kubernetes cluster
            inputs:
              action: deploy
              namespace: $(k8sNamespaceForPR)
              manifests: |
                $(Pipeline.Workspace)/manifests/deployment.yml
                $(Pipeline.Workspace)/manifests/service.yml
              imagePullSecrets: |
                $(imagePullSecret)
              containers: |
                $(containerRegistry)/$(imageRepository):$(tag)

To use this job in an existing pipeline, the service connection backing the regular Kubernetes environment
resource needs to be modified to "Use cluster admin credentials". Alternatively, role bindings need to be created
for the underlying service account to the review app namespace.
To set up review apps without the need to author the above YAML from scratch or create explicit role bindings
manually, check out the new pipeline creation experience using the Deploy to Azure Kubernetes Services template.
Environment - virtual machine resource

Azure Pipelines | Azure DevOps Server 2020


Virtual machines can be added as resources within environments and can be targeted for multi-VM deployments.
Deployment history views within the environment provide traceability from the VM to the pipeline and then to the
commit.

Virtual machine resource creation


NOTE
You can use this same process to set up physical machines with the registration script.

You can define environments in Environments under Pipelines .


1. Click Create Environment .
2. Specify a Name (required) for the environment and a Description .
3. Choose Virtual Machines as a Resource to be added to the environment and click Next.
4. Choose Windows or Linux for the Operating System .
5. Copy the registration script.
6. Run the copied script from an administrator PowerShell command prompt on each of the target VMs that you
want to register with this environment.

NOTE
The Personal Access Token (PAT) of the logged in user is included in the script. The PAT expires on the day you generate
the script.
If your VM already has any other agent running on it, provide a unique name for the agent to register with the environment.

7. Once your VM is registered, it will start appearing as an environment resource under the Resources tab of the
environment.

8. To add more VMs, copy the script again by clicking Add resource and selecting Virtual Machines. This script
remains the same for all the VMs added to the environment.
9. Each machine interacts with Azure Pipelines to coordinate deployment of your app.

Adding and managing tags


You can add tags to the VM as part of the interactive PS registration script. You can also add or remove tags from
the resource view by clicking on ... at the end of each VM resource on the Resources tab.
The tags you assign allow you to limit deployment to specific virtual machines when the environment is used in a
deployment job. Tags are each limited to 256 characters. There is no limit to the number of tags you can use.

Reference VM resources in pipelines


Create a new pipeline by referencing the environment and VM resources in a pipeline YAML. The environment will
be created if it does not already exist.

jobs:
- deployment: VMDeploy
  displayName: web
  environment:
    name: VMenv
    resourceType: VirtualMachine
    tags: web1
  strategy:

You can select specific sets of virtual machines from the environment to receive the deployment by specifying the
tags that you have defined. Here is the complete YAML schema for a deployment job.
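
For illustration, a minimal runOnce completion of the snippet above might look like this; the echo step is a
placeholder for your actual deployment script:

jobs:
- deployment: VMDeploy
  displayName: web
  environment:
    name: VMenv
    resourceType: VirtualMachine
    tags: web1
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo Deploying to the tagged VMs   # placeholder deployment step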

Apply deployment strategy


You can apply a deployment strategy to define how your application is rolled out. The runOnce strategy and the
rolling strategy for VMs are both supported. Here is the reference documentation for deployment strategies and
the details about various life-cycle hooks.

Deployment history views


The Deployments tab provides complete traceability of commits and work items, and a cross-pipeline
deployment history per environment and resource.

Remove a VM from an Environment


To unconfigure virtual machines that are previously added to an environment, run this command from an
administrator PowerShell command prompt on each of the machines, in the same folder path where the script to
register to the environment has been previously run:

./config.cmd remove

Known limitations
When you retry a stage, it will rerun the deployment on all VMs and not just failed targets.

Next steps
Learn more about deployment jobs and environments.
To learn what else you can do in YAML pipelines, see the YAML schema reference.
Deploy to a Linux Virtual Machine

Azure Pipelines provides a complete, fully featured set of CI/CD automation tools for deployments to virtual
machines.
You can use continuous integration (CI) and continuous deployment (CD) to build, release, and deploy your code.
Learn how to set up a CI/CD pipeline for multi-machine deployments.
This article covers how to set up continuous deployment of your app to a web server running on Ubuntu. You can
use these steps for any app that publishes a web deployment package.

Get your sample code


Java
JavaScript
If you already have an app in GitHub that you want to deploy, you can create a pipeline for that code.
If you are a new user, fork this repo in GitHub:

https://github.com/spring-projects/spring-petclinic

NOTE
Petclinic is a Spring Boot application built using Maven.

Prerequisites for the Linux VM


Use Ubuntu 16.04 for this quickstart. Follow additional steps for Java or JavaScript.
Java
JavaScript
For deploying Java Spring Boot and Spring Cloud based apps, create a Linux VM in Azure using this template,
which provides a fully supported OpenJDK-based runtime.
For deploying Java servlets on Tomcat server, create a Linux VM with Java 8 using this Azure template and
configure Tomcat 9.x as a service.
For deploying a Java EE-based WildFly app, follow the blog post here. To provision the VM, use an Azure template
to create a Linux VM + Java + WebSphere 9.x, a Linux VM + Java + WebLogic 12.x, or a Linux VM + Java +
WildFly/JBoss 14.

Create an environment with virtual machines


Virtual machines can be added as resources within environments and can be targeted for multi-VM deployments.
The deployment history view provides traceability from the VM to the commit.
You can create an environment in Environments within Pipelines .
1. Sign into your Azure DevOps organization and navigate to your project.
2. Navigate to the Pipelines page. Select Environments and click Create Environment . Specify a Name
(required) for the environment and a Description .
3. Choose Virtual Machines as a Resource to be added to the environment. Click Next.
4. Choose Windows or Linux for the Operating System and copy the PS registration script.
5. Run the copied script from an administrator PowerShell command prompt on each of the target VMs
registered with this environment.

NOTE
The Personal Access Token (PAT) of the logged in user is pre-inserted in the script and expires after three hours.
If your VM already has any agent running on it, provide a unique name to register with the environment.

6. Once the VM is registered, it will start appearing as an environment resource under Resources.

7. To add more VMs, copy the script again. Click Add resource and choose Virtual Machines. This script is
the same for all the VMs you want to add to the same environment.
8. Each machine interacts with Azure Pipelines to coordinate deployment of your app.

9. You can add or remove tags for the VM. Click on the dots at the end of each VM resource in Resources . The
tags you assign allow you to limit deployment to specific VMs when the environment is used in a
deployment job. Tags are each limited to 256 characters, but there is no limit to the number of tags you can
create.
Define your CI build pipeline
You'll need a continuous integration (CI) build pipeline that publishes your web application and a deployment script
that can be run locally on the Ubuntu server. Set up a CI build pipeline based on the runtime you want to use.
1. Sign in to your Azure DevOps organization and navigate to your project.
2. In your project, navigate to the Pipelines page. Then choose the action to create a new pipeline.
3. Walk through the steps of the wizard by first selecting GitHub as the location of your source code.
4. You may be redirected to GitHub to sign in. If so, enter your GitHub credentials.
5. When the list of repositories appears, select your desired sample app repository.
6. Azure Pipelines will analyze your repository and recommend a suitable pipeline template.
Java
JavaScript
Select the starter template and copy this YAML snippet to build your Java project and run tests with Apache
Maven:

- job: Build
  displayName: Build Maven Project
  steps:
  - task: Maven@3
    displayName: 'Maven Package'
    inputs:
      mavenPomFile: 'pom.xml'
  - task: CopyFiles@2
    displayName: 'Copy Files to artifact staging directory'
    inputs:
      SourceFolder: '$(System.DefaultWorkingDirectory)'
      Contents: '**/target/*.?(war|jar)'
      TargetFolder: $(Build.ArtifactStagingDirectory)
  - upload: $(Build.ArtifactStagingDirectory)
    artifact: drop

For more guidance, follow the steps mentioned in Build your Java app with Maven for creating a build.
Define CD steps to deploy to the Linux VM
1. Edit your pipeline and include a deployment job by referencing the environment and the VM resources you
created earlier:

jobs:
- deployment: VMDeploy
  displayName: web
  environment:
    name: <environment name>
    resourceType: VirtualMachine
    tags: web1
  strategy:

2. You can select specific sets of virtual machines from the environment to receive the deployment by
specifying the tags that you have defined for each virtual machine in the environment. Here is the complete
YAML schema for Deployment job.
3. You can specify either runOnce or rolling as a deployment strategy.
runOnce is the simplest deployment strategy. All the life-cycle hooks, namely preDeploy, deploy,
routeTraffic, and postRouteTraffic, are executed once. Then, either on: success or on: failure is
executed.
Below is an example YAML snippet for runOnce :

jobs:
- deployment: VMDeploy
  displayName: web
  pool:
    vmImage: 'Ubuntu-16.04'
  environment:
    name: <environment name>
    resourceType: VirtualMachine
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo my first deployment

4. Below is an example YAML snippet for the rolling strategy. You can update up to five targets in each
iteration. maxParallel determines the number of targets that can be deployed to in parallel. The selection
accounts for the absolute number or percentage of targets that must remain available at any time, excluding the
targets that are being deployed to. It is also used to determine the success and failure conditions during
deployment.
jobs:
- deployment: VMDeploy
  displayName: web
  environment:
    name: <environment name>
    resourceType: VirtualMachine
  strategy:
    rolling:
      maxParallel: 2  #for percentages, mention as x%
      preDeploy:
        steps:
        - download: current
          artifact: drop
        - script: echo initialize, cleanup, backup, install certs
      deploy:
        steps:
        - task: Bash@3
          inputs:
            targetType: 'inline'
            script: |
              # Modify deployment script based on the app type
              echo "Starting deployment script run"
              sudo java -jar '$(Pipeline.Workspace)/drop/**/target/*.jar'
      routeTraffic:
        steps:
        - script: echo routing traffic
      postRouteTraffic:
        steps:
        - script: echo health check post-route traffic
      on:
        failure:
          steps:
          - script: echo Restore from backup! This is on failure
        success:
          steps:
          - script: echo Notify! This is on success

With each run of this job, deployment history is recorded against the <environment name> environment that
you have created and in which you have registered the VMs.

Pipeline traceability views in environment


The Deployments view provides complete traceability of commits and work items, and a cross-pipeline
deployment history per environment.
Next Steps
To learn more about the topics in this guide see Jobs, Tasks, Catalog of Tasks, Variables, Triggers, or Troubleshooting.
To learn what else you can do in YAML pipelines, see YAML schema reference.
Quickstart: Use an ARM template to deploy a Linux
web app to Azure
11/2/2020 • 7 minutes to read • Edit Online

Get started with Azure Resource Manager templates (ARM templates) by deploying a Linux web app with MySQL.
ARM templates give you a way to save your configuration in code. Using an ARM template is an example of
infrastructure as code and a good DevOps practice.
An ARM template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for
your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to
write the sequence of programming commands to create it.

Prerequisites
Before you begin, you need:
An Azure account with an active subscription. Create an account for free.
An active Azure DevOps organization. Sign up for Azure Pipelines.

Create a project
If you signed up for Azure DevOps with a newly created Microsoft account (MSA), your project is automatically
created and named based on your sign-in.
If you signed up for Azure DevOps with an existing MSA or GitHub identity, you're automatically prompted to create
a project. You can create either a public or private project. To learn more about public projects, see What is a public
project?.
1. Enter information into the form provided, which includes a project name, description, visibility selection,
initial source control type, and work item process.

See choosing the right version control for your project and choose a process for guidance.
2. When your project is complete, the welcome page appears.

Get the code


Fork this repo in GitHub:

https://github.com/Azure/azure-quickstart-templates/

Review the template


The template used in this quickstart is from Azure Quickstart Templates.

{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"contentVersion": "1.0.0.0",
"parameters": {
"siteName": {
"type": "string",
"defaultValue": "[concat('MySQL-', uniqueString(resourceGroup().name))]",
"metadata": {
"description": "The unique name of your Web Site."
}
},
"administratorLogin": {
"type": "string",
"minLength": 1,
"metadata": {
"description": "Database administrator login name"
}
},
"administratorLoginPassword": {
"type": "securestring",
"minLength": 8,
"metadata": {
"description": "Database administrator password"
}
},
"dbSkucapacity": {
"type": "int",
"defaultValue": 2,
"allowedValues": [
2,
4,
8,
16,
32
],
"metadata": {
"description": "Azure database for mySQL compute capacity in vCores (2,4,8,16,32)"
}
},
"dbSkuName": {
"type": "string",
"defaultValue": "GP_Gen5_2",
"allowedValues": [
"GP_Gen5_2",
"GP_Gen5_4",
"GP_Gen5_8",
"GP_Gen5_16",
"GP_Gen5_32",
"MO_Gen5_2",
"MO_Gen5_4",
"MO_Gen5_8",
"MO_Gen5_16",
"MO_Gen5_32"
],
"metadata": {
"description": "Azure database for mySQL sku name "
}
},
"dbSkuSizeMB": {
"type": "int",
"defaultValue": 51200,
"allowedValues": [
102400,
51200
],
"metadata": {
"description": "Azure database for mySQL Sku Size "
}
},
"dbSkuTier": {
"type": "string",
"defaultValue": "GeneralPurpose",
"defaultValue": "GeneralPurpose",
"allowedValues": [
"GeneralPurpose",
"MemoryOptimized"
],
"metadata": {
"description": "Azure database for mySQL pricing tier"
}
},
"mysqlVersion": {
"type": "string",
"defaultValue": "5.7",
"allowedValues": [
"5.6",
"5.7"
],
"metadata": {
"description": "MySQL version"
}
},
"location": {
"type": "string",
"defaultValue": "[resourceGroup().location]",
"metadata": {
"description": "Location for all resources."
}
},
"databaseskuFamily": {
"type": "string",
"defaultValue": "Gen5",
"metadata": {
"description": "Azure database for mySQL sku family"
}
}
},
"variables": {
"databaseName": "[concat('database', uniqueString(resourceGroup().id))]",
"serverName": "[concat('mysql-', uniqueString(resourceGroup().id))]",
"hostingPlanName": "[concat('hpn-', uniqueString(resourceGroup().id))]"
},
"resources": [
{
"type": "Microsoft.Web/serverfarms",
"apiVersion": "2020-06-01",
"name": "[variables('hostingPlanName')]",
"location": "[parameters('location')]",
"sku": {
"Tier": "Standard",
"Name": "S1"
},
"kind": "linux",
"properties": {
"name": "[variables('hostingPlanName')]",
"workerSizeId": "1",
"reserved": true,
"numberOfWorkers": "1"
}
},
{
"type": "Microsoft.Web/sites",
"apiVersion": "2020-06-01",
"name": "[parameters('siteName')]",
"location": "[parameters('location')]",
"dependsOn": [
"[variables('hostingPlanName')]"
],
"properties": {
"siteConfig": {
"linuxFxVersion": "php|7.0",
"connectionStrings": [
{
"name": "defaultConnection",
"ConnectionString": "[concat('Database=', variables('databaseName'), ';Data Source=',
reference(resourceId('Microsoft.DBforMySQL/servers',variables('serverName'))).fullyQualifiedDomainName,';User
Id=',parameters('administratorLogin'),'@',variables('serverName')
,';Password=',parameters('administratorLoginPassword'))]",
"type": "MySql"
}
]
},
"name": "[parameters('siteName')]",
"serverFarmId": "[variables('hostingPlanName')]"
}
},
{
"type": "Microsoft.DBforMySQL/servers",
"apiVersion": "2017-12-01",
"name": "[variables('serverName')]",
"location": "[parameters('location')]",
"sku": {
"name": "[parameters('dbSkuName')]",
"tier": "[parameters('dbSkuTier')]",
"capacity": "[parameters('dbSkucapacity')]",
"size": "[parameters('dbSkuSizeMB')]",
"family": "[parameters('databaseSkuFamily')]"
},
"properties": {
"createMode": "Default",
"version": "[parameters('mysqlVersion')]",
"administratorLogin": "[parameters('administratorLogin')]",
"administratorLoginPassword": "[parameters('administratorLoginPassword')]",
"storageProfile": {
"storageMB": "[parameters('dbSkuSizeMB')]",
"backupRetentionDays": 7,
"geoRedundantBackup": "Disabled"
},
"sslEnforcement": "Disabled"
},
"resources": [
{
"type": "firewallrules",
"apiVersion": "2017-12-01",
"name": "AllowAzureIPs",
"location": "[parameters('location')]",
"dependsOn": [
"[resourceId('Microsoft.DBforMySQL/servers/databases', variables('serverName'),
variables('databaseName'))]",
"[resourceId('Microsoft.DBforMySQL/servers/', variables('serverName'))]"
],
"properties": {
"startIpAddress": "0.0.0.0",
"endIpAddress": "255.255.255.255"
}
},
{
"type": "databases",
"apiVersion": "2017-12-01",
"name": "[variables('databaseName')]",
"dependsOn": [
"[resourceId('Microsoft.DBforMySQL/servers/', variables('serverName'))]"
],
"properties": {
"charset": "utf8",
"collation": "utf8_general_ci"
}
}
]
}
}
]
}

The template defines several resources:


Microsoft.Web/serverfarms
Microsoft.Web/sites
Microsoft.DBforMySQL/servers
Microsoft.DBforMySQL/servers/firewallrules
Microsoft.DBforMySQL/servers/databases

Create your pipeline and deploy your template


1. Sign in to your Azure DevOps organization and navigate to your project.
2. Go to Pipelines , and then select Create Pipeline .
3. Select GitHub as the location of your source code.

NOTE
You may be redirected to GitHub to sign in. If so, enter your GitHub credentials.

4. When the list of repositories appears, select yourname/azure-quickstart-templates/ .

NOTE
You may be redirected to GitHub to install the Azure Pipelines app. If so, select Approve and install.

5. When the Configure tab appears, select Starter pipeline .


6. Replace the content of your pipeline with this code:

trigger:
- none

pool:
vmImage: 'ubuntu-latest'

7. Create three variables: siteName , adminUser , and adminPass .


adminPass needs to be a secret variable.
Select Variables .
Use the + sign to add three variables. When you create adminPass , select Keep this value secret .
Click Save when you're done.

VARIABLE     VALUE         SECRET?

siteName     mytestsite    No
adminUser    fabrikam      No
adminPass    Fqdn:5362!    Yes

8. Map the secret variable $(adminPass) so that it is available in your Azure Resource Group Deployment task.
At the top of your YAML file, map $(adminPass) to $(ARM_PASS) .

variables:
ARM_PASS: $(adminPass)

trigger:
- none

pool:
vmImage: 'ubuntu-latest'

9. Add the Copy Files task to the YAML file. You will use the 101-webapp-linux-managed-mysql project. For more
information, see the Build a Web app on Linux with Azure database for MySQL repo.

variables:
ARM_PASS: $(adminPass)

trigger:
- none

pool:
vmImage: 'ubuntu-latest'

steps:
- task: CopyFiles@2
inputs:
SourceFolder: '101-webapp-linux-managed-mysql'
Contents: '**'
TargetFolder: '$(Build.ArtifactStagingDirectory)'

10. Add and configure the Azure Resource Group Deployment task.
The task references both the artifact you built with the Copy Files task and your pipeline variables. Set these
values when configuring your task.
Deployment scope (deploymentScope) : Set the deployment scope to Resource Group . You can target
your deployment to a management group, an Azure subscription, or a resource group.
Azure Resource Manager connection (azureResourceManagerConnection) : Select your Azure
Resource Manager service connection. To configure new service connection, select the Azure subscription
from the list and click Authorize . See Connect to Microsoft Azure for more details
Subscription (subscriptionId) : Select the subscription where the deployment should go.
Action (action) : Set to Create or update resource group to create a new resource group or to update an
existing one.
Resource group : Set to ARMPipelinesLAMP-rg to name your new resource group. If this is an existing
resource group, it will be updated.
Location (location) : Location for deploying the resource group. Set to your closest location (for example,
West US). If the resource group already exists in your subscription, this value will be ignored.
Template location (templateLocation) : Set to Linked artifact . This is the location of your template and
the parameters files.
Template (csmFile) : Set to $(Build.ArtifactStagingDirectory)/azuredeploy.json . This is the path to the
ARM template.
Template parameters (csmParametersFile) : Set to
$(Build.ArtifactStagingDirectory)/azuredeploy.parameters.json . This is the path to the parameters file for
your ARM template.
Override template parameters (overrideParameters) : Set to
-siteName $(siteName) -administratorLogin $(adminUser) -administratorLoginPassword $(ARM_PASS) to use
the variables you created earlier. These values will replace the parameters set in your template
parameters file.
Deployment mode (deploymentMode) : The way resources should be deployed. Set to Incremental .
Incremental keeps resources that are not in the ARM template and is faster than Complete . Validate
mode lets you find problems with the template before deploying.

variables:
ARM_PASS: $(adminPass)

trigger:
- none

pool:
vmImage: 'ubuntu-latest'

steps:
- task: CopyFiles@2
inputs:
SourceFolder: '101-webapp-linux-managed-mysql'
Contents: '**'
TargetFolder: '$(Build.ArtifactStagingDirectory)'

- task: AzureResourceManagerTemplateDeployment@3
inputs:
deploymentScope: 'Resource Group'
azureResourceManagerConnection: '<your-resource-manager-connection>'
subscriptionId: '<your-subscription-id>'
action: 'Create Or Update Resource Group'
resourceGroupName: 'ARMPipelinesLAMP-rg'
location: '<your-closest-location>'
templateLocation: 'Linked artifact'
csmFile: '$(Build.ArtifactStagingDirectory)/azuredeploy.json'
csmParametersFile: '$(Build.ArtifactStagingDirectory)/azuredeploy.parameters.json'
overrideParameters: '-siteName $(siteName) -administratorLogin $(adminUser) -
administratorLoginPassword $(ARM_PASS)'
deploymentMode: 'Incremental'

11. Click Save and run to deploy your template. The pipeline job will be launched and, after a few minutes,
depending on your agent, the job status should indicate Success .

Review deployed resources


1. Verify that the resources deployed. Go to the ARMPipelinesLAMP-rg resource group in the Azure portal and
verify that you see App Service, App Service Plan, and Azure Database for MySQL server resources.

You can also verify the resources using Azure CLI.


az resource list --resource-group ARMPipelinesLAMP-rg --output table

2. Go to your new site. If you set siteName to armpipelinetestsite , the site is located at
https://armpipelinetestsite.azurewebsites.net/ .

Clean up resources
You can also use an ARM template to delete resources. Change the action value in your Azure Resource Group
Deployment task to DeleteRG . You can also remove the inputs for templateLocation , csmFile , csmParametersFile
, overrideParameters , and deploymentMode .

variables:
ARM_PASS: $(adminPass)

trigger:
- none

pool:
vmImage: 'ubuntu-latest'

steps:
- task: CopyFiles@2
inputs:
SourceFolder: '101-webapp-linux-managed-mysql'
Contents: '**'
TargetFolder: '$(Build.ArtifactStagingDirectory)'

- task: AzureResourceManagerTemplateDeployment@3
inputs:
deploymentScope: 'Resource Group'
azureResourceManagerConnection: '<your-resource-manager-connection>'
subscriptionId: '<your-subscription-id>'
action: 'DeleteRG'
resourceGroupName: 'ARMPipelinesLAMP-rg'
location: '<your-closest-location>'

Next steps
Create your first ARM template
Why data pipelines?
11/2/2020 • 2 minutes to read • Edit Online

You can use data pipelines to:


Ingest data from various data sources
Process and transform the data
Save the processed data to a staging location for others to consume

Data pipelines in the enterprise can evolve into more complicated scenarios with multiple source systems and
supporting various downstream applications.
Data pipelines provide:
Consistency: Data pipelines transform data into a consistent format for users to consume
Error reduction: Automated data pipelines eliminate human errors when manipulating data
Efficiency: Data professionals save time spent on data processing and transformation. Saving time allows them to
focus on their core job function - getting insight out of the data and helping the business make better decisions

What is CI/CD?
Continuous integration and continuous delivery (CI/CD) is a software development approach where all developers
work together on a shared repository of code – and as changes are made, there is an automated build process for
detecting code issues. The outcome is a faster development life cycle and a lower error rate.

What is a CI/CD data pipeline and why does it matter for data science?
The building of machine learning models is similar to traditional software development in the sense that the data
scientist needs to write code to train and score machine learning models.
Unlike traditional software development where the product is based on code, data science machine learning
models are based on both the code (algorithm, hyperparameters) and the data used to train the model. That's why
most data scientists will tell you that they spend 80% of the time doing data preparation, cleaning, and feature
engineering.
To complicate the matter even further – to ensure the quality of machine learning models, techniques such as
A/B testing are used – where there could be multiple machine learning models being used concurrently. There is
usually one control model and one or more treatment models for comparison – so that the model performance can
be compared and maintained. Having multiple models adds another layer of complexity for the CI/CD of machine
learning models.
Having a CI/CD data pipeline is crucial for the data science team to deliver the machine learning models to the
business in a timely and quality manner.

Next steps
Build a data pipeline with Azure
Build a data pipeline with DevOps, Azure Data
Factory, and machine learning
11/2/2020 • 8 minutes to read • Edit Online

Get started with data pipelines by building a data pipeline with data ingestion, data transformation, and model
training.
Learn how to grab data from a CSV and save to blob storage, and then transform the data and save it to a staging
area. Then train a machine learning model using the transformed data and output the model as pickle file to blob
storage.

Prerequisites
Before you begin, you need:
An Azure account with an active subscription. Create an account for free.
An active Azure DevOps organization. Sign up for Azure Pipelines.
Downloaded data (sample.csv)
Access to the data pipeline solution in GitHub
DevOps for Azure Databricks

Provision Azure resources


1. Sign in to the Azure portal.
2. From the menu, select Cloud Shell . When prompted, select the Bash experience.

NOTE
Cloud Shell requires an Azure storage resource to persist any files that you create in Cloud Shell. When you first open
Cloud Shell, you're prompted to create a resource group, storage account, and Azure Files share. This setup is
automatically used for all future Cloud Shell sessions.

Select an Azure region


A region is one or more Azure datacenters within a geographic location. East US, West US, and North Europe are
examples of regions. Every Azure resource, including an App Service instance, is assigned a region.
To make commands easier to run, start by selecting a default region. After you specify the default region, later
commands use that region unless you specify a different region.
1. From Cloud Shell, run the following az account list-locations command to list the regions that are
available from your Azure subscription.

az account list-locations \
--query "[].{Name: name, DisplayName: displayName}" \
--output table
2. From the Name column in the output, choose a region that's close to you. For example, choose eastasia or
westus2 .

3. Run az configure to set your default region. Replace <REGION> with the name of the region you chose.

az configure --defaults location=<REGION>

This example sets westus2 as the default region:

az configure --defaults location=westus2

Create Bash variables


1. From Cloud Shell, generate a random number. This will make it easier to create globally unique names for
certain services in the next step.

resourceSuffix=$RANDOM

2. Create globally unique names for your storage account and key vault. These commands use double quotes,
which instruct Bash to interpolate the variables using the inline syntax.

storageName="datacicd${resourceSuffix}"
keyVault="keyvault${resourceSuffix}"

3. Create one more Bash variable to store the name of your resource group.

rgName='data-pipeline-cicd-rg'

4. Create variable names for your Azure Data Factory and Azure Databricks instances.

datafactorydev='data-factory-cicd-dev'
datafactorytest='data-factory-cicd-test'

databricksname='databricks-cicd-ws'

Create Azure resources


1. Run the following az group create command to create a resource group using rgName .

az group create --name $rgName

2. Run the following az storage account create command to create a new storage account.

az storage account create \
    --name $storageName \
    --resource-group $rgName \
    --sku Standard_RAGRS \
    --kind StorageV2

a. Run the following az storage container create command to create two containers, rawdata and
prepareddata .
az storage container create -n rawdata --account-name $storageName
az storage container create -n prepareddata --account-name $storageName

3. Run the following az keyvault create command to create a new key vault.

az keyvault create \
--name $keyVault \
--resource-group $rgName

4. Create a new Azure Data Factory within the portal UI or using Azure CLI.
Name: data-factory-cicd-dev
Version: V2
Resource Group: data-pipeline-cicd-rg
Location: your closest location
Uncheck Enable GIT
a. Add the Azure Data Factory extension.

az extension add --name datafactory

b. Run the following az datafactory factory create command to create a new data factory.

az datafactory factory create \
    --name data-factory-cicd-dev \
    --resource-group $rgName

c. Copy the Subscription ID for your Data Factory to use later.


5. Create a second Azure Data Factory within the portal UI or using Azure CLI for test.
Name: data-factory-cicd-test
Version: V2
Resource Group: data-pipeline-cicd-rg
Location: your closest location
Uncheck Enable GIT
a. Run the following az datafactory factory create command to create a new data factory for testing.

az datafactory factory create \
    --name data-factory-cicd-test \
    --resource-group $rgName

b. Copy the Subscription ID for your Data Factory to use later.


6. Add a new Azure Databricks service.
Resource Group: data-pipeline-cicd-rg
Workspace name: databricks-cicd-ws
Location: your closest location
a. Add the Azure Databricks extension if it is not already installed.

az extension add --name databricks


b. Run the following az databricks workspace create command to create a new workspace.

az databricks workspace create \
    --resource-group $rgName \
    --name databricks-cicd-ws \
    --location eastus2 \
    --sku trial

7. Copy the Subscription ID for your Databricks service to use later.

Upload data to your storage container


1. Open your storage account in the Azure portal UI in the data-pipeline-cicd-rg resource group.
2. Go to Blob Service > Containers .
3. Open the prepareddata container.
4. Upload sample.csv (source).

Set up Key Vault


You will use Azure Key Vault to store all connection information for your Azure services.
Create a Databricks personal access token
1. Go to Databricks in the Azure portal and launch your workspace.
2. Generate and copy a personal access token in the Azure Databricks UI (steps).
Copy the account key and connection string for your storage account
1. Go to your storage account.
2. Open Access keys .
3. Copy the first key and connection string.
Save values to Key Vault
1. Create three secrets:
databricks-token: your-databricks-pat
StorageKey: your-storage-key
StorageConnectString: your-storage-connection
Then, run the following az keyvault secret set command to add secrets to your key vault.

az keyvault secret set --vault-name "$keyVault" --name "databricks-token" --value "your-databricks-pat"
az keyvault secret set --vault-name "$keyVault" --name "StorageKey" --value "your-storage-key"
az keyvault secret set --vault-name "$keyVault" --name "StorageConnectString" --value "your-storage-connection"

Import the data pipeline solution


1. Sign in to your Azure DevOps organization and navigate to your project.
2. Go to Repos and import your forked version of the GitHub repository. Learn more about importing
repositories from GitHub.

Add an Azure Resource Manager service connection


1. Create an Azure Resource Manager service connection.
2. Select Service Principal (automatic) .
3. Choose the data-pipeline-cicd-rg resource group.
4. Name the service connection azure_rm_connection .
5. Check Grant access permission to all pipelines .

Add pipeline variables


1. Create a new variable group named datapipeline-vg .
2. Add the Azure DevOps extension if it isn't already installed.

az extension add --name azure-devops

3. Sign in to your Azure DevOps account.

az devops login --org https://dev.azure.com/<yourorganizationname>

az pipelines variable-group create --name datapipeline-vg -p <yourazuredevopsprojectname> --variables `
    "LOCATION=$region" `
    "RESOURCE_GROUP=$rgName" `
    "DATA_FACTORY_NAME=$datafactorydev" `
    "DATA_FACTORY_DEV_NAME=$datafactorydev" `
    "DATA_FACTORY_TEST_NAME=$datafactorytest" `
    "ADF_PIPELINE_NAME=DataPipeline" `
    "DATABRICKS_NAME=$databricksname" `
    "AZURE_RM_CONNECTION=azure_rm_connection" `
    "DATABRICKS_URL=<URL copied from Databricks in Azure portal>" `
    "STORAGE_ACCOUNT_NAME=$storageName" `
    "STORAGE_CONTAINER_NAME=rawdata"

4. Create a second variable group named keys-vg that pulls data variables from Key Vault.
5. Check Link secrets from an Azure key vault as variables . Learn how to link secrets from an Azure key
vault.
6. Authorize the Azure subscription.
7. Choose all of the available secrets to add as variables ( databricks-token , StorageConnectString , StorageKey ).
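As a hedged sketch of how the two variable groups can then be consumed, a pipeline YAML file could reference them like this (the group names are the ones created above; the surrounding pipeline definition is omitted):

variables:
- group: datapipeline-vg   # non-secret pipeline settings created above
- group: keys-vg           # secrets linked from Azure Key Vault
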

Configure Azure Databricks and Azure Data Factory


Create testscope in Azure Databricks
1. Go to Key vault > Properties in the Azure portal UI.
2. Copy the DNS Name and Resource ID .
3. Create a secret scope in your Azure Databricks workspace named testscope .
Add a new cluster in Azure Databricks
1. Go to Clusters in the Azure Databricks workspace.
2. Select Create Cluster .
3. Name and save your new cluster.
4. Click on your new cluster name.
5. In the URL string, copy the content between /clusters/ and /configuration . For example, in the string
clusters/0306-152107-daft561/configuration , you would copy 0306-152107-daft561 .
6. Save this string to use later.
Set up your code repository in Azure Data Factory
1. Go to Author & Monitor in Azure Data Factory. Learn more about setting up Azure Data Factory.
2. Select Set up code repository and connect your repo.
Repository type: Azure DevOps Git
Azure DevOps organization: Your active account
Project name: Your Azure DevOps data pipeline project
Git repository name: Use existing .
Select the master branch for collaboration.
Set /azure-data-pipeline/factorydata as the root folder
Branch to import resource into: Select Use existing and master
Link Azure Data Factory to your key vault
1. In the Azure portal UI, open the key vault.
2. Select Access policies .
3. Select Add Access Policy .
4. For Configure from template , select Key & Secret Management .
5. In Select principal , search for the name of your dev Data Factory and add it.
6. Select Add to add your access policies.
7. Repeat these steps to add an Access policy for the test Data Factory.
Update key vault linked service in Azure Data Factory
1. Go to Manage > Linked services .
2. Update the Azure key vault to connect to your subscription.
Update storage linked service in Azure Data Factory
1. Go to Manage > Linked services .
2. Update the Azure Blob Storage value to connect to your subscription.
Update Azure Databricks linked service in Azure Data Factory
1. Go to Manage > Linked services .
2. Update the Azure Databricks value to connect to your subscription.
3. For the Existing Cluster ID , enter the cluster value you saved earlier.
Test and publish the data factory
1. Go to Edit in Azure Data Factory.
2. Open DataPipeline .
3. Select Variables .
4. Verify that the storage_account_name refers to your storage account in the Azure portal. Update the default
value if necessary and save.
5. Select Validate to verify DataPipeline .
6. Select Publish to publish Data Factory assets to the adf_publish branch of your repository.

Run the CI/CD pipeline


1. Navigate to the Pipelines page. Then choose the action to create a new pipeline.
2. Select Azure Repos Git as the location of your source code.
3. When the list of repositories appears, select your repository.
4. When configuring your pipeline, select Existing Azure Pipelines YAML file and choose the YAML file at
/azure-data-pipeline/data_pipeline_ci_cd.yml .
5. Run the pipeline. If this is the first time running your pipeline, you may need to give permission to access a
resource during the run.

Clean up resources
If you're not going to continue to use this application, delete your data pipeline with the following steps:
1. Delete the data-pipeline-cicd-rg resource group.
2. Delete your Azure DevOps project.

Next steps
Learn more about data in Azure Data Factory
Deploy apps to Azure Government Cloud
2/26/2020 • 2 minutes to read • Edit Online

Azure Pipelines | TFS 2017 | TFS 2018

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Azure Government Clouds provide private and semi-isolated locations for specific Government or other services,
separate from the normal Azure services. Highest levels of privacy have been adopted for these clouds, including
restricted data access policies.
Azure Pipelines is not available in Azure Government Clouds, so there are some special considerations when you
want to deploy apps to Government Clouds because artifact storage, build, and deployment orchestration must
execute outside the Government Cloud.
To enable connection to an Azure Government Cloud, you specify it as the Environment parameter when you
create an Azure Resource Manager service connection. You must use the full version of the service connection
dialog to manually define the connection. Before you configure a service connection, you should also ensure you
meet all relevant compliance requirements for your application.
You can then use the service connection in your build and release pipeline tasks.
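For example, once an Azure Resource Manager service connection targeting the Government Cloud exists, a YAML task can reference it by name. The snippet below is only a hedged sketch; the connection name, app name, and package path are placeholder values, not values from this article.

steps:
- task: AzureWebApp@1
  inputs:
    azureSubscription: 'my-azure-gov-connection'   # ARM service connection configured for the Government Cloud
    appName: 'my-gov-web-app'                      # placeholder App Service name
    package: '$(Build.ArtifactStagingDirectory)/**/*.zip'
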
Next
Deploy an Azure Web App
Troubleshoot Azure Resource Manager service connections

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and feel free to post your questions, search for answers, or suggest a feature on
our Azure DevOps Developer Community. Support page.
Connect to Microsoft Azure
11/2/2020 • 6 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are
called phases.

To deploy your app to an Azure resource (to an app service or to a virtual machine), you need an Azure
Resource Manager service connection.

For other types of connection, and general information about creating and using connections, see Service
connections for builds and releases.

Create an Azure Resource Manager service connection using automated security

We recommend this simple approach if:
You're signed in as the owner of the Azure Pipelines organization and the Azure subscription.
You don't need to further limit the permissions for Azure resources accessed through the service
connection.
You're not connecting to Azure Stack or an Azure Government Cloud.
You're not connecting from Azure DevOps Server 2019 or earlier versions of TFS
1. In Azure DevOps, open the Service connections page from the project settings page. In TFS, open the
Services page from the "settings" icon in the top menu bar.
2. Choose + New service connection and select Azure Resource Manager .
3. Specify the following parameters.

PARAMETER           DESCRIPTION

Connection Name     Required. The name you will use to refer to this service connection in task properties.
                    This is not the name of your Azure subscription.

Scope level         Select Subscription or Management Group. Management groups are containers that help you
                    manage access, policy, and compliance across multiple subscriptions.

Subscription        If you selected Subscription for the scope, select an existing Azure subscription. If you
                    don't see any Azure subscriptions or instances, see Troubleshoot Azure Resource Manager
                    service connections.

Management Group    If you selected Management Group for the scope, select an existing Azure management group.
                    See Create management groups.

Resource Group      Leave empty to allow users to access all resources defined within the subscription, or
                    select a resource group to which you want to restrict users' access (users will be able to
                    access only the resources defined within that group).

4. After the new service connection is created:


If you're using the classic editor, select the connection name you assigned in the Azure
subscription setting of your pipeline.
If you're using YAML, copy the connection name into your code as the azureSubscription value.
5. To deploy to a specific Azure resource, the task will need additional data about that resource.
If you're using the classic editor, select data you need. For example, the App service name.
If you're using YAML, then go to the resource in the Azure portal, and then copy the data into your
code. For example, to deploy a web app, you would copy the name of the App Service into the
WebAppName value.
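As a hedged illustration of where these two values go, the sketch below uses the Azure App Service deploy task; 'my-arm-connection' and 'my-web-app' are placeholder names for your own service connection and App Service.

steps:
- task: AzureRmWebAppDeployment@4
  inputs:
    azureSubscription: 'my-arm-connection'   # the service connection name you assigned
    WebAppName: 'my-web-app'                 # the App Service name copied from the Azure portal
    packageForLinux: '$(Build.ArtifactStagingDirectory)/**/*.zip'
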

To refresh a service connection, edit the connection and select Verify . Once you save, the service
connection will be valid for two years.

See also: Troubleshoot Azure Resource Manager service connection.


If you have problems using this approach (such as no subscriptions being shown in the drop-down list), or if
you want to further limit users' permissions, you can instead use a service principal or a VM with a managed
service identity.

Create an Azure Resource Manager service connection with an existing service principal

1. If you want to use a pre-defined set of access permissions, and you don't already have a suitable service
principal defined, follow one of these tutorials to create a new service principal:
Use the portal to create an Azure Active Directory application and a service principal that can access
resources
Use Azure PowerShell to create an Azure service principal with a certificate
2. In Azure DevOps, open the Service connections page from the project settings page. In TFS, open the
Services page from the "settings" icon in the top menu bar.
3. Choose + New service connection and select Azure Resource Manager .

4. Switch from the simplified version of the dialog to the full version using the link in the dialog.
5. Enter a user-friendly Connection name to use when referring to this service connection.
6. Select the Environment name (such as Azure Cloud, Azure Stack, or an Azure Government Cloud).
7. If you do not select Azure Cloud , enter the Environment URL. For Azure Stack, this will be something
like https://management.local.azurestack.external
8. Select the Scope level you require:
If you choose Subscription , select an existing Azure subscription. If you don't see any Azure
subscriptions or instances, see Troubleshoot Azure Resource Manager service connections.
If you choose Management Group , select an existing Azure management group. See Create
management groups.
9. Enter the information about your service principal into the Azure subscription dialog textboxes:
Subscription ID
Subscription name
Service principal ID
Either the service principal client key or, if you have selected Certificate , enter the contents of both
the certificate and private key sections of the *.pem file.
Tenant ID
You can obtain this information if you don't have it to hand by downloading and running this
PowerShell script in an Azure PowerShell window. When prompted, enter your subscription name,
password, role (optional), and the type of cloud such as Azure Cloud (the default), Azure Stack, or an
Azure Government Cloud.
10. Choose Verify connection to validate the settings you entered.
11. After the new service connection is created:
If you are using it in the UI, select the connection name you assigned in the Azure subscription
setting of your pipeline.
If you are using it in YAML, copy the connection name into your code as the azureSubscription
value.
12. If required, modify the service principal to expose the appropriate permissions. For more details, see
Use Role-Based Access Control to manage access to your Azure subscription resources. This blog post
also contains more information about using service principal authentication.
See also: Troubleshoot Azure Resource Manager service connections.

Create an Azure Resource Manager service connection to a VM with a managed service identity

NOTE
You are required to use a self-hosted agent on an Azure VM in order to use managed service identity

You can configure Azure Virtual Machines (VM)-based agents with an Azure Managed Service Identity in
Azure Active Directory (Azure AD). This lets you use the system assigned identity (Service Principal) to grant
the Azure VM-based agents access to any Azure resource that supports Azure AD, such as Key Vault, instead of
persisting credentials in Azure DevOps for the connection.
1. In Azure DevOps, open the Service connections page from the project settings page. In TFS, open the
Services page from the "settings" icon in the top menu bar.
2. Choose + New service connection and select Azure Resource Manager .

3. Select the Managed Identity Authentication option.


4. Enter a user-friendly Connection name to use when referring to this service connection.
5. Select the Environment name (such as Azure Cloud, Azure Stack, or an Azure Government Cloud).
6. Enter the values for your subscription into these fields of the connection dialog:
Subscription ID
Subscription name
Tenant ID
7. After the new service connection is created:
If you are using it in the UI, select the connection name you assigned in the Azure subscription
setting of your pipeline.
If you are using it in YAML, copy the connection name into your code as the azureSubscription
value.
8. Ensure that the VM (agent) has the appropriate permissions. For example, if your code needs to call
Azure Resource Manager, assign the VM the appropriate role using Role-Based Access Control (RBAC)
in Azure AD. For more details, see How can I use managed identities for Azure resources? and Use Role-
Based Access Control to manage access to your Azure subscription resources.
See also: Troubleshoot Azure Resource Manager service connections.

Connect to an Azure Government Cloud


For information about connecting to an Azure Government Cloud, see:
Connecting from Azure Pipelines (Azure Government Cloud)

Connect to Azure Stack


For information about connecting to Azure Stack, see:
Connect to Azure Stack
Connect Azure Stack to Azure using VPN
Connect Azure Stack to Azure using ExpressRoute
Help and support
See our troubleshooting page
Get advice on Stack Overflow, and feel free to post your questions, search for answers, or suggest a feature
on our Azure DevOps Developer Community. Support page.
Azure SQL database deployment
11/2/2020 • 8 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

You can automatically deploy your database updates to Azure SQL database after every successful build.

DACPAC
The simplest way to deploy a database is to create data-tier package or DACPAC. DACPACs can be used to package
and deploy schema changes as well as data. You can create a DACPAC using the SQL database project in Visual
Studio.
YAML
Classic
To deploy a DACPAC to an Azure SQL database, add the following snippet to your azure-pipelines.yml file.

- task: SqlAzureDacpacDeployment@1
  displayName: 'Execute Azure SQL : DacpacTask'
  inputs:
    azureSubscription: '<Azure service connection>'
    ServerName: '<Database server name>'
    DatabaseName: '<Database name>'
    SqlUsername: '<SQL user name>'
    SqlPassword: '<SQL user password>'
    DacpacFile: '<Location of Dacpac file in $(Build.SourcesDirectory) after compilation>'

YAML pipelines aren't available in TFS.


See also authentication information when using the Azure SQL Database Deployment task.

SQL scripts
Instead of using a DACPAC, you can also use SQL scripts to deploy your database. Here is a simple example of a
SQL script that creates an empty database.

USE [main]
GO
IF NOT EXISTS (SELECT name FROM main.sys.databases WHERE name = N'DatabaseExample')
CREATE DATABASE [DatabaseExample]
GO

To run SQL scripts as part of a pipeline, you will need Azure PowerShell scripts to create and remove firewall rules
in Azure. Without the firewall rules, the Azure Pipelines agent cannot communicate with Azure SQL Database.
The following PowerShell script creates firewall rules. You can check in this script as SetAzureFirewallRule.ps1 into
your repository.
ARM

[CmdletBinding(DefaultParameterSetName = 'None')]
param
(
[String] [Parameter(Mandatory = $true)] $ServerName,
[String] [Parameter(Mandatory = $true)] $ResourceGroup,
[String] $AzureFirewallName = "AzureWebAppFirewall"
)
$agentIP = (New-Object net.webclient).downloadstring("http://checkip.dyndns.com") -replace "[^\d\.]"
New-AzureRmSqlServerFirewallRule -ResourceGroupName $ResourceGroup -ServerName $ServerName -FirewallRuleName $AzureFirewallName -StartIPAddress $agentIP -EndIPAddress $agentIP

Classic

[CmdletBinding(DefaultParameterSetName = 'None')]
param
(
[String] [Parameter(Mandatory = $true)] $ServerName,
[String] [Parameter(Mandatory = $true)] $ResourceGroupName,
[String] $AzureFirewallName = "AzureWebAppFirewall"
)

$ErrorActionPreference = 'Stop'

function New-AzureSQLServerFirewallRule {
    $agentIP = (New-Object net.webclient).downloadstring("http://checkip.dyndns.com") -replace "[^\d\.]"
    New-AzureSqlDatabaseServerFirewallRule -StartIPAddress $agentIP -EndIPAddress $agentIP -FirewallRuleName $AzureFirewallName -ServerName $ServerName -ResourceGroupName $ResourceGroupName
}
function Update-AzureSQLServerFirewallRule {
    $agentIP = (New-Object net.webclient).downloadstring("http://checkip.dyndns.com") -replace "[^\d\.]"
    Set-AzureSqlDatabaseServerFirewallRule -StartIPAddress $agentIP -EndIPAddress $agentIP -FirewallRuleName $AzureFirewallName -ServerName $ServerName -ResourceGroupName $ResourceGroupName
}

If ((Get-AzureSqlDatabaseServerFirewallRule -ServerName $ServerName -FirewallRuleName $AzureFirewallName -ResourceGroupName $ResourceGroupName -ErrorAction SilentlyContinue) -eq $null)
{
    New-AzureSQLServerFirewallRule
}
else
{
    Update-AzureSQLServerFirewallRule
}

The following Powershell script removes firewall rules. You can check-in this script as RemoveAzureFirewall.ps1 into
your repository.
ARM

[CmdletBinding(DefaultParameterSetName = 'None')]
param
(
[String] [Parameter(Mandatory = $true)] $ServerName,
[String] [Parameter(Mandatory = $true)] $ResourceGroup,
[String] $AzureFirewallName = "AzureWebAppFirewall"
)
Remove-AzureRmSqlServerFirewallRule -ServerName $ServerName -FirewallRuleName $AzureFirewallName -ResourceGroupName $ResourceGroup
Classic

[CmdletBinding(DefaultParameterSetName = 'None')]
param
(
[String] [Parameter(Mandatory = $true)] $ServerName,
[String] [Parameter(Mandatory = $true)] $ResourceGroupName,
[String] $AzureFirewallName = "AzureWebAppFirewall"
)

$ErrorActionPreference = 'Stop'

If ((Get-AzureSqlDatabaseServerFirewallRule -ServerName $ServerName -FirewallRuleName $AzureFirewallName -ResourceGroupName $ResourceGroupName -ErrorAction SilentlyContinue))
{
    Remove-AzureSqlDatabaseServerFirewallRule -FirewallRuleName $AzureFirewallName -ServerName $ServerName -ResourceGroupName $ResourceGroupName
}

YAML
Classic
Add the following to your azure-pipelines.yml file to run a SQL script.

variables:
  AzureSubscription: '<Azure service connection>'
  ServerName: '<Database server name>'
  DatabaseName: '<Database name>'
  AdminUser: '<SQL user name>'
  AdminPassword: '<SQL user password>'
  SQLFile: '<Location of SQL file in $(Build.SourcesDirectory)>'

steps:
- task: AzurePowerShell@2
  displayName: 'Azure PowerShell script: FilePath'
  inputs:
    azureSubscription: '$(AzureSubscription)'
    ScriptPath: '$(Build.SourcesDirectory)\scripts\SetAzureFirewallRule.ps1'
    ScriptArguments: '$(ServerName)'
    azurePowerShellVersion: LatestVersion

- task: CmdLine@1
  displayName: Run Sqlcmd
  inputs:
    filename: Sqlcmd
    arguments: '-S $(ServerName) -U $(AdminUser) -P $(AdminPassword) -d $(DatabaseName) -i $(SQLFile)'

- task: AzurePowerShell@2
  displayName: 'Azure PowerShell script: FilePath'
  inputs:
    azureSubscription: '$(AzureSubscription)'
    ScriptPath: '$(Build.SourcesDirectory)\scripts\RemoveAzureFirewallRule.ps1'
    ScriptArguments: '$(ServerName)'
    azurePowerShellVersion: LatestVersion

YAML pipelines aren't available in TFS.

Azure service connection


The Azure SQL Database Deployment task is the primary mechanism to deploy a database to Azure. This task,
as with other built-in Azure tasks, requires an Azure service connection as an input. The Azure service connection
stores the credentials to connect from Azure Pipelines or TFS to Azure.
The easiest way to get started with this task is to be signed in as a user that owns both the Azure DevOps
organization and the Azure subscription. In this case, you won't have to manually create the service connection.
Otherwise, to learn how to create an Azure service connection, see Create an Azure service connection.

Deploying conditionally
You may choose to deploy only certain builds to your Azure database.
YAML
Classic
To do this in YAML, you can use one of these techniques:
Isolate the deployment steps into a separate job, and add a condition to that job (a sketch of this approach
appears after the example below).
Add a condition to the step.
The following example shows how to use step conditions to deploy only those builds that originate from main
branch.

- task: SqlAzureDacpacDeployment@1
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
  inputs:
    azureSubscription: '<Azure service connection>'
    ServerName: '<Database server name>'
    DatabaseName: '<Database name>'
    SqlUsername: '<SQL user name>'
    SqlPassword: '<SQL user password>'
    DacpacFile: '<Location of Dacpac file in $(Build.SourcesDirectory) after compilation>'

To learn more about conditions, see Specify conditions.
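The first technique listed above, isolating the deployment steps into a separate job and putting the condition on that job, might look like the following sketch (it reuses the same placeholder values):

jobs:
- job: DeployDatabase
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
  steps:
  - task: SqlAzureDacpacDeployment@1
    inputs:
      azureSubscription: '<Azure service connection>'
      ServerName: '<Database server name>'
      DatabaseName: '<Database name>'
      SqlUsername: '<SQL user name>'
      SqlPassword: '<SQL user password>'
      DacpacFile: '<Location of Dacpac file in $(Build.SourcesDirectory) after compilation>'
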


YAML pipelines aren't available in TFS.

Additional SQL actions


SQL Azure Dacpac Deployment may not support all SQL Server actions that you want to perform. In these
cases, you can simply use PowerShell or command-line scripts to run the commands you need. This section shows
some of the common use cases for invoking the SqlPackage.exe tool. As a prerequisite to running this tool, you
must use a self-hosted agent and have the tool installed on your agent.

NOTE
If you execute SQLPackage from the folder where it is installed, you must prefix the path with & and wrap it in double-
quotes.
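As a hedged sketch only, a pipeline step on a self-hosted Windows agent could invoke SqlPackage.exe this way; the install path and the DbConnectionString variable are assumptions for illustration, not values from this article.

steps:
- task: PowerShell@2
  displayName: 'Extract DACPAC with SqlPackage'
  inputs:
    targetType: 'inline'
    script: |
      # '&' is the PowerShell call operator; adjust the quoted install path for your agent
      & "C:\Program Files\Microsoft SQL Server\150\DAC\bin\SqlPackage.exe" /Action:Extract /TargetFile:"$(Build.ArtifactStagingDirectory)\db.dacpac" /SourceConnectionString:"$(DbConnectionString)"
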

Basic Syntax
<Path of SQLPackage.exe> <Arguments to SQLPackage.exe>

You can use any of the following SQL scripts depending on the action that you want to perform
Extract
Creates a database snapshot (.dacpac) file from a live SQL server or Microsoft Azure SQL Database.
Command Syntax:
SqlPackage.exe /TargetFile:"<Target location of dacpac file>" /Action:Extract
/SourceServerName:"<ServerName>.database.windows.net"
/SourceDatabaseName:"<DatabaseName>" /SourceUser:"<Username>" /SourcePassword:"<Password>"

or

SqlPackage.exe /action:Extract /tf:"<Target location of dacpac file>"


/SourceConnectionString:"Data Source=ServerName;Initial Catalog=DatabaseName;Integrated Security=SSPI;Persist
Security Info=False;"

Example:

SqlPackage.exe /TargetFile:"C:\temp\test.dacpac" /Action:Extract


/SourceServerName:"DemoSqlServer.database.windows.net"
/SourceDatabaseName:"Testdb" /SourceUser:"ajay" /SourcePassword:"SQLPassword"

Help:

sqlpackage.exe /Action:Extract /?

Publish
Incrementally updates a database schema to match the schema of a source .dacpac file. If the database does not
exist on the server, the publish operation will create it. Otherwise, an existing database will be updated.
Command Syntax:

SqlPackage.exe /SourceFile:"<Dacpac file location>" /Action:Publish /TargetServerName:"


<ServerName>.database.windows.net"
/TargetDatabaseName:"<DatabaseName>" /TargetUser:"<Username>" /TargetPassword:"<Password> "

Example:

SqlPackage.exe /SourceFile:"E:\dacpac\ajyadb.dacpac" /Action:Publish


/TargetServerName:"DemoSqlServer.database.windows.net"
/TargetDatabaseName:"Testdb4" /TargetUser:"ajay" /TargetPassword:"SQLPassword"

Help:

sqlpackage.exe /Action:Publish /?

Export
Exports a live database, including database schema and user data, from SQL Server or Microsoft Azure SQL
Database to a BACPAC package (.bacpac file).
Command Syntax:

SqlPackage.exe /TargetFile:"<Target location for bacpac file>" /Action:Export /SourceServerName:"


<ServerName>.database.windows.net"
/SourceDatabaseName:"<DatabaseName>" /SourceUser:"<Username>" /SourcePassword:"<Password>"

Example:
SqlPackage.exe /TargetFile:"C:\temp\test.bacpac" /Action:Export
/SourceServerName:"DemoSqlServer.database.windows.net"
/SourceDatabaseName:"Testdb" /SourceUser:"ajay" /SourcePassword:"SQLPassword"

Help:

sqlpackage.exe /Action:Export /?

Import
Imports the schema and table data from a BACPAC package into a new user database in an instance of SQL Server
or Microsoft Azure SQL Database.
Command Syntax:

SqlPackage.exe /SourceFile:"<Bacpac file location>" /Action:Import /TargetServerName:"


<ServerName>.database.windows.net"
/TargetDatabaseName:"<DatabaseName>" /TargetUser:"<Username>" /TargetPassword:"<Password>"

Example:

SqlPackage.exe /SourceFile:"C:\temp\test.bacpac" /Action:Import


/TargetServerName:"DemoSqlServer.database.windows.net"
/TargetDatabaseName:"Testdb" /TargetUser:"ajay" /TargetPassword:"SQLPassword"

Help:

sqlpackage.exe /Action:Import /?

DeployReport
Creates an XML report of the changes that would be made by a publish action.
Command Syntax:

SqlPackage.exe /SourceFile:"<Dacpac file location>" /Action:DeployReport /TargetServerName:"


<ServerName>.database.windows.net"
/TargetDatabaseName:"<DatabaseName>" /TargetUser:"<Username>" /TargetPassword:"<Password>" /OutputPath:"
<Output XML file path for deploy report>"

Example:

SqlPackage.exe /SourceFile:"E: \dacpac\ajyadb.dacpac" /Action:DeployReport


/TargetServerName:"DemoSqlServer.database.windows.net"
/TargetDatabaseName:"Testdb" /TargetUser:"ajay" /TargetPassword:"SQLPassword"
/OutputPath:"C:\temp\deployReport.xml"

Help:

sqlpackage.exe /Action:DeployReport /?

DriftReport
Creates an XML report of the changes that have been made to a registered database since it was last registered.
Command Syntax:

SqlPackage.exe /Action:DriftReport /TargetServerName:"<ServerName>.database.windows.net" /TargetDatabaseName:"


<DatabaseName>"
/TargetUser:"<Username>" /TargetPassword:"<Password>" /OutputPath:"<Output XML file path for drift report>"

Example:

SqlPackage.exe /Action:DriftReport /TargetServerName:"DemoSqlServer.database.windows.net"


/TargetDatabaseName:"Testdb"
/TargetUser:"ajay" /TargetPassword:"SQLPassword" /OutputPath:"C:\temp\driftReport.xml"

Help:

sqlpackage.exe /Action:DriftReport /?

Script
Creates a Transact-SQL incremental update script that updates the schema of a target to match the schema of a
source.
Command Syntax:

SqlPackage.exe /SourceFile:"<Dacpac file location>" /Action:Script /TargetServerName:"


<ServerName>.database.windows.net"
/TargetDatabaseName:"<DatabaseName>" /TargetUser:"<Username>" /TargetPassword:"<Password>" /OutputPath:"
<Output SQL script file path>"

Example:

SqlPackage.exe /Action:Script /SourceFile:"E:\dacpac\ajyadb.dacpac"


/TargetServerName:"DemoSqlServer.database.windows.net"
/TargetDatabaseName:"Testdb" /TargetUser:"ajay" /TargetPassword:"SQLPassword" /OutputPath:"C:\temp\test.sql"
/Variables:StagingDatabase="Staging DB Variable value"

Help:

sqlpackage.exe /Action:Script /?
Deploy to Azure App Service using Visual Studio
Code
11/2/2020 • 6 minutes to read • Edit Online

This tutorial walks you through setting up a CI/CD pipeline for deploying Node.js application to Azure App Service
using Deploy to Azure extension.

Prerequisites
An Azure account. If you don't have one, you can create one for free.
You need Visual Studio Code installed, along with Node.js and npm, the Node.js package manager, and the
below extensions:
You need the Azure Account extension and the Deploy to Azure extension
A GitHub account, where you can create a repository. If you don't have one, you can create one for free.

IMPORTANT
Ensure that you have all the prerequisites installed and configured. In VS Code, you should see your Azure email address in
the Status Bar.

Create Node.js application


Create a Node.js application that can be deployed to the Cloud. This tutorial uses an application generator to
quickly scaffold the application from a terminal.

TIP
If you have already completed the Node.js tutorial, you can skip ahead to Setup CI/CD Pipeline.

Install the Express Generator


Express is a popular framework for building and running Node.js applications. You can scaffold (create) a new
Express application using the Express Generator tool. The Express Generator is shipped as an npm module and
is installed by using the npm command-line tool.

TIP
To test that you've got npm correctly installed on your computer, type npm --help from a terminal and you should see the
usage documentation.

Install the Express Generator by running the following from a terminal:


npm install -g express-generator

The -g switch installs the Express Generator globally on your machine so you can run it from anywhere.
Scaffold a new application
We can now scaffold a new Express application called myExpressApp by running:
express myExpressApp --view pug --git

This creates a new folder called myExpressApp with the contents of your application. The --view pug parameters
tell the generator to use the pug template engine (formerly known as jade).
To install all of the application's dependencies (again shipped as npm modules), go to the new folder and execute
npm install :

cd myExpressApp
npm install

At this point, we should test that our application runs. The generated Express application has a package.json file
that includes a start script to run node ./bin/www . This will start the Node.js application running.
Run the application
1. From a terminal in the Express application folder, run:
npm start

The Node.js web server will start and you can browse to http://localhost:3000 to see the running application.
2. Follow this link to push this project to GitHub using the command line.
3. Open your application folder in VS Code and get ready to deploy to Azure.

Install the extension


1. Bring up the Extensions view by clicking on the Extensions icon in the Activity Bar on the side of VS Code or
the View: Extensions command (Ctrl+Shift+X) .
2. Search for Deploy to Azure extension and install.

3. After the installation is complete, the extension will be located in enabled extension space.
Setup CI/CD Pipeline
Now you can deploy to Azure App Services, Azure Function App and AKS using VS code. This VS Code extension
helps you set up continuous build and deployment for Azure App Services without leaving VS Code.
To use this service, you need to install the extension on VS Code. You can browse and install extensions from within
VS Code.
Combination of workflows
We support GitHub Actions and Azure Pipelines for GitHub & Azure Repos correspondingly. We also allow you to
create Azure Pipelines if you still manage the code in GitHub.
GitHub + GitHub Actions
1. To set up a pipeline, choose Deploy to Azure: Configure CI/CD Pipeline from the command palette
(Ctrl/Cmd + Shift + P) or right-click on the file explorer.

NOTE
If the code is not opened in the workspace, it will ask for folder location. Similarly, if the code in the workspace has
more than one folder, it will ask for folder.

2. Select a pipeline template you want to create from the list. Since we're targeting Node.js , select
Node.js with npm to App Service.
3. Select the target Azure Subscription to deploy your application.

4. Select the target Azure resource to deploy your application.


5. Enter GitHub personal access token (PAT), required to populate secrets that are used in GitHub workflows.
Set the scope to repo and admin:repo_hook .

TIP
If the code is in Azure Repos, you need different permissions.

6. The configuration of GitHub workflow or Azure Pipeline happens based on the extension setting. The guided
workflow will generate a starter YAML file defining the build and deploy process. Commit & push the
YAML file to proceed with the deployment.
TIP
You can customize the pipeline using all the features offered by Azure Pipelines and GitHub Actions.

7. Navigate to your GitHub repo to see the actions in progress.

8. Navigate to your site running in Azure using the Web App URL http://{web_app_name}.azurewebsites.net ,
and verify its contents.
GitHub + Azure Pipelines

IMPORTANT
To set up CI/CD in Azure Pipelines for a GitHub repository, you need to enable Use Azure Pipelines for GitHub in the
extension.

To open your user and workspace settings, use the following VS Code menu command:
On Windows/Linux - File > Preferences > Settings
On macOS - Code > Preferences > Settings
You can also open the Settings editor from the Command Palette ( Ctrl+Shift+P ) with Preferences: Open Settings
or use the keyboard shortcut ( Ctrl+, ).
When you open the Settings editor, you can search for the settings you are looking for. Search for
deployToAzure.UseAzurePipelinesForGithub and enable it as shown below.

1. To set up a pipeline, choose Deploy to Azure: Configure CI/CD Pipeline from the command palette
(Ctrl/Cmd + Shift + P) or right-click on the file explorer.

NOTE
If no folder is open in the workspace, the extension asks for the folder location. Similarly, if the workspace contains
more than one folder, it asks you to select one.

2. Select a pipeline template you want to create from the list. Since we're targeting Node.js , select
Node.js with npm to App Service.
3. Select the target Azure Subscription to deploy your application.

4. Select the target Azure resource to deploy your application.


5. Enter a GitHub personal access token (PAT). The PAT is required to populate secrets that are used in GitHub workflows.
Set the scope to repo and admin:repo_hook.
6. Select an Azure DevOps organization.

7. Select an Azure DevOps project.


8. Depending on the extension setting, either a GitHub workflow or an Azure Pipeline is configured. The guided
workflow generates a starter YAML file defining the build and deploy process, similar to the sketch below. Commit and push the
YAML file to proceed with the deployment.
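
For reference, a starter Azure Pipelines file for the Node.js with npm to App Service template might resemble the following minimal sketch. The task inputs and placeholder names are assumptions, not the exact file the guided workflow produces.

trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: NodeTool@0
  displayName: 'Install Node.js'
  inputs:
    versionSpec: '12.x'

- script: |
    npm install
    npm run build --if-present
  displayName: 'npm install and build'

- task: AzureWebApp@1
  displayName: 'Deploy to Azure Web App'
  inputs:
    azureSubscription: '<Azure service connection>'   # created by the guided workflow
    appType: webAppLinux
    appName: '<Name of the web app>'
    package: '$(System.DefaultWorkingDirectory)'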

TIP
You can customize the pipeline using all the features offered by Azure Pipelines and GitHub Actions.

9. Navigate to your Azure DevOps project to see the pipeline in progress.


10. Navigate to your site running in Azure using the Web App URL http://{web_app_name}.azurewebsites.net ,
and verify its contents.
Azure Repos + Azure Pipelines
1. To set up a pipeline, choose Deploy to Azure: Configure CI/CD Pipeline from the command palette
(Ctrl/Cmd + Shift + P) or right-click on the file explorer.

NOTE
If no folder is open in the workspace, the extension asks for the folder location. Similarly, if the workspace contains
more than one folder, it asks you to select one.

2. Select a pipeline template you want to create from the list. Since we're targeting Node.js , select
Node.js with npm to App Service.
3. Select the target Azure Subscription to deploy your application.

4. Select the target Azure resource to deploy your application.


5. Depending on the extension setting, either a GitHub workflow or an Azure Pipeline is configured. The guided
workflow generates a starter YAML file defining the build and deploy process. Commit and push the
YAML file to proceed with the deployment.

TIP
You can customize the pipeline using all the features offered by Azure Pipelines and GitHub Actions.

6. Navigate to your Azure DevOps project to see the pipeline in progress.


7. Navigate to your site running in Azure using the Web App URL http://{web_app_name}.azurewebsites.net ,
and verify its contents.

Next steps
Try the workflow with a Dockerfile in a repo.

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and feel free to post your questions, search for answers, or suggest a feature on
our Azure DevOps Developer Community. Support page.
Deploy apps to Azure Stack

Azure Pipelines
Azure Stack is an extension of Azure that enables the agility and fast-paced innovation of cloud computing through
a hybrid cloud and on-premises environment.

In addition to supporting Azure AD, Azure DevOps Server 2019 can be used to deploy to Azure Stack with
Active Directory Federation Services (AD FS) using a service principal with a certificate.

Prerequisites
To deploy to Azure Stack using Azure Pipelines, ensure the following:
Azure Stack requirements:
Use an Azure Stack integrated system or deploy the Azure Stack Development Kit (ASDK)
Use the ConfigASDK.ps1 PowerShell script to automate ASDK post-deployment steps.
Create a tenant subscription in Azure Stack.
Deploy a Windows Server 2012 Virtual Machine in the tenant subscription. You'll use this server as your build
server and to run Azure DevOps Services.
Provide a Windows Server 2016 image with .NET 3.5 for a virtual machine (VM). This VM will be built on your
Azure Stack as a private build agent.
Azure Pipelines agent requirements:
Create a new service principal name (SPN) or use an existing one.
Validate the Azure Stack subscription via role-based access control (RBAC) to allow the service principal name
(SPN) to be part of the Contributor role. Azure DevOps Services must have the Contributor role to provision
resources in an Azure Stack subscription.
Create a new Service connection in Azure DevOps Services using the Azure Stack endpoints and SPN
information. Specify Azure Stack in the Environment parameter when you create an Azure Resource Manager
service connection. You must use the full version of the service connection dialog to manually define the
connection.
You can then use the service connection in your build and release pipeline tasks.
For more details, refer to Tutorial: Deploy apps to Azure and Azure Stack
Next
Deploy an Azure Web App
Troubleshoot Azure Resource Manager service connections
Azure Stack Operator Documentation

FAQ
Are all the Azure tasks supported?
The following Azure tasks are validated with Azure Stack:
Azure PowerShell
Azure File Copy
Azure Resource Group Deployment
Azure App Service Deploy
Azure App Service Manage
Azure SQL Database Deployment
How do I resolve SSL errors during deployment?
To ignore SSL errors, set a variable named VSTS_ARM_REST_IGNORE_SSL_ERRORS to the value true in the build or
release pipeline.
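
For example, in a YAML pipeline this could be declared in the variables section; a minimal sketch:

variables:
  VSTS_ARM_REST_IGNORE_SSL_ERRORS: true   # ignore SSL errors when calling Azure Resource Manager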

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and feel free to post your questions, search for answers, or suggest a feature on
our Azure DevOps Developer Community. Support page.
Deploy a Function App Container

You can automatically deploy your functions to Azure Function App for Linux Container after every successful build.

Before you begin


If you already have an app in GitHub that you want to deploy, you can try creating a pipeline for that code.
However, if you are a new user, then you might get a better start by using our sample code. In that case, fork this
repo in GitHub:

https://github.com/azooinmyluggage/GHFunctionAppContainer

Build your app


YAML
Classic
Follow the Build, test, and push Docker container apps guidance to set up the build pipeline. When you're done, you'll have a
YAML pipeline to build, test, and push the image to a container registry (a minimal sketch of that build-and-push step is shown below).
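
If you don't already have that pipeline handy, a sketch of the image build-and-push step might look like this. It assumes the same variable names used in the deployment snippet later in this article; adjust them to your setup:

- task: Docker@2
  displayName: Build and push image to container registry
  inputs:
    command: buildAndPush
    containerRegistry: $(dockerRegistryServiceConnection)   # Docker registry service connection
    repository: $(imageRepository)
    dockerfile: $(dockerfilePath)
    tags: $(tag)
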
We aren't yet advising new users to use YAML pipelines to deploy from Azure DevOps Server 2019. If you're an
experienced pipeline user and already have a YAML pipeline to build your .NET Core app, then you might find the
examples below useful.
YAML pipelines aren't available on TFS.
Now that the build pipeline is in place, you will learn a few more common configurations to customize the
deployment of the Azure Function App Container.

Azure service connection


The Azure Function App on Container Deploy task, similar to other built-in Azure tasks, requires an Azure
service connection as an input. The Azure service connection stores the credentials to connect from Azure Pipelines
or Azure DevOps Server to Azure.
YAML
Classic
You must supply an Azure service connection to the AzureFunctionAppContainer task. Add the following YAML
snippet to your existing azure-pipelines.yaml file. Make sure you add the service connection details in the
variables section as shown below.

variables:
  ## Add this under variables section in the pipeline
  azureSubscription: <Name of the Azure subscription>
  appName: <Name of the function App>
  containerRegistry: <Name of the Azure container registry>

YAML pipelines aren't available on TFS.


Configure registry credentials in Function App
App Service needs information about your registry and image to pull the private image. In the Azure portal, go to
your Function App --> Platform features --> All settings. Select Container settings from the app service,
update the Image source and Registry, and save.

Deploy with Azure Function App for Container


YAML
Classic
The simplest way to deploy to an Azure Function App Container is to use the Azure Function App on Container
Deploy task.
To deploy to an Azure Function App container, add the following snippet at the end of your azure-pipelines.yml
file:
trigger:
- main

variables:
  # Container registry service connection established during pipeline creation
  dockerRegistryServiceConnection: <Docker registry service connection>
  imageRepository: <Name of your image repository>
  containerRegistry: <Name of the Azure container registry>
  dockerfilePath: '$(Build.SourcesDirectory)/Dockerfile'
  tag: '$(Build.BuildId)'

  # Agent VM image name
  vmImageName: 'ubuntu-latest'

- task: AzureFunctionAppContainer@1 # Add this at the end of your file
  inputs:
    azureSubscription: '<Azure service connection>'
    appName: '<Name of the function app>'
    imageName: $(containerRegistry)/$(imageRepository):$(tag)

The snippet assumes that the build steps in your YAML file build and push the Docker image to your Azure
container registry. The Azure Function App on Container Deploy task pulls the Docker image
corresponding to the BuildId from the specified repository, and then deploys the image to the Azure Function App
Container.
YAML pipelines aren't available on TFS.

Deploy to a slot
YAML
Classic
You can configure the Azure Function App container to have multiple slots. Slots allow you to safely deploy your
app and test it before making it available to your customers.
The following YAML snippet shows how to deploy to a staging slot, and then swap to a production slot:

- task: AzureFunctionAppContainer@1
  inputs:
    azureSubscription: <Azure service connection>
    appName: <Name of the function app>
    imageName: $(containerRegistry)/$(imageRepository):$(tag)
    deployToSlotOrASE: true
    resourceGroupName: <Name of the resource group>
    slotName: staging

- task: AzureAppServiceManage@0
  inputs:
    azureSubscription: <Azure service connection>
    WebAppName: <name of the function app>
    ResourceGroupName: <name of resource group>
    SourceSlot: staging
    SwapWithProduction: true

YAML pipelines aren't available on TFS.


Deploy an Azure Function

You can automatically deploy your Azure Function after every successful build.

Before you begin


Based on the desired runtime, import (into Azure DevOps) or fork (into GitHub) the following repository.
.NET Core
Java
Nodejs
If you already have an app in GitHub that you want to deploy, you can try creating a pipeline for that code.
However, if you are a new user, then you might get a better start by using our sample code. In that case, fork this
repo in GitHub:

https://github.com/microsoft/devops-project-samples/tree/master/dotnet/aspnetcore/functionApp

Build your app


YAML
Classic
Follow the guidance in Create your first pipeline to set up the build pipeline. The CI steps are similar to those for any
Node.js or .NET Core app. When you're done, you'll have a YAML pipeline to build, test, and publish the source as
an artifact.
We aren't yet advising new users to use YAML pipelines to deploy from Azure DevOps Server 2019. If you're an
experienced pipeline user and already have a YAML pipeline to build your Java function app, then you might find
the examples below useful.
YAML pipelines aren't available on TFS.
Now you're ready to read through the rest of this topic to learn some of the more common configurations to
customize the deployment of an Azure Function App.

Azure service connection


The Azure Function App Deploy task, similar to other built-in Azure tasks, requires an Azure service connection
as an input. The Azure service connection stores the credentials to connect from Azure Pipelines or Azure DevOps
Server to Azure.
YAML
Classic
You must supply an Azure service connection to the AzureFunctionApp task. Add the following YAML snippet to
your existing azure-pipelines.yaml file. Make sure you add the service connection details in the variables section
as shown below.

variables:
  ## Add this under variables section in the pipeline
  azureSubscription: <Name of the Azure subscription>
  appName: <Name of the Function App>

The snippet assumes that the build steps in your YAML file build and publish the source as an artifact. The Azure
Function App Deploy task pulls the artifact corresponding to the BuildId from the specified source type, and
then deploys it to the Azure Function App.
YAML pipelines aren't available on TFS.

Deploy with Azure Function App


YAML
Classic
The simplest way to deploy to an Azure Function is to use the Azure Function App Deploy task.
To deploy to Azure Function, add the following snippet at the end of your azure-pipelines.yml file:

trigger:
- main

variables:
  # Azure service connection established during pipeline creation
  azureSubscription: <Name of your Azure subscription>
  appName: <Name of the Function app>
  # Agent VM image name
  vmImageName: 'ubuntu-latest'

- task: AzureFunctionApp@1 # Add this at the end of your file
  inputs:
    azureSubscription: <Azure service connection>
    appType: functionAppLinux
    appName: $(appName)
    package: $(System.ArtifactsDirectory)/**/*.zip

The snippet assumes that the build steps in your YAML file produce the zip archive in the
$(System.ArtifactsDirectory) folder on your agent.
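
If your build doesn't already produce such an archive, a minimal sketch of an archive step might look like the following; the source folder path is an assumption about your build output layout:

- task: ArchiveFiles@2
  displayName: Archive build output as a zip package
  inputs:
    rootFolderOrFile: '$(System.DefaultWorkingDirectory)/publish_output'   # assumed build output folder
    includeRootFolder: false
    archiveType: zip
    archiveFile: '$(System.ArtifactsDirectory)/$(Build.BuildId).zip'
    replaceExistingArchive: true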

YAML pipelines aren't available on TFS.

Deploy to a slot
YAML
Classic
You can configure the Azure Function App to have multiple slots. Slots allow you to safely deploy your app and test
it before making it available to your customers.
The following YAML snippet shows how to deploy to a staging slot, and then swap to a production slot:
- task: AzureFunctionApp@1
  inputs:
    azureSubscription: <Azure service connection>
    appType: functionAppLinux
    appName: <Name of the Function app>
    package: $(System.ArtifactsDirectory)/**/*.zip
    deployToSlotOrASE: true
    resourceGroupName: <Name of the resource group>
    slotName: staging

- task: AzureAppServiceManage@0
  inputs:
    azureSubscription: <Azure service connection>
    WebAppName: <name of the Function app>
    ResourceGroupName: <name of resource group>
    SourceSlot: staging
    SwapWithProduction: true

YAML pipelines aren't available on TFS.


Deploy an Azure Function (Windows)

You can automatically deploy your Azure Function after every successful build.

Before you begin


Based on the desired runtime, import (into Azure DevOps) or fork (into GitHub) the following repository.
.NET Core
Java
Nodejs
If you already have an app in GitHub that you want to deploy, you can try creating a pipeline for that code.
However, if you are a new user, then you might get a better start by using our sample code. In that case, fork this
repo in GitHub:

https://github.com/microsoft/devops-project-samples/tree/master/dotnet/aspnetcore/functionApp

Build your app


YAML
Classic
Follow the guidance in Create your first pipeline to set up the build pipeline. When you're done, you'll have a YAML
pipeline to build, test, and publish the source as an artifact.
We aren't yet advising new users to use YAML pipelines to deploy from Azure DevOps Server 2019. If you're an
experienced pipeline user and already have a YAML pipeline to build your Java function app, then you might find
the examples below useful.
YAML pipelines aren't available on TFS.
Now you're ready to read through the rest of this topic to learn some of the more common configurations to
customize the deployment of an Azure Function App.

Azure service connection


The Azure Function App Deploy task, similar to other built-in Azure tasks, requires an Azure service connection
as an input. The Azure service connection stores the credentials to connect from Azure Pipelines or Azure DevOps
Server to Azure.
YAML
Classic
You must supply an Azure service connection to the AzureFunctionApp task. Add the following YAML snippet to
your existing azure-pipelines.yaml file. Make sure you add the service connection details in the variables section
as shown below.
variables:
  ## Add this under variables section in the pipeline
  azureSubscription: <Name of the Azure subscription>
  appName: <Name of the Function App>

## Add the below snippet at the end of your pipeline

- task: AzureFunctionApp@1
  displayName: Azure Function App Deploy
  inputs:
    azureSubscription: $(azureSubscription)
    appType: functionApp
    appName: $(appName)
    package: $(System.ArtifactsDirectory)/**/*.zip

The snippet assumes that the build steps in your YAML file build and publish the source as an artifact. The Azure
Function App Deploy task pulls the artifact corresponding to the BuildId from the specified source type, and
then deploys it to the Azure Function App.
YAML pipelines aren't available on TFS.

Deploy with Azure Function App


YAML
Classic
The simplest way to deploy to an Azure Function is to use the Azure Function App Deploy task.
To deploy to Azure Function, add the following snippet at the end of your azure-pipelines.yml file:

trigger:
- main

variables:
  # Azure service connection established during pipeline creation
  azureSubscription: <Name of your Azure subscription>
  appName: <Name of the Function app>
  # Agent VM image name
  vmImageName: 'ubuntu-latest'

- task: AzureFunctionApp@1 # Add this at the end of your file
  inputs:
    azureSubscription: <Azure service connection>
    appType: functionApp
    appName: $(appName)
    package: $(System.ArtifactsDirectory)/**/*.zip

The snippet assumes that the build steps in your YAML file produce the zip archive in the
$(System.ArtifactsDirectory) folder on your agent.

YAML pipelines aren't available on TFS.

Deploy to a slot
YAML
Classic
You can configure the Azure Function App to have multiple slots. Slots allow you to safely deploy your app and test
it before making it available to your customers.
The following YAML snippet shows how to deploy to a staging slot, and then swap to a production slot:
- task: AzureFunctionApp@1
  inputs:
    azureSubscription: <Azure service connection>
    appType: functionApp
    appName: <Name of the Function app>
    package: $(System.ArtifactsDirectory)/**/*.zip
    deployToSlotOrASE: true
    resourceGroupName: <Name of the resource group>
    slotName: staging

- task: AzureAppServiceManage@0
  inputs:
    azureSubscription: <Azure service connection>
    WebAppName: <name of the Function app>
    ResourceGroupName: <name of resource group>
    SourceSlot: staging
    SwapWithProduction: true

YAML pipelines aren't available on TFS.


Deploy an Azure Web App (Linux)

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

You can automatically deploy your web app to an Azure App Service Linux on every successful build.

NOTE

This guidance applies to Azure DevOps Services.

Before you begin


Based on the desired runtime, import (into Azure DevOps) or fork (into GitHub) the following repository.
.NET Core
Java
Nodejs
If you already have an app in GitHub that you want to deploy, you can try creating a pipeline for that code.
However, if you are a new user, then you might get a better start by using our sample code. In that case, fork this
repo in GitHub:

https://github.com/MicrosoftDocs/pipelines-dotnet-core

Build your app


YAML
Classic
.NET Core
Java
Nodejs
Follow the Build, test, and deploy .NET Core apps guidance up to the Create your first pipeline section to set up the build
pipeline. When you're done, you'll have a YAML pipeline to build, test, and publish the source as an artifact.
We advise new users to use Classic Editor and not use YAML pipelines to deploy from Azure DevOps Services. If
you're an experienced pipeline user and already have a YAML pipeline to build your .NET Core app, then you might
find the examples below useful.
YAML pipelines aren't available on TFS.
Now you're ready to read through the rest of this topic to learn some of the more common configurations to
customize the deployment of the Azure Web App.

Azure service connection


The Azure Web App task, similar to other built-in Azure tasks, requires an Azure service connection as an input.
The Azure service connection stores the credentials to connect from Azure Pipelines or Azure DevOps Server to
Azure.
YAML
Classic
You must supply an Azure service connection to the AzureWebApp task. Add the following YAML snippet to your
existing azure-pipelines.yaml file. Make sure you add the service connection details in the variables section as
shown below.

variables:
  ## Add this under variables section in the pipeline
  azureSubscription: <Name of the Azure subscription>
  appName: <Name of the Web App>

## Add the below snippet at the end of your pipeline

- task: AzureWebApp@1
  displayName: 'Azure Web App Deploy'
  inputs:
    azureSubscription: $(azureSubscription)
    appType: webAppLinux
    appName: $(appName)
    package: $(System.ArtifactsDirectory)/**/*.zip

YAML pipelines aren't available on TFS.

Deploy with Azure Web App


YAML
Classic
The simplest way to deploy to an Azure Web App is to use the Azure Web App task.
To deploy to an Azure Web App, add the following snippet at the end of your azure-pipelines.yml file:

trigger:
- main

variables:
  # Azure service connection established during pipeline creation
  azureSubscription: <Name of your Azure subscription>
  appName: <Name of the web app>
  # Agent VM image name
  vmImageName: 'ubuntu-latest'

- task: AzureWebApp@1 # Add this at the end of your file
  inputs:
    azureSubscription: <Azure service connection>
    appType: webAppLinux
    appName: $(appName)
    package: $(System.ArtifactsDirectory)/**/*.zip

The snippet assumes that the build steps in your YAML file build and publish the source as an artifact. The Azure
Web App Deploy task pulls the artifact corresponding to the BuildId from the specified source type, and then
deploys it to the Linux App Service.
YAML pipelines aren't available on TFS.

Deploy to a slot
YAML
Classic
You can configure the Azure Web App to have multiple slots. Slots allow you to safely deploy your app and test it
before making it available to your customers.
The following YAML snippet shows how to deploy to a staging slot, and then swap to a production slot:

- task: AzureWebApp@1
  inputs:
    azureSubscription: '<Azure service connection>'
    appType: webAppLinux
    appName: '<name of web app>'
    deployToSlotOrASE: true
    resourceGroupName: '<name of resource group>'
    slotName: staging

- task: AzureAppServiceManage@0
  inputs:
    azureSubscription: '<Azure service connection>'
    WebAppName: '<name of web app>'
    ResourceGroupName: '<name of resource group>'
    SourceSlot: staging
    SwapWithProduction: true

YAML pipelines aren't available on TFS.


Deploy an Azure Web App Container

You can automatically deploy your web app to an Azure Web App for Linux Containers after every successful build.

Before you begin


Based on the desired runtime, import (into Azure DevOps) or fork (into GitHub) the following repository.
.NET Core
Java
Nodejs
If you already have an app in GitHub that you want to deploy, you can try creating a pipeline for that code.
However, if you are a new user, then you might get a better start by using our sample code. In that case, fork this
repo in GitHub:

https://github.com/MicrosoftDocs/pipelines-dotnet-core-docker

Build your app


YAML
Classic
.NET Core
Java
Nodejs
Follow the Build, test, and push Docker container apps guidance up to the Push an image section to set up the build pipeline.
When you're done, you'll have a YAML pipeline to build, test, and push the image to a container registry.
We aren't yet advising new users to use YAML pipelines to deploy from Azure DevOps Server 2019. If you're an
experienced pipeline user and already have a YAML pipeline to build your .NET Core app, then you might find the
examples below useful.
YAML pipelines aren't available on TFS.
Now that the build pipeline is in place, you will learn a few more common configurations to customize the
deployment of the Azure Container Web App.

Azure service connection


The Azure WebApp Container task, similar to other built-in Azure tasks, requires an Azure service connection as
an input. The Azure service connection stores the credentials to connect from Azure Pipelines or Azure DevOps
Server to Azure.
YAML
Classic
You must supply an Azure service connection to the AzureWebAppContainer task. Add the following YAML snippet to
your existing azure-pipelines.yaml file. Make sure you add the service connection details in the variables section
as shown below.

variables:
  ## Add this under variables section in the pipeline
  azureSubscription: <Name of the Azure subscription>
  appName: <Name of the Web App>
  containerRegistry: <Name of the Azure container registry>

## Add the below snippet at the end of your pipeline

- task: AzureWebAppContainer@1
  displayName: 'Azure Web App on Container Deploy'
  inputs:
    azureSubscription: $(azureSubscription)
    appName: $(appName)
    containers: $(containerRegistry)/$(imageRepository):$(tag)

YAML pipelines aren't available on TFS.

Configure registry credentials in web app


App Service needs information about your registry and image to pull the private image. In the Azure portal, go to
Container settings from the web app, update the Image source and Registry, and save.

Deploy with Azure Web App for Container


YAML
Classic
The simplest way to deploy to an Azure Web App Container is to use the Azure Web App for Containers task.
To deploy to an Azure Web App container, add the following snippet at the end of your azure-pipelines.yml file:

trigger:
- main

variables:
  # Container registry service connection established during pipeline creation
  imageRepository: <Name of your image repository>
  containerRegistry: <Name of the Azure container registry>
  dockerfilePath: '$(Build.SourcesDirectory)/Dockerfile'
  tag: '$(Build.BuildId)'

  # Agent VM image name
  vmImageName: 'ubuntu-latest'

- task: AzureWebAppContainer@1 # Add this at the end of your file
  inputs:
    azureSubscription: '<Azure service connection>'
    appName: '<Name of the container web app>'
    containers: $(containerRegistry)/$(imageRepository):$(tag)

The snippet assumes that the build steps in your YAML file build and push the Docker image to your Azure
container registry. The Azure Web App on Container task pulls the Docker image corresponding
to the BuildId from the specified repository, and then deploys the image to the Linux App Service.
YAML pipelines aren't available on TFS.

Deploy to a slot
YAML
Classic
You can configure the Azure Web App container to have multiple slots. Slots allow you to safely deploy your app
and test it before making it available to your customers.
The following YAML snippet shows how to deploy to a staging slot, and then swap to a production slot:

- task: AzureWebAppContainer@1
  inputs:
    azureSubscription: '<Azure service connection>'
    appName: '<Name of the web app>'
    containers: $(containerRegistry)/$(imageRepository):$(tag)
    deployToSlotOrASE: true
    resourceGroupName: '<Name of the resource group>'
    slotName: staging

- task: AzureAppServiceManage@0
  inputs:
    azureSubscription: '<Azure service connection>'
    WebAppName: '<name of web app>'
    ResourceGroupName: '<name of resource group>'
    SourceSlot: staging
    SwapWithProduction: true

YAML pipelines aren't available on TFS.


Deploy an Azure Web App

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

You can automatically deploy your web app to an Azure App Service web app after every successful build.

NOTE
This guidance applies to Team Foundation Server (TFS) version 2017.3 and later.

Build your app


YAML
Classic
Follow the guidance in Create your first pipeline and use the .NET Core sample offered there before you use this
topic. When you're done, you'll have a YAML pipeline to build, test, and publish the source as an artifact.
We aren't yet advising new users to use YAML pipelines to deploy from Azure DevOps Server 2019. If you're an
experienced pipeline user and already have a YAML pipeline to build your .NET Core app, then you might find the
examples below useful.
YAML pipelines aren't available on TFS.
Now you're ready to read through the rest of this topic to learn some of the more common changes that people
make to customize an Azure Web App deployment.

Azure Web App Deploy task


YAML
Classic
The simplest way to deploy to an Azure Web App is to use the Azure Web App Deploy ( AzureWebApp ) task.
Deploy a Web Deploy package (ASP.NET )
To deploy a .zip Web Deploy package (for example, from an ASP.NET web app) to an Azure Web App, add the
following snippet to your azure-pipelines.yml file:

- task: AzureWebApp@1
  inputs:
    azureSubscription: '<Azure service connection>'
    appName: '<Name of web app>'
    package: $(System.DefaultWorkingDirectory)/**/*.zip

azureSubscription: your Azure subscription.
appName: the name of your existing app service.
package: the file path to the package or a folder containing your app service contents. Wildcards are supported.
The snippet assumes that the build steps in your YAML file produce the zip archive in the
$(System.DefaultWorkingDirectory) folder on your agent.

For information on Azure service connections, see the following section.


Deploy a Java app
If you're building a Java app, use the following snippet to deploy the web archive (.war) to a Linux Webapp:

- task: AzureWebApp@1
  inputs:
    azureSubscription: '<Azure service connection>'
    appType: webAppLinux
    appName: '<Name of web app>'
    package: '$(System.DefaultWorkingDirectory)/**/*.war'

azureSubscription: your Azure subscription.
appType: your Web App type.
appName: the name of your existing app service.
package: the file path to the package or a folder containing your app service contents. Wildcards are supported.
The snippet assumes that the build steps in your YAML file produce the .war archive in one of the folders in your
source code folder structure; for example, under <project root>/build/libs . If your build steps copy the .war file
to $(System.DefaultWorkingDirectory) instead, change the last line in the snippet to
$(System.DefaultWorkingDirectory)/**/*.war .

For information on Azure service connections, see the following section.


Deploy a JavaScript Node.js app
If you're building a JavaScript Node.js app, you publish the entire contents of your working directory to the web
app. This snippet also generates a Web.config file during deployment if the application does not have one and
starts the iisnode handler on the Azure Web App:

- task: AzureWebApp@1
  inputs:
    azureSubscription: '<Azure service connection>'
    appName: '<Name of web app>'
    package: '$(System.DefaultWorkingDirectory)'
    customWebConfig: '-Handler iisnode -NodeStartFile server.js -appType node'

azureSubscription: your Azure subscription.
appName: the name of your existing app service.
package: the file path to the package or a folder containing your app service contents. Wildcards are supported.
customWebConfig: generate web.config parameters for Python, Node.js, Go, and Java apps. A standard
web.config file will be generated and deployed to Azure App Service if the application does not have one.
For information on Azure service connections, see the following section.


YAML pipelines aren't available on TFS.
Azure service connection
All the built-in Azure tasks require an Azure service connection as an input. The Azure service connection stores
the credentials to connect from Azure Pipelines or TFS to Azure.
YAML
Classic
You must supply an Azure service connection to the AzureWebApp task. The Azure service connection stores the
credentials to connect from Azure Pipelines to Azure. See Create an Azure service connection.
YAML pipelines aren't available on TFS.

Deploy to a virtual application


YAML
Classic
By default, your deployment happens to the root application in the Azure Web App. You can deploy to a specific
virtual application by using the VirtualApplication property of the AzureRmWebAppDeployment task:

- task: AzureRmWebAppDeployment@4
  inputs:
    VirtualApplication: '<name of virtual application>'

VirtualApplication: the name of the Virtual Application that has been configured in the Azure portal. See
Configure an App Service app in the Azure portal for more details.
YAML pipelines aren't available on TFS.

Deploy to a slot
YAML
Classic
You can configure the Azure Web App to have multiple slots. Slots allow you to safely deploy your app and test it
before making it available to your customers.
The following example shows how to deploy to a staging slot, and then swap to a production slot:

- task: AzureWebApp@1
  inputs:
    azureSubscription: '<Azure service connection>'
    appName: '<name of web app>'
    slotName: staging

- task: AzureAppServiceManage@0
  inputs:
    azureSubscription: '<Azure service connection>'
    WebAppName: '<name of web app>'
    ResourceGroupName: '<name of resource group>'
    SourceSlot: staging

YAML pipelines aren't available on TFS.

Deploy to multiple web apps


YAML
Classic
You can use jobs in your YAML file to set up a pipeline of deployments. By using jobs, you can control the order
of deployment to multiple web apps.

jobs:
- job: buildandtest
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
  # publish an artifact called drop
  - task: PublishBuildArtifacts@1
    inputs:
      artifactName: drop

  # deploy to Azure Web App staging
  - task: AzureWebApp@1
    inputs:
      azureSubscription: '<test stage Azure service connection>'
      appName: '<name of test stage web app>'

- job: deploy
  pool:
    vmImage: 'ubuntu-16.04'
  dependsOn: buildandtest
  condition: succeeded()
  steps:
  # download the artifact drop from the previous job
  - task: DownloadBuildArtifacts@0
    inputs:
      artifactName: drop

  # deploy to Azure Web App production
  - task: AzureWebApp@1
    inputs:
      azureSubscription: '<prod Azure service connection>'
      appName: '<name of prod web app>'

YAML pipelines aren't available on TFS.

Configuration changes
For most language stacks, app settings and connection strings can be set as environment variables at runtime.
App settings can also be resolved from Key Vault using Key Vault references.
For ASP.NET and ASP.NET Core developers, setting app settings in App Service is like setting them in
Web.config. You might want to apply a specific configuration for your web app target before deploying to it. This
is useful when you deploy the same build to multiple web apps in a pipeline. For example, if your Web.config file
contains a connection string named connectionString , you can change its value before deploying to each web
app. You can do this either by applying a Web.config transformation or by substituting variables in your
Web.config file.
The Azure App Service Deploy task allows users to modify configuration settings in configuration files (*.config
files) inside web packages and XML parameters files (parameters.xml), based on the stage name specified.
NOTE
File transforms and variable substitution are also supported by the separate File Transform task for use in Azure Pipelines.
You can use the File Transform task to apply file transformations and variable substitutions on any configuration and
parameters files.

YAML
Classic
The following snippet shows an example of variable substitution:

jobs:
- job: test
  variables:
    connectionString: <test-stage connection string>
  steps:
  - task: AzureRmWebAppDeployment@4
    inputs:
      azureSubscription: '<Test stage Azure service connection>'
      WebAppName: '<name of test stage web app>'
      enableXmlVariableSubstitution: true

- job: prod
  dependsOn: test
  variables:
    connectionString: <prod-stage connection string>
  steps:
  - task: AzureRmWebAppDeployment@4
    inputs:
      azureSubscription: '<Prod stage Azure service connection>'
      WebAppName: '<name of prod stage web app>'
      enableXmlVariableSubstitution: true

YAML pipelines aren't available on TFS.

Deploying conditionally
You can choose to deploy only certain builds to your Azure Web App.
YAML
Classic
To do this in YAML, you can use one of these techniques:
Isolate the deployment steps into a separate job, and add a condition to that job.
Add a condition to the step.
The following example shows how to use step conditions to deploy only builds that originate from the main
branch:

- task: AzureWebApp@1
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
  inputs:
    azureSubscription: '<Azure service connection>'
    appName: '<name of web app>'

To learn more about conditions, see Specify conditions.


YAML pipelines aren't available on TFS.
Deploy to an Azure Government cloud or to Azure Stack
Create a suitable service connection:
Azure Government Cloud deployment
Azure Stack deployment

Deployment mechanisms
The preceding examples rely on the built-in Azure Web App task, which provides simplified integration with
Azure.
If you use a Windows agent, this task uses Web Deploy technology to interact with the Azure Web App. Web
Deploy provides several convenient deployment options, such as renaming locked files and excluding files from
the App_Data folder during deployment.
If you use the Linux agent, the task relies on the Kudu REST APIs.
One thing worth checking before deploying is the Azure App Service access restrictions list. This list can include
IP addresses or Azure Virtual Network subnets. When there are one or more entries, an implicit
"deny all" exists at the end of the list. To modify the access restriction rules for your app, see Adding and
editing access restriction rules in Azure portal. You can also modify or restrict access to your source control
management (scm) site.
The Azure App Service Manage task is another task that's useful for deployment. You can use this task to start,
stop, or restart the web app before or after deployment. You can also use this task to swap slots, install site
extensions, or enable monitoring of the web app.
You can use the File Transform task to apply file transformations and variable substitutions on any configuration
and parameters files.
If the built-in tasks don't meet your needs, you can use other methods to script your deployment. View the YAML
snippets in each of the following tasks for some examples:
Azure PowerShell task
Azure CLI task
FTP task
Release pipelines

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

NOTE
This topic covers classic release pipelines. If you want to use YAML to author CI/CD pipelines, then see Create your first
pipeline.

Release pipelines in Azure Pipelines and Team Foundation Server (TFS 2015.2 and later) help your team
continuously deliver software to your customers at a faster pace and with lower risk. You can fully automate
the testing and delivery of your software in multiple stages all the way to production, or set up semi-automated
processes with approvals and on-demand deployments .

See Releases in Azure Pipelines to understand releases and deployments and watch the below video to see
release pipelines in action.

How do release pipelines work?


Release pipelines store the data for your pipelines, stages, tasks, releases, and deployments in Azure Pipelines or
TFS.
Azure Pipelines runs the following steps as part of every deployment:
1. Pre-deployment approval: When a new deployment request is triggered, Azure Pipelines checks
whether a pre-deployment approval is required before deploying a release to a stage. If it is required, it
sends out email notifications to the appropriate approvers.
2. Queue deployment job: Azure Pipelines schedules the deployment job on an available automation
agent. An agent is a piece of software that is capable of running tasks in the deployment.
3. Agent selection: An automation agent picks up the job. The agents for release pipelines are exactly the
same as those that run your builds in Azure Pipelines and TFS. A release pipeline can contain settings to
select an appropriate agent at runtime.
4. Download artifacts: The agent downloads all the artifacts specified in that release (provided you have
not opted to skip the download). The agent currently understands two types of artifacts: Azure Pipelines
artifacts and Jenkins artifacts.
5. Run the deployment tasks: The agent then runs all the tasks in the deployment job to deploy the app to
the target servers for a stage.
6. Generate progress logs: The agent creates detailed logs for each step while running the deployment,
and pushes these logs back to Azure Pipelines or TFS.
7. Post-deployment approval: When deployment to a stage is complete, Azure Pipelines checks if there is
a post-deployment approval required for that stage. If no approval is required, or upon completion of a
required approval, it proceeds to trigger deployment to the next stage.
Release pipelines and build pipelines have separate UIs. The main differences in the pipelines are the support in
release pipelines for different types of triggers, and the support for approvals and gates.

How do I use a release pipeline?


You start using Azure Pipelines releases by authoring a release pipeline for your application. To author a release
pipeline, you must specify the artifacts that make up the application and the release pipeline .
An artifact is a deployable component of your application. It is typically produced through a Continuous
Integration or a build pipeline. Azure Pipelines releases can deploy artifacts that are produced by a wide range of
artifact sources such as Azure Pipelines build, Jenkins, or Team City.
You define the release pipeline using stages, and restrict deployments into or out of a stage using approvals.
You define the automation in each stage using jobs and tasks. You use variables to generalize your automation
and triggers to control when the deployments should be kicked off automatically.
An example of a deployment process that can be modeled through a release pipeline is shown below:

In this example, a release of a website is created by collecting specific versions of two builds (artifacts), each from
a different build pipeline. The release is first deployed to a Dev stage and then forked to two QA stages in parallel.
If the deployment succeeds in both the QA stages, the release is deployed to Prod ring 1 and then to Prod ring 2.
Each production ring represents multiple instances of the same website deployed at various locations around the
globe.
An example of how deployment automation can be modeled within a stage is shown below:

In this example, a job is used to deploy the app to websites across the globe in parallel within production ring 1.
After all those deployments are successful, a second job is used to switch traffic from the previous version to the
newer version.

NOTE
TFS 2015 : Jobs and fork/join deployments are not available in TFS 2015.

Next:
Check out the following articles to learn how to:
Create your first pipeline.
Set up a multi-stage managed release pipeline.
Manage deployments by using approvals and gates.

What is a draft release?


Draft releases are deprecated in Azure Pipelines because you can change variables while you are creating the
release.
Creating a draft release allows you to edit some of the settings for the release and the tasks, depending on your
role permissions, before starting the deployment. The changes apply only to that release, and do not affect the
settings of the original pipeline.
Create a draft release using the "..." ellipses link in the list of releases:

... or the Release drop-down in the pipeline definition page:

After you finish editing the draft release, choose Start from the draft release toolbar.

How do I specify variables I want to edit when a release is created?


In the Variables tab of a release pipeline, when you add new variables, set the Settable at release time option
for those you want to be able to edit when a release is created and queued.
Then, when you create a new release, you can edit the values for these variables.

How do I integrate and report release status?


The current status for a release can be reported back in the source repository. In the Options tab of a release
pipeline, open the Integrations page.
Report deployment status to the repository host
If your sources are in an Azure Repos Git repository in your project, this option displays a badge on the Azure
Repos pages to indicate where the specific commit was deployed and whether the deployment is passing or
failing. This improves the traceability from code commit to deployment.
The deployment status is displayed in the following sections of Azure Repos:
Files : Indicates the status of the latest deployment for the selected branch.
Commits: Indicates the deployment status for each commit (this requires the continuous deployment (CD)
trigger to be enabled for your release).
Branches : Indicates the status of the latest deployment for each branch.
If a commit is deployed to multiple release pipelines (with multiple stages), each has an entry in the badge with
the status shown for each stage. By default, when you create a release pipeline, deployment status is posted for
all stages. However, you can selectively choose the stages for which deployment status should be displayed in the
status badge (for example, show only the production stage). Your team members can click the status badge to
view the latest deployment status for each of the selected stages of the release pipelines.

NOTE
If your source is not an Azure Repos Git repository, you cannot use Azure Pipelines or TFS to automatically publish the
deployment status to your repository. However, you can still use the Enable the Deployment status badge option
described below to show deployment status within your version control system.

Report deployment status to Work


Select this option if you want to create links to all work items that represent associated changes to the source
when a release is complete.
Enable the deployment status badge
Select this option if you want to display the latest outcome of a stage deployment on an external website.
1. Select "Enable the deployment status badge".
2. Select the stages for which you want to display the outcome. By default, all the stages are selected.
3. Save your pipeline.
4. Copy the badge URL for the required stage to the clipboard.
5. Use this badge URL as a source of an image in an external website.
For example: <img src="{URL you copied from the link}"/>

When should I edit a release instead of the pipeline that defines it?
You can edit the approvals, tasks, and variables of a previously deployed release, instead of editing these values
in the pipeline from which the release was created. However, these edits apply to only the release generated
when you redeploy the artifacts. If you want your edits to apply to all future releases and deployments, choose the
option to edit the release pipeline instead.

When and why would I abandon a release?


After you create a release, you can use it to redeploy the artifacts to any of the stages defined in that release. This
is useful if you want to perform regular manual releases, or set up a continuous integration stage trigger that
redeploys the artifacts using this release.
If you do not intend to reuse the release, or want to prevent it being used to redeploy the artifacts, you can
abandon the release using the shortcut menu that opens from the ellipses (...) icon in the Pipeline view of the
pipeline.

You cannot abandon a release when a deployment is in progress; you must cancel the deployment first.

How do I send release summaries by email?


After a release is triggered and completed, you may want to email the summary to stakeholders. Use the Send
Email option on the menu that opens from the ellipses (...) icon in the Pipeline view of the pipeline.
In the Send release summary mail window, you can further customize the information to be sent in the email
by selecting only certain sections of the release summary.

How do I manage the names for new releases?


The names of releases for a release pipeline are, by default, sequentially numbered. The first release is named
Release-1 , the next release is Release-2 , and so on. You can change this naming scheme by editing the release
name format mask. In the Options tab of a release pipeline, edit the Release name format property in the
General page.
When specifying the format mask, you can use the following pre-defined variables.

VARIABLE                  DESCRIPTION

Rev:rr                    An auto-incremented number with at least the specified number of digits.

Date / Date:MMddyy        The current date, with the default format MMddyy. Any combinations of M/MM/MMM/MMMM, d/dd/ddd/dddd, y/yy/yyyy/yyyy, h/hh/H/HH, m/mm, s/ss are supported.

System.TeamProject        The name of the project to which this build belongs.

Release.ReleaseId         The ID of the release, which is unique across all releases in the project.

Release.DefinitionName    The name of the release pipeline to which the current release belongs.

Build.BuildNumber         The number of the build contained in the release. If a release has multiple builds, this is the number of the primary build.

Build.DefinitionName      The pipeline name of the build contained in the release. If a release has multiple builds, this is the pipeline name of the primary build.

Artifact.ArtifactType     The type of the artifact source linked with the release. For example, this can be Azure Pipelines or Jenkins.

Build.SourceBranch        The branch of the primary artifact source. For Git, this is of the form main if the branch is refs/heads/main. For Team Foundation Version Control, this is of the form branch if the root server path for the workspace is $/teamproject/branch. This variable is not set for Jenkins or other artifact sources.

Custom variable           The value of a global configuration property defined in the release pipeline.

For example, the release name format
Release $(Rev:rrr) for build $(Build.BuildNumber) $(Build.DefinitionName) will create releases with names such
as Release 002 for build 20170213.2 MySampleAppBuild.

How do I specify the retention period for releases?


You can customize how long releases of this pipeline must be retained. For more information, see release
retention.

How do I use and manage release history?


Every time you save a release pipeline, Azure Pipelines keeps a copy of the changes. This allows you to compare
the changes at a later point, especially when you are debugging a deployment failure.

Get started now!


Follow these steps:
1. Set up a multi-stage managed release pipeline
2. Manage deployments by using approvals and gates

Related topics
Deploy pull request builds using Azure Pipelines
Stage templates in Azure Pipelines
Deploy from multiple branches using Azure Pipelines

Azure Pipelines | Azure DevOps Server 2019


Artifact filters can be used with release triggers to deploy from multiple branches. Applying an artifact filter to a
specific branch ensures that the artifact is deployed to a specific stage only when those filter conditions are met.

Prerequisites
You'll need:
A working build for your repository
Build multiple branches
Two separate targets where you will deploy the app. These could be virtual machines, web servers, on-
premises physical deployment groups, or other types of deployment target. You will have to choose names
that are unique, but it's a good idea to include "Dev" in the name of one, and "Prod" in the name of the other
so that you can easily identify them.

Set up a release pipeline


1. In Azure Pipelines, open the Releases tab. Create a new release pipeline, select Add an artifact, and
specify your build artifact.
2. Select the Continuous deployment trigger icon in the Artifacts section to open up the continuous
deployment trigger panel and switch the button to Enabled.

3. Add a stage named Dev. This stage will be triggered when a build artifact is published from the dev
branch.
4. Choose the Pre-deployment conditions icon in the Stages section to open up the pre-deployment
conditions panel. Under Select trigger, select After release. This means that a deployment will be initiated
automatically when a new release is created from this release pipeline.
5. Enable the Artifact filters. Select Add and specify your artifact. In Build branch, select the dev branch,
then save.
6. Add another stage and name it Prod. This stage will be triggered when a build artifact is published from the
main branch. Repeat steps 4-5, setting the Build branch to main.

7. Add your appropriate deployment tasks in each stage.


Now the next time you have a successful build, the artifact filter will detect which branch triggered that build and
only the appropriate stage will get deployed.
Related articles
Release triggers
If you encounter issues or have any suggestions, please feel free to ask questions or suggest a feature on our Azure
DevOps Developer Community.
Deploy pull request Artifacts with Azure Pipelines

Azure DevOps Services | Azure DevOps Server 2020 | Azure DevOps Server 2019
Pull requests provide an effective way to have code reviewed before it is merged to the codebase. However, certain
issues can be tricky to find until the code is built and deployed to an environment. Before the introduction of pull
request release triggers, when a PR was raised, you could trigger a build, but not a deployment. Pull request
triggers enable you to set up a set of criteria that must be met before deploying your code. You can use pull request
triggers with code hosted on Azure Repos or GitHub.
Configuring pull request based releases has two parts:
1. Setting up a pull request trigger.
2. Setting up a branch policy (in Azure Repos) or status checks (in GitHub) for your release pipeline.
Once a pull request release is configured, anytime a pull request is raised for the protected branch a release is
triggered automatically, deployed to the specified environments, and the status of the deployment is displayed in
the PR page. Pull request deployments may help you catch deployment issues early in the cycle, maintain better
code quality, and release with higher confidence.
This article shows how you can set up a pull request based release for code hosted in Azure Repos and in GitHub.

Create a pull request trigger


A pull request trigger creates a release every time a new version of your selected artifact is available. You can set up
PR triggers for both Azure Repos and GitHub repositories.
1. Under Artifacts, select the Continuous deployment trigger icon.

2. Select the pull request trigger toggle and set it to Enabled .


3. Set up one or more target branches. Target branches are the branches for which the pull request is raised.
When a pull request is created for one of these branches, it triggers a build, and when the build succeeds, it
triggers the PR release. You can optionally specify build tags as well.

4. To deploy to a specific stage, you need to explicitly opt in that stage. The Stages section shows the stages
that are enabled for pull request deployments.

To opt in a stage for PR deployment, select the Pre-deployment conditions icon for that specific stage,
and under the Triggers section, set Pull request deployment to Enabled.
IMPORTANT
For critical stages like production, Pull request deployment should not be turned on.

Set up branch policy for Azure Repos


You can use branch policies to implement a list of criteria that must be met for a PR to be merged.
1. Under Repos select Branches to access the list of branches for your repository.

2. Select the context menu ... for the appropriate branch and select Branch policies.
3. Select Add status policy and select a status policy from the status to check dropdown menu. The
dropdown contains a list of recent statuses. The release definition should have run at least once with the PR
trigger switched on in order to get the status. Select the status corresponding to your release definition and
save the policy.

You can further customize the policy for this status, like making the policy required or optional. For more
information, see Configure a branch policy for an external service.
4. You should now be able to see your new status policy in the list. Users won't be able to merge any changes
to the target branch until "succeeded" status is posted to the pull request.
5. You can view the status of your policies in the pull request Overview page. Depending on your policy
settings, you can view the posted release status under the Required , Optional , or Status sections. The
release status gets updated every time the pipeline is triggered.

Set up status checks for GitHub repositories


Enabling status checks for a GitHub repository allows an administrator to choose which status checks must pass
before a pull request is merged into the target branch. Follow the GitHub how-to guide to enable status checks for
your GitHub repository. The status checks will appear in your PRs only after your release pipeline is run at least
once with the Pull request deployment condition set to Enabled .

You can view your status checks in your pull request under the Conversation tab.
Related articles
Release triggers
Supported build source repositories

Additional resources
Azure Repos
Branch policies
Configure branch policy for an external service
Define your multi-stage continuous deployment (CD)
pipeline
11/2/2020 • 6 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

Azure Pipelines provides a highly configurable and manageable pipeline for releases to multiple stages such as
development, staging, QA, and production. It also offers the opportunity to implement gates and approvals at
each specific stage.
In this tutorial, you will learn about:
Continuous deployment triggers
Adding stages
Adding pre-deployment approvals
Creating releases and monitoring deployments

Prerequisites
You'll need:
A release pipeline that contains at least one stage. If you don't already have one, you can create it by
working through any of the following quickstarts and tutorials:
Deploy to an Azure Web App
Azure DevOps Project
Deploy to IIS web server on Windows
Two separate targets where you will deploy the app. These could be virtual machines, web servers, on-
premises physical deployment groups, or other types of deployment target. In this example, we are using
Azure App Services website instances. If you decide to do the same, you will have to choose names that are
unique, but it's a good idea to include "QA" in the name of one, and "Production" in the name of the other
so that you can easily identify them. Use the Azure portal to create a new web app.

Continuous deployment (CD) triggers


Enabling continuous deployment trigger will instruct the pipeline to automatically create a new release every time
a new build is available.
1. In Azure Pipelines , open the Releases tab. Select your release pipeline and then select Edit .

2. Select the Continuous deployment trigger icon in the Ar tifacts section to open the trigger panel. Make
sure this is enabled so that a new release is created after every new successful build is completed.


3. Select the Pre-deployment conditions icon in the Stages section to open the conditions panel. Make
sure that the trigger for deployment to this stage is set to After release . This means that a deployment
will be initiated automatically when a new release is created from this release pipeline.

You can also set up Release triggers, Stage triggers or schedule deployments.

Add stages
In this section, we will add two new stages to our release pipeline: QA and production (Two Azure App Services
websites in this example). This is a typical scenario where you would deploy initially to a test or staging server, and
then to a live or production server. Each stage represents one deployment target.
1. Select the Pipeline tab in your release pipeline and select the existing stage. Change the name of your
stage to Production .


2. Select the + Add drop-down list and choose Clone stage (the clone option is available only when an
existing stage is selected).


Typically, you want to use the same deployment methods with a test and a production stage so that you
can be sure your deployed apps will behave the same way. Cloning an existing stage is a good way to
ensure you have the same settings for both. You then just need to change the deployment targets.
3. Your cloned stage will have the name Copy of Production . Select it and change the name to QA .

4. To reorganize the stages in the pipeline, select the Pre-deployment conditions icon in your QA stage
and set the trigger to After release . The pipeline diagram will then show the two stages in parallel.


5. Select the Pre-deployment conditions icon in your Production stage and set the trigger to After
stage , then select QA in the Stages drop-down list. The pipeline diagram will now indicate that the two
stages will execute in the correct order.


NOTE
You can set up your deployment to start when a deployment to the previous stage is partially successful. This
means that the deployment will continue even if a specific non-critical task has failed. This is usually used in fork
and join deployments that deploy to different stages in parallel.

6. Select the Tasks drop-down list and select the QA stage.

7. Depending on the tasks that you are using, change the settings so that this stage deploys to your "QA"
target. In our example, we will be using Deploy Azure App Ser vice task as shown below.


Add Pre-deployment approvals


The release pipeline we previously modified deploys to QA and production. If the deployment to QA fails, then
deployment to production won't trigger. It is recommended to always verify that your app is working properly in the QA
or test stage before deploying to production. Adding approvals ensures that all the criteria are met before
deploying to the next stage. To add approvals to your pipeline, follow the steps below:
1. Select the Pipeline tab, Pre-deployment conditions icon then Pre-deployment approvers .

2. In the Approvers text box, enter the user(s) that will be responsible for approving the deployment. It is
also recommended to uncheck the The user requesting a release or deployment should not
approve it check box.


You can add as many approvers as you need, both individual users and organization groups. It's also
possible to set up post-deployment approvals by selecting the "user" icon at the right side of the stage in
the pipeline diagram. For more information, see Releases gates and approvals.
3. Select Save .


Create a release
Now that the release pipeline setup is complete, it's time to start the deployment. To do this, we will manually
create a new release. Usually a release is created automatically when a new build artifact is available. However, in
this scenario we will create it manually.
1. Select the Release drop-down list and choose Create release .


2. Enter a description for your release, check that the correct artifacts are selected, and then select Create .


3. A banner will appear indicating that a new release has been created. Select the release link to see more
details.

4. The release summary page will show the status of the deployment to each stage.


Other views, such as the list of releases, also display an icon that indicates approval is pending. The icon
shows a pop-up containing the stage name and more details when you point to it. This makes it easy for an
administrator to see which releases are awaiting approval, as well as the overall progress of all releases.


5. Select the pending_approval icon to open the approval window panel. Enter a brief comment, and select
Approve .

NOTE
You can schedule deployment at a later date, for example during non-peak hours. You can also reassign approval to a
different user. Release administrators can access and override all approval decisions.

Monitor and track deployments


Deployment logs help you monitor and debug the release of your application. To check the logs of our
deployment follow the steps below:
1. In the release summary, hover over a stage and select Logs .


During deployment, you can still access the logs page to see the live logs of every task.
2. Select any task to see the logs for that specific task. This makes it easier to trace and debug deployment
issues. You can also download individual task logs, or a zip of all the log files.


3. If you need additional information to debug your deployment, you can run the release in debug mode.

Next step
Use approvals and gates to control your deployment
Building a Continuous Integration and Continuous
Deployment pipeline with DSC
11/2/2020 • 11 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

This example demonstrates how to build a Continuous Integration/Continuous Deployment (CI/CD) pipeline by
using PowerShell, DSC, and Pester.
After the pipeline is built and configured, you can use it to fully deploy, configure and test a DNS server and
associated host records. This process simulates the first part of a pipeline that would be used in a development
environment.
An automated CI/CD pipeline helps you update software faster and more reliably, ensuring that all code is tested,
and that a current build of your code is available at all times.

Prerequisites
To use this example, you should be familiar with the following:
CI-CD concepts. A good reference can be found at The Release Pipeline Model.
Git source control
The Pester testing framework
Desired State Configuration(DSC)

What you will need


To build and run this example, you will need an environment with several computers and/or virtual machines.
Client
This is the computer where you'll do all of the work setting up and running the example. The client computer must
be a Windows computer with the following installed:
Git
a local git repo cloned from https://github.com/PowerShell/Demo_CI
a text editor, such as Visual Studio Code
Azure DevOps Subscription
An Azure DevOps organization. If you don't have one, you can create one for free. (An Azure DevOps organization is
different from your GitHub organization. Give them the same name if you want alignment between them.)
TFSSrv
The computer that hosts the TFS server where you will define your build and release. This computer must have
Team Foundation Server 2017 installed.
BuildAgent
The computer that runs the Windows build agent that builds the project. This computer must have a Windows build
agent installed and running. See Deploy an agent on Windows for instructions on how to install and run a Windows
build agent.
You also need to install both the xDnsServer and xNetworking DSC modules on this computer.
TestAgent1
This is the computer that is configured as a DNS server by the DSC configuration in this example. The computer
must be running Windows Server 2016.
TestAgent2
This is the computer that hosts the website this example configures. The computer must be running Windows
Server 2016.

Add the code to a repository


We'll start out by creating a Git repository, and importing the code from your local repository on the client
computer. If you have not already cloned the Demo_CI repository to your client computer, do so now by running
the following git command:

git clone https://github.com/PowerShell/Demo_CI

1. On your client computer, navigate to your TFS server in a web browser.


2. Create a new team project named Demo_CI.
Make sure that Version control is set to Git .
3. On your client computer, add a remote to the repository you just created in TFS with the following
command:
git remote add tfs <YourTFSRepoURL>

Where <YourTFSRepoURL> is the clone URL to the TFS repository you created in the previous step.
If you don't know where to find this URL, see Clone an existing Git repo.
4. Push the code from your local repository to your TFS repository with the following command:
git push tfs --all

5. The TFS repository will be populated with the Demo_CI code.


1. Navigate to your Azure DevOps subscription in a web browser.
2. Create a new team project named Demo_CI. Make sure that Version control is set to Git .
3. On your client computer, add a remote to the repository you just created with the following command:
git remote add devops <YourDevOpsRepoURL>

Where <YourDevOpsRepoURL> is the clone URL to the Azure DevOps repository you created in the previous
step.
If you don't know where to find this URL, see Clone an existing Git repo.
4. Push the code from your local repository to your TFS repository with the following command:
git push devops --all
5. The Azure DevOps repository will be populated with the Demo_CI code.

NOTE
This example uses the code in the ci-cd-example branch of the Git repo. Be sure to specify this branch as the default
branch in your project, and for the CI/CD triggers you create.

Understanding the code


Before we create the build and deployment pipelines, let's look at some of the code to understand what is going on.
On your client computer, open your favorite text editor and navigate to the root of your Demo_CI Git repository.
The DSC configuration
Open the file DNSServer.ps1 (from the root of the local Demo_CI repository, ./InfraDNS/Configs/DNSServer.ps1 ).
This file contains the DSC configuration that sets up the DNS server. Here it is in its entirety:

configuration DNSServer
{
    Import-DscResource -module 'xDnsServer','xNetworking', 'PSDesiredStateConfiguration'

    Node $AllNodes.Where{$_.Role -eq 'DNSServer'}.NodeName
    {
        WindowsFeature DNS
        {
            Ensure = 'Present'
            Name = 'DNS'
        }

        xDnsServerPrimaryZone $Node.zone
        {
            Ensure = 'Present'
            Name = $Node.Zone
            DependsOn = '[WindowsFeature]DNS'
        }

        foreach ($ARec in $Node.ARecords.keys) {
            xDnsRecord $ARec
            {
                Ensure = 'Present'
                Name = $ARec
                Zone = $Node.Zone
                Type = 'ARecord'
                Target = $Node.ARecords[$ARec]
                DependsOn = '[WindowsFeature]DNS'
            }
        }

        foreach ($CName in $Node.CNameRecords.keys) {
            xDnsRecord $CName
            {
                Ensure = 'Present'
                Name = $CName
                Zone = $Node.Zone
                Type = 'CName'
                Target = $Node.CNameRecords[$CName]
                DependsOn = '[WindowsFeature]DNS'
            }
        }
    }
}
Notice the Node statement:

Node $AllNodes.Where{$_.Role -eq 'DNSServer'}.NodeName

This finds any nodes that were defined as having a role of DNSServer in the configuration data, which is created by
the DevEnv.ps1 script.
You can read more about the Where method in about_arrays
Using configuration data to define nodes is important when doing CI because node information will likely change
between environments, and using configuration data allows you to easily make changes to node information
without changing the configuration code.
In the first resource block, the configuration calls the WindowsFeature to ensure that the DNS feature is enabled.
The resource blocks that follow call resources from the xDnsServer module to configure the primary zone and DNS
records.
Notice that the two xDnsRecord blocks are wrapped in foreach loops that iterate through arrays in the
configuration data. Again, the configuration data is created by the DevEnv.ps1 script, which we'll look at next.
Configuration data
The DevEnv.ps1 file (from the root of the local Demo_CI repository, ./InfraDNS/DevEnv.ps1 ) specifies the
environment-specific configuration data in a hashtable, and then passes that hashtable to a call to the
New-DscConfigurationDataDocument function, which is defined in DscPipelineTools.psm1 (
./Assets/DscPipelineTools/DscPipelineTools.psm1 ).

The DevEnv.ps1 file:

param(
    [parameter(Mandatory=$true)]
    [string]
    $OutputPath
)

Import-Module $PSScriptRoot\..\Assets\DscPipelineTools\DscPipelineTools.psd1 -Force

# Define Unit Test Environment
$DevEnvironment = @{
    Name = 'DevEnv';
    Roles = @(
        @{  Role = 'DNSServer';
            VMName = 'TestAgent1';
            Zone = 'Contoso.com';
            ARecords = @{'TFSSrv1'='10.0.0.10';'Client'='10.0.0.15';'BuildAgent'='10.0.0.30';'TestAgent1'='10.0.0.40';'TestAgent2'='10.0.0.50'};
            CNameRecords = @{'DNS' = 'TestAgent1.contoso.com'};
        }
    )
}

return New-DscConfigurationDataDocument -RawEnvData $DevEnvironment -OutputPath $OutputPath

The New-DscConfigurationDataDocument function (defined in \Assets\DscPipelineTools\DscPipelineTools.psm1 )
programmatically creates a configuration data document from the hashtable (node data) and array (non-node data)
that are passed as the RawEnvData and OtherEnvData parameters.
In our case, only the RawEnvData parameter is used.
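The generated document follows the standard DSC configuration data shape: an AllNodes array of node hashtables. As a rough sketch only (the exact keys the helper emits may differ; the NodeName mapping from VMName is an assumption), the data above would produce something like:

@{
    AllNodes = @(
        @{
            NodeName     = 'TestAgent1'   # assumed to come from VMName in the raw data
            Role         = 'DNSServer'
            Zone         = 'Contoso.com'
            ARecords     = @{ 'TFSSrv1' = '10.0.0.10'; 'Client' = '10.0.0.15' }
            CNameRecords = @{ 'DNS' = 'TestAgent1.contoso.com' }
        }
    )
}

This is the shape consumed by the Node $AllNodes.Where{$_.Role -eq 'DNSServer'}.NodeName filter in the configuration.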
The psake build script
The psake build script defined in Build.ps1 (from the root of the Demo_CI repository, ./InfraDNS/Build.ps1 )
defines tasks that are part of the build. It also defines which other tasks each task depends on. When invoked, the
psake script ensures that the specified task (or the task named Default if none is specified) runs, and that all
dependencies also run (this is recursive, so that dependencies of dependencies run, and so on).
In this example, the Default task is defined as:

Task Default -depends UnitTests

The Default task has no implementation itself; it only declares a dependency (here, on the UnitTests task). The resulting
chain of task dependencies ensures that all tasks in the build script are run.
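As an illustration only (the task names echo the list later in this section, but the bodies are hypothetical and not the repo's actual Build.ps1), a psake dependency chain looks like this:

Task Default -depends UnitTests

Task UnitTests -depends CompileConfigs {
    # Hypothetical body: run the Pester unit tests and emit NUnit-format results
    Invoke-Pester -Script "$PSScriptRoot\Tests\Unit" -OutputFile "$PSScriptRoot\Tests\Results\Unit.xml" -OutputFormat NUnitXml
}

Task CompileConfigs -depends GenerateEnvironmentFiles {
    # Hypothetical body: compile the DSC configuration into MOF files
    . "$PSScriptRoot\Configs\DNSServer.ps1"
    DNSServer -ConfigurationData "$PSScriptRoot\ConfigData.psd1" -OutputPath "$PSScriptRoot\MOFs"
}

Task GenerateEnvironmentFiles {
    # Hypothetical body: generate the configuration data document
    & "$PSScriptRoot\DevEnv.ps1" -OutputPath $PSScriptRoot
}

With a chain like this, invoking the Default task runs GenerateEnvironmentFiles, CompileConfigs, and UnitTests in that order.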
In this example, the psake script is invoked by a call to Invoke-PSake in the Initiate.ps1 file (located at the root of
the Demo_CI repository):

param(
[parameter()]
[ValidateSet('Build','Deploy')]
[string]
$fileName
)

#$Error.Clear()

Invoke-PSake $PSScriptRoot\InfraDNS\$fileName.ps1

<#if($Error.count)
{
Throw "$fileName script failed. Check logs for failure details."
}
#>

When we create the build definition for our example, we will supply our psake script file as the fileName parameter
for this script.
The build script defines the following tasks:
GenerateEnvironmentFiles
Runs DevEnv.ps1 , which generates the configuration data file.
InstallModules
Installs the modules required by the configuration DNSServer.ps1 .
ScriptAnalysis
Calls the PSScriptAnalyzer.
UnitTests
Runs the Pester unit tests.
CompileConfigs
Compiles the configuration ( DNSServer.ps1 ) into a MOF file, using the configuration data generated by the
GenerateEnvironmentFiles task.

Clean
Creates the folders used for the example, and removes any test results, configuration data files, and modules from
previous runs.
The psake deploy script
The psake deployment script defined in Deploy.ps1 (from the root of the Demo_CI repository,
./InfraDNS/Deploy.ps1 ) defines tasks that deploy and run the configuration.
Deploy.ps1 defines the following tasks:
DeployModules
Starts a PowerShell session on TestAgent1 and installs the modules containing the DSC resources required for the
configuration.
DeployConfigs
Calls the Start-DscConfiguration cmdlet to run the configuration on TestAgent1 .
IntegrationTests
Runs the Pester integration tests.
AcceptanceTests
Runs the Pester acceptance tests.
Clean
Removes any modules installed in previous runs, and ensures that the test result folder exists.
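For instance, the DeployConfigs task essentially wraps a Start-DscConfiguration call; a minimal sketch (the path and options are illustrative, not the repo's exact Deploy.ps1) is:

Task DeployConfigs -depends DeployModules {
    # Push the compiled MOF to the target node and apply it, waiting for completion
    Start-DscConfiguration -Path "$PSScriptRoot\MOFs" -ComputerName 'TestAgent1' -Wait -Verbose -Force
}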
Test scripts
Acceptance, Integration, and Unit tests are defined in scripts in the Tests folder (from the root of the Demo_CI
repository, ./InfraDNS/Tests ), each in files named DNSServer.tests.ps1 in their respective folders.
The test scripts use Pester and PoshSpec syntax.
Unit tests
The unit tests test the DSC configurations themselves to ensure that the configurations will do what is expected
when they run. The unit test script uses Pester.
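For flavor, here is a minimal, hypothetical Pester sketch of this kind of unit test (not the repo's actual DNSServer.tests.ps1; it assumes the configuration has been dot-sourced and $configData has been loaded):

Describe 'DNSServer configuration' {
    It 'compiles without errors' {
        { DNSServer -ConfigurationData $configData -OutputPath $TestDrive } | Should -Not -Throw
    }
    It 'produces a MOF for the DNS server node' {
        DNSServer -ConfigurationData $configData -OutputPath $TestDrive | Out-Null
        Join-Path $TestDrive 'TestAgent1.mof' | Should -Exist
    }
}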
Integration tests
The integration tests test the configuration of the system to ensure that when integrated with other components,
the system is configured as expected. These tests run on the target node after it has been configured with DSC. The
integration test script uses a mixture of Pester and PoshSpec syntax.
Acceptance tests
Acceptance tests test the system to ensure that it behaves as expected. For example, it tests to ensure a web page
returns the right information when queried. These tests run remotely from the target node in order to test real
world scenarios. The acceptance test script uses a mixture of Pester and PoshSpec syntax.

Define the build


Now that we've uploaded our code to a repo and looked at what it does, let's define our build.
Here, we'll cover only the build steps that you'll add to the build. For instructions on how to create a build definition
in Azure DevOps, see Create and queue a build definition.
Create a new build definition (select the Starter Pipeline template) named "InfraDNS". Add the following steps to
your build definition:
PowerShell
Publish Test Results
Copy Files
Publish Artifact
After adding these build steps, edit the properties of each step as follows:
PowerShell
1. Set the targetType property to File Path .
2. Set the filePath property to initiate.ps1 .
3. Add -fileName build to the Arguments property.

This build step runs the initiate.ps1 file, which calls the psake build script.
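In other words, the agent effectively runs the following from the repository root (the release pipeline defined later does the same with -fileName Deploy):

# Effective command for this build step
.\initiate.ps1 -fileName build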
Publish Test Results
1. Set TestResultsFormat to NUnit
2. Set TestResultsFiles to InfraDNS/Tests/Results/*.xml
3. Set TestRunTitle to Unit .
4. Make sure Control Options Enabled and Always run are both selected.
This build step publishes the results of the unit tests run by the Pester script we looked at earlier; the results are
stored as XML files in the InfraDNS/Tests/Results folder.

Copy Files
1. Add each of the following lines to Contents :

initiate.ps1
**\deploy.ps1
**\Acceptance\**
**\Integration\**

2. Set TargetFolder to $(Build.ArtifactStagingDirectory)\

This step copies the build and test scripts to the staging directory so that they can be published as build artifacts by
the next step.
Publish Artifact
1. Set TargetPath to $(Build.ArtifactStagingDirectory)\
2. Set ArtifactName to Deploy
3. Set Enabled to true .

Enable continuous integration


Now we'll set up a trigger that causes the project to build any time a change is checked in to the ci-cd-example
branch of the git repository.
1. In TFS, click the Build & Release tab
2. Select the DNS Infra build definition, and click Edit
3. Click the Triggers tab
4. Select Continuous integration (CI) , and select refs/heads/ci-cd-example in the branch drop-down list
5. Click Save and then OK
Now any change in the git repository triggers an automated build.

Create the release definition


Let's create a release definition so that the project is deployed to the development environment with every code
check-in.
To do this, add a new release definition associated with the InfraDNS build definition you created previously. Be
sure to select Continuous deployment so that a new release will be triggered any time a new build is completed.
(What are release pipelines?) and configure it as follows:
Add the following steps to the release definition:
PowerShell
Publish Test Results
Publish Test Results
Edit the steps as follows:
PowerShell
1. Set the TargetPath field to $(Build.DefinitionName)\Deploy\initiate.ps1
2. Set the Arguments field to -fileName Deploy
First Publish Test Results
1. Select NUnit for the TestResultsFormat field
2. Set the TestResultsFiles field to $(Build.DefinitionName)\Deploy\InfraDNS\Tests\Results\Integration*.xml
3. Set the TestRunTitle to Integration
4. Set Condition to succeededOrFailed()
Second Publish Test Results
1. Select NUnit for the TestResultsFormat field
2. Set the TestResultsFiles field to $(Build.DefinitionName)\Deploy\InfraDNS\Tests\Results\Acceptance*.xml
3. Set the TestRunTitle to Acceptance
4. Set Condition to succeededOrFailed()

Verify your results


Now, any time you push changes in the ci-cd-example branch, a new build will start. If the build completes
successfully, a new deployment is triggered.
You can check the result of the deployment by opening a browser on the client machine and navigating to
www.contoso.com .

Next steps
This example configures the DNS server TestAgent1 so that the URL www.contoso.com resolves to TestAgent2 , but
it does not actually deploy a website. The skeleton for doing so is provided in the repo under the WebApp folder. You
can use the stubs provided to create psake scripts, Pester tests, and DSC configurations to deploy your own website.
Stage templates in Azure Pipelines
11/2/2020 • 2 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

When you start a new release pipeline, or when you add a stage to an existing release pipeline, you can choose
from a list of templates for each stage. These templates pre-populate the stage with the appropriate tasks and
settings, which can considerably reduce the time and effort required to create a release pipeline for your DevOps
CI/CD processes.
A set of pre-defined stage templates is available in Azure Pipelines and in each version of TFS. You can use these
templates when you create a new release pipeline or add a new stage to a pipeline. You can also create your own
custom stage templates from a stage you have populated and configured.

NOTE
Templates do not have any additional security capability. There is no way to restrict the use of a template to specific users. All
templates, pre-defined and custom, are available for use by all users who have permission to create release pipelines.

When a stage is created from a template, the tasks in the template are copied over to the stage. Any further
updates to the template have no impact on existing stages. If you want a way to easily insert a number of stages
into release pipelines (perhaps to keep the definitions consistent) and to enable these stages to all be updated in
one operation, use task groups instead of stage templates.

FAQ
Can I export templates or share them with other subscriptions, enterprises, or projects?
Custom templates that you create are scoped to the project that you created them in. Templates cannot be exported
or shared with another project, collection, server, or organization. You can, however, export a release pipeline and
import it into another project, collection, server, or subscription. Then you can re-create the template for use in that
location.
How do I delete a custom stage template?
You can delete an existing custom template from the list of templates that is displayed when you add a new stage
to your pipeline.
How do I update a custom stage template?
To update a stage template, delete the existing template in a release pipeline and then save the stage as a template
with the same name.

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and feel free to post your questions, search for answers, or suggest a feature on
our Azure DevOps Developer Community. See also the Support page.
Deploy a web app to Azure App Services
2/26/2020 • 3 minutes to read

Azure Pipelines | TFS 2018 | TFS 2017.2


We'll show you how to set up continuous deployment of your ASP.NET or Node.js app to an Azure Web App using
Azure Pipelines or Team Foundation Server (TFS). You can use the steps in this quickstart as long as your
continuous integration pipeline publishes a Web Deploy package.

Prerequisites
Before you begin, you'll need a CI build that publishes your Web Deploy package. To set up CI for your specific type
of app, see:
Build your ASP.NET 4 app
Build your ASP.NET Core app
Build your Node.js app with gulp
You'll also need an Azure Web App where you will deploy the app.
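If you don't have one yet, you can create it in the Azure portal or with the Azure CLI; a minimal sketch with placeholder names (not values used elsewhere in this article) is:

# Placeholder names; pick a globally unique web app name
az group create --name my-rg --location eastus
az appservice plan create --name my-plan --resource-group my-rg --sku S1
az webapp create --name my-unique-webapp --resource-group my-rg --plan my-plan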

Define your CD release pipeline


Your CD release pipeline picks up the artifacts published by your CI build and then deploys them to your Azure
web site.
1. Do one of the following to start creating a release pipeline:
If you've just completed a CI build (see above), choose the link (for example, Build 20170815.1) to
open the build summary. Then choose Release to start a new release pipeline that's automatically
linked to the build pipeline.
Open the Releases tab in Azure Pipelines , open the + drop-down in the list of release pipelines,
and choose Create release pipeline .
2. The easiest way to create a release pipeline is to use a template. If you are deploying a Node.js app, select
the Deploy Node.js App to Azure App Ser vice template. Otherwise, select the Azure App Ser vice
Deployment template. Then choose Apply .

The only difference between these templates is that the Node.js template configures the task to generate a
web.config file containing a parameter that starts the iisnode service.

3. If you created your new release pipeline from a build summary, check that the build pipeline and artifact are
shown in the Artifacts section on the Pipeline tab. If you created a new release pipeline from the
Releases tab, choose the + Add link and select your build artifact.
4. Choose the Continuous deployment icon in the Artifacts section, check that the continuous deployment
trigger is enabled, and add a filter to include the master branch.

Continuous deployment is not enabled by default when you create a new release pipeline from the
Releases tab.
5. Open the Tasks tab and, with Stage 1 selected, configure the task property variables as follows:
Azure Subscription: Select a connection from the list under Available Azure Service
Connections or create a more restricted permissions connection to your Azure subscription. If you
are using Azure Pipelines and if you see an Authorize button next to the input, click on it to
authorize Azure Pipelines to connect to your Azure subscription. If you are using TFS or if you do not
see the desired Azure subscription in the list of subscriptions, see Azure Resource Manager service
connection to manually set up the connection.
App Service Name : Select the name of the web app from your subscription.

NOTE
Some settings for the tasks may have been automatically defined as stage variables when you created a release
pipeline from a template. These settings cannot be modified in the task settings; instead you must select the parent
stage item in order to edit these settings.

6. Save the release pipeline.

Create a release to deploy your app


You're now ready to create a release, which means to run the release pipeline with the artifacts produced by a
specific build. This will result in deploying the build:
1. Choose + Release and select Create a release .
2. In the Create a new release panel, check that the artifact version you want to use is selected and choose
Create .
3. Choose the release link in the information bar message. For example: "Release Release-1 has been
created".
4. In the pipeline view, choose the status link in the stages of the pipeline to see the logs and agent output.
5. After the release is complete, navigate to your site running in Azure using the Web App URL
http://{web_app_name}.azurewebsites.net , and verify its contents.

Next step
Customize web app deployment
Deploy to an Azure Web App for Containers
11/2/2020 • 5 minutes to read

Azure Pipelines
We'll show you how to set up continuous deployment of your Docker-enabled app to an Azure Web App using
Azure Pipelines.
For example, you can continuously deliver your app to an Azure Web App for Containers.

After you commit and push a code change, it is automatically built and then deployed. The results will
automatically show up on your site.

Get the code


If you want some sample code that works with this guidance, import (into Azure DevOps) or fork (into GitHub) the
following repository, based on the desired runtime.
Java
JavaScript
Python
.NET Core
If you already have an app in GitHub that you want to deploy, you can create a pipeline for that code.
If you are a new user, fork this repo in GitHub:

https://github.com/spring-guides/gs-spring-boot-docker.git

Define your CI build pipeline


Set up a CI pipeline for building an image and pushing it to a container registry.

Prerequisites
You'll need an Azure subscription. You can get one free through Visual Studio Dev Essentials.

Create an Azure Web App to host a container


1. Sign into Azure at https://portal.azure.com.
2. In the Azure Portal, choose Create a resource , Web , then choose Web App for Containers .
3. Enter a name for your new web app, and select or create a new Resource Group. For the OS , choose Linux .
4. Choose Configure container and select Azure Container Registry . Use the drop-down lists to select the
registry you created earlier, and the Docker image and tag that was generated by the build pipeline.
5. Wait until the new web app has been created. Then you can create a release pipeline as shown in the next
section.
The Docker tasks you used in the build pipeline when you created the build artifacts push the Docker image back
into your Azure Container Registry. The web app you created here will host an instance of that image and expose it
as a website.
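If you prefer the CLI over the portal, the same kind of web app can be created with commands along these lines (the registry, plan, and app names are placeholders):

# Placeholder names; the plan must be Linux-based for Web App for Containers
az appservice plan create --name my-linux-plan --resource-group my-rg --is-linux --sku S1
az webapp create --name my-container-app --resource-group my-rg --plan my-linux-plan `
    --deployment-container-image-name myregistry.azurecr.io/myimage:latest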

Why use a separate release pipeline instead of the automatic deployment feature available in Web
App for Containers?
You can configure Web App for Containers to automatically configure deployment as part of the CI/CD pipeline so
that the web app is automatically updated when a new image is pushed to the container registry (this feature uses
a webhook). However, by using a separate release pipeline in Azure Pipelines or TFS you gain extra flexibility and
traceability. You can:
Specify an appropriate tag that is used to select the deployment target for multi-stage deployments.
Use separate container registries for different stages.
Use parameterized start-up commands to, for example, set the values of variables based on the target stage.
Avoid using the same tag for all the deployments. The default CD pipeline for Web App for Containers uses the
same tag for every deployment. While this may be appropriate for a tag such as latest , you can achieve end-to-
end traceability from code to deployment by using a build-specific tag for each deployment. For example, the
Docker build tasks let you tag your images with the Build.ID for each deployment.

Create a release pipeline


1. In Azure Pipelines , open the build summary for your build and choose Release to start a new release
pipeline.
If you have previously created a release pipeline that uses these build artifacts, you will be prompted to
create a new release instead. In that case, go to the Releases tab page and start a new release pipeline from
there by choosing the + icon.
2. Select the Azure App Ser vice Deployment template and choose Apply .
3. Open the Tasks tab and select the Stage 1 item. In the settings panel next to Parameters , choose Unlink
all .
4. Select the Deploy Azure App Ser vice task and configure the settings as follows:
Version : Select 4.* (preview) .
Azure Subscription : Select a connection from the list under Available Azure Service
Connections or create a more restricted permissions connection to your Azure subscription. If you
are using Azure Pipelines and if you see an Authorize button next to the input, click on it to
authorize Azure Pipelines to connect to your Azure subscription. If you are using TFS or if you do not
see the desired Azure subscription in the list of subscriptions, see Azure Resource Manager service
connection to manually set up the connection.
App Service type : Select Web App for Containers .
App Service name : Select the web app you created earlier from your subscription. Only app services
based on the selected app type will be listed.
When you select the Docker-enabled app type, the task recognizes that it is a containerized app, and
changes the property settings to show the following:
Registry or Namespace : Enter the path to your Azure Container Registry, which is a globally
unique top-level domain name for your specific registry or namespace. Typically this is
your-registry-name.azurecr.io
Image : Name of the repository where the container images are stored.
Tag : Tags are optional; they are the mechanism that registries use to give Docker images a version. A fully
qualified image name has the format '<registry or namespace>/<repository>:<tag>'. For example, 'myregistry.azurecr.io/nginx:latest'.
Startup command : Start-up command for the container.
5. Save the release pipeline.

Create a release to deploy your app


You're now ready to create a release, which means to run the release pipeline with the artifacts produced by a
specific build. This will result in deploying the build:
1. Choose + Release and select Create a release .
2. In the Create a new release panel, check that the artifact version you want to use is selected and choose
Create .
3. Choose the release link in the information bar message. For example: "Release Release-1 has been
created".
4. In the pipeline view, choose the status link in the stages of the pipeline to see the logs and agent output.
5. After the release is complete, navigate to your site running in Azure using the Web App URL
http://{web_app_name}.azurewebsites.net , and verify its contents.

Next steps
Set up multi-stage release
Deploy a Docker container app to Azure Kubernetes
Service
11/2/2020 • 7 minutes to read

Azure Pipelines
We'll show you how to set up continuous deployment of your containerized application to an Azure Kubernetes
Service (AKS) using Azure Pipelines.
After you commit and push a code change, it will be automatically built and deployed to the target Kubernetes
cluster.

Get the code


If you want some sample code that works with this guidance, import (into Azure DevOps), or fork (into GitHub), the
following repository, based on the desired runtime.
Java
JavaScript
Python
.NET Core
If you already have an app in GitHub that you want to deploy, you can create a pipeline for that code.
If you are a new user, fork this repo in GitHub:

https://github.com/spring-guides/gs-spring-boot-docker.git

Define your CI build process


Set up a CI pipeline for building an image and pushing it to a container registry.

Prerequisites
You'll need an Azure subscription. You can get one free through Visual Studio Dev Essentials.

Create an AKS cluster to host your app


1. Sign into Azure at https://portal.azure.com.
2. In the Azure portal, choose Create a resource , New , Containers , then choose Kubernetes Service .
3. Select or create a new Resource Group, and enter a name for your new Kubernetes Service cluster and a DNS name
prefix.
4. Choose Review + Create and then, after validation, choose Create .
5. Wait until the new AKS cluster has been created.

Configure authentication
When you use Azure Container Registry (ACR) with Azure Kubernetes Service (AKS), you must establish an
authentication mechanism. This can be achieved in two ways:
1. Grant AKS access to ACR. See Authenticate with Azure Container Registry from Azure Kubernetes Service. A CLI sketch for this option follows this list.
2. Use a Kubernetes image pull secret. An image pull secret can be created by using the Kubernetes
deployment task.
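For the first option, one way to grant the cluster pull access from the Azure CLI is the --attach-acr flag (a sketch with placeholder resource names):

# Placeholder names; grants the cluster's identity pull access to the registry
az aks update --name my-aks-cluster --resource-group my-rg --attach-acr myregistry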

Create a release pipeline


The build pipeline used to set up CI has already built a Docker image and pushed it to an Azure Container Registry.
It also packaged and published a Helm chart as an artifact. In the release pipeline, we'll deploy the container image
as a Helm application to the AKS cluster.
1. In Azure Pipelines , or the Build & Release hub in TFS, open the summary for your build.
2. In the build summary, choose the Release icon to start a new release pipeline.
If you have previously created a release pipeline that uses these build artifacts, you will be prompted to
create a new release instead. In that case, go to the Releases page and start a new release pipeline from
there by choosing the + icon.
3. Select the Empty job template.
4. Open the Tasks page and select Agent job .
5. Choose + to add a new task and add a Helm tool installer task. This ensures the agent that runs the
subsequent tasks has Helm and Kubectl installed on it.
6. Choose + again and add a Package and deploy Helm char ts task. Configure the settings for this task as
follows:
Connection Type : Select Azure Resource Manager to connect to an AKS cluster by using an Azure
service connection. Alternatively, if you want to connect to any Kubernetes cluster by using
kubeconfig or a service account, you can select Kubernetes Ser vice Connection . In this case, you
will need to create and select a Kubernetes service connection instead of an Azure subscription for the
following setting.
Azure subscription : Select a connection from the list under Available Azure Service
Connections or create a more restricted permissions connection to your Azure subscription. If you
see an Authorize button next to the input, use it to authorize the connection to your Azure
subscription. If you do not see the required Azure subscription in the list of subscriptions, see Create
an Azure service connection to manually set up the connection.
Resource group : Enter or select the resource group containing your AKS cluster.
Kubernetes cluster : Enter or select the AKS cluster you created.
Command : Select init as the Helm command. This will install Tiller to your running Kubernetes
cluster. It will also set up any necessary local configuration. Tick Use canary image version to
install the latest pre-release version of Tiller. You could also choose to upgrade Tiller if it is pre-
installed by ticking Upgrade Tiller . If these options are enabled, the task will run
helm init --canary-image --upgrade

7. Choose + in the Agent job and add another Package and deploy Helm char ts task. Configure the
settings for this task as follows:
Kubernetes cluster : Enter or select the AKS cluster you created.
Namespace : Enter your Kubernetes cluster namespace where you want to deploy your application.
Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual
clusters are called namespaces. You can use namespaces to create different environments such as dev,
test, and staging in the same cluster.
Command : Select upgrade as the Helm command. You can run any Helm command using this task
and pass in command options as arguments. When you select the upgrade , the task shows some
additional fields:
Chart Type : Select File Path . Alternatively, you can specify Chart Name if you want to
specify a URL or a chart name. For example, if the chart name is stable/mysql , the task will
execute helm upgrade stable/mysql
Chart Path : This can be a path to a packaged chart or a path to an unpacked chart directory. In
this example you are publishing the chart using a CI build, so select the file package using file
picker or enter $(System.DefaultWorkingDirectory)/**/*.tgz
Release Name : Enter a name for your release; for example azuredevops

Recreate Pods : Tick this checkbox if there is a configuration change during the release and
you want to replace a running pod with the new configuration.
Reset Values : Tick this checkbox if you want the values built into the chart to override all
values provided by the task.
Force : Tick this checkbox if, should conflicts occur, you want to upgrade and rollback to delete,
recreate the resource, and reinstall the full release. This is useful in scenarios where applying
patches can fail (for example, for services because the cluster IP address is immutable).
Arguments : Enter the Helm command arguments and their values; for this example
--set image.repository=$(imageRepoName) --set image.tag=$(Build.BuildId) See this section for
a description of why we are using these arguments.
Enable TLS : Tick this checkbox to enable strong TLS-based connections between Helm and
Tiller.
CA certificate : Specify a CA certificate to be uploaded and used to issue certificates for Tiller
and Helm client.
Certificate : Specify the Tiller certificate or Helm client certificate
Key : Specify the Tiller key or Helm client key
8. In the Variables page of the pipeline, add a variable named imageRepoName and set the value to the
name of your Helm image repository. Typically, this is in the format name.azurecr.io/coderepository
9. Save the release pipeline.
Arguments used in the Helm upgrade task
In the build pipeline, the container image is tagged with $(Build.BuildId) and this is pushed to an Azure Container
Registry. In a Helm chart you can parameterize the container image details such as the name and tag because the
same chart can be used to deploy to different environments. These values can also be specified in the values.yaml
file or be overridden by a user-supplied values file, which can in turn be overridden by --set parameters during
the Helm install or upgrade.
In this example, we pass the following arguments:
--set image.repository=$(imageRepoName) --set image.tag=$(Build.BuildId)

The value of $(imageRepoName) was set in the Variables page (or the variables section of your YAML file).
Alternatively, you can directly replace it with your image repository name in the --set arguments value or
values.yaml file. For example:

image:
repository: VALUE_TO_BE_OVERRIDDEN
tag: latest

Another alternative is to set the Set Values option of the task to specify the argument values as comma separated
key-value pairs.
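Put together, the command the task ends up issuing is roughly the following (the release name comes from the task settings above; the chart path and tag value are illustrative, and the exact flags depend on the options you selected):

# Roughly what the task runs under the hood
helm upgrade --install azuredevops ./sampleapp-0.1.0.tgz `
    --set image.repository=myregistry.azurecr.io/coderepository `
    --set image.tag=20201102.1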

Create a release to deploy your app


You're now ready to create a release, which means to start the process of running the release pipeline with the
artifacts produced by a specific build. This will result in deploying the build:
1. Choose + Release and select Create a release .
2. In the Create a new release panel, check that the artifact version you want to use is selected and choose
Create .
3. Choose the release link in the information bar message. For example: "Release Release-1 has been created".
4. In the pipeline view, choose the status link in the stages of the pipeline to see the logs and agent output.

Next steps
Set up multi-stage release
Automatically deploy to IoT edge devices
11/2/2020 • 7 minutes to read

Azure Pipelines
In this tutorial, you'll learn how to build an Azure Internet of Things (IoT) solution, push the created module images
to your Azure Container Registry (ACR), create a deployment manifest, and then deploy the modules to targeted IoT
edge devices.

Prerequisites
1. Visual Studio (VS) Code to create an IoT Edge module. You can download it from here.
2. An Azure DevOps Services organization. If you don't yet have one, you can get one for free.
3. Microsoft Azure Account . If you don't yet have one, you can create one for free.
4. Azure IoT tools for VS Code.
5. Docker CE.
6. Create an Azure Container Registry.

Create an IoT Edge project


The following steps create an IoT Edge module project based on the .NET Core SDK by using VS Code and Azure
IoT Tools.
1. In the VS Code, select View > Command Palette to open the VS Code command palette.
2. In the command palette, enter and run the command Azure: Sign in and follow the instructions to sign in
your Azure account. If you're already signed in, you can skip this step.
3. In the command palette, enter and run the command Azure IoT Edge: New IoT Edge solution . Follow the
prompts in the command palette to create the solution.

Select Folder : Choose the location on your development machine for VS Code to create the solution files.
Provide a solution name : Enter a descriptive name for your solution or accept the default EdgeSolution.
Select module template : Choose C# Module.
Provide a module name : Name your module CSharpModule.
Provide Docker image repository for the module : An image repository includes the name of your container
registry and the name of your container image. Your container image is prepopulated from the name that you
provided in the last step. Replace localhost:5000 with the login server value from your Azure container registry.
You can retrieve the login server from the Overview page of your container registry in the Azure portal.

The VS Code window loads your IoT Edge solution workspace. The solution workspace contains five top-level
components.
1. modules - contains C# code for your module as well as Dockerfiles for building your module as a container
image
2. .env - file stores your container registry credentials
3. deployment.template.json - file contains the information that the IoT Edge runtime uses to deploy the
modules on a device
4. deployment.debug.template.json - file contains the debug version of modules
5. .vscode and .gitignore - do not edit
If you didn't specify a container registry when creating your solution, but accepted the default localhost:5000 value,
you won't have a .env file.

Add registry credentials


The environment file stores the credentials for your container registry and shares them with the IoT Edge runtime.
The runtime needs these credentials to pull your private images onto the IoT Edge device.
1. In the VS Code explorer, open the .env file.
2. Update the fields with the user name and password values that you copied from your Azure container
registry.
3. Save this file.

Build your IoT Edge solution


In the previous section, you created an IoT Edge solution using CSharpModule. Now you need to build the solution
as a container image and push it to the container registry.
1. In the VS Code explorer, right-click on the deployment.template.json file and select Build IoT Edge
solution .
2. Upon successful build, you should see an image with the following format:
registryname.azurecr.io/csharpmodule:0.0.1-amd64.

Push the code to Azure Repo


If your workspace isn't under Git source control, you can easily create a Git repository with the Initialize
Repository command.
1. In VS Code, select View > Command Palette to open the VS Code command palette.
2. Run the Git: Initialize Repository command from the Command Palette. Running Initialize Repository will
create the necessary Git repository metadata files and show your workspace files as untracked changes
ready to be staged.
3. Select View > Terminal to open the terminal. To push, pull and sync you need to have a Git origin set up.
You can get the required URL from the repo host. Once you have that URL, you need to add it to the Git
settings by running a couple of command line actions as shown below.

git remote add origin https://<org name>@dev.azure.com/<org name>/<project name>/_git/<repo name>


git push -u origin --all

4. From the browser, navigate to the repo. You should see the code.

Create a build pipeline


You can use Azure Pipelines to build your projects on Windows, Linux, or macOS without needing to set up any
infrastructure of your own. The Microsoft-hosted agents in Azure Pipelines have several released versions of the
.NET Core SDKs preinstalled.
1. Navigate to your team project on Azure DevOps.
2. Navigate to Pipelines | Builds . From the New drop-down menu, select New build pipeline to create a
new one.
3. The default option for build pipelines involves using YAML to define the process. For this lab, select use the
classic editor .
4. The first thing you’ll need to do is to configure the source repository. This build will use the main branch of
the IoT Edge module repo. Leave the defaults and select Continue .
5. Select Empty job .
6. Select the Agent pool Hosted Ubuntu 1604 from the drop down.
7. Select + and search for Azure Resource Group Deployment task. Select add . Configure the task as
shown below -

Azure subscription : (Required) Name of the Azure Resource Manager service connection.
Action : (Required) Action to perform. Leave the default value as is.
Resource group : (Required) Provide the name of a resource group.
Location : (Required) Provide the location for deploying the resource group.
Template location : (Required) Set the template location to URL of the file.
Template link : (Required) https://raw.githubusercontent.com/Azure-Samples/devops-iot-scripts/12d60bd513ead7c94aa1669e505083beaef8a480/arm-acr.json
Override template parameters : -registryName YOUR_REGISTRY_NAME -registrySku "Basic" -registryLocation "YOUR LOCATION"

NOTE
Save the pipeline and queue the build. The above step will create an Azure Container Registry. This is required to push
the IoT module images.

8. Edit the pipeline, and select + , and search for the Azure IoT Edge task. Select add . This step will build the
module images.
9. Select + and search for the Azure IoT Edge task. Select add . Configure the task as shown below -

Action : Select the Azure IoT Edge action Push module images .
Container registry type : Select the container registry type Azure Container Registry .
Azure subscription : Select the Azure Resource Manager subscription for the deployment.
Azure Container Registry : Select from the dropdown the Azure Container Registry that was created earlier.

10. Select + and search for the Publish Build Artifacts task. Select add . Set the path to publish to
$(Build.ArtifactStagingDirectory)/deployment.amd64.json .
11. Save the pipeline and queue the build.
Create a release pipeline
The build pipeline has already built a Docker image and pushed it to an Azure Container Registry. In the release
pipeline we will create an IoT hub and an IoT Edge device in that hub, deploy the sample module from the build pipeline,
and provision a virtual machine to run as your IoT Edge device.
1. Navigate to the Pipelines | Releases .
2. From the New drop-down menu, select New release pipeline to create a new release pipeline.
3. Select Empty job to create the pipeline.
4. Select + and search for Azure Resource Group Deployment task. Select add . Configure the task as
shown below.

Azure subscription : (Required) Name of the Azure Resource Manager service connection.
Action : (Required) Action to perform. Leave the default value as is.
Resource group : (Required) Provide the name of a resource group.
Location : (Required) Provide the location for deploying the resource group.
Template location : (Required) Set the template location to URL of the file.
Template link : (Required) https://raw.githubusercontent.com/Azure-Samples/devops-iot-scripts/12d60bd513ead7c94aa1669e505083beaef8a480/arm-iothub.json
Override template parameters : -iotHubName IoTEdge -iotHubSku "S1"

5. Select + and search for Azure CLI task. Select add and configure the task as shown below.
Azure subscription : Select the Azure Resource Manager subscription for the deployment
Script Location : Set the type to Inline script and copy paste the below script
(az extension add --name azure-cli-iot-ext && az iot hub device-identity show --device-id
YOUR_DEVICE_ID --hub-name YOUR_HUB_NAME) || (az iot hub device-identity create --hub-name
YOUR_HUB_NAME --device-id YOUR_DEVICE_ID --edge-enabled && TMP_OUTPUT="$(az iot hub device-
identity show-connection-string --device-id YOUR_DEVICE_ID --hub-name YOUR_HUB_NAME)" &&
RE="\"cs\":\s?\"(.*)\"" && if [[ $TMP_OUTPUT =~ $RE ]]; then CS_OUTPUT=${BASH_REMATCH[1]}; fi &&
echo "##vso[task.setvariable variable=CS_OUTPUT]${CS_OUTPUT}")

In the above script, replace the following with your details: the hub name and the device ID.

NOTE
Save the pipeline and queue the release. The above 2 steps will create an IoT Hub.

6. Edit the pipeline, select + , and search for the Azure IoT Edge task. Select add . This step will deploy the
module images to IoT Edge devices. Configure the task as shown below.

Action : Select the Azure IoT Edge action Deploy to IoT Edge devices .
Deployment file : $(System.DefaultWorkingDirectory)/*/.json
Azure subscription contains IoT Hub : Select an Azure subscription that contains the IoT Hub.
IoT Hub name : Select the IoT Hub.
Choose single/multiple device : Select Single Device.
IoT Edge device ID : Input the IoT Edge device ID.

7. Select + and search for the Azure Resource Group Deployment task. Select Add . Configure the task as
shown below.

FIELD                          VALUES

Azure subscription             (Required) Name of Azure Resource Manager service connection

Action                         (Required) Action to perform. Leave the default value as is

Resource group                 (Required) Provide the name of a resource group

Location                       (Required) Provide the location for deploying the resource group

Template location              (Required) Set the template location to URL of the file

Template link                  (Required) https://raw.githubusercontent.com/Azure-Samples/devops-iot-scripts/12d60bd513ead7c94aa1669e505083beaef8a480/arm-linux-vm.json

Override template parameters   -edgeDeviceConnectionString $(CS_OUTPUT) -virtualMachineName "YOUR_VM_NAME" -adminUsername "devops" -adminPassword "$(vmPassword)" -appInsightsLocation "" -virtualMachineSize "Standard_A0" -location "YOUR_LOCATION"

8. Disable the first 2 tasks in the pipeline. Save and queue.

9. Once the release is complete, go to IoT hub in the Azure portal to view more information.
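For reference, the deployment step configured in step 6 above looks roughly like the following in a YAML pipeline. This is a minimal sketch only; the input names follow the AzureIoTEdge@2 task, and the service connection, IoT Hub name, device ID, and manifest path are placeholders you would replace with your own values.

- task: AzureIoTEdge@2
  displayName: Deploy to IoT Edge devices
  inputs:
    action: 'Deploy to IoT Edge devices'
    deploymentFilePath: '$(System.DefaultWorkingDirectory)/**/deployment.amd64.json'   # placeholder path to the published manifest
    azureSubscription: 'your-azure-service-connection'   # placeholder service connection
    iothubname: 'your-iot-hub'                           # placeholder IoT Hub name
    deviceOption: 'Single Device'
    deviceId: 'your-device-id'                           # placeholder device ID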
How-To: CI/CD with App Service and Azure Cosmos
DB
11/2/2020 • 4 minutes to read

Azure Pipelines
Create a continuous integration (CI) and continuous delivery (CD) pipeline for an Azure Cosmos DB-backed Azure App
Service web app. Azure Cosmos DB is Microsoft's globally distributed, multi-model database. Cosmos DB enables
you to elastically and independently scale throughput and storage across any number of Azure's geographic
regions.
You will:
Clone a sample Cosmos DB and Azure Web App to your repository
Create a Cosmos DB collection and database
Set up CI for your app
Set up CD to Azure for your app
Review the CI/CD pipeline

Prerequisites
An Azure subscription. You can get one free through Visual Studio Dev Essentials.
An Azure DevOps organization. If you don't have one, you can create one for free. If your team already has one,
then make sure you are an administrator of the project you want to use.
A SQL API based Cosmos DB instance. If you don't have one, you can follow the initial steps in this tutorial to
create a Cosmos DB instance and collection.

Clone a sample Cosmos DB and Azure Web App to your repository


This sample shows you how to use the Microsoft Azure Cosmos DB service to store and access data from an
ASP.NET MVC application hosted on Azure App Service. The application is a simple TODO application. You can learn
more about the sample application here.
To import the sample app into a Git repo in Azure Repos:
1. Sign into your Azure DevOps organization.
2. On the Code page for your project in Azure Repos, select the drop-down and choose the option to Import
repository .
3. On the Import a Git repository dialog box, paste https://github.com/Azure-Samples/documentdb-dotnet-todo-app.git into the Clone URL text box.
4. Click Import to copy the sample code into your Git repo.

Set up CI for your App


Set up CI for your ASP.NET application and Cosmos DB to build and create deployable artifacts.
1. In Azure Pipelines , select Builds .
2. On the right-side of the screen, select + NEW to create a new build.
3. Choose the repository for the sample application you imported earlier in this tutorial, and then choose
Continue .
4. Search for the ASP.NET Application build template, and then select Apply .

5. Select Triggers , and then select the checkbox for Enable continuous integration . This setting ensures
every commit to the repository executes a build.
6. Select Save & Queue , and then choose Save and Queue to execute a new build.
7. Select the build hyperlink to examine the running build. In a few minutes the build completes. The build
produces artifacts which can be used to deploy to Azure.
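If you later want to express this CI build in YAML instead of the classic editor, a minimal sketch based on the same ASP.NET Application template tasks could look like the following. The solution pattern, agent image, and artifact name are assumptions you may need to adjust for your repository.

pool:
  vmImage: 'windows-latest'

steps:
# Restore NuGet packages for the solution
- task: NuGetToolInstaller@1
- task: NuGetCommand@2
  inputs:
    restoreSolution: '**/*.sln'

# Build the solution and produce a web deployment package
- task: VSBuild@1
  inputs:
    solution: '**/*.sln'
    msbuildArgs: '/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:PackageLocation="$(Build.ArtifactStagingDirectory)"'
    configuration: 'Release'

# Publish the package so the release pipeline can consume it
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'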

Set up CD to Azure for your App


The CI for the sample app produces the artifacts needed for deployment to Azure. Follow the steps below to create
a release pipeline which uses the CI artifacts for deploying a Cosmos DB instance.
1. Select Release to create a release pipeline linked to the build artifacts from the CI pipeline you created with
the previous steps.
2. Choose the Azure App Service deployment template, and then choose Apply .
3. On the Environments section, select the job and task link.
4. Select the Azure Subscription , and then select Authorize .

5. Choose an App Service name .


6. Select the Deploy Azure App Service task, and then select the File Transforms & Variable
Substitution Options setting.
7. Enable the checkbox for XML Variable substitution .
8. At the top of the menu, select Variables .
9. Retrieve your endpoint (URL) and authKey (primary or secondary key) for your Azure Cosmos DB account.
This information can be found on the Azure portal.

10. Select + Add to create a new variable named endpoint . Select + Add to create a second variable named
authKey .
11. Select the padlock icon to make the authKey variable secret.
12. Select the Pipeline menu.
13. Under the Artifacts section, choose the Continuous deployment trigger icon. On the right side of the
screen, ensure Enabled is on.
14. Select Save to save changes for the release definition.
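For reference, the equivalent deployment step in a YAML pipeline would use the Azure App Service deploy task with XML variable substitution enabled, with endpoint and authKey defined as pipeline variables (authKey marked secret in the pipeline UI rather than in the YAML file). This is a minimal sketch; the service connection, app name, endpoint, and package path are placeholders.

variables:
  endpoint: 'https://your-account.documents.azure.com:443/'   # placeholder Cosmos DB endpoint; authKey is set as a secret variable in the UI

steps:
- task: AzureRmWebAppDeployment@4
  inputs:
    azureSubscription: 'your-azure-service-connection'   # placeholder service connection
    WebAppName: 'your-app-service-name'                  # placeholder App Service name
    packageForLinux: '$(Pipeline.Workspace)/drop/*.zip'  # placeholder path to the web deployment package
    enableXmlVariableSubstitution: true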

Review the CI/CD pipeline


Follow the steps below to test and review the CI/CD pipeline.
1. On the Code page, select the ellipsis (...) icon next to the web.config file in the src directory, and then select
Edit .
2. Replace the existing value (ToDoList) for the database key in the appSettings section of the web.config
with a new value such as NewToDoList . You will commit this change to demonstrate creating a new
Cosmos DB database as part of the CI/CD pipeline. This is a simple change to demonstrate CI/CD capabilities
of Cosmos DB with Azure Pipelines. However, more complicated code changes can also be deployed with the
same CI/CD pipeline.
3. Select Commit , and then choose Commit to save the changes directly to the repository.
4. On the Build page select Builds and you will see your CI build executing. You can follow the build execution
with the interactive logging. Once the build completes, you can also monitor the release.
5. Once the release finishes, navigate to your Cosmos DB service to see your new database.
The continuous integration trigger you enabled earlier ensures a build executes for every commit that is pushed to
the main branch. The build will complete and start a deployment to Azure. Navigate to Cosmos DB in the Azure
portal, and you will see the CD pipeline created a new database.
Clean up resources
NOTE
Ensure you delete any unneeded resources in Azure such as the Cosmos DB instance to avoid incurring charges.

Next steps
You can optionally modify these build and release definitions to meet the needs of your team. You can also use this
CI/CD pattern as a template for your other projects. You learned how to:
Clone a sample Cosmos DB and Azure Web App to your repository
Create a Cosmos DB collection and database
Set up CI for your app
Set up CD to Azure for your app
Review the CI/CD pipeline
To learn more about Azure Pipelines, see this tutorial:
ASP.NET MVC and Cosmos DB
Check policy compliance with gates
11/2/2020 • 2 minutes to read

Azure Pipelines
Azure Policy helps you manage and prevent IT issues by using policy definitions that enforce rules and effects for
your resources. When you use Azure Policy, resources stay compliant with your corporate standards and service
level agreements. Policies can be applied to an entire subscription, a management group, or a resource group.
This tutorial guides you in enforcing compliance policies on your resources before and after deployment during the
release process through Azure Pipelines.
For more information, see What is Azure Policy? and Create and manage policies to enforce compliance.

Prepare
1. Create an Azure Policy in the Azure portal. There are several pre-defined sample policies that can be applied
to a management group, subscription, and resource group.
2. In Azure DevOps create a release pipeline that contains at least one stage, or open an existing release
pipeline.
3. Add a pre- or post-deployment condition that includes the Security and compliance assessment task as
a gate. More details.

Validate for any violation(s) during a release


1. Navigate to your team project in Azure DevOps.
2. In the Pipelines section, open the Releases page and create a new release.
3. Choose the In progress link in the release view to open the live logs page.
4. When the release is in progress and attempts to perform an action disallowed by the defined policy, the
deployment is marked as Failed . The error message contains a link to view the policy violations.

5. An error message is written to the logs and displayed in the stage status panel in the releases page of Azure
Pipelines.

6. When the policy compliance gate passes the release, a Succeeded status is displayed.

7. Choose the successful deployment to view the detailed logs.


Help and support
See our troubleshooting page
Get advice on Stack Overflow, and feel free to post your questions, search for answers, or suggest a feature on
our Azure DevOps Developer Community. Support page.
Deploy a web app to an nginx web server on a Linux
Virtual Machine
11/2/2020 • 6 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
We'll show you how to set up continuous deployment of your app to an nginx web server running on Ubuntu
using Azure Pipelines or Team Foundation Server (TFS) 2018 and higher. You can use the steps in this quickstart for
any app as long as your continuous integration pipeline publishes a web deployment package.

After you commit and push a code change, it is automatically built and then deployed. The results will
automatically show up on your site.

Define your CI build pipeline


You'll need a continuous integration (CI) build pipeline that publishes your web application, as well as a
deployment script that can be run locally on the Ubuntu server. Set up a CI build pipeline based on the runtime you
want to use.
Java
JavaScript
If you already have an app in GitHub that you want to deploy, you can create a pipeline for that code.
If you are a new user, fork this repo in GitHub:

https://github.com/spring-guides/gs-spring-boot-docker.git

Follow additional steps mentioned in Build your Java app with Maven for creating a build to deploy to Linux.
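For reference, a minimal YAML sketch of such a build might use the Maven task to package the app and then publish the output as a build artifact. The pom.xml path, file patterns, and artifact name below are assumptions you would adapt from the Maven guidance linked above.

pool:
  vmImage: 'Ubuntu-16.04'

steps:
# Build and package the Spring Boot app
- task: Maven@3
  inputs:
    mavenPomFile: 'pom.xml'
    goals: 'package'

# Copy the packaged jar (and any deployment script) to the staging directory
- task: CopyFiles@2
  inputs:
    Contents: '**/target/*.jar'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'

# Publish the artifact consumed by the deployment group job
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'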

Prerequisites for the Linux VM


The deployment scripts used in the above sample repositories have been tested on Ubuntu 16.04, and we
recommend you use the same version of Linux VM for this quickstart. Follow the additional steps described below
based on the runtime stack used for the app.
Java
JavaScript
For deploying Java Spring Boot and Spring Cloud based apps, create a Linux VM in Azure using this template,
which provides a fully supported OpenJDK-based runtime.
For deploying Java servlets on Tomcat server, create a Linux VM with Java 8 using this Azure template and
configure Tomcat 9.x as a service.
For deploying a Java EE based app, use an Azure template to create a Linux VM + Java + WebSphere 9.x, a
Linux VM + Java + WebLogic 12.x, or a Linux VM + Java + WildFly/JBoss 14.

Create a deployment group


Deployment groups in Azure Pipelines make it easier to organize the servers you want to use to host your app. A
deployment group is a collection of machines with an Azure Pipelines agent on each of them. Each machine
interacts with Azure Pipelines to coordinate deployment of your app.
1. Open an SSH session to your Linux VM. You can do this using the Cloud Shell button on the menu in the
upper-right of the Azure portal.

2. Initiate the session by typing the following command, substituting the IP address of your VM:
ssh <publicIpAddress>

For more information, see SSH into your VM.


3. Run the following command:
sudo apt-get install -y libunwind8 libcurl3

The libraries this command installs are prerequisites for installing the build and release agent onto an Ubuntu
16.04 VM. Prerequisites for other versions of Linux can be found here.
4. Open the Azure Pipelines web portal, navigate to Azure Pipelines , and choose Deployment groups .
5. Choose Add Deployment group (or New if you have existing deployment groups).
6. Enter a name for the group such as myNginx and choose Create .
7. In the Register machine section, make sure that Ubuntu 16.04+ is selected and that Use a personal
access token in the script for authentication is also checked. Choose Copy script to clipboard .
The script you've copied to your clipboard will download and configure an agent on the VM so that it can
receive new web deployment packages and apply them to the web server.
8. Back in the SSH session to your VM, paste and run the script.
9. When you're prompted to configure tags for the agent, press Enter (you don't need any tags).
10. Wait for the script to finish and display the message Started Azure Pipelines Agent. Type "q" to exit the file
editor and return to the shell prompt.
11. Back in Azure Pipelines or TFS, on the Deployment groups page, open the myNginx deployment group.
On the Targets tab, verify that your VM is listed.

Define your CD release pipeline


Your CD release pipeline picks up the artifacts published by your CI build and then deploys them to your nginx
servers.
1. Do one of the following to start creating a release pipeline:
If you've just completed a CI build, in the build's Summary tab under Deployments , choose Create
release followed by Yes . This starts a new release pipeline that's automatically linked to the build
pipeline.
Open the Releases tab of Azure Pipelines , open the + drop-down in the list of release pipelines,
and choose Create release pipeline .

2. Choose Start with an Empty job .


3. If you created your new release pipeline from a build summary, check that the build pipeline and artifact are
shown in the Artifacts section on the Pipeline tab. If you created a new release pipeline from the
Releases tab, choose the + Add link and select your build artifact.

4. Choose the Continuous deployment icon in the Artifacts section, check that the continuous deployment
trigger is enabled, and add a filter that includes the master branch.
Continuous deployment is not enabled by default when you create a new release pipeline from the
Releases tab.

5. Open the Tasks tab, select the Agent job , and choose Remove to remove this job.

6. Choose ... next to the Stage 1 deployment pipeline and select Add deployment group job .

7. For the Deployment Group , select the deployment group you created earlier such as myNginx .
The tasks you add to this job will run on each of the machines in the deployment group you specified.
8. Choose + next to the Deployment group job and, in the task catalog, search for and add a Bash task.

9. In the properties of the Bash task, use the Browse button for the Script Path to select the path to the
deploy.sh script in the build artifact. For example, when you use the nodejs-sample repository to build
your app, the location of the script is
$(System.DefaultWorkingDirectory)/nodejs-sample/drop/deploy/deploy.sh
10. Save the release pipeline.
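Deployment group jobs are a classic pipelines feature. If you later move to YAML pipelines, a roughly equivalent flow uses a deployment job that targets virtual machine resources in an environment. The sketch below is illustrative only; the environment name and the path to deploy.sh inside the downloaded artifact are placeholders.

jobs:
- deployment: DeployWeb
  displayName: Deploy to nginx servers
  environment:
    name: nginx-servers          # placeholder environment containing your VM resources
    resourceType: VirtualMachine
  strategy:
    runOnce:
      deploy:
        steps:
        # Artifacts from the build are downloaded to $(Pipeline.Workspace) automatically
        - script: |
            chmod +x '$(Pipeline.Workspace)/drop/deploy/deploy.sh'
            '$(Pipeline.Workspace)/drop/deploy/deploy.sh'
          displayName: Run the deployment script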

Create a release to deploy your app


You're now ready to create a release, which means to start the process of running the release pipeline with the
artifacts produced by a specific build. This will result in deploying the build.
1. Choose + Release and select Create a release .
2. In the Create a new release panel, check that the artifact version you want to use is selected and choose
Create .
3. Choose the release link in the information bar message. For example: "Release Release-1 has been
created".
4. In the pipeline view, choose the status link in the stages of the pipeline to see the logs and agent output.
5. After the release is complete, navigate to your app and verify its contents.

Next steps
Dynamically create and remove a deployment group
Apply stage-specific configurations
Perform a safe rolling deployment
Deploy a database with your app
Deploy to a Windows Virtual Machine
2/26/2020 • 4 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
We'll show you how to set up continuous deployment of your ASP.NET or Node.js app to an IIS web server
running on Windows using Azure Pipelines. You can use the steps in this quickstart as long as your continuous
integration pipeline publishes a web deployment package.

After you commit and push a code change, it is automatically built and then deployed. The results will
automatically show up on your site.

Define your CI build pipeline


You'll need a continuous integration (CI) build pipeline that publishes your web deployment package. To set up a
CI build pipeline, see:
Build ASP.NET 4 apps
Build ASP.NET Core apps
Build JavaScript and Node.js apps
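For example, for an ASP.NET Core app, a minimal YAML sketch that produces the zipped web deployment package consumed by the release pipeline might look like this. Project selection, agent image, and artifact name are assumptions you may need to adjust.

pool:
  vmImage: 'windows-latest'

steps:
# Publish the web project and zip the output into a web deployment package
- task: DotNetCoreCLI@2
  inputs:
    command: 'publish'
    publishWebProjects: true
    arguments: '--configuration Release --output $(Build.ArtifactStagingDirectory)'
    zipAfterPublish: true

# Publish the package as a build artifact for the release pipeline
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'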

Prerequisites
IIS configuration
The configuration varies depending on the type of app you are deploying.
ASP.NET app
On your VM, open an Administrator : Windows PowerShell console. Install IIS:

# Install IIS
Install-WindowsFeature Web-Server,Web-Asp-Net45,NET-Framework-Features

ASP.NET Core app


Running an ASP.NET Core app on Windows requires some dependencies.
On your VM, open an Administrator : Windows PowerShell console. Install IIS and the required .NET features:
# Install IIS
Install-WindowsFeature Web-Server,Web-Asp-Net45,NET-Framework-Features

# Install the .NET Core SDK


Invoke-WebRequest https://go.microsoft.com/fwlink/?linkid=848827 -outfile $env:temp\dotnet-dev-win-x64.1.0.6.exe
Start-Process $env:temp\dotnet-dev-win-x64.1.0.6.exe -ArgumentList '/quiet' -Wait

# Install the .NET Core Windows Server Hosting bundle


Invoke-WebRequest https://go.microsoft.com/fwlink/?LinkId=817246 -outfile $env:temp\DotNetCore.WindowsHosting.exe
Start-Process $env:temp\DotNetCore.WindowsHosting.exe -ArgumentList '/quiet' -Wait

# Restart the web server so that system PATH updates take effect
Stop-Service was -Force
Start-Service w3svc

Node.js app
Follow the instructions in this topic to install and configure IISnode on IIS servers.

Create a deployment group


Deployment groups in Azure Pipelines make it easier to organize the servers that you want to use to host your
app. A deployment group is a collection of machines with an Azure Pipelines agent on each of them. Each
machine interacts with Azure Pipelines to coordinate deployment of your app.
1. Open the Azure Pipelines web portal and choose Deployment groups .
2. Click Add Deployment group (or New if there are already deployment groups in place).
3. Enter a name for the group, such as myIIS , and then click Create .
4. In the Register machine section, make sure that Windows is selected, and that Use a personal access
token in the script for authentication is also selected. Click Copy script to clipboard .
The script that you've copied to your clipboard will download and configure an agent on the VM so that it
can receive new web deployment packages and apply them to IIS.
5. On your VM, in an Administrator PowerShell console, paste and run the script.
6. When you're prompted to configure tags for the agent, press Enter (you don't need any tags).
7. When you're prompted for the user account, press Enter to accept the defaults.

The account under which the agent runs needs Manage permissions for the
C:\Windows\system32\inetsrv\ directory. Adding non-admin users to this directory is not
recommended. In addition, if you have a custom user identity for the application pools, the identity
needs permission to read the crypto-keys. Local service accounts and user accounts must be given
read access for this. For more details, see Keyset does not exist error message.

8. When the script is done, it displays the message Service vstsagent.account.computername started
successfully.
9. On the Deployment groups page in Azure Pipelines, open the myIIS deployment group. On the Targets
tab, verify that your VM is listed.

Define your CD release pipeline


Your CD release pipeline picks up the artifacts published by your CI build and then deploys them to your IIS
servers.
1. If you haven't already done so, install the IIS Web App Deployment Using WinRM extension from
Marketplace. This extension contains the tasks required for this example.
2. Do one of the following:
If you've just completed a CI build then, in the build's Summary tab choose Release . This creates a
new release pipeline that's automatically linked to the build pipeline.
Open the Releases tab of Azure Pipelines , open the + drop-down in the list of release pipelines,
and choose Create release pipeline .
3. Select the IIS Website Deployment template and choose Apply .
4. If you created your new release pipeline from a build summary, check that the build pipeline and artifact are
shown in the Artifacts section on the Pipeline tab. If you created a new release pipeline from the
Releases tab, choose the + Add link and select your build artifact.
5. Choose the Continuous deployment icon in the Artifacts section, check that the continuous
deployment trigger is enabled, and add a filter to include the master branch.
6. Open the Tasks tab and select the IIS Deployment job. For the Deployment Group , select the
deployment group you created earlier (such as myIIS ).
7. Save the release pipeline.

Create a release to deploy your app


You're now ready to create a release, which means to run the release pipeline with the artifacts produced by a
specific build. This will result in deploying the build:
1. Choose + Release and select Create a release .
2. In the Create a new release panel, check that the artifact version you want to use is selected and choose
Create .
3. Choose the release link in the information bar message. For example: "Release Release-1 has been
created".
4. In the pipeline view, choose the status link in the stages of the pipeline to see the logs and agent output.
5. After the release is complete, navigate to your app and verify its contents.

Next steps
Dynamically create and remove a deployment group
Apply stage-specific configurations
Perform a safe rolling deployment
Deploy a database with your app
Deploy your Web Deploy package to IIS servers
using WinRM
11/2/2020 • 7 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

A simpler way to deploy web applications to IIS servers is by using deployment groups instead of WinRM.
However, deployment groups are not available in versions of TFS earlier than TFS 2018.

Continuous deployment means starting an automated deployment pipeline whenever a new successful build is
available. Here we'll show you how to set up continuous deployment of your ASP.NET or Node.js app to one or
more IIS servers using Azure Pipelines. A task running on the Build and Release agent opens a WinRM connection
to each IIS server to run PowerShell scripts remotely in order to deploy the Web Deploy package.

Get set up
Begin with a CI build
Before you begin, you'll need a CI build that publishes your Web Deploy package. To set up CI for your specific type
of app, see:
Build your ASP.NET 4 app
Build your ASP.NET Core app
Build your Node.js app with gulp
WinRM configuration
Windows Remote Management (WinRM) requires target servers to be:
Domain-joined or workgroup-joined
Able to communicate using the HTTP or HTTPS protocol
Addressed by using a fully-qualified domain name (FQDN) or an IP address
This table shows the supported scenarios for WinRM.

JOINED TO       PROTOCOL       ADDRESSING MODE

Workgroup       HTTPS          FQDN

Workgroup       HTTPS          IP address

Domain          HTTPS          IP address

Domain          HTTPS          FQDN

Domain          HTTP           FQDN

Ensure that your IIS servers are set up in one of these configurations. For example, do not use WinRM over HTTP to
communicate with a Workgroup machine. Similarly, do not use an IP address to access the target server(s) when
you use HTTP. Instead, in both scenarios, use HTTPS.

If you need to deploy to a server that is not in the same workgroup or domain, add it to trusted hosts in your
WinRM configuration.

Follow these steps to configure each target server.


1. Enable File and Printer Sharing. You can do this by executing the following command in a Command
window with Administrative permissions:
netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=yes

2. Check your PowerShell version. You need PowerShell version 4.0 or above installed on every target machine.
To display the current PowerShell version, execute the following command in the PowerShell console:
$PSVersionTable.PSVersion

3. Check your .NET Framework version. You need version 4.5 or higher installed on every target machine. See
How to: Determine Which .NET Framework Versions Are Installed.
4. Download from GitHub this PowerShell script for Windows 10 and Windows Server 2016, or this
PowerShell script for previous versions of Windows. Copy them to every target machine. You will use them
to configure WinRM in the following steps.
5. Decide if you want to use HTTP or HTTPS to communicate with the target machine(s).
If you choose HTTP, execute the following in a Command window with Administrative permissions:
ConfigureWinRM.ps1 {FQDN} http

This command creates an HTTP WinRM listener and opens port 5985 inbound for WinRM over
HTTP.

If you choose HTTPS, you can use either a FQDN or an IP address to access the target machine(s). To
use a FQDN to access the target machine(s), execute the following in the PowerShell console with
Administrative permissions:
ConfigureWinRM.ps1 {FQDN} https

To use an IP address to access the target machine(s), execute the following in the PowerShell console
with Administrative permissions:
ConfigureWinRM.ps1 {ipaddress} https

These commands create a test certificate by using MakeCert.exe , use the certificate to create an
HTTPS WinRM listener, and open port 5986 inbound for WinRM over HTTPS. The script also
increases the WinRM MaxEnvelopeSizekb setting. By default on Windows Server this is 500 KB,
which can result in a "Request size exceeded the configured MaxEnvelopeSize quota" error.

IIS configuration
If you are deploying an ASP.NET app, make sure that you have ASP.NET 4.5 or ASP.NET 4.6 installed on each of your
IIS target servers. For more information, see this topic.
If you are deploying an ASP.NET Core application to IIS target servers, follow the additional instructions in this topic
to install .NET Core Windows Server Hosting Bundle.
If you are deploying a Node.js application to IIS target servers, follow the instructions in this topic to install and
configure IISnode on IIS servers.
In this example, we will deploy to the Default Web Site on each of the servers. If you need to deploy to another
website, make sure you configure this as well.
IIS WinRM extension
Install the IIS Web App Deployment Using WinRM extension from Visual Studio Marketplace in Azure Pipelines or
TFS.

Define and test your CD release pipeline


Continuous deployment (CD) means starting an automated release pipeline whenever a new successful build is
available. Your CD release pipeline picks up the artifacts published by your CI build and then deploys them to your
IIS servers.
1. Do one of the following:
If you've just completed a CI build (see above) then, in the build's Summary tab under
Deployments , choose Create release followed by Yes . This starts a new release pipeline that's
automatically linked to the build pipeline.
Open the Releases tab of Azure Pipelines , open the + drop-down in the list of release pipelines,
and choose Create release pipeline .
2. Choose Start with an empty pipeline .
3. If you created your new release pipeline from a build summary, check that the build pipeline and artifact are
shown in the Artifacts section on the Pipeline tab. If you created a new release pipeline from the Releases
tab, choose the + Add link and select your build artifact.
4. Choose the Continuous deployment icon in the Artifacts section, check that the continuous deployment
trigger is enabled, and add a filter to include the master branch.

5. On the Variables tab of the stage in the release pipeline, configure a variable named WebServers with the list
of IIS servers as its value; for example machine1,machine2,machine3 .
6. Configure the following tasks in the stage:

Deploy: Windows Machine File Copy - Copy the Web Deploy package to the IIS servers.
Source : Select the Web deploy package (zip file) from the artifact source.
Machines : $(WebServers)

Admin Login : Enter the administrator credentials for the target servers. For workgroup-joined
computers, use the format .\username . For domain-joined computers, use the format
domain\username .

Password : Enter the administrator password for the target servers.


Destination Folder : Specify a folder on the target server where the files should be copied to.

Deploy: WinRM - IIS Web App Deployment - Deploy the package.


Machines : $(WebServers)

Admin Login : Enter the administrator credentials for target servers. For workgroup-joined
computers, use the format .\username . For domain-joined computers, use the format
domain\username .

Password : Enter the administrator password for target servers.


Protocol : Select HTTP or HTTPS (depending on how you configured the target machine earlier).
Note that if the target machine is workgroup-joined, you must choose HTTPS . You can use HTTP only
if the target machine is domain-joined and configured to use a FQDN.
Web Deploy Package : Fully qualified path of the zip file you copied to the target server in the
previous task.
Website Name : Default Web Site (or the name of the website if you configured a different one
earlier).
7. Edit the name of the release pipeline, click Save , and click OK . Note that the default stage is named Stage1,
which you can edit by clicking directly on the name.
You're now ready to create a release, which means to run the release pipeline with the artifacts produced by a
specific build. This will result in deploying the build to IIS servers:
1. Choose + Release and select Create a release .
2. In the Create a new release panel, check that the artifact version you want to use is selected and choose
Create .
3. Choose the release link in the information bar message. For example: "Release Release-1 has been created".
4. In the pipeline view, choose the status link in the stages of the pipeline to see the logs and agent output.
5. After the release is complete, navigate to your app and verify its contents.

FAQ
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and feel free to post your questions, search for answers, or suggest a feature on
our Azure DevOps Developer Community. Support page.
How To: Extend your deployments to IIS Deployment
Groups
2/26/2020 • 2 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
You can quickly and easily deploy your ASP.NET or Node.js app to an IIS Deployment Group using Azure Pipelines
or Team Foundation Server (TFS), as demonstrated in this example. In addition, you can extend your deployment in
a range of ways depending on your scenario and requirements. This topic shows you how to:
Dynamically create and remove a deployment group
Apply stage-specific configurations
Perform a safe rolling deployment
Deploy a database with your app

Prerequisites
You should have worked through the example CD to an IIS Deployment Group before you attempt any of these
steps. This ensures that you have the release pipeline, build artifacts, and websites required.

Dynamically create and remove a deployment group


You can create and remove deployment groups dynamically if you prefer by using the Azure Resource Group
Deployment task to install the agent on the machines in a deployment group using ARM templates. See Provision
deployment group agents.

Apply stage-specific configurations


If you deploy releases to multiple stages, you can substitute configuration settings in Web.config and other
configuration files of your website using these steps:
1. Define stage-specific configuration settings in the Variables tab of a stage in a release pipeline; for example,
<connectionStringKeyName> = <value> .

2. In the IIS Web App Deploy task, select the checkbox for XML variable substitution under File
Transforms and Variable Substitution Options .

If you prefer to manage stage configuration settings in your own database or Azure KeyVault, add a task
to the stage to read and emit those values using
##vso[task.setvariable variable=connectionString;issecret=true]<value> .

At present, you cannot apply a different configuration to individual IIS servers.
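For example, a script step (or the equivalent inline PowerShell task in a classic stage) can read the value from your own store and emit it with the logging command mentioned above. This is a minimal sketch; Get-MyConnectionString is a hypothetical helper standing in for your own lookup logic.

steps:
- powershell: |
    # Hypothetical lookup of the value from your own database or Azure Key Vault
    $value = Get-MyConnectionString -Stage 'Production'
    # Emit it as a secret pipeline variable for later tasks in the stage
    Write-Host "##vso[task.setvariable variable=connectionString;issecret=true]$value"
  displayName: Read and emit the connection string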

Perform a safe rolling deployment


If your deployment group consists of many IIS target servers, you can deploy to a subset of servers at a time. This
ensures that your application is available to your customers at all times. Simply select the Deployment group job
and use the slider to configure the Maximum number of targets in parallel .
Deploy a database with your app
To deploy a database with your app:
1. Add both the IIS target servers and database servers to your deployment group. Tag all the IIS servers as
web and all database servers as database .

2. Add two machine group jobs to stages in the release pipeline, and a task in each job as follows:
First Run on deployment group job for configuration of web servers.
Deployment group : Select the deployment group you created in the previous example.
Required tags : web

Then add an IIS Web App Deploy task to this job.


Second Run on deployment group job for configuration of database servers.
Deployment group : Select the deployment group you created in the previous example.
Required tags : database

Then add a SQL Ser ver Database Deploy task to this job.
Deploy with System Center Virtual Machine Manager
11/2/2020 • 8 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
You can automatically provision new virtual machines in System Center Virtual Machine Manager (SCVMM) and
deploy to those virtual machines after every successful build.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

SCVMM connection
You need to first configure how Azure Pipelines connects to SCVMM. You cannot use Microsoft-hosted agents to
run SCVMM tasks since the VMM Console is not installed on hosted agents. You must set up a self-hosted build
and release agent on the same network as your SCVMM server.
You need to first configure how TFS connects to SCVMM. You must have a build and release agent that can
communicate with the SCVMM server.
1. Install the Vir tual Machine Manager (VMM) console on the agent machine by following these
instructions. Supported version: System Center 2012 R2 Virtual Machine Manager.
2. Install the System Center Vir tual Machine Manager (SCVMM) extension from Visual Studio
Marketplace into TFS or Azure Pipelines:
If you are using Azure Pipelines , install the extension from this location in Visual Studio Marketplace.
If you are using Team Foundation Ser ver , download the extension from this location in Visual Studio
Marketplace, upload it to your Team Foundation Server, and install it.
3. Create an SCVMM service connection in your project:
In your Azure Pipelines or TFS project in your web browser, navigate to the project settings and select
Service connections .
In the Service connections tab, choose New service connection , and select SCVMM .
In the Add new SCVMM Connection dialog, enter the values required to connect to the SCVMM
Server:
Connection Name : Enter a user-friendly name for the service connection such as
MySCVMMServer .
SCVMM Server Name : Enter the fully qualified domain name and port number of the SCVMM
server, in the form machine.domain.com:port .
Username and Password : Enter the credentials required to connect to the SCVMM server.
Username formats such as username , domain\username , machine-name\username , and
.\username are supported. UPN formats such as username@domain.com and built-in system
accounts such as NT Authority\System are not supported.

Create new virtual machines from a template, VHD, or stored VM


One of the common actions that you can perform with every build is to create a new virtual machine to deploy the
build to. You use the SCVMM task from the extension to do this and configure the properties of the task as
follows:
Display name : The name for the task as it appears in the task list.
SCVMM Service Connection : Select a SCVMM service connection you already defined, or create a new one.
Action : Select New Virtual Machine using Template/Stored VM/VHD .
Create virtual machines from VM Templates : Set this option if you want to use a template.
Virtual machine names : Enter the name of the virtual machine, or a list of the virtual machine names
on separate lines. Example FabrikamDevVM
VM template names : Enter the name of the template, or a list of the template names on separate lines.
Set computer name as defined in the VM template : If not set, the computer name will be the same
as the VM name.
Create virtual machines from stored VMs : Set this option if you want to use a stored VM.
Virtual machine names : Enter the name of the virtual machine, or a list of the virtual machine names
on separate lines. Example FabrikamDevVM
Stored VMs : Enter the name of the stored VM, or a list of the VMs on separate lines in the same order
as the virtual machine names.
Create virtual machines from VHD : Set this option if you want to use a VHD or VHDX.
Virtual machine names : Enter the name of the virtual machine, or a list of the virtual machine names
on separate lines. Example FabrikamDevVM
VHDs : Enter the name of the VHD or VHDX, or a list of names on separate lines in the same order as the
virtual machine names.
CPU count : Specify the number of processor cores required for the virtual machines.
Memory : Specify the memory in MB required for the virtual machines.
Clear existing network adapters : Set this option if you want to remove the network adapters and specify
new ones in the Network Virtualization options.
Deploy the VMs to : Choose either Cloud or Host to select the set of virtual machines to which the action will
be applied.
Host Name or Cloud Name : Depending on the previous selection, enter either a cloud name or a host
machine name.
Placement path for VM : If you selected Host as the deployment target, enter the path to be used during
virtual machine placement. Example C:\ProgramData\Microsoft\Windows\Hyper-V
Additional Arguments : Enter any arguments to pass to the virtual machine creation template. Example
-StartVM -StartAction NeverAutoTurnOnVM -StopAction SaveVM
Wait Time : The time to wait for the virtual machine to reach ready state.
Network Virtualization : Set this option to enable network virtualization for your virtual machines. For more
information, see Create a virtual network isolated environment.
Show minimal logs : Set this option if you don't want to create detailed live logs about the VM provisioning
process.

Delete virtual machines


After validating your build, you would want to delete the virtual machines that you created. You use the SCVMM
task from the extension to do this and configure the properties of the task as follows:
Display name : The name for the task as it appears in the task list.
SCVMM Service Connection : Select a SCVMM service connection you already defined, or create a new one.
Action : Select Delete VM .
VM Names : Enter the name of the virtual machine, or a comma-separated list of the virtual machine names.
Example FabrikamDevVM,FabrikamTestVM
Select VMs From : Choose either Cloud or Host to select the set of virtual machines to which the action will
be applied.
Host Name or Cloud Name : Depending on the previous selection, enter either a cloud name or a host
machine name.

Start and stop virtual machines


You can start a virtual machine prior to deploying a build, and then stop the virtual machine after running tests.
Use the SCVMM task as follows in order to achieve this:
Display name : The name for the task as it appears in the task list.
SCVMM Service Connection : Select a SCVMM service connection you already defined, or create a new one.
Action : Select Start Virtual Machine or Stop Virtual Machine .
VM Names : Enter the name of the virtual machine, or a comma-separated list of the virtual machine names.
Example FabrikamDevVM,FabrikamTestVM
Select VMs From : Choose either Cloud or Host to select the set of virtual machines to which the action will
be applied.
Host Name or Cloud Name : Depending on the previous selection, enter either a cloud name or a host
machine name.
Wait Time : The time to wait for the virtual machine to reach ready state.

Create, restore, and delete checkpoints


A quick alternative to bringing up a virtual machine in desired state prior to running tests is to restore it to a
known checkpoint. Use the SCVMM task as follows in order to do this:
Display name : The name for the task as it appears in the task list.
SCVMM Service Connection : Select a SCVMM service connection you already defined, or create a new one.
Action : Select one of the checkpoint actions Create Checkpoint , Restore Checkpoint , or Delete
Checkpoint .
VM Names : Enter the name of the virtual machine, or a comma-separated list of the virtual machine names.
Example FabrikamDevVM,FabrikamTestVM
Checkpoint Name : For the Create Checkpoint action, enter the name of the checkpoint that will be applied
to the virtual machines. For the Delete Checkpoint or Restore Checkpoint action, enter the name of an
existing checkpoint.
Description for Checkpoint : Enter a description for the new checkpoint when creating it.
Select VMs From : Choose either Cloud or Host to select the set of virtual machines to which the action will
be applied.
Host Name or Cloud Name : Depending on the previous selection, enter either a cloud name or a host
machine name.

Run custom PowerShell scripts for SCVMM


For functionality that is not available through the in-built actions, you can run custom SCVMM PowerShell scripts
using the task. The task helps you with setting up the connection with SCVMM using the credentials configured in
the service connection, and then runs the script.
Display name : The name for the task as it appears in the task list.
SCVMM Service Connection : Select a SCVMM service connection you already defined, or create a new one.
Action : Select Run PowerShell Script for SCVMM .
Script Type : Select either Script File Path or Inline Script .
Script Path : If you selected Script File Path , enter the path of the PowerShell script to execute. It must be a
fully-qualified path, or a path relative to the default working directory.
Inline Script : If you selected Inline Script , enter the PowerShell script lines to execute.
Script Arguments : Enter any arguments to be passed to the PowerShell script. You can use either ordinal
parameters or named parameters.
Working folder : Specify the current working directory for the script when it runs. The default if not provided
is the folder containing the script.

Deploying build to virtual machines


Once you have the virtual machines set up, deploying a build to those virtual machines is no different than
deploying to any other machine. For instance, you can:
Use the PowerShell on Target Machines task to run remote scripts on those machines using Windows Remote
Management.
Use Deployment groups to run scripts and other tasks on those machines using build and release agent.

See also
Create a virtual network isolated environment for build-deploy-test scenarios
Deploy to VMware vCenter Server
2/26/2020 • 5 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

You can automatically provision virtual machines in a VMware environment and deploy to those virtual machines
after every successful build.

VMware connection
You need to first configure how Azure Pipelines connects to vCenter. You cannot use Microsoft-hosted agents to
run VMware tasks since the vSphere SDK is not installed on these machines. You have to set up a self-hosted
agent that can communicate with the vCenter server.
You need to first configure how Azure DevOps Server connects to vCenter. You have to set up a self-hosted agent
that can communicate with the vCenter server.
You need to first configure how TFS connects to vCenter. You have to set up a self-hosted agent that can
communicate with the vCenter server.
1. Install the VMware vSphere Management SDK to call VMware API functions that access vSphere web
services. To install and configure the SDK on the agent machine:
Download and install the latest version of the Java Runtime Environment from this location.
Go to this location and sign in with your existing credentials or register with the website. Then
download the vSphere 6.0 Management SDK .
Create a directory for the vSphere Management SDK such as C:\vSphereSDK . Do not include spaces
in the directory names to avoid issues with some of the batch and script files included in the SDK.
Unpack the vSphere Management SDK into the new folder you just created.
Add the full path and name of the precompiled VMware Java SDK file vim25.jar to the machine's
CLASSPATH environment variable. If you used the path and name C:\vSphereSDK for the SDK files,
as shown above, the full path will be:
C:\vSphereSDK\SDK\vsphere-ws\java\JAXWS\lib\vim25.jar

2. Install the VMware extension from Visual Studio Marketplace into TFS or Azure Pipelines.
3. Follow these steps to create a vCenter Server service connection in your project:
Open your Azure Pipelines or TFS project in your web browser. Choose the Settings icon in the
menu bar and select Services .
In the Services tab, choose New service connection , and select VMware vCenter Server .
In the Add new VMware vCenter Server Connection dialog, enter the values required to
connect to the vCenter Server:
Connection Name : Enter a user-friendly name for the service connection such as Fabrikam
vCenter .
vCenter Server URL : Enter the URL of the vCenter server, in the form
https://machine.domain.com/ . Note that only HTTPS connections are supported.
Username and Password : Enter the credentials required to connect to the vCenter Server.
Username formats such as username , domain\username , machine-name\username , and
.\username are supported. UPN formats such as username@domain.com and built-in system
accounts such as NT Authority\System are not supported.

Managing VM snapshots
Use the VMware Resource Deployment task from the VMware extension and configure the properties as
follows to take a snapshot of virtual machines, or to revert or delete them:
VMware Service Connection : Select the VMware vCenter Server connection you created earlier.
Action : Select one of the actions: Take Snapshot of Virtual Machines , Revert Snapshot of Virtual
Machines , or Delete Snapshot of Virtual Machines .
Virtual Machine Names : Enter the names of one or more virtual machines. Separate multiple names with
a comma; for example, VM1,VM2,VM3
Datacenter : Enter the name of the datacenter where the virtual machines will be created.
Snapshot Name : Enter the name of the snapshot. This snapshot must exist if you use the revert or delete
action.
Host Name : Depending on the option you selected for the compute resource type, enter the name of the
host, cluster, or resource pool.
Datastore : Enter the name of the datastore that will hold the virtual machines' configuration and disk files.
Description : Optional. Enter a description for the Take Snapshot of Virtual Machines action, such as
$(Build.DefinitionName).$(Build.BuildNumber) . This can be used to track the execution of the build or
release that created the snapshot.
Skip Certificate Authority Check : If the vCenter Server's certificate is self-signed, select this option to
skip the validation of the certificate by a trusted certificate authority.

To verify if a self-signed certificate is installed on the vCenter Server, open the VMware vSphere Web
Client in your browser and check for a certificate error page. The vSphere Web Client URL will be of the
form https://machine.domain/vsphere-client/ . Good practice guidance for vCenter Server certificates
can be found in the VMware Knowledge Base (article 2057223).

Provisioning virtual machines


To configure the VMware Resource Deployment task to provision a new virtual machine from a template, use
these settings:
VMware Service Connection : Select the VMware vCenter Server connection you created earlier.
Action : Deploy Virtual Machines using Template

Template : The name of the template that will be used to create the virtual machines. The template must
exist in the location you enter for the Datacenter parameter.
Virtual Machine Names : Enter the names of one or more virtual machines. Separate multiple names with
a comma; for example, VM1,VM2,VM3

Datacenter : Enter the name of the datacenter where the virtual machines will be created.
Compute Resource Type : Select the type of hosting for the virtual machines: VMware ESXi Host , Cluster ,
or Resource Pool
Host Name : Depending on the option you selected for the compute resource type, enter the name of the
host, cluster, or resource pool.
Datastore : Enter the name of the datastore that will hold the virtual machines' configuration and disk files.
Description : Optional. Enter a description to identify the deployment.
Skip Certificate Authority Check : If the vCenter Server's certificate is self-signed, select this option to
skip the validation of the certificate by a trusted certificate authority. See the note for the previous step to
check for the presence of a self-signed certificate.

Deploying build to virtual machines


Once you have the virtual machines set up, deploying a build to those virtual machines is no different than
deploying to any other machine. For instance, you can:
Use the PowerShell on Target Machines task to run remote scripts on those machines using Windows Remote
Management.
Use Deployment groups to run scripts and other tasks on those machines using build and release agent.
Build an image
11/2/2020 • 4 minutes to read

Azure Pipelines
Azure Pipelines can be used to build images for any repository containing a Dockerfile. Building of both Linux
and Windows containers is possible based on the agent platform used for the build.

Example
Get the code
Fork the following repository containing a sample application and a Dockerfile:

https://github.com/MicrosoftDocs/pipelines-javascript-docker

Create pipeline with build step


1. Sign in to your Azure DevOps organization and navigate to your project.
2. Go to Pipelines , and then select New Pipeline .
3. Select GitHub as the location of your source code and select your repository.

NOTE
You might be redirected to GitHub to sign in. If so, enter your GitHub credentials. You might be redirected to
GitHub to install the Azure Pipelines app. If so, select Approve and install.

4. Select Starter pipeline . In the Review tab, replace the contents of azure-pipelines.yml with the
following snippet:

trigger:
- main

pool:
  vmImage: 'Ubuntu-16.04'

variables:
  imageName: 'pipelines-javascript-docker'

steps:
- task: Docker@2
  displayName: Build an image
  inputs:
    repository: $(imageName)
    command: build
    Dockerfile: app/Dockerfile

5. Select Save and run , after which you're prompted for a commit message as Azure Pipelines adds the
azure-pipelines.yml file to your repository. After editing the message, select Save and run again to see
the pipeline in action.
TIP
Learn more about how to push the image to Azure Container Registry or push it to other container registries such
as Google Container Registry or Docker Hub. Learn more about the Docker task used in the above sample.
Instead of using the recommended Docker task, it is also possible to invoke docker commands directly using a
command line task (script).

Windows container images


Windows container images can be built using either Microsoft hosted Windows agents or Windows platform
based self-hosted agents (all Microsoft hosted Windows platform-based agents are shipped with Moby engine
and client needed for Docker builds). Learn more about the Windows agent options available with Microsoft
hosted agents.

NOTE
Linux container images can be built using Microsoft hosted Ubuntu-16.04 agents or Linux platform based self-hosted
agents. Currently the Microsoft hosted MacOS agents can't be used to build container images as Moby engine needed
for building the images is not pre-installed on these agents.

BuildKit
BuildKit introduces build improvements in the areas of performance, storage management, feature
functionality, and security. To enable BuildKit based docker builds, set the DOCKER_BUILDKIT variable as shown
in the following snippet:

variables:
  imageName: 'pipelines-javascript-docker'
  DOCKER_BUILDKIT: 1

steps:
- task: Docker@2
  displayName: Build an image
  inputs:
    repository: $(imageName)
    command: build
    Dockerfile: app/Dockerfile

NOTE
BuildKit is not currently supported on Windows hosts.

Pre-cached images on hosted agents


Some commonly used images are pre-cached on the Microsoft-hosted agents to avoid the long intervals
spent pulling these images from a container registry for every job. Images such as
microsoft/dotnet-framework , microsoft/aspnet , microsoft/windowsservercore , microsoft/nanoserver , and
microsoft/aspnetcore-build are pre-cached on Windows agents while jekyll/builder and
mcr.microsoft.com/azure-pipelines/node8-typescript are pre-cached on Linux agents. The list of pre-cached
images is available in the release notes of azure-pipelines-image-generation repository.

Self-hosted agents
Docker needs to be installed on self-hosted agent machines prior to runs that try to build container images. To
address this, place a step that uses the Docker installer task in the pipeline definition before the step that uses
the Docker task.

Script-based docker builds


Note that it is also possible to build images (or run any other Docker command) by invoking docker directly from a
script, as shown below:

docker build -f Dockerfile -t foobar.azurecr.io/hello:world .

The above command results in an image that is equivalent, in terms of content, to the one built by using the Docker
task. The Docker task itself internally calls the docker binary from a script, but also stitches together a few more
commands to provide a few additional benefits as described in the Docker task's documentation.
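In a YAML pipeline, the same command can be run with a plain script step, for example:

steps:
- script: docker build -f Dockerfile -t foobar.azurecr.io/hello:world .
  displayName: Build an image using a script step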

FAQ
Is it possible to reuse layer caching during builds on Azure Pipelines?
In the current design of Microsoft-hosted agents, every job is dispatched to a newly provisioned virtual
machine (based on the image generated from azure-pipelines-image-generation repository templates). These
virtual machines are cleaned up after the job reaches completion, not persisted and thus not reusable for
subsequent jobs. The ephemeral nature of virtual machines prevents the reuse of cached Docker layers.
However, Docker layer caching is possible using self-hosted agents as the ephemeral lifespan problem is not
applicable for these agents.
How to build Linux container images for architectures other than x64?
When you use Microsoft-hosted Linux agents, you create Linux container images for the x64 architecture. To
create images for other architectures (for example, x86, ARM, and so on), you can use a machine emulator
such as QEMU. The following steps illustrate how to create an ARM container image:
1. Author your Dockerfile so that an Intel binary of QEMU exists in the base image. For example, the raspbian
image already has this.

FROM balenalib/rpi-raspbian

2. Run the following script in your job before building the image:

# register QEMU binary - this can be done by running the following image
docker run --rm --privileged multiarch/qemu-user-static:register --reset
# build your image (the image name and tag below are illustrative)
docker build -f Dockerfile -t foobar.azurecr.io/hello-arm:world .
How to run tests and publish test results for containerized applications?
For different options on testing containerized applications and publishing the resulting test results, check out
Publish Test Results task
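As one illustration (the image name, test command, and results path are illustrative assumptions), tests can be run inside the built container, the results file copied back to the agent, and then published with the Publish Test Results task:

steps:
- script: |
    # Run the test suite inside the container, then copy the JUnit results file back to the agent
    docker run --name tests $(imageName) npm test
    docker cp tests:/app/test-results.xml $(System.DefaultWorkingDirectory)/test-results.xml
  displayName: Run tests in container

- task: PublishTestResults@2
  inputs:
    testResultsFormat: JUnit
    testResultsFiles: '**/test-results.xml'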
Push an image
2/26/2020 • 2 minutes to read

Azure Pipelines
Azure Pipelines can be used to push images to container registries such as Azure Container Registry (ACR),
Docker Hub, Google Container Registries, and others.

Push step in pipeline


The following YAML snippet shows how a Docker registry service connection is used along with a Docker task
to log in and push to a container registry. Docker registry service connections securely store the credentials
needed to log in to the container registry before pushing the image. These service connections can be referenced
directly in the Docker task to log in to the registry, without the need to add a script task for docker login or to
set up secret variables for the username and password.

- task: Docker@2
  displayName: Push image
  inputs:
    containerRegistry: |
      $(dockerHub)
    repository: $(imageName)
    command: push
    tags: |
      test1
      test2

Azure Container Registry


Under the Azure Container Registry option of the Docker registry service connection, the subscription (associated with
the Azure AD identity of the user signed in to Azure DevOps) and a container registry within that subscription can be
chosen to create the service connection. These service connections can subsequently be referenced from a
pipeline task, as shown in the YAML snippet above.
When creating a new pipeline for a repository containing a Dockerfile, the Build and Push to Azure Container
Registry document describes the Docker template automatically recommended by Azure Pipelines upon
detecting a Dockerfile in the repository. The Azure subscription and Azure Container Registry inputs provided
for template configuration are used by Azure Pipelines to automatically create the Docker registry service
connection and even construct a working build-and-push pipeline that references the created service
connection.

Docker Hub
Choose the Docker Hub option under Docker registry service connection and provide the username and
password required for verifying and creating the service connection.

Google Container Registry


The following steps walk through the creation of Docker registry service connection associated with Google
Container Registry:
1. Open your project in the GCP Console and then open Cloud Shell
2. To save time typing your project ID and Compute Engine zone options, set default configuration values by
running the following commands:

gcloud config set project [PROJECT_NAME]


gcloud config set compute/zone [ZONE]

3. Replace [PROJECT_NAME] with the name of your GCP project and replace [ZONE] with the name of the
zone that you're going to use for creating resources. If you're unsure about which zone to pick, use
us-central1-a . For example:

gcloud config set project azure-pipelines-test-project-12345


gcloud config set compute/zone us-central1-a

4. Enable the Container Registry API for your project:

gcloud services enable containerregistry.googleapis.com

5. Create a service account for Azure Pipelines to publish Docker images:

gcloud iam service-accounts create azure-pipelines-publisher --display-name "Azure Pipelines Publisher"

6. Assign the Storage Admin IAM role to the service account:

PROJECT_NUMBER=$(gcloud projects describe \
  $(gcloud config get-value core/project) \
  --format='value(projectNumber)')

AZURE_PIPELINES_PUBLISHER=$(gcloud iam service-accounts list \
  --filter="displayName:Azure Pipelines Publisher" \
  --format='value(email)')

gcloud projects add-iam-policy-binding \
  $(gcloud config get-value core/project) \
  --member serviceAccount:$AZURE_PIPELINES_PUBLISHER \
  --role roles/storage.admin

7. Generate a service account key:

gcloud iam service-accounts keys create \
  azure-pipelines-publisher.json --iam-account $AZURE_PIPELINES_PUBLISHER

tr -d '\n' < azure-pipelines-publisher.json > azure-pipelines-publisher-oneline.json

Launch Code Editor by clicking the button in the upper-right corner of Cloud Shell:

8. Open the file named azure-pipelines-publisher-oneline.json . You'll need the content of this file in one of
the following steps:
9. In your Azure DevOps organization, select Project settings and then select Pipelines -> Service
connections .
10. Click New service connection and choose Docker Registry
11. In the dialog, enter values for the following fields:
Docker Registry: https://gcr.io/[PROJECT-ID] , where [PROJECT-ID] is the name of your GCP project.
Docker ID: _json_key
Docker Password: Paste the contents of azure-pipelines-publisher-oneline.json
Service connection name: gcrServiceConnection

12. Click Save to create the service connection


Docker Content Trust
11/2/2020 • 2 minutes to read

Azure Pipelines
Docker Content Trust (DCT) provides the ability to use digital signatures for data sent to and received from remote
Docker registries. These signatures allow client-side or runtime verification of the integrity and publisher of specific
image tags.

NOTE
A prerequisite for signing an image is a Docker registry with a Notary server attached (such as Docker Hub or Azure
Container Registry).

Signing images in Azure Pipelines


Prerequisites on development machine
1. Use Docker trust's built-in generator or manually generate a delegation key pair. If the built-in generator is used,
the delegation private key is imported into the local Docker trust store; otherwise, the private key needs to be
imported into the local Docker trust store manually.
2. Using the public key generated in the step above, upload the first key to a delegation and initiate the
repository.
Set up pipeline for signing images
1. Fetch the delegation private key, which is present in the local Docker trust store of the development
machine used earlier, and add it as a secure file in Pipelines.
2. Authorize this secure file for use in all pipelines.
3. Create a pipeline based on the following YAML snippet:
pool:
  vmImage: 'Ubuntu 16.04'

variables:
  system.debug: true
  containerRegistryServiceConnection: serviceConnectionName
  imageRepository: foobar/content-trust
  tag: test

steps:
- task: Docker@2
  inputs:
    command: login
    containerRegistry: $(containerRegistryServiceConnection)

- task: DownloadSecureFile@1
  name: privateKey
  inputs:
    secureFile: cc8f3c6f998bee63fefaaabc5a2202eab06867b83f491813326481f56a95466f.key
- script: |
    mkdir -p $(DOCKER_CONFIG)/trust/private
    cp $(privateKey.secureFilePath) $(DOCKER_CONFIG)/trust/private

- task: Docker@2
  inputs:
    command: build
    Dockerfile: '**/Dockerfile'
    containerRegistry: $(containerRegistryServiceConnection)
    repository: $(imageRepository)
    tags: |
      $(tag)
    arguments: '--disable-content-trust=false'

- task: Docker@2
  inputs:
    command: push
    containerRegistry: $(containerRegistryServiceConnection)
    repository: $(imageRepository)
    tags: |
      $(tag)
    arguments: '--disable-content-trust=false'
  env:
    DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE: $(DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE)

NOTE
In the above snippet, the DOCKER_CONFIG variable is set by the login action performed by the Docker task. It is recommended
to set up DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE as a secret variable for the pipeline, because the alternative
approach of using a pipeline variable in YAML would expose the passphrase in plain text.
Deploy to Kubernetes
2/26/2020 • 2 minutes to read

Azure Pipelines
Azure Pipelines can be used to deploy to Kubernetes clusters offered by multiple cloud providers. This document
contains the concepts associated with setting up deployments for any Kubernetes cluster.
While it is possible to use a script to load kubeconfig files onto the agent from a remote location or from secure files
and then use kubectl to perform the deployments, the KubernetesManifest task and Kubernetes service
connection can accomplish this in a simpler and more secure way.
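For reference, the script-based approach mentioned above might look like the following sketch, which downloads a kubeconfig stored as a secure file (the secure file name and manifest path are illustrative):

steps:
- task: DownloadSecureFile@1
  name: kubeConfig
  inputs:
    secureFile: my-cluster-kubeconfig   # illustrative secure file name

- script: |
    # Apply manifests using the downloaded kubeconfig
    kubectl apply -f manifests/ --kubeconfig $(kubeConfig.secureFilePath)
  displayName: Deploy with kubectl using a kubeconfig from secure files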

KubernetesManifest task
The KubernetesManifest task has the added benefits of being able to check for object stability before marking a task as
succeeded or failed, perform artifact substitution, add pipeline traceability-related annotations onto deployed objects,
simplify the creation and referencing of imagePullSecrets, bake manifests using Helm charts, kustomization.yaml, or Docker
Compose files, and aid in deployment strategy rollouts.

Kubernetes resource in environments


A Kubernetes resource in an environment provides a secure way of specifying the credentials required to connect to a
Kubernetes cluster for performing deployments.
Resource creation
With the Azure Kubernetes Service provider option, once the subscription, cluster, and namespace inputs are
provided, the required credentials are fetched and securely stored; in addition, for an RBAC-enabled cluster,
ServiceAccount and RoleBinding objects are created so that the ServiceAccount can perform actions only
on the chosen namespace.
The Generic provider (reusing an existing ServiceAccount) option can be used to configure a connection to any
cloud provider's cluster (AKS/EKS/GKE/OpenShift/etc.).

Example
jobs:
- deployment:
  displayName: Deploy to AKS
  pool:
    vmImage: ubuntu-latest
  environment: contoso.aksnamespace
  strategy:
    runOnce:
      deploy:
        steps:
        - checkout: self
        - task: KubernetesManifest@0
          displayName: Create secret
          inputs:
            action: createSecret
            namespace: aksnamespace
            secretType: dockerRegistry
            secretName: foo-acr-secret
            dockerRegistryEndpoint: fooACR

        - task: KubernetesManifest@0
          displayName: Create secret
          inputs:
            action: createSecret
            namespace: aksnamespace
            secretType: dockerRegistry
            secretName: bar-acr-secret
            dockerRegistryEndpoint: barACR

        - task: KubernetesManifest@0
          displayName: Deploy
          inputs:
            action: deploy
            namespace: aksnamespace
            manifests: manifests/deployment.yml|manifests/service.yml
            containers: |
              foo.azurecr.io/demo:$(tagVariable1)
              bar.azurecr.io/demo:$(tagVariable2)
            imagePullSecrets: |
              foo-acr-secret
              bar-acr-secret

To allow images to be pulled from private registries, the createSecret action is used before the deploy action, along
with instances of the Docker registry service connection, to create imagePullSecrets that are subsequently
referenced in the step corresponding to the deploy action.

TIP
If you're setting up an end-to-end CI/CD pipeline from scratch for a repository containing a Dockerfile, check out the Deploy to
Azure Kubernetes template, which constructs an end-to-end YAML pipeline along with the creation of an environment and a
Kubernetes resource to help visualize these deployments.
A YAML-based pipeline currently supports triggers on a single Git repository. If triggers are required for manifest files
stored in another Git repository, or for Azure Container Registry or Docker Hub, using release
pipelines instead of a YAML-based pipeline is recommended for the Kubernetes deployments.

Alternatives
Instead of using the KubernetesManifest task for deployment, one can also use the following alternatives:
Kubectl task (a minimal sketch of this option follows below)
kubectl invocation in a script. For example: script: kubectl apply -f manifest.yml
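As a rough sketch of the Kubectl task option (the service connection name, namespace, and manifest path are illustrative assumptions):

steps:
- task: Kubernetes@1
  displayName: kubectl apply
  inputs:
    connectionType: Kubernetes Service Connection
    kubernetesServiceEndpoint: k8sSC1   # illustrative service connection name
    namespace: aksnamespace
    command: apply
    arguments: -f manifests/deployment.yml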
Bake manifests
2/26/2020 • 2 minutes to read

Azure Pipelines
The bake action of the Kubernetes manifest task turns templates into manifests with the help of a
template engine, and is intended to provide visibility into the transformation between the input templates
and the final manifest files that are used in the deployments. Helm 2,
kustomize, and kompose are supported as templating options under the bake action.
The baked manifest files are intended to be consumed downstream (by a subsequent task), where these manifest files
are used as inputs for the deploy action of the Kubernetes manifest task.

Helm 2 example
- deployment:
  displayName: Bake and deploy to AKS
  pool:
    vmImage: ubuntu-latest
  environment: contoso.aksnamespace
  strategy:
    runOnce:
      deploy:
        steps:
        - checkout: self
        - task: KubernetesManifest@0
          name: bake
          displayName: Bake K8s manifests from Helm chart
          inputs:
            action: bake
            renderType: helm2
            helmChart: charts/sample
            overrides: 'image.repository:nginx'

        - task: KubernetesManifest@0
          displayName: Deploy K8s manifests
          inputs:
            kubernetesServiceConnection: k8sSC1
            manifests: $(bake.manifestsBundle)
            containers: |
              nginx: 1.7.9

NOTE
Instead of transforming Helm charts into manifest files in the pipeline as shown above, if you intend to use Helm for
directly managing releases and rollbacks, check out the Package and Deploy Helm Charts task.

Kustomize example
steps:
- task: KubernetesManifest@0
  name: bake
  displayName: Bake K8s manifests from kustomization path
  inputs:
    action: bake
    renderType: kustomize
    kustomizationPath: folderContainingKustomizationFile

- task: KubernetesManifest@0
  displayName: Deploy K8s manifests
  inputs:
    kubernetesServiceConnection: k8sSC1
    manifests: $(bake.manifestsBundle)

Kompose example
steps:
- task: KubernetesManifest@0
  name: bake
  displayName: Bake K8s manifests from Docker Compose
  inputs:
    action: bake
    renderType: kompose
    dockerComposeFile: docker-compose.yaml

- task: KubernetesManifest@0
  displayName: Deploy K8s manifests
  inputs:
    kubernetesServiceConnection: k8sSC1
    manifests: $(bake.manifestsBundle)
Multi-cloud Kubernetes deployments
2/26/2020 • 2 minutes to read

Azure Pipelines
With Kubernetes having a standard interface and running the same way on all cloud providers, Azure Pipelines can
be used for deploying to Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), Amazon Elastic
Kubernetes Service (EKS), or clusters from any other cloud providers. This document contains information on how
to connect to each of these clusters, and how to perform parallel deployments to multiple clouds.

Setup environment and Kubernetes resources


Kubernetes resources belonging to environments can be targeted from deployment jobs to enable pipeline
traceability and the ability to diagnose resource health.

NOTE
Deployments to Kubernetes clusters are possible using regular jobs as well, but the benefits of pipeline traceability and the
ability to diagnose resource health are not available with that option.

To set up multi-cloud deployment, create an environment and subsequently add Kubernetes resources associated
with namespaces of Kubernetes clusters. Follow the steps under the linked sections based on the cloud provider of
your Kubernetes cluster -
Azure Kubernetes Service
Generic provider using existing service account (For GKE/EKS/...)

TIP
The generic provider approach based on existing service account works with clusters from any cloud provider, including
Azure. The incremental benefit of using the Azure Kubernetes Service option instead is that it involves creation of new
ServiceAccount and RoleBinding objects (instead of reusing an existing ServiceAccount) so that the newly created RoleBinding
object limits the operations of the ServiceAccount to the chosen namespace only.

Parallel deployments to multiple clouds


The following YAML snippet shows how to perform parallel deployments to clusters from multiple clouds. In
this example, deployments are done to resources corresponding to namespaces from AKS, GKE, EKS, OpenShift, and
DigitalOcean clusters. These five namespaces are associated with Kubernetes resources under the 'contoso' environment.

trigger:
- master

jobs:
- deployment:
  displayName: Deploy to AKS
  pool:
    vmImage: ubuntu-latest
  environment: contoso.aksnamespace
  strategy:
    runOnce:
      deploy:
        steps:
        - checkout: self
        - task: KubernetesManifest@0
          displayName: Deploy to Kubernetes cluster
          inputs:
            action: deploy
            namespace: aksnamespace
            manifests: manifests/*
- deployment:
  displayName: Deploy to GKE
  pool:
    vmImage: ubuntu-latest
  environment: contoso.gkenamespace
  strategy:
    runOnce:
      deploy:
        steps:
        - checkout: self
        - task: KubernetesManifest@0
          displayName: Deploy to Kubernetes cluster
          inputs:
            action: deploy
            namespace: gkenamespace
            manifests: manifests/*
- deployment:
  displayName: Deploy to EKS
  pool:
    vmImage: ubuntu-latest
  environment: contoso.eksnamespace
  strategy:
    runOnce:
      deploy:
        steps:
        - checkout: self
        - task: KubernetesManifest@0
          displayName: Deploy to Kubernetes cluster
          inputs:
            action: deploy
            namespace: eksnamespace
            manifests: manifests/*
- deployment:
  displayName: Deploy to OpenShift
  pool:
    vmImage: ubuntu-latest
  environment: contoso.openshiftnamespace
  strategy:
    runOnce:
      deploy:
        steps:
        - checkout: self
        - task: KubernetesManifest@0
          displayName: Deploy to Kubernetes cluster
          inputs:
            action: deploy
            namespace: openshiftnamespace
            manifests: manifests/*
- deployment:
  displayName: Deploy to DigitalOcean
  pool:
    vmImage: ubuntu-latest
  environment: contoso.digitaloceannamespace
  strategy:
    runOnce:
      deploy:
        steps:
        - checkout: self
        - task: KubernetesManifest@0
          displayName: Deploy to Kubernetes cluster
          inputs:
            action: deploy
            namespace: digitaloceannamespace
            manifests: manifests/*

NOTE
When using the service account option, ensure that a RoleBinding exists, which grants permissions in the edit
ClusterRole to the desired service account. This is needed so that the service account can be used by Azure Pipelines for
creating objects in the chosen namespace.
Deployment strategies for Kubernetes in Azure Pipelines
2/26/2020 • 3 minutes to read

Azure Pipelines
The Kubernetes manifest task currently supports the canary deployment strategy. This document explains the guidelines
and best practices for using this task to set up canary deployments to Kubernetes.

Overview of canary deployment strategy


The canary deployment strategy involves partially deploying the new changes so that they coexist with the current
deployment before performing a full rollout. In this phase, a final check and comparison of the two versions is usually
done on the basis of application health checks and performance monitoring, using metrics originating from both
versions. If the canary is found to be at least on par with or better than the currently deployed version, a complete
rollout of the new changes is initiated. If the canary is found to be performing worse than the currently deployed
version, the new changes are rejected, and a complete rollout, which could have led to regression, is thus avoided.

Canary deployments for Kubernetes


The label selector relationship between pods and services in Kubernetes allows for setting up deployments in such a
way that a single service routes requests to both the stable and the canary variants. The Kubernetes manifest task
utilizes this to facilitate canary deployments in the following way:
If the task is given the inputs of action: deploy and strategy: canary , then for each workload (Deployment,
ReplicaSet, Pod, ...) defined in the input manifest files, a -baseline and a -canary variant of the deployment are
created. For example:
Assume there is a deployment named sampleapp in the input manifest file and that, after completion
of run number 22 of the pipeline, the stable variant of this deployment, named sampleapp , is deployed in
the cluster.
In the subsequent run (in this case run number 23), the Kubernetes manifest task with action: deploy and
strategy: canary results in the creation of sampleapp-baseline and sampleapp-canary deployments,
whose number of replicas is determined by the product of the percentage task input and the
desired number of replicas for the final stable variant of sampleapp as per the input manifest files.
Excluding the number of replicas, the baseline version has the same configuration as the stable variant,
while the canary version has the new changes being introduced by the current run (in this case,
run number 23).
If a manual intervention is set up in the pipeline after the above step, it provides an
opportunity to pause the pipeline so that the pipeline admin can evaluate key metrics for the baseline and
canary versions and decide whether the canary changes are safe and good enough for a complete
rollout.
The action: promote and strategy: canary or action: reject and strategy: canary inputs of the Kubernetes
manifest task can be used to promote or reject the canary changes, respectively. Note that in either case, at the
end of this step, only the stable variant of the workloads declared in the input manifest files remains
deployed in the cluster, while the ephemeral baseline and canary versions are cleaned up.
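To make the inputs described above concrete, a minimal sketch of the deploy and promote steps might look like the following; the manifest path, image name, and percentage are illustrative, and the end-to-end how-to guide referenced below contains the full pipeline:

- task: KubernetesManifest@0
  displayName: Deploy canary
  inputs:
    action: deploy
    strategy: canary
    percentage: '25'
    manifests: manifests/deployment.yml
    containers: foobar.azurecr.io/sampleapp:$(Build.BuildId)

- task: KubernetesManifest@0
  displayName: Promote canary
  inputs:
    action: promote
    strategy: canary
    manifests: manifests/deployment.yml
    containers: foobar.azurecr.io/sampleapp:$(Build.BuildId)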
NOTE
The percentage input mentioned above doesn't result in a percentage traffic split; rather, it refers to the percentage used for
calculating the number of replicas for the baseline and canary variants. A service mesh's service discovery and load balancing
capabilities would help in achieving a true request-based traffic split. Support for canary deployments utilizing Service Mesh
Interface is currently being worked on and will be added to the Kubernetes manifest task soon.

Compare canary against baseline and not against stable variant


While it is possible to compare the canary deployment against the current production deployment, it is better to
compare the canary against an equivalent baseline. Because the baseline uses the same version and
configuration as the currently deployed version, the canary and the baseline are better candidates for comparison, as they
are identical with respect to the following aspects:
Same time of deployment
Same size of deployment
Same type and amount of traffic
This minimizes the effects of factors such as cache warmup time, heap size, and so on in the canary variant,
which could otherwise have adversely affected the analysis.

End-to-end example
An end-to-end example of setting up build and release pipelines to perform canary deployments on Kubernetes
clusters for each change made to application code is available under the how-to guides. This example also
demonstrates the usage of Prometheus for comparing the baseline and canary metrics when the pipeline is paused
using a manual intervention task.
Build and push to Azure Container Registry
2/26/2020 • 3 minutes to read

Azure Pipelines
In this step-by-step guide, you'll learn how to create a pipeline that continuously builds a repository that contains a
Dockerfile. Every time you change your code, the images are automatically pushed to Azure Container Registry.

Prerequisites
A GitHub account, where you can create a repository. If you don't have one, you can create one for free.
An Azure DevOps organization. If you don't have one, you can create one for free. (An Azure DevOps
organization is different from your GitHub organization. Give them the same name if you want alignment
between them.)
If your team already has one, then make sure you're an administrator of the Azure DevOps project that you
want to use.
An Azure account. If you don't have one, you can create one for free.

TIP
If you're new at this, the easiest way to get started is to use the same email address as the owner of both the Azure
Pipelines organization and the Azure subscription.

Get the code


Fork the following repository containing a sample application and a Dockerfile:

https://github.com/MicrosoftDocs/pipelines-javascript-docker

Create a container registry


Sign in to the Azure Portal, and then select the Cloud Shell button in the upper-right corner.

# Create a resource group
az group create --name myapp-rg --location eastus

# Create a container registry
az acr create --resource-group myapp-rg --name myContainerRegistry --sku Basic

Sign in to Azure Pipelines


Sign in to Azure Pipelines. After you sign in, your browser goes to https://dev.azure.com/my-organization-name and
displays your Azure DevOps dashboard.
Within your selected organization, create a project. If you don't have any projects in your organization, you see a
Create a project to get started screen. Otherwise, select the Create Project button in the upper-right corner
of the dashboard.
Create the pipeline
Connect and select repository
1. Sign in to your Azure DevOps organization and navigate to your project.
2. Go to Pipelines , and then select Create Pipeline .
3. Walk through the steps of the wizard by first selecting GitHub as the location of your source code.
4. You might be redirected to GitHub to sign in. If so, enter your GitHub credentials.
5. When the list of repositories appears, select your repository.
6. You might be redirected to GitHub to install the Azure Pipelines app. If so, select Approve & install .
When the Configure tab appears, select Docker .
1. If you are prompted, select the subscription in which you created your registry.
2. Select the container registry that you created above.
3. Select Validate and configure .
As Azure Pipelines creates your pipeline, it:
Creates a Docker registry service connection to enable your pipeline to push images into your
container registry.
Generates an azure-pipelines.yml file, which defines your pipeline.
4. When your new pipeline appears, take a look at the YAML to see what it does (for more information, see
How we build your pipeline below). When you're ready, select Save and run .
5. The commit that will create your new pipeline appears. Select Save and run .
6. If you want, change the Commit message to something like Add pipeline to our repository. When you're
ready, select Save and run to commit the new pipeline into your repository, and then begin the first run of
your new pipeline!
As your pipeline runs, select the build job to watch your pipeline in action.

How we build your pipeline


When you finished selecting options and proceeded to validate and configure the pipeline (see above), Azure
Pipelines created a pipeline for you, using the Docker container template.
The build stage uses the Docker task to build and push the image to the container registry.
- stage: Build
  displayName: Build and push stage
  jobs:
  - job: Build
    displayName: Build job
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        repository: $(imageRepository)
        dockerfile: $(dockerfilePath)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)

Clean up resources
Whenever you're done with the resources you created above, you can use the following command to delete them:

az group delete --name myapp-rg

Type y when prompted.

Learn more
We invite you to learn more about:
The services:
Azure Container Registry
The template used to create your pipeline: docker-container
The method your pipeline uses to connect to the service: Docker registry service connections
Some of the tasks used in your pipeline, and how you can customize them:
Docker task
Kubernetes manifest task
Some of the key concepts for this kind of pipeline:
Jobs
Docker registry service connections (the method your pipeline uses to connect to the service)
Build and deploy to Azure Kubernetes Service
11/2/2020 • 6 minutes to read

Azure Pipelines
Azure Kubernetes Service manages your hosted Kubernetes environment, making it quicker and easier for you to
deploy and manage containerized applications. This service also eliminates the burden of ongoing operations and
maintenance by provisioning, upgrading, and scaling resources on demand, without taking your applications
offline.
In this step-by-step guide, you'll learn how to create a pipeline that continuously builds and deploys your app.
Every time you change your code in a repository that contains a Dockerfile, the images are pushed to your Azure
Container Registry, and the manifests are then deployed to your Azure Kubernetes Service cluster.

Prerequisites
To ensure that your Azure DevOps project has the authorization required to access your Azure subscription, create
an Azure Resource Manager service connection. The service connection is required when you create a pipeline in
the project to deploy to Azure Kubernetes Service. Otherwise, the drop-down lists for Cluster and Container
Registry are empty.
A GitHub account, where you can create a repository. If you don't have one, you can create one for free.
An Azure DevOps organization. If you don't have one, you can create one for free. (An Azure DevOps
organization is different from your GitHub organization. Give them the same name if you want alignment
between them.)
If your team already has one, then make sure you're an administrator of the Azure DevOps project that you
want to use.
An Azure account. If you don't have one, you can create one for free.

TIP
If you're new at this, the easiest way to get started is to use the same email address as the owner of both the Azure
Pipelines organization and the Azure subscription.

Get the code


Fork the following repository containing a sample application and a Dockerfile:

https://github.com/MicrosoftDocs/pipelines-javascript-docker

Create the Azure resources


Sign in to the Azure Portal, and then select the Cloud Shell button in the upper-right corner.
Create a container registry
# Create a resource group
az group create --name myapp-rg --location eastus

# Create a container registry
az acr create --resource-group myapp-rg --name myContainerRegistry --sku Basic

# Create a Kubernetes cluster
az aks create \
  --resource-group myapp-rg \
  --name myapp \
  --node-count 1 \
  --enable-addons monitoring \
  --generate-ssh-keys \
  --kubernetes-version 1.16.10

Sign in to Azure Pipelines


Sign in to Azure Pipelines. After you sign in, your browser goes to https://dev.azure.com/my-organization-name
and displays your Azure DevOps dashboard.
Within your selected organization, create a project. If you don't have any projects in your organization, you see a
Create a project to get started screen. Otherwise, select the Create Project button in the upper-right corner
of the dashboard.

Create the pipeline


Connect and select repository
1. Sign in to your Azure DevOps organization and navigate to your project.
2. Go to Pipelines , and then select Create Pipeline .
3. Walk through the steps of the wizard by first selecting GitHub as the location of your source code.
4. You might be redirected to GitHub to sign in. If so, enter your GitHub credentials.
5. When the list of repositories appears, select your repository.
6. You might be redirected to GitHub to install the Azure Pipelines app. If so, select Approve & install .
When the Configure tab appears, select Deploy to Azure Kubernetes Service .
1. If you are prompted, select the subscription in which you created your registry and cluster.
2. Select the myapp cluster.
3. For Namespace , select Existing , and then select default .
4. Select the name of your container registry.
5. You can leave the image name and the service port set to the defaults.
6. Select the Enable Review App for Pull Requests checkbox to include review-app-related configuration in
the pipeline YAML that is auto-generated in subsequent steps.
7. Select Validate and configure .
As Azure Pipelines creates your pipeline, it:
Creates a Docker registry service connection to enable your pipeline to push images into your
container registry.
Creates an environment and a Kubernetes resource within the environment. For an RBAC enabled
cluster, the created Kubernetes resource implicitly creates ServiceAccount and RoleBinding objects in
the cluster so that the created ServiceAccount can't perform operations outside the chosen
namespace.
Generates an azure-pipelines.yml file, which defines your pipeline.
Generates Kubernetes manifest files. These files are generated by hydrating the deployment.yml and
service.yml templates based on selections you made above.
8. When your new pipeline appears, review the YAML to see what it does. For more information, see how we
build your pipeline below. When you're ready, select Save and run .
9. The commit that will create your new pipeline appears. You can see the generated files mentioned above.
Select Save and run .
10. If you want, change the Commit message to something like Add pipeline to our repository. When you're
ready, select Save and run to commit the new pipeline into your repo, and then begin the first run of your
new pipeline!

See the pipeline run, and your app deployed


As your pipeline runs, watch as your build stage, and then your deployment stage, go from blue (running) to green
(completed). You can select the stages and jobs to watch your pipeline in action.
After the pipeline run is finished, explore what happened and then go see your app deployed. From the pipeline
summary:
1. Select the Environments tab.
2. Select View environment .
3. Select the instance of your app for the namespace you deployed to. If you stuck to the defaults we
mentioned above, then it will be the myapp app in the default namespace.
4. Select the Services tab.
5. Select and copy the external IP address to your clipboard.
6. Open a new browser tab or window and enter <IP address>:8080.
If you're building our sample app, then Hello world appears in your browser.

How we build your pipeline


When you finished selecting options and proceeded to validate and configure the pipeline (see above), Azure
Pipelines created a pipeline for you, using the Deploy to Azure Kubernetes Service template.
The build stage uses the Docker task to build and push the image to the Azure Container Registry.
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build job
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        repository: $(imageRepository)
        dockerfile: $(dockerfilePath)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)

    - task: PublishPipelineArtifact@1
      inputs:
        artifactName: 'manifests'
        path: 'manifests'

The deployment job uses the Kubernetes manifest task to create the imagePullSecret required by Kubernetes
cluster nodes to pull from the Azure Container Registry resource. Manifest files are then used by the Kubernetes
manifest task to deploy to the Kubernetes cluster.
- stage: Deploy
  displayName: Deploy stage
  dependsOn: Build
  jobs:
  - deployment: Deploy
    displayName: Deploy job
    pool:
      vmImage: $(vmImageName)
    environment: 'azooinmyluggagepipelinesjavascriptdocker.aksnamespace'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: DownloadPipelineArtifact@2
            inputs:
              artifactName: 'manifests'
              downloadPath: '$(System.ArtifactsDirectory)/manifests'

          - task: KubernetesManifest@0
            displayName: Create imagePullSecret
            inputs:
              action: createSecret
              secretName: $(imagePullSecret)
              namespace: $(k8sNamespace)
              dockerRegistryEndpoint: $(dockerRegistryServiceConnection)

          - task: KubernetesManifest@0
            displayName: Deploy to Kubernetes cluster
            inputs:
              action: deploy
              namespace: $(k8sNamespace)
              manifests: |
                $(System.ArtifactsDirectory)/manifests/deployment.yml
                $(System.ArtifactsDirectory)/manifests/service.yml
              imagePullSecrets: |
                $(imagePullSecret)
              containers: |
                $(containerRegistry)/$(imageRepository):$(tag)

Clean up resources
Whenever you're done with the resources you created above, you can use the following command to delete them:

az group delete --name myapp-rg

Type y when prompted.

az group delete --name MC_myapp-rg_myapp_eastus

Type y when prompted.

Learn more
We invite you to learn more about:
The services:
Azure Kubernetes Service
Azure Container Registry
The template used to create your pipeline: Deploy to existing Kubernetes cluster template
Some of the tasks used in your pipeline, and how you can customize them:
Docker task
Kubernetes manifest task
Some of the key concepts for this kind of pipeline:
Environments
Deployment jobs
Stages
Docker registry service connections (the method your pipeline uses to connect to the service)
Canary deployment strategy for Kubernetes deployments
11/2/2020 • 13 minutes to read

Azure Pipelines
The canary deployment strategy involves deploying new versions of an application next to stable production versions
to see how the canary version compares against the baseline before promoting or rejecting the deployment. This
step-by-step guide covers the Kubernetes manifest task's canary strategy support for setting up canary
deployments to Kubernetes, and the associated workflow of instrumenting code and using those metrics for
comparing the baseline and canary before making a manual judgment on promoting or rejecting the canary.

Prerequisites
A repository in a container registry (Azure Container Registry, Google Container Registry, or Docker Hub) with
push privileges.
Any Kubernetes cluster (Azure Kubernetes Service, Google Kubernetes Engine, Amazon Elastic Kubernetes
Service).

Sample code
Fork the following repository on GitHub -

https://github.com/MicrosoftDocs/azure-pipelines-canary-k8s

Here's a brief overview of the files in the repository that are used during the course of this guide -
./app:
app.py - A simple Flask-based web server instrumented using the Prometheus instrumentation library for
Python applications. A custom counter is set up for the number of 'good' and 'bad' responses given out,
based on the value of the success_rate variable.
Dockerfile - Used for building the image with each change made to app.py. With each change made to
app.py, the build pipeline (CI) is triggered and the image gets built and pushed to the container registry.
./manifests:
deployment.yml - Contains specification of the sampleapp Deployment workload corresponding to the
image published earlier. This manifest file is used not just for the stable version of Deployment object, but
for deriving the -baseline and -canary variants of the workloads as well.
service.yml - Creates sampleapp service for routing requests to the pods spun up by the Deployments
(stable, baseline, and canary) mentioned above.
./misc
service-monitor.yml - Used to set up a ServiceMonitor object for Prometheus metric scraping.
fortio-deploy.yml - Used to set up a fortio deployment that is subsequently used as a load-testing tool
to send a stream of requests to the sampleapp service deployed earlier. Because the sampleapp service's selector
applies to all three pods resulting from the Deployment objects created during the course of this how-to
guide - sampleapp , sampleapp-baseline , and sampleapp-canary - the stream of requests sent to sampleapp
gets routed to pods under all three deployments.
NOTE
While Prometheus is used for code instrumentation and monitoring in this how-to guide, any equivalent solution like Azure
Application Insights can be used as an alternative as well.

Install prometheus-operator
Use the following command from your development machine (with kubectl and Helm installed and context set to
the cluster you want to deploy against) to install Prometheus on your cluster. Grafana, which is used later in this
how-to guide for visualizing the baseline and canary metrics on dashboards, is installed as part of this Helm chart -

helm install --name sampleapp stable/prometheus-operator

Create service connections


Navigate to Project settings -> Pipelines -> Service connections .
Create a Docker registry service connection associated with your container registry. Name it azure-pipelines-
canary-k8s .
Create a Kubernetes service connection for the Kubernetes cluster and namespace you want to deploy to. Name
it azure-pipelines-canary-k8s .

Setup continuous integration


1. Navigate to Pipelines -> New pipeline and select your repository.
2. Upon reaching the Configure tab, choose Starter pipeline
3. In the Review tab, replace the contents of the pipeline YAML with the following snippet -

trigger:
- master

pool:
  vmImage: Ubuntu-16.04

variables:
  imageName: azure-pipelines-canary-k8s

steps:
- task: Docker@2
  displayName: Build and push image
  inputs:
    containerRegistry: dockerRegistryServiceConnectionName #replace with the name of your Docker registry service connection
    repository: $(imageName)
    command: buildAndPush
    Dockerfile: app/Dockerfile
    tags: |
      $(Build.BuildId)

If the Docker registry service connection you created is associated with foobar.azurecr.io , then the image
is pushed as foobar.azurecr.io/azure-pipelines-canary-k8s:$(Build.BuildId) , based on the above configuration.

Edit manifest file


In manifests/deployment.yml, replace <foobar> with your container registry's URL. For example after replacement,
the image field should look something like contosodemo.azurecr.io/azure-pipelines-canary-k8s .

Setup continuous deployment


Deploy canary stage
YAML
Classic
1. Navigate to Pipelines -> Environments -> New environment
2. Configure the new environment as follows -
Name : akscanary
Resource : choose Kubernetes
3. Click on Next and now configure your Kubernetes resource as follows -
Provider : Azure Kubernetes Service
Azure subscription : Choose the subscription that holds your kubernetes cluster
Cluster : Choose your cluster
Namespace : Create a new namespace with the name canarydemo
4. Click on Validate and Create
5. Navigate to Pipelines -> Select the pipeline you just created -> Edit
6. Change the step you created previously to use a stage, and add two additional steps to publish the
manifests and misc directories as artifacts for use by subsequent stages. You might also want to move a
couple of values to variables for easier use later in your pipeline. Your complete YAML should now look
like this:
trigger:
- master

pool:
  vmImage: Ubuntu-16.04

variables:
  imageName: azure-pipelines-canary-k8s
  dockerRegistryServiceConnection: dockerRegistryServiceConnectionName #replace with the name of your Docker registry service connection
  imageRepository: 'azure-pipelines-canary-k8s'
  containerRegistry: containerRegistry #replace with the name of your container registry, should be in the format foobar.azurecr.io
  tag: '$(Build.BuildId)'

stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: Ubuntu-16.04
    steps:
    - task: Docker@2
      displayName: Build and push image
      inputs:
        containerRegistry: $(dockerRegistryServiceConnection)
        repository: $(imageName)
        command: buildAndPush
        Dockerfile: app/Dockerfile
        tags: |
          $(tag)

    - upload: manifests
      artifact: manifests

    - upload: misc
      artifact: misc

7. Add an additional stage at the bottom of your YAML file to deploy the canary version.
- stage: DeployCanary
  displayName: Deploy canary
  dependsOn: Build
  condition: succeeded()

  jobs:
  - deployment: Deploycanary
    displayName: Deploy canary
    pool:
      vmImage: Ubuntu-16.04
    environment: 'akscanary.canarydemo'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: KubernetesManifest@0
            displayName: Create imagePullSecret
            inputs:
              action: createSecret
              secretName: azure-pipelines-canary-k8s
              dockerRegistryEndpoint: azure-pipelines-canary-k8s

          - task: KubernetesManifest@0
            displayName: Deploy to Kubernetes cluster
            inputs:
              action: 'deploy'
              strategy: 'canary'
              percentage: '25'
              manifests: |
                $(Pipeline.Workspace)/manifests/deployment.yml
                $(Pipeline.Workspace)/manifests/service.yml
              containers: '$(containerRegistry)/$(imageRepository):$(tag)'
              imagePullSecrets: azure-pipelines-canary-k8s

          - task: KubernetesManifest@0
            displayName: Deploy Fortio and ServiceMonitor
            inputs:
              action: 'deploy'
              manifests: |
                $(Pipeline.Workspace)/misc/*

8. Save your pipeline by committing directly to the main branch. This commit should already run your pipeline
successfully.
Manual intervention for promoting or rejecting canary
YAML
Classic
1. Navigate to Pipelines -> Environments -> New environment
2. Configure the new environment as follows -
Name : akspromote
Resource : choose Kubernetes
3. Click on Next and now configure your Kubernetes resource as follows -
Provider : Azure Kubernetes Service
Azure subscription : Choose the subscription that holds your kubernetes cluster
Cluster : Choose your cluster
Namespace : Choose the canarydemo namespace you created earlier
4. Click on Validate and Create
5. Select your new akspromote environment from the list of environments.
6. Click on the button with the three dots in the top right -> Approvals and checks -> Approvals
7. Configure your approval as follows -
Approvers : Add your own user account
Advanced : Make sure the Allow approvers to approve their own runs checkbox is checked.
8. Click on Create
9. Navigate to Pipelines -> Select the pipeline you just created -> Edit
10. Add an additional stage PromoteRejectCanary at the end of your YAML file to promote the changes.

- stage: PromoteRejectCanary
  displayName: Promote or Reject canary
  dependsOn: DeployCanary
  condition: succeeded()

  jobs:
  - deployment: PromoteCanary
    displayName: Promote Canary
    pool:
      vmImage: Ubuntu-16.04
    environment: 'akspromote.canarydemo'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: KubernetesManifest@0
            displayName: promote canary
            inputs:
              action: 'promote'
              strategy: 'canary'
              manifests: '$(Pipeline.Workspace)/manifests/*'
              containers: '$(containerRegistry)/$(imageRepository):$(tag)'
              imagePullSecrets: '$(imagePullSecret)'

11. Add an additional stage RejectCanary at the end of your YAML file to roll back the changes.

- stage: RejectCanary
  displayName: Reject canary
  dependsOn: PromoteRejectCanary
  condition: failed()

  jobs:
  - deployment: RejectCanary
    displayName: Reject Canary
    pool:
      vmImage: Ubuntu-16.04
    environment: 'akscanary.canarydemo'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: KubernetesManifest@0
            displayName: reject canary
            inputs:
              action: 'reject'
              strategy: 'canary'
              manifests: '$(Pipeline.Workspace)/manifests/*'

12. Save your YAML pipeline by clicking on Save and commit it directly to the main branch.
Deploy a stable version
YAML
Classic
For the first run of the pipeline, the stable version of the workloads and their baseline/canary versions do
not yet exist in the cluster. To deploy the stable version:
1. In app/app.py , change success_rate = 5 to success_rate = 10 . This change triggers the pipeline, leading to
a build and push of the image to the container registry. It also triggers the DeployCanary stage.
2. Because you have configured an approval on the akspromote environment, the release waits before executing
that stage.
3. In the summary of the run, click Review and then click Approve in the subsequent fly-out. This results
in the stable version of the workloads (the sampleapp deployment in manifests/deployment.yml) being deployed to
the namespace.

Initiate canary workflow


Once the above release has completed, the stable version of the workload sampleapp exists in the cluster. To
understand how a baseline and a canary are created for comparison purposes with every subsequent deployment,
perform the following change to the simulation application -
1. In app/app.py , change success_rate = 10 to success_rate = 20

The above change triggers the build pipeline, resulting in the build and push of the image to the container registry, which in
turn triggers the release pipeline and the commencement of the Deploy canary stage.

Simulate requests
On your development machine, run the following commands and keep them running to send a constant stream of
requests to the sampleapp service. The sampleapp service routes the requests to the pods spun up by the stable sampleapp
deployment and to the pods spun up by the sampleapp-baseline and sampleapp-canary deployments, because the selector
specified for sampleapp applies to all of these pods.

FORTIO_POD=$(kubectl get pod | grep fortio | awk '{ print $1 }')

kubectl exec -it $FORTIO_POD -c fortio /usr/bin/fortio -- load -allow-initial-errors -t 0 http://sampleapp:8080/

Setup Grafana dashboard


1. Run the following port forwarding command on your local development machine to be able to access
Grafana -

kubectl port-forward svc/sampleapp-grafana 3000:80

2. In a browser, open the following URL -

http://localhost:3000/login

3. When prompted for login credentials, unless the adminPassword value was overridden during prometheus-
operator Helm chart installation, use the following values -
username: admin
password: prom-operator
4. In the left navigation menu, choose + -> Dashboard -> Graph
5. Click anywhere on the newly added panel and type e to edit the panel.
6. In the Metrics tab, enter the following query -

rate(requests_total{pod=~"sampleapp-.*", custom_status="good"}[1m])

7. In the General tab, change the name of this panel to All sampleapp pods
8. In the overview bar at the top of the page, change the duration range to Last 5 minutes or Last 15
minutes .
9. Click on the save icon in the overview bar to save this panel.
10. While the above panel visualizes success rate metrics from all the variants - stable (from sampleapp
deployment), baseline (from sampleapp-baseline deployment) and canary (from sampleapp-canary
deployment), you can visualize just the baseline and canary metrics by adding another panel with the
following configuration -
General tab -> Title : sampleapp baseline and canary
Metrics tab -> query to be used:

rate(requests_total{pod=~"sampleapp-baseline-.*|sampleapp-canary-.*", custom_status="good"}[1m])

NOTE
Note that the panel for baseline and canary metrics will only have metrics available for comparison when the Deploy
canary stage has successfully completed and the Promote/reject canary stage is waiting on manual intervention.

TIP
Set up annotations for Grafana dashboards to visually depict stage completion events for Deploy canary and
Promote/reject canary , so that you know when to start comparing baseline with canary and when the
promotion or rejection of the canary has completed, respectively.

Compare baseline and canary


1. At this point, with the Deploy canary stage having successfully completed (based on the change of
success_rate from '10' to '20') and with the Promote/reject canary stage waiting on manual
intervention, you can compare the success rate (as determined by custom_status=good) of the baseline and
canary variants in the Grafana dashboard. It should look similar to the image below.
2. Based on the observation that the success rate is higher for the canary, promote the canary by clicking
Resume in the manual intervention task.
Train and deploy machine learning models
11/2/2020 • 7 minutes to read

Azure Pipelines
You can use a pipeline to automatically train and deploy machine learning models with the Azure Machine Learning
service. Here you'll learn how to build a machine learning model, and then deploy the model as a web service. You'll
end up with a pipeline that you can use to train your model.

Prerequisites
Before you read this topic, you should understand how the Azure Machine Learning service works.
Follow the steps in Azure Machine Learning quickstart: portal to create a workspace.

Get the code


Fork this repo in GitHub:

https://github.com/MicrosoftDocs/pipelines-azureml

This sample includes an azure-pipelines.yml file at the root of the repository.

Sign in to Azure Pipelines


Sign in to Azure Pipelines. After you sign in, your browser goes to https://dev.azure.com/my-organization-name and
displays your Azure DevOps dashboard.
Within your selected organization, create a project. If you don't have any projects in your organization, you see a
Create a project to get started screen. Otherwise, select the Create Project button in the upper-right corner of
the dashboard.

Create the pipeline


You can use one of the following approaches to create a new pipeline.
YAML
Classic
1. Sign in to your Azure DevOps organization and navigate to your project.
2. Go to Pipelines , and then select Create Pipeline .
3. Walk through the steps of the wizard by first selecting GitHub as the location of your source code.
4. You might be redirected to GitHub to sign in. If so, enter your GitHub credentials.
5. When the list of repositories appears, select your repository.
6. You might be redirected to GitHub to install the Azure Pipelines app. If so, select Approve & install .
When your new pipeline appears:
1. Replace myresourcegroup with the name of the Azure resource group that contains your Azure Machine
Learning service workspace.
2. Replace myworkspace with the name of your Azure Machine Learning service workspace.
3. When you're ready, select Save and run .
4. You're prompted to commit your changes to the azure-pipelines.yml file in your repository. After you're
happy with the message, select Save and run again.
If you want to watch your pipeline in action, select the build job.
You now have a YAML pipeline in your repository that's ready to train your model!

Azure Machine Learning service automation


There are two primary ways to use automation with the Azure Machine Learning service:
The Machine Learning CLI is an extension to the Azure CLI. It provides commands for working with the Azure
Machine Learning service.
The Azure Machine Learning SDK is a Python package that provides programmatic access to the Azure Machine
Learning service.
The Python SDK includes automated machine learning to assist in automating the time-consuming,
iterative tasks of machine learning model development.
The example in this document uses the Machine Learning CLI.

Planning
Before using Azure Pipelines to automate model training and deployment, you must understand the files
needed by the model and what indicates a "good" trained model.
Machine learning files
In most cases, your data science team will provide the files and resources needed to train the machine learning
model. The following files in the example project would be provided by the data scientists:
Training script ( train.py ): The training script contains logic specific to the model that you are training.
Scoring file ( score.py ): When the model is deployed as a web service, the scoring file receives data from
clients and scores it against the model. The output is then returned to the client.
RunConfig settings ( sklearn.runconfig ): Defines how the training script is run on the compute target that is
used for training.
Training environment ( myenv.yml ): Defines the packages needed to run the training script.
Deployment environment ( deploymentConfig.yml ): Defines the resources and compute needed for the
deployment environment.
Deployment environment ( inferenceConfig.yml ): Defines the packages needed to run and score the model in
the deployment environment.
Some of these files are used directly when developing a model, for example the train.py and score.py files.
However, the data scientist may create the run configuration and environment settings programmatically. If so,
they can create the .runconfig and training environment files by using RunConfiguration.save(). Alternatively,
default run configuration files are created for all compute targets already in the workspace when running the
following command.

az ml folder attach --experiment-name myexp -w myws -g mygroup

The files created by this command are stored in the .azureml directory.
Determine the best model
The example pipeline deploys the trained model without doing any performance checks. In a production scenario,
you may want to log metrics so that you can determine the "best" model.
For example, you have a model that is already deployed and has an accuracy of 90. You train a new model based on
new checkins to the repo, and the accuracy is only 80, so you don't want to deploy it. This is an example of a metric
that you can create automation logic around, as you can do a simple comparison to evaluate the model. In other
cases, you may have several metrics that are used to indicate the "best" model, and must be evaluated by a human
before deployment.
Depending on what "best" looks like for your scenario, you may need to create a release pipeline where someone
must inspect the metrics to determine if the model should be deployed.
You should work with your data scientists to understand what metrics are important for your model.
To log metrics during training, use the Run class.

Azure CLI Deploy task


The Azure CLI Deploy task is used to run Azure CLI commands. In the example, it installs the Azure Machine
Learning CLI extension and then uses individual CLI commands to train and deploy the model.
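A hedged sketch of such a step, using the Azure CLI task with an inline script; the service connection name matches the azmldemows connection mentioned below, while the specific az ml commands and their arguments are illustrative (see the sample repository's azure-pipelines.yml for the exact ones):

steps:
- task: AzureCLI@2
  displayName: Install the ML CLI extension and train the model
  inputs:
    azureSubscription: azmldemows
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # Add the Machine Learning extension to the Azure CLI
      az extension add -n azure-cli-ml
      # Attach the repository folder to the workspace, then submit the training script (arguments illustrative)
      az ml folder attach --experiment-name myexp -w myworkspace -g myresourcegroup
      az ml run submit-script -c sklearn -e myexp train.py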

Azure Service Connection


The Azure CLI Deploy task requires an Azure service connection. The Azure service connection stores the
credentials needed to connect from Azure Pipelines to Azure.
The name of the connection used by the example is azmldemows.

To create a service connection, see Create an Azure service connection.

Machine Learning CLI


The following Azure Machine Learning service CLI commands are used in the example for this document:

az ml folder attach: Associates the files in the project with your Azure Machine Learning service workspace.

az ml computetarget create: Creates a compute target that is used to train the model.

az ml experiment list: Lists experiments for your workspace.

az ml run submit-script: Submits the model for training.

az ml model register: Registers a trained model with your workspace.

az ml model deploy: Deploys the model as a web service.

az ml service list: Lists deployed services.

az ml service delete: Deletes a deployed service.

az ml pipeline list: Lists Azure Machine Learning pipelines.

az ml computetarget delete: Deletes a compute target.
For more information on these commands, see the CLI extension reference.

Next steps
Learn how you can further integrate machine learning into your pipelines with the Machine Learning extension.
For more examples of using Azure Pipelines with Azure Machine Learning service, see the following repos:
MLOps (CLI focused)
MLOps (Python focused)
Overview of artifacts in Azure Pipelines
11/2/2020 • 2 minutes to read

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

You can publish and consume many different types of packages and artifacts with Azure Pipelines. Your
continuous integration/continuous deployment (CI/CD) pipeline can publish specific package types to their
respective package repositories (NuGet, npm, Python, and so on). Or you can use build artifacts and pipeline
artifacts to help store build outputs and intermediate files between build steps. You can then add onto, build, test,
or even deploy those artifacts.

NOTE
Aside from being published, Build and Release artifacts will be available as long as that Build or Release is retained unless
otherwise specified. For more information on retaining Build and Release artifacts, see the Retention Policy documentation.

Supported artifact types


The following table describes supported artifact types in Azure Pipelines.

Build artifacts: Build artifacts are the files that you want your build to produce. Build artifacts can be nearly anything that your team needs to test or deploy your app. For example, you've got .dll and .exe executable files and a .PDB symbols file of a .NET or C++ Windows app.

Pipeline artifacts: You can use pipeline artifacts to help store build outputs and move intermediate files between jobs in your pipeline. Pipeline artifacts are tied to the pipeline that they're created in. You can use them within the pipeline and download them from the build, as long as the build is retained. Pipeline artifacts are the new generation of build artifacts. They take advantage of existing services to dramatically reduce the time it takes to store outputs in your pipelines. Only available in Azure DevOps Services.

Maven: You can publish Maven artifacts to Azure Artifacts feeds or Maven repositories.

npm: You can publish npm packages to Azure Artifacts or npm registries.

NuGet: You can publish NuGet packages to Azure Artifacts, other NuGet services (like NuGet.org), or internal NuGet repositories.

PyPI: You can publish Python packages to Azure Artifacts or PyPI repositories.

Symbols: Symbol files contain debugging information for compiled executables. You can publish symbols to symbol servers. Symbol servers enable debuggers to automatically retrieve the correct symbol files without knowing specific product, package, or build information.

Universal: Universal Packages store one or more files together in a single unit that has a name and version. Unlike pipeline artifacts that reside in the pipeline, Universal Packages reside within a feed in Azure Artifacts.

NOTE
Build and Release artifacts will be available as long as that Build or Release run is retained, unless you specify how long to
retain the artifacts. For more information on retaining Build and Release artifacts, see the Retention Policy documentation.

How do I publish and consume artifacts?


Each kind of artifact has a different way of being published and consumed. Some artifacts are specific to
particular development tools, such as .NET, Node.js/JavaScript, Python, and Java. Other artifact types offer more
generic file storage, such as pipeline artifacts and Universal Packages. Refer to the earlier table for specific
guidance on each kind of artifact that we support.
Publish and download artifacts in Azure Pipelines
11/2/2020 • 7 minutes to read

Azure Pipelines
Pipeline artifacts provide a way to share files between stages in a pipeline or between different pipelines. They
are typically the output of a build process that needs to be consumed by another job or be deployed. Artifacts
are associated with the run they were produced in and remain available after the run has completed.

NOTE
Both PublishPipelineArtifact@1 and DownloadPipelineArtifact@2 require a minimum agent version of 2.153.1

Publishing artifacts
NOTE
This feature is only available on Azure DevOps Services. Typically, new features are introduced in the cloud service first,
and then made available on-premises in the next major version or update of Azure DevOps Server. To learn more, see
Azure DevOps Feature Timeline.

To publish (upload) an artifact for the current run of a CI/CD or classic pipeline:
YAML
YAML (task)
Classic
Azure CLI

steps:
- publish: $(System.DefaultWorkingDirectory)/bin/WebApp
artifact: WebApp

NOTE
The publish keyword is a shortcut for the Publish Pipeline Artifact task.

Keep in mind:
Although artifact name is optional, it is a good practice to specify a name that accurately reflects the
contents of the artifact.
The path of the file or folder to publish is required. It can be absolute or relative to
$(System.DefaultWorkingDirectory) .

If you plan to consume the artifact from a job running on a different operating system or file system, you
must ensure all file paths in the artifact are valid for the target environment. For example, a file name
containing a \ or * character will typically fail to download on Windows.
NOTE
You will not be billed by Azure Artifacts for storage of Pipeline Artifacts, Build Artifacts, and Pipeline Caching. For more
information, see Which artifacts count toward my total billed storage.

Caution

Deleting a build that published Artifacts to a file share will result in the deletion of all Artifacts in that UNC path.
Limiting which files are included
.artifactignore files use the identical file-globbing syntax of .gitignore (with very few limitations) to provide
a version-controlled way to specify which files should not be added to a pipeline artifact.
Using an .artifactignore file, it is possible to omit the path from the task configuration, if you want to create a
Pipeline Artifact containing everything in and under the working directory, minus all of the ignored files and
folders. For example, to include only files in the artifact with a .exe extension:

**/*
!*.exe

The above statement instructs the universal package task and the pipeline artifacts task to ignore all files except
the ones with .exe extension.

NOTE
.artifactignore follows the same syntax as .gitignore with some minor limitations. The plus sign character + is not
supported in URL paths, nor in the semantic versioning build metadata ( + suffix) of some package types such
as Maven.

To learn more, see Use the .artifactignore file or the .gitignore documentation.

IMPORTANT
Deleting and/or overwriting Pipeline Artifacts is not currently supported. The recommended workflow if you want to re-
run a failed pipeline job is to include the job ID in the artifact name. $(system.JobId) is the appropriate variable for this
purpose. See System variables to learn more about predefined variables.
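For example, a minimal sketch of that workaround, reusing the publish step shown earlier and appending the job ID so each attempt produces a uniquely named artifact:

steps:
- publish: $(System.DefaultWorkingDirectory)/bin/WebApp
  artifact: 'WebApp-$(System.JobId)'   # artifact name is unique per job attempt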

Downloading artifacts
To download a specific artifact in CI/CD or classic pipelines:
YAML
YAML (task)
Classic
Azure CLI

steps:
- download: current
artifact: WebApp
NOTE
The download keyword is a shortcut to the Download Pipeline Artifact task.

In this context, current means the current run of this pipeline (that is, artifacts published earlier in the run). For
release and deployment jobs, this also includes any source artifacts.
For additional configuration options, see the download keyword in the YAML schema.
Keep in mind:
The Download Pipeline Artifact task can download both build artifacts (published with the Publish
Build Artifacts task) and pipeline artifacts.
By default, files are downloaded to $(Pipeline.Workspace)/{artifact} , where artifact is the name of
the artifact. The folder structure of the artifact is always preserved.
File matching patterns can be used to limit which files from the artifact(s) are downloaded. For more
information on how pattern matching works, see artifact selection.
For advanced scenarios, including downloading artifacts from other pipelines, see the Download Pipeline
Artifact task.
Artifact selection
A single download step can download one or more artifacts. To download multiple artifacts, do not specify an
artifact name and optionally use file matching patterns to limit which artifacts and files are downloaded. The
default file matching pattern is ** , meaning all files in all artifacts.
Single artifact
When an artifact name is specified:
1. Only files for this artifact are downloaded. If this artifact does not exist, the task will fail.
2. Unless the specified download path is absolute, a folder with the same name as the artifact is created
under the download path, and the artifact's files are placed in it.
3. File matching patterns are evaluated relative to the root of the artifact. For example, the pattern *.jar
matches all files with a .jar extension at the root of the artifact.

For example, to download all *.js from the artifact WebApp :


YAML
YAML (task)
Classic
Azure CLI

steps:
- download: current
artifact: WebApp
patterns: '**/*.js'

Files (with the directory structure of the artifact preserved) are downloaded under
$(Pipeline.Workspace)/WebApp .

Multiple artifacts
When no artifact name is specified:
1. Files from multiple artifacts can be downloaded, and the task does not fail if no files are downloaded.
2. A folder is always created under the download path for each artifact with files being downloaded.
3. File matching patterns should assume the first segment of the pattern is (or matches) an artifact name.
For example, WebApp/** matches all files from the WebApp artifact. The pattern */*.dll matches all files
with a .dll extension at the root of each artifact.
For example, to download all .zip files from all source artifacts:
YAML
YAML (task)
Classic
Azure CLI

steps:
- download: current
patterns: '**/*.zip'

Artifacts in release and deployment jobs


If you're using pipeline artifacts to deliver artifacts into a classic release pipeline or deployment job, you do not
need to add a download step; a step is injected automatically. If you need control over the location where
files are downloaded, you can add a Download Pipeline Artifact task or use the download YAML keyword.

NOTE
Artifacts are only downloaded automatically in deployment jobs. In a regular build job, you need to explicitly use the
download step keyword or Download Pipeline Ar tifact task.

To stop artifacts from being downloaded automatically, add a download step and set its value to none:

steps:
- download: none

Migrating from build artifacts


Pipeline artifacts are the next generation of build artifacts and are the recommended way to work with artifacts.
Artifacts published using the Publish Build Artifacts task can continue to be downloaded using Download
Build Artifacts , but can also be downloaded using the latest Download Pipeline Artifact task.
When migrating from build artifacts to pipeline artifacts:
1. For build artifacts, it's common to copy files to $(Build.ArtifactStagingDirectory) and then use the
Publish Build Artifacts task to publish this folder. With the Publish Pipeline Artifact task, just
publish directly from the path containing the files, as shown in the sketch after this list.
2. By default, the Download Pipeline Artifact task downloads files to $(Pipeline.Workspace) . This is the
default and recommended path for all types of artifacts.
3. File matching patterns for the Download Build Artifacts task are expected to start with (or match) the
artifact name, regardless of whether a specific artifact was specified. In the Download Pipeline Artifact
task, patterns should not include the artifact name when an artifact name has already been specified. For
more information, see single artifact selection.
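A minimal sketch of the first migration step, assuming a hypothetical bin/WebApp output folder. With build artifacts you typically stage files and then publish the staging folder; with pipeline artifacts you can publish the folder directly.

# Before: stage the output, then publish it as a build artifact
- task: CopyFiles@2
  inputs:
    sourceFolder: '$(Build.SourcesDirectory)/bin/WebApp'   # hypothetical output folder
    targetFolder: '$(Build.ArtifactStagingDirectory)'
- task: PublishBuildArtifacts@1
  inputs:
    pathToPublish: '$(Build.ArtifactStagingDirectory)'
    artifactName: WebApp

# After: publish the folder directly as a pipeline artifact
- publish: $(Build.SourcesDirectory)/bin/WebApp
  artifact: WebApp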
TIP
For more information on billing and usage tiers, check out the Azure DevOps pricing tool.

FAQ
Can this task publish artifacts to a shared folder or network path?
Not currently, but this feature is planned.
What are build artifacts?
Build artifacts are the files generated by your build. See Build Artifacts to learn more about how to publish and
consume your build artifacts.
Artifacts in Azure Pipelines
11/2/2020 • 7 minutes to read

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

NOTE
We recommend upgrading from build ar tifacts ( PublishBuildArtifacts@1 and DownloadBuildArtifacts@0 ) to
pipeline ar tifacts ( PublishPipelineArtifact@1 and DownloadPipelineArtifact@2 ) for faster output storage speeds.

Artifacts are the files that you want your build to produce. Artifacts can be anything that your team needs to test
or deploy your app.

How do I publish artifacts?


Artifacts can be published at any stage of your pipeline. You can use two methods for configuring what to publish as an
artifact and when to publish it: alongside your code with YAML , or in the Azure Pipelines UI with the classic
editor .

Example: Publish a text file as an artifact


YAML
Classic

- powershell: gci env:* | sort-object name | Format-Table -AutoSize | Out-File $env:BUILD_ARTIFACTSTAGINGDIRECTORY/environment-variables.txt

- task: PublishBuildArtifacts@1
inputs:
pathToPublish: '$(Build.ArtifactStagingDirectory)'
artifactName: drop

pathToPublish : the folder or file path to publish. It can be an absolute or a relative path, and wildcards are not
supported.
artifactName : the name of the artifact that you want to create.

NOTE
You cannot use Bin , App_Data and other folder names reserved by IIS as an artifact name because this content is not
served in response to Web requests. Please see ASP.NET Web Project Folder Structure for more details.

YAML is not supported in TFS.

Example: Publish two sets of artifacts


YAML
Classic

- powershell: gci env:* | sort-object name | Format-Table -AutoSize | Out-File $env:BUILD_ARTIFACTSTAGINGDIRECTORY/environment-variables.txt

- task: PublishBuildArtifacts@1
inputs:
pathToPublish: '$(Build.ArtifactStagingDirectory)'
artifactName: drop1
- task: PublishBuildArtifacts@1
inputs:
pathToPublish: '$(Build.ArtifactStagingDirectory)'
artifactName: drop2

pathToPublish : the folder or file path to publish. It can be an absolute or a relative path, and wildcards are not
supported.
artifactName : the name of the artifact that you want to create.

NOTE
You cannot use Bin , App_Data and other folder names reserved by IIS as an artifact name because this content is not
served in response to Web requests. Please see ASP.NET Web Project Folder Structure for more details.

YAML is not supported in TFS.

Example: Assemble C++ artifacts into one location and publish as an artifact
YAML
Classic

- powershell: gci env:* | sort-object name | Format-Table -AutoSize | Out-File $env:BUILD_ARTIFACTSTAGINGDIRECTORY/environment-variables.txt

- task: CopyFiles@2
inputs:
sourceFolder: '$(Build.SourcesDirectory)'
contents: '**/$(BuildConfiguration)/**/?(*.exe|*.dll|*.pdb)'
targetFolder: '$(Build.ArtifactStagingDirectory)'
- task: PublishBuildArtifacts@1
inputs:
pathToPublish: '$(Build.ArtifactStagingDirectory)'
artifactName: drop

sourceFolder : the folder that contains the files you want to copy. If you leave this value empty, copying will be
done from the root folder of your repo ( $(Build.SourcesDirectory) ).
contents : location(s) of the file(s) that will be copied to the destination folder.
targetFolder : destination folder.
pathToPublish : the folder or file path to publish. It can be an absolute or a relative path, and wildcards are not
supported.
artifactName : the name of the artifact that you want to create.
NOTE
You cannot use Bin , App_Data and other folder names reserved by IIS as an artifact name because this content is not
served in response to Web requests. Please see ASP.NET Web Project Folder Structure for more details.

YAML is not supported in TFS.

How do I consume artifacts?


You can consume your artifacts in different ways: you can use them in your release pipeline, pass them between your
pipeline jobs, download them directly from your pipeline, and even download them from feeds and upstream sources.
Consume artifacts in release pipelines
You can download artifacts produced by either a build pipeline (created in a classic editor) or a YAML pipeline
(created through a YAML file) in a release pipeline and deploy them to the target of your choice.
Consume an artifact in the next job of your pipeline
You can consume an artifact produced by one job in a subsequent job of the pipeline, even when that job is in a
different stage (YAML pipelines). This can be useful to test your artifact.
Download to debug
You can download an artifact directly from a pipeline for use in debugging.
YAML
Classic

- powershell: gci env:* | sort-object name | Format-Table -AutoSize | Out-File $env:BUILD_ARTIFACTSTAGINGDIRECTORY/environment-variables.txt

- task: DownloadBuildArtifacts@0
inputs:
buildType: 'current'
downloadType: 'single'
artifactName: 'drop'
downloadPath: '$(System.ArtifactsDirectory)'

buildType : specify which build artifacts will be downloaded: current (the default value) or from a specific
build.
downloadType : choose whether to download a single artifact or all artifacts of a specific build.
artifactName : the name of the artifact that will be downloaded.
downloadPath : path on the agent machine where the artifacts will be downloaded.
YAML is not supported in TFS.

NOTE
If you are using a deployment task, you can reference your build artifacts by using the $(Agent.BuildDirectory)
variable. See Agent variables for more information on how to use predefined variables.

Tips
Artifact publish location argument: Azure Pipelines/TFS (TFS 2018 RTM and older : Artifact type:
Server) is the best and simplest choice in most cases. This choice causes the artifacts to be stored in Azure
Pipelines or TFS. But if you're using a private Windows agent, you've got the option to drop to a UNC file
share.
Use forward slashes in file path arguments so that they work for all agents. Backslashes don't work for
macOS and Linux agents.
Build artifacts are stored on a Windows filesystem, which causes all UNIX permissions to be lost, including
the execution bit. You might need to restore the correct UNIX permissions after downloading your artifacts
from Azure Pipelines or TFS.
On Azure Pipelines and some versions of TFS, two different variables point to the staging directory:
Build.ArtifactStagingDirectory and Build.StagingDirectory . These are interchangeable.

The directory referenced by Build.ArtifactStagingDirectory is cleaned up after each build.


Deleting a build that published Artifacts to a file share will result in the deletion of all Artifacts in that UNC
path.
You can get build artifacts from the REST API.

Related tasks for publishing artifacts


Use these tasks to publish artifacts:

Utility: Copy Files By copying files to $(Build.ArtifactStagingDirectory) , you can publish multiple files of
different types from different places specified by your matching patterns.
Utility: Delete Files You can prune unnecessary files that you copied to the staging directory.
Utility: Publish Build Artifacts

Explore, download, and deploy your artifacts


When the build is done, if you watched it run, select the Summary tab and see your artifact in the Build
artifacts published section.

When the build is done, if you watched it run, select the name of the completed build and then select the
Artifacts tab to see your artifact.
From here, you can explore or download the artifacts.
You can also use Azure Pipelines to deploy your app by using the artifacts that you've published. See Artifacts in
Azure Pipelines releases.

Publish from TFS to a UNC file share


If you're using a private Windows agent, you can set the artifact publish location option (TFS 2018 RTM and
older : artifact type) to publish your files to a UNC file share .

NOTE
Use a Windows build agent. This option doesn't work for macOS and Linux agents.

Choose file share to copy the artifact to a file share. Common reasons to do this:
The size of your drop is large and consumes too much time and bandwidth to copy.
You need to run some custom scripts or other tools against the artifact.
If you use a file share, specify the UNC file path to the folder. You can control how the folder is created for each
build by using variables. For example: \\my\share\$(Build.DefinitionName)\$(Build.BuildNumber) .
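In a YAML pipeline, the equivalent might look like the following sketch, which assumes the file share option of the Publish Build Artifacts task and the example share path shown above.

- task: PublishBuildArtifacts@1
  inputs:
    pathToPublish: '$(Build.ArtifactStagingDirectory)'
    artifactName: drop
    publishLocation: 'FilePath'    # publish to a file share instead of Azure Pipelines
    targetPath: '\\my\share\$(Build.DefinitionName)\$(Build.BuildNumber)'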

Publish artifacts from TFS 2015 RTM


If you're using TFS 2015 RTM, the steps in the preceding examples are not available. Instead, you copy and publish
your artifacts by using a single task: Build: Publish Build Artifacts.

Next steps
Publish and download artifacts in Azure Pipelines
Define your multi-stage classic pipeline
Releases in Azure Pipelines
11/2/2020 • 2 minutes to read

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
This topic covers classic release pipelines. If you author your pipelines using YAML, see runs.

A release is the package or container that holds a versioned set of artifacts specified in a release pipeline in your
DevOps CI/CD processes. It includes a snapshot of all the information required to carry out all the tasks and
actions in the release pipeline, such as the stages, the tasks for each one, the values of task parameters and
variables, and the release policies such as triggers, approvers, and release queuing options. There can be multiple
releases from one release pipeline, and information about each one is stored and displayed in Azure Pipelines for
the specified retention period.
A deployment is the action of running the tasks for one stage, which results in the application artifacts being
deployed, tests being run, and whatever other actions are specified for that stage. Initiating a release starts each
deployment based on the settings and policies defined in the original release pipeline. There can be multiple
deployments of each release even for one stage. When a deployment of a release fails for a stage, you can
redeploy the same release to that stage. To redeploy a release, simply navigate to the release you want to deploy
and select deploy.
The following schematic shows the relationship between release pipelines, releases, and deployments.

Releases can be created from a release pipeline in several ways:
By a continuous deployment trigger that creates a release when a new version of the source build artifacts
is available.
By using the Release command in the UI to create a release manually from the Releases or the Builds
summary.
By sending a command over the network to the REST interface.
However, the action of creating a release does not mean it will automatically or immediately start a deployment.
For example:
There may be deployment triggers defined for a stage, which force the deployment to wait; this could be for
a manual deployment, until a scheduled day and time, or for successful deployment to another stage.
A deployment started manually from the [Deploy] command in the UI, or from a network command sent
to the REST interface, may specify a final target stage other than the last stage in a release pipeline. For
example, it may specify that the release is deployed only as far as the QA stage and not to the production
stage.
There may be queuing policies defined for a stage, which specify which of multiple deployments will occur,
or the order in which releases are deployed.
There may be pre-deployment approvers or gates defined for a stage, and the deployment will not occur
until all necessary approvals have been granted.
Approvers may defer the release to a stage until a specified date and time.

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and feel free to post your questions, search for answers, or suggest a feature on
our Azure DevOps Developer Community. Support page.
Release artifacts and artifact sources
11/2/2020 • 21 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

NOTE
This topic covers classic release pipelines. To understand artifacts in YAML pipelines, see artifacts.

A release is a collection of artifacts in your DevOps CI/CD processes. An artifact is a deployable component of
your application. Azure Pipelines can deploy artifacts that are produced by a wide range of artifact sources, and
stored in different types of artifact repositories.
When authoring a release pipeline , you link the appropriate artifact sources to your release pipeline. For
example, you might link an Azure Pipelines build pipeline or a Jenkins project to your release pipeline.
When creating a release , you specify the exact version of these artifact sources; for example, the number of a
build coming from Azure Pipelines, or the version of a build coming from a Jenkins project.
After a release is created, you cannot change these versions. A release is fundamentally defined by the versioned
artifacts that make up the release. As you deploy the release to various stages, you will be deploying and
validating the same artifacts in all stages.
A single release pipeline can be linked to multiple artifact sources , of which one is the primary source. In this
case, when you create a release, you specify individual versions for each of these sources.

Artifacts are central to a number of features in Azure Pipelines. Some of the features that depend on the linking of
artifacts to a release pipeline are:
Auto-trigger releases . You can configure new releases to be automatically created whenever a new
version of an artifact is produced. For more information, see Continuous deployment triggers. Note that
the ability to automatically create releases is available for only some artifact sources.
Trigger conditions . You can configure a release to be created automatically, or the deployment of a
release to a stage to be triggered automatically, when only specific conditions on the artifacts are met. For
example, you can configure releases to be automatically created only when a new build is produced from a
certain branch.
Artifact versions . You can configure a release to automatically use a specific version of the build artifacts,
to always use the latest version, or to allow you to specify the version when the release is created.
Artifact variables . Every artifact that is part of a release has metadata associated with it, exposed to tasks
through variables. This metadata includes the version number of the artifact, the branch of code from
which the artifact was produced (in the case of build or source code artifacts), the pipeline that produced
the artifact (in the case of build artifacts), and more. This information is accessible in the deployment tasks.
For more information, see Artifact variables.
Work items and commits . The work items or commits that are part of a release are computed from the
versions of artifacts. For example, each build in Azure Pipelines is associated with a set of work items and
commits. The work items or commits in a release are computed as the union of all work items and commits
of all builds between the current release and the previous release. Note that Azure Pipelines is currently
able to compute work items and commits for only certain artifact sources.
Artifact download . Whenever a release is deployed to a stage, by default Azure Pipelines automatically
downloads all the artifacts in that release to the agent where the deployment job runs. The procedure to
download artifacts depends on the type of artifact. For example, Azure Pipelines artifacts are downloaded
using an algorithm that downloads multiple files in parallel. Git artifacts are downloaded using Git library
functionality. For more information, see Artifact download.

Artifact sources
There are several types of tools you might use in your application lifecycle process to produce or store artifacts.
For example, you might use continuous integration systems such as Azure Pipelines, Jenkins, or TeamCity to
produce artifacts. You might also use version control systems such as Git or TFVC to store your artifacts. Or you
can use repositories such as Azure Artifacts or a NuGet repository to store your artifacts. You can configure Azure
Pipelines to deploy artifacts from all these sources.
By default, a release created from the release pipeline will use the latest version of the artifacts. When you link an
artifact source to a release pipeline, you can change this behavior by selecting one of the other options: use the
latest build from a specific branch by specifying tags, use a specific version, or allow the user to specify the
version when the release is created from the pipeline.
If you link more than one set of artifacts, you can specify which is the primary (default).

IMPORTANT
The Artifacts Default version drop-down list items depend on the repository type of the linked build definition.

The following options are supported by all the repository types: Specify at the time of release creation ,
Specific version , and Latest .

Latest from a specific branch with tags and Latest from the build pipeline default branch with tags
options are supported by the following repository types: TfsGit , GitHub , Bitbucket , and
GitHubEnterprise .

Latest from the build pipeline default branch with tags is not supported by XAML build definitions.

The following sections describe how to work with the different types of artifact sources.
Azure Pipelines
TFVC, Git, and GitHub
Jenkins
Azure Container Registry, Docker, and Kubernetes
Azure Artifacts (NuGet, Maven, npm, Python, and Universal Packages)
External or on-premises TFS
TeamCity
Other sources

Artifact sources - Azure Pipelines


You can link a release pipeline to any of the build pipelines in Azure Pipelines or TFS project collection.

NOTE
You must include a Publish Artifacts task in your build pipeline. For XAML build pipelines, an artifact with the name drop
is published implicitly.

Some of the differences in capabilities between different versions of TFS and Azure Pipelines are:
TFS 2015 : You can link build pipelines only from the same project of your collection. You can link multiple
definitions, but you cannot specify default versions. You can set up a continuous deployment trigger on
only one of the definitions. When multiple build pipelines are linked, the latest builds of all the other
definitions are used, along with the build that triggered the release creation.
TFS 2017 and newer and Azure Pipelines : You can link build pipelines from any of the projects in Azure
Pipelines or TFS. You can link multiple build pipelines and specify default values for each of them. You can
set up continuous deployment triggers on multiple build sources. When any of the builds completes, it will
trigger the creation of a release.
The following features are available when using Azure Pipelines sources:

Auto-trigger releases: New releases can be created automatically when new builds (including XAML builds) are produced. See Continuous Deployment for details. You do not need to configure anything within the build pipeline. See the notes above for differences between versions of TFS.

Artifact variables: A number of artifact variables are supported for builds from Azure Pipelines.

Work items and commits: Azure Pipelines integrates with work items in TFS and Azure Pipelines. These work items are also shown in the details of releases. Azure Pipelines integrates with a number of version control systems such as TFVC and Git, GitHub, Subversion, and Other Git repositories. Azure Pipelines shows the commits only when the build is produced from source code in TFVC or Git.

Artifact download: By default, build artifacts are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.

Deployment section in build: The build summary includes a Deployment section, which lists all the stages to which the build was deployed.

By default, releases execute with a collection-level job authorization scope. That means releases can access
resources in all projects in the organization (or collection for Azure DevOps Server). This is useful when linking
build artifacts from other projects. You can enable Limit job authorization scope to current project for
release pipelines in project settings to restrict access to artifacts for releases in a project.
To set job authorization scope for the organization:
Navigate to your organization settings page in the Azure DevOps user interface.
Select Settings under Pipelines.
Turn on the toggle Limit job authorization scope to current project for release pipelines to limit the scope to
current project. This is the recommended setting, as it enhances security for your pipelines.
To set job authorization scope for a specific project:
Navigate to your project settings page in the Azure DevOps user interface.
Select Settings under Pipelines.
Turn on the toggle Limit job authorization scope to current project to limit the scope to project. This is the
recommended setting, as it enhances security for your pipelines.

NOTE
If the scope is set to project at the organization level, you cannot change the scope in each project.

All jobs in releases run with the job authorization scope set to collection. In other words, these jobs have access to
resources in all projects in your project collection.

Artifact sources - TFVC, Git, and GitHub


There are scenarios in which you may want to consume artifacts stored in a version control system directly,
without passing them through a build pipeline. For example:
You are developing a PHP or a JavaScript application that does not require an explicit build pipeline.
You manage configurations for various stages in different version control repositories, and you want to
consume these configuration files directly from version control as part of the deployment pipeline.
You manage your infrastructure and configuration as code (such as Azure Resource Manager templates)
and you want to manage these files in a version control repository.
Because you can configure multiple artifact sources in a single release pipeline, you can link both a build pipeline
that produces the binaries of the application as well as a version control repository that stores the configuration
files into the same pipeline, and use the two sets of artifacts together while deploying.
Azure Pipelines integrates with Team Foundation Version Control (TFVC) repositories, Git repositories, and GitHub
repositories.
You can link a release pipeline to any of the Git or TFVC repositories in any of the projects in your collection (you
will need read access to these repositories). No additional setup is required when deploying version control
artifacts within the same collection.
When you link a Git or GitHub repository and select a branch, you can edit the default properties of the artifact
types after the artifact has been saved. This is particularly useful in scenarios where the branch for the stable
version of the artifact changes, and continuous delivery releases should use this branch to obtain newer versions
of the artifact. You can also specify details of the checkout, such as whether to check out submodules and LFS-tracked
files, and the shallow fetch depth.
When you link a TFVC branch , you can specify the changeset to be deployed when creating a release.
The following features are available when using TFVC, Git, and GitHub sources:
Auto-trigger releases: You can configure a continuous deployment trigger for pushes into the repository in a release pipeline. This can automatically trigger a release when a new commit is made to a repository. See Triggers.

Artifact variables: A number of artifact variables are supported for version control sources.

Work items and commits: Azure Pipelines cannot show work items or commits associated with releases when using version control artifacts.

Artifact download: By default, version control artifacts are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.

By default, releases execute with a collection-level job authorization scope. That means releases can access
all repositories in the organization (or collection for Azure DevOps Server). You can enable Limit job
authorization scope to current project for release pipelines in project settings to restrict access to artifacts
for releases in a project.

Artifact sources - Jenkins


To consume Jenkins artifacts, you must create a service connection with credentials to connect to your Jenkins
server. For more information, see service connections and Jenkins service connection. You can then link a Jenkins
project to a release pipeline. The Jenkins project must be configured with a post build action to publish the
artifacts.
The following features are available when using Jenkins sources:

Auto-trigger releases: You can configure a continuous deployment trigger for pushes into the repository in a release pipeline. This can automatically trigger a release when a new commit is made to a repository. See Triggers.

Artifact variables: A number of artifact variables are supported for builds from Jenkins.

Work items and commits: Azure Pipelines cannot show work items or commits for Jenkins builds.

Artifact download: By default, Jenkins builds are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.

Artifacts generated by Jenkins builds are typically propagated to storage repositories for archiving and sharing.
Azure blob storage is one of the supported repositories, allowing you to consume Jenkins projects that publish to
Azure storage as artifact sources in a release pipeline. Deployments download the artifacts automatically from
Azure to the agents. In this configuration, connectivity between the agent and the Jenkins server is not required.
Microsoft-hosted agents can be used without exposing the server to the internet.
NOTE
Azure Pipelines may not be able to contact your Jenkins server if, for example, it is within your enterprise network. In this
case you can integrate Azure Pipelines with Jenkins by setting up an on-premises agent that can access the Jenkins server.
You will not be able to see the name of your Jenkins projects when linking to a build, but you can type this into the link
dialog field.

For more information about Jenkins integration capabilities, see Azure Pipelines Integration with Jenkins Jobs,
Pipelines, and Artifacts.

Artifact sources - Azure Container Registry, Docker, Kubernetes


When deploying containerized apps, the container image is first pushed to a container registry. After the push is
complete, the container image can be deployed to the Web App for Containers service or a Docker/Kubernetes
cluster. You must create a service connection with credentials to connect to your service to deploy images located
there, or to Azure. For more information, see service connections.
The following features are available when using Azure Container Registry, Docker, Kubernetes sources:

Auto-trigger releases: You can configure a continuous deployment trigger for images. This can automatically trigger a release when a new commit is made to a repository. See Triggers.

Artifact variables: A number of artifact variables are supported for builds.

Work items and commits: Azure Pipelines cannot show work items or commits.

Artifact download: By default, builds are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.

NOTE
In the case of continuous deployment from multiple artifact sources (multiple registries/repositories), it isn't possible to map
artifact sources to trigger particular stages. A release will be created anytime there is a push to any of the artifact sources. If
you wish to map an artifact source to trigger a specific stage, the recommended way is to decompose the release pipeline
into multiple release pipelines.

Artifact sources - Azure Artifacts


Scenarios where you may want to consume these artifacts are:
1. You have your application build (such as TFS, Azure Pipelines, TeamCity, Jenkins) published as a package to
Azure Artifacts and you want to consume the artifact in a release.
2. As part of your application deployment, you need additional packages stored in Azure Artifacts.
When you link such an artifact to your release pipeline, you must select the Feed, Package, and the Default version
for the package. You can choose to pick up the latest version of the package, use a specific version, or select the
version at the time of release creation. During deployment, the package is downloaded to the agent folder and the
contents are extracted as part of the job execution.
The following features are available when using Azure Artifacts sources:
Auto-trigger releases: You can configure a continuous deployment trigger for packages. This can automatically trigger a release when a package is updated. See Triggers.

Artifact variables: A number of artifact variables are supported for packages.

Work items and commits: Azure Pipelines cannot show work items or commits.

Artifact download: By default, packages are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.

Handling Maven Snapshots


When obtaining Maven artifacts and the artifact is a snapshot build, multiple versions of that snapshot may be
downloaded at once (e.g. myApplication-2.1.0.BUILD-20190920.220048-3.jar ,
myApplication-2.1.0.BUILD-20190820.221046-2.jar , myApplication-2.1.0.BUILD-20190820.220331-1.jar ). You will
likely need to add additional automation to keep only the latest artifact prior to subsequent deployment steps.
This can be accomplished with the following PowerShell snippet:

# Remove all copies of the artifact except the one with the lexicographically highest value.
Get-Item "myApplication*.jar" | Sort-Object -Descending Name | Select-Object -SkipIndex 0 | Remove-Item

For more information, see the Azure Artifacts overview.

Artifact sources - External or on-premises TFS


You can use Azure Pipelines to deploy artifacts published by an on-premises TFS server. You don't need to make
the TFS server visible on the Internet; you just set up an on-premises automation agent. Builds from an on-
premises TFS server are downloaded directly into the on-premises agent, and then deployed to the specified
target servers. They will not leave your enterprise network. This allows you to leverage all of your investments in
your on-premises TFS server, and take advantage of the release capabilities in Azure Pipelines.

TIP
Using this mechanism, you can also deploy artifacts published in one Azure Pipelines subscription in another Azure
Pipelines, or deploy artifacts published in one Team Foundation Server from another Team Foundation Server.

To enable these scenarios, you must install the TFS artifacts for Azure Pipelines extension from Visual Studio
Marketplace. Then create a service connection with credentials to connect to your TFS server (see service
connections for details).
You can then link a TFS build pipeline to your release pipeline. Choose External TFS Build in the Type list.
The following features are available when using external TFS sources:

Auto-trigger releases: You cannot configure a continuous deployment trigger for external TFS sources in a release pipeline. To automatically create a new release when a build is complete, you would need to add a script to your build pipeline in the external TFS server to invoke Azure Pipelines REST APIs and to create a new release.

Artifact variables: A number of artifact variables are supported for external TFS sources.

Work items and commits: Azure Pipelines cannot show work items or commits for external TFS sources.

Artifact download: By default, External TFS artifacts are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.

NOTE
Azure Pipelines may not be able to contact an on-premises TFS server in case it's within your enterprise network. In that
case you can integrate Azure Pipelines with TFS by setting up an on-premises agent that can access the TFS server. You will
not be able to see the name of your TFS projects or build pipelines when linking to a build, but you can include those
variables in the link dialog fields. In addition, when you create a release, Azure Pipelines may not be able to query the TFS
server for the build numbers. Instead, enter the Build ID (not the build number) of the desired build in the appropriate
field, or select the Latest build.

Artifact sources - TeamCity


To integrate with TeamCity, you must first install the TeamCity artifacts for Azure Pipelines extension from
Marketplace.
To consume TeamCity artifacts, start by creating a service connection with credentials to connect to your TeamCity
server (see service connections for details).
You can then link a TeamCity build configuration to a release pipeline. The TeamCity build configuration must be
configured with an action to publish the artifacts.
The following features are available when using TeamCity sources:

Auto-trigger releases: You cannot configure a continuous deployment trigger for TeamCity sources in a release pipeline. To create a new release automatically when a build is complete, add a script to your TeamCity project that invokes the Azure Pipelines REST APIs to create a new release.

Artifact variables: A number of artifact variables are supported for builds from TeamCity.

Work items and commits: Azure Pipelines cannot show work items or commits for TeamCity builds.

Artifact download: By default, TeamCity builds are downloaded to the agent. You can configure an option in the stage to skip the download of artifacts.

NOTE
Azure Pipelines may not be able to contact your TeamCity server if, for example, it is within your enterprise network. In this
case you can integrate Azure Pipelines with TeamCity by setting up an on-premises agent that can access the TeamCity
server. You will not be able to see the name of your TeamCity projects when linking to a build, but you can type this into the
link dialog field.

Artifact sources - Custom artifacts


In addition to built-in artifact sources, Azure Artifacts supports integrating any custom artifact source with the
artifact extensibility model. You can plug in any custom artifact source, and Azure DevOps will provide a first-class
user experience and seamless integration.
For more information, see Azure DevOps artifact extensibility model.

Artifact sources - Other sources


Your artifacts may be created and exposed by other types of sources such as a NuGet repository. While we
continue to expand the types of artifact sources supported in Azure Pipelines, you can start using it without
waiting for support for a specific source type. Simply skip the linking of artifact sources in a release pipeline, and
add custom tasks to your stages that download the artifacts directly from your source.

Artifact source alias


To ensure the uniqueness of every artifact download, each artifact source linked to a release pipeline is
automatically provided with a specific download location known as the source alias . This location can be
accessed through the variable:
$(System.DefaultWorkingDirectory)\[source alias]

This uniqueness also ensures that, if you later rename a linked artifact source in its original location (for example,
rename a build pipeline in Azure Pipelines or a project in Jenkins), you don't need to edit the task properties
because the download location defined in the agent does not change.
The source alias is, by default, the name of the source selected when you linked the artifact source, prefixed with
an underscore; depending on the type of the artifact source this will be the name of the build pipeline, job, project,
or repository. You can edit the source alias from the artifacts tab of a release pipeline; for example, when you
change the name of the build pipeline and you want to use a source alias that reflects the name of the build
pipeline.

Primary source
When you link multiple artifact sources to a release pipeline, one of them is designated as the primary artifact
source. The primary artifact source is used to set a number of pre-defined variables. It can also be used in naming
releases.

Artifact download
When you deploy a release to a stage, the versioned artifacts from each of the sources are, by default,
downloaded to the automation agent so that tasks running within that stage can deploy these artifacts. The
artifacts downloaded to the agent are not deleted when a release is completed. However, when you initiate the
next release, the downloaded artifacts are deleted and replaced with the new set of artifacts.
A new unique folder in the agent is created for every release pipeline when you initiate a release, and the artifacts
are downloaded into that folder. The $(System.DefaultWorkingDirectory) variable maps to this folder.
Azure Pipelines currently does not perform any optimization to avoid downloading the unchanged artifacts if the
same release is deployed again. In addition, because the previously downloaded contents are always deleted
when you initiate a new release, Azure Pipelines cannot perform incremental downloads to the agent.
You can, however, instruct Azure Pipelines to skip the automatic download of artifacts to the agent for a specific
job and stage of the deployment if you wish. Typically, you will do this when the tasks in that job do not require
any artifacts, or if you implement custom code in a task to download the artifacts you require.
In Azure Pipelines, you can, however, select which artifacts you want to download to the agent for a specific job
and stage of the deployment. Typically, you will do this to improve the efficiency of the deployment pipeline when
the tasks in that job do not require all or any of the artifacts, or if you implement custom code in a task to
download the artifacts you require.

Artifact variables
Azure Pipelines exposes a set of pre-defined variables that you can access and use in tasks and scripts; for
example, when executing PowerShell scripts in deployment jobs. When there are multiple artifact sources linked
to a release pipeline, you can access information about each of these. For a list of all pre-defined artifact variables,
see variables.

Additional information
Code repo sources in Azure Pipelines
Jenkins artifacts in Azure Pipelines
TeamCity extension for Continuous Integration
External TFS extension for Release Management

Related topics
Release pipelines
Stages

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and feel free to post your questions, search for answers, or suggest a feature on
our Azure DevOps Developer Community. Support page.
Set up Azure Pipelines and Maven
11/2/2020 • 2 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

This guide covers the basics of using Azure Pipelines to work with Maven artifacts in Azure Artifacts feeds.

Set up your feed


1. Select Build and Release .
2. Select Packages .
3. With your feed selected, select Connect to feed .
4. Select Maven .

5. Click on "Generate Maven Credentials"


6. Create a local file named settings.xml from the following template and then paste the generated XML,
replacing the comment:

IMPORTANT
Do not commit this file into your repository.
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd">
  <servers>
    <!-- Paste the <server> snippet generated by Azure DevOps here -->
  </servers>
</settings>

7. Below the settings.xml snippet in the generated credentials dialog, there is a snippet to be added to the
<repositories> section of your project's pom.xml . Add that snippet. If you intend to use Maven to publish to
Artifacts, add the snippet to the <distributionManagement> section of the POM file as well. Commit and push
this change.
8. Upload the settings.xml file you created earlier as a Secure File into the pipeline's library.
9. Add tasks to your pipeline to download the secure file and to copy it to the ~/.m2 directory. The latter can
be accomplished with the following PowerShell script, where settingsxml is the reference name of the
"Download secure file" task:

New-Item -Type Directory -Force "${HOME}/.m2"
Copy-Item -Force "$(settingsxml.secureFilePath)" "${HOME}/.m2/settings.xml"
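A minimal sketch of how those two tasks might look in YAML, assuming the secure file is named settings.xml in the library and the download step uses the reference name settingsxml expected by the script above.

- task: DownloadSecureFile@1
  name: settingsxml                  # reference name used by $(settingsxml.secureFilePath)
  inputs:
    secureFile: 'settings.xml'       # assumed name of the secure file in the library
- powershell: |
    New-Item -Type Directory -Force "${HOME}/.m2"
    Copy-Item -Force "$(settingsxml.secureFilePath)" "${HOME}/.m2/settings.xml"
  displayName: 'Copy settings.xml to ~/.m2'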

1. Navigate to Artifacts .
2. With your feed selected, select Connect to feed .
3. Select Maven .

4. Set up your project by following these steps:


a. Add the repo to both your pom.xml's <repositories> and <distributionManagement> sections.
Replace the [ORGANIZATION_NAME] placeholder with your own organization.
<repository>
  <id>[ORGANIZATION_NAME]</id>
  <url>https://pkgs.dev.azure.com/[ORGANIZATION_NAME]/_packaging/[ORGANIZATION_NAME]/maven/v1</url>
  <releases>
    <enabled>true</enabled>
  </releases>
  <snapshots>
    <enabled>true</enabled>
  </snapshots>
</repository>

b. Add or edit the settings.xml file in ${user.home}/.m2. Replace the [ORGANIZATION_NAME] placeholder
with your own organization.

<server>
<id>[ORGANIZATION_NAME]</id>
<username>[ORGANIZATION_NAME]</username>
<password>[PERSONAL_ACCESS_TOKEN]</password>
</server>

c. Generate a Personal Access Token with Packaging read & write scopes and paste it into the
<password> tag.

IMPORTANT
In order to automatically authenticate Maven feeds from Azure Artifacts, you must have the mavenFeedAuthenticate
argument set to true in your Maven task. See Maven build task for more information.
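A minimal sketch of that setting in a YAML pipeline, assuming a pom.xml at the repository root; the argument name below follows the note above, so confirm it against the Maven build task reference for your task version.

- task: Maven@3
  inputs:
    mavenPomFile: 'pom.xml'
    goals: 'deploy'
    mavenFeedAuthenticate: true    # assumption: argument name as referenced above; authenticates Azure Artifacts Maven feeds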

Restore your package


Run this command in your project directory to restore your package.

mvn install

Publish your package


Run this command in your project directory to publish your package.

mvn deploy
Publish npm packages (YAML/Classic)
11/2/2020 • 3 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

You can publish npm packages produced by your build to:


Azure Artifacts or the TFS Package Management service.
Other registries such as https://registry.npmjs.org/ .
YAML
Classic
To publish to an Azure Artifacts feed, set the Project Collection Build Service identity to be a Contributor on the feed. To learn more about permissions to Package Management feeds, see Secure and share packages using feed permissions.
Add the following snippet to your azure-pipelines.yml file.

- task: Npm@1
  inputs:
    command: publish
    publishRegistry: useFeed
    publishFeed: projectName/feedName

useFeed: this option allows the use of an Azure Artifacts feed in the same organization as the build.
feedName: the name of the feed you want to publish to.
projectName: the name of your project.

NOTE
All new feeds that were created through the classic user interface are project scoped feeds. You must include the project
name in the publishFeed parameter: publishFeed: '<projectName>/<feedName>' . See Project-scoped feeds vs.
Organization-scoped feeds to learn about the difference between the two types.

To publish to an external npm registry, you must first create a service connection to point to that feed. You can do this by going to Project settings, selecting Services, and then creating a New service connection. Select the npm option for the service connection. Fill in the registry URL and the credentials to connect to the registry. See Service connections to learn more about how to create, manage, secure, and use a service connection.
To publish a package to an npm registry, add the following snippet to your azure-pipelines.yml file.
- task: Npm@1
  inputs:
    command: publish
    publishEndpoint: '<copy and paste the name of the service connection here>'

publishEndpoint : This argument is required when publishRegistry == UseExternalRegistry . Copy and paste
the name of the service connection you created earlier.
For a list of other options, see the npm task to install and publish your npm packages, or run an npm command.
YAML is not supported in TFS.

NOTE
Ensure that your working folder has an .npmrc file with a registry= line, as described in the Connect to feed screen
in your feed.
The build does not support using the publishConfig property to specify the registry to which you're publishing. The
build will fail, potentially with unrelated authentication errors, if you include the publishConfig property in your
package.json configuration file.
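For reference, the .npmrc for a project-scoped Azure Artifacts feed typically contains a single registry= line of roughly the following shape (the organization, project, and feed names here are placeholders; copy the exact URL from the Connect to feed screen):

registry=https://pkgs.dev.azure.com/<organization>/<project>/_packaging/<feedName>/npm/registry/
always-auth=true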

FAQ
Where can I learn about the Azure Pipelines and TFS Package Management ser vice?
Check out the Azure Artifacts landing page for details about Artifacts in Azure Pipelines.
How to publish packages to my feed from the command line?
See Publish your package to an npm feed using the CLI for more information.
How to create a token that lasts longer than 90 days?
See Set up your client's npmrc for more information on how to set up authentication to Azure Artifacts
feeds.
Do you recommend using scopes or upstream sources?
We recommend using upstream sources because it gives you the most flexibility to use a combination of
scoped- and non-scoped packages in your feed, as well as scoped- and non-scoped packages from
npmjs.com.
See Use npm scopes and Use packages from npmjs.com for more details.
Publish to NuGet feeds (YAML/Classic)
11/2/2020 • 8 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

You can publish NuGet packages from your build to NuGet feeds using the Pipeline tasks as well as the Classic
user interface. You can publish these packages to:
Azure Artifacts or the TFS Package Management service.
Other NuGet services such as NuGet.org.
Your internal NuGet repository.

Create a NuGet package


There are various ways to create NuGet packages during a build. If you're already using MSBuild or some other
task to create your packages, skip this section and publish your packages. Otherwise, add a NuGet task:
YAML
Classic
To create a package, add the following snippet to your azure-pipelines.yml file.

- task: NuGetCommand@2
  inputs:
    command: pack
    packagesToPack: '**/*.csproj'

The NuGet task supports a number of options. The following list describes some of the key ones. The task
documentation describes the rest.
packagesToPack : The path to the files that describe the package you want to create. If you don't have these,
see the NuGet documentation to get started.
configuration : The default is $(BuildConfiguration) unless you want to always build either Debug or
Release packages, or unless you have a custom build configuration.
packDestination : The default is $(Build.ArtifactStagingDirectory) . If you set this, make a note of the
location so you can use it in the publish task.
YAML is not supported in TFS.

Package versioning
In NuGet, a particular package is identified by its name and version number. A recommended approach to
versioning packages is to use Semantic Versioning. Semantic version numbers have three numeric components,
Major.Minor.Patch .
When you fix a bug, you increment the patch ( 1.0.0 to 1.0.1 ). When you release a new backward-compatible
feature, you increment the minor version and reset the patch version to 0 ( 1.4.17 to 1.5.0 ). When you make a
backward-incompatible change, you increment the major version and reset the minor and patch versions to 0 (
2.6.5 to 3.0.0 ).

In addition to Major.Minor.Patch , Semantic Versioning provides for a prerelease label. Prerelease labels are a
hyphen ( - ) followed by whatever letters and numbers you want. Version 1.0.0-alpha , 1.0.0-beta , and
1.0.0-foo12345 are all prerelease versions of 1.0.0 . Even better, Semantic Versioning specifies that when you
sort by version number, those prerelease versions fit exactly where you'd expect: 0.99.999 < 1.0.0-alpha <
1.0.0 < 1.0.1-beta .

When you create a package in continuous integration (CI), you can use Semantic Versioning with prerelease
labels. You can use the NuGet task for this purpose. It supports the following formats:
Use the same versioning scheme for your builds and packages, if that scheme has at least three parts
separated by periods. The following build pipeline formats are examples of versioning schemes that are
compatible with NuGet:
$(Major).$(Minor).$(rev:.r) , where Major and Minor are two variables defined in the build pipeline.
This format will automatically increment the build number and the package version with a new patch
number. It will keep the major and minor versions constant, until you change them manually in the
build pipeline.
$(Major).$(Minor).$(Patch).$(date:yyyyMMdd) , where Major , Minor , and Patch are variables defined
in the build pipeline. This format will create a new prerelease label for the build and package while
keeping the major, minor, and patch versions constant.
Use a version that's different from the build number. You can customize the major, minor, and patch
versions for your packages in the NuGet task, and let the task generate a unique prerelease label based on
date and time.
Use a script in your build pipeline to generate the version.
YAML
Classic
This example shows how to use the date and time as the prerelease label.

variables:
  Major: '1'
  Minor: '0'
  Patch: '0'

steps:
- task: NuGetCommand@2
  inputs:
    command: pack
    versioningScheme: byPrereleaseNumber
    majorVersion: '$(Major)'
    minorVersion: '$(Minor)'
    patchVersion: '$(Patch)'

For a list of other possible values for versioningScheme , see the NuGet task.
YAML is not supported in TFS.
Although Semantic Versioning with prerelease labels is a good solution for packages produced in CI builds,
including a prerelease label is not ideal when you want to release a package to your users. The challenge is that
after packages are produced, they're immutable. They can't be updated or replaced.
When you're producing a package in a build, you can't know whether it will be the version that you aim to release
to your users or just a step along the way toward that release. Although none of the following solutions are ideal,
you can use one of these depending on your preference:
After you validate a package and decide to release it, produce another package without the prerelease
label and publish it. The drawback of this approach is that you have to validate the new package again, and
it might uncover new issues.
Publish only packages that you want to release. In this case, you won't use a prerelease label for every
build. Instead, you'll reuse the same package version for all packages. Because you do not publish
packages from every build, you do not cause a conflict.

NOTE
Please note that DotNetCore and DotNetStandard packages should be packaged with the DotNetCoreCLI@2 task to
avoid System.InvalidCastExceptions. See the .NET Core CLI task for more details.

- task: DotNetCoreCLI@2
  displayName: 'dotnet pack $(buildConfiguration)'
  inputs:
    command: pack
    versioningScheme: byPrereleaseNumber
    majorVersion: '$(Major)'
    minorVersion: '$(Minor)'
    patchVersion: '$(Patch)'

Publish your packages


In the previous section, you learned how to create a package with every build. When you're ready to share the
changes to your package with your users, you can publish it.
YAML
Classic
To publish to an Azure Artifacts feed, set the Project Collection Build Service identity to be a Contributor on the feed. To learn more about permissions to Package Management feeds, see Secure and share packages using feed permissions. Add the following snippet to your azure-pipelines.yml file.

steps:
- task: NuGetAuthenticate@0
  displayName: 'NuGet Authenticate'
- task: NuGetCommand@2
  displayName: 'NuGet push'
  inputs:
    command: push
    publishVstsFeed: '<projectName>/<feed>'
    allowPackageConflicts: true

NOTE
Artifact feeds that were created through the classic user interface are project scoped feeds. You must include the project
name in the publishVstsFeed parameter: publishVstsFeed: '<projectName>/<feed>' . See Project-scoped feeds vs.
Organization-scoped feeds to learn about the difference between the two types.

To publish to an external NuGet feed, you must first create a service connection to point to that feed. You can do this by going to Project settings, selecting Service connections, and then creating a New service connection. Select the NuGet option for the service connection. To connect to the feed, fill in the feed URL and the API key or token.
To publish a package to a NuGet feed, add the following snippet to your azure-pipelines.yml file.

- task: NuGetAuthenticate@0
  inputs:
    nuGetServiceConnections: '<Name of the NuGet service connection>'
- task: NuGetCommand@2
  inputs:
    command: push
    nuGetFeedType: external
    versioningScheme: byEnvVar
    versionEnvVar: <VersionVariableName>

YAML is not supported in TFS.

Publish symbols for your packages


When you push packages to a Package Management feed, you can also publish symbols.

FAQ
Where can I learn more about Azure Artifacts and the TFS Package Management service?
Package Management in Azure Artifacts and TFS
Publish Python packages in Azure Pipelines
11/2/2020 • 2 minutes to read • Edit Online

You can publish Python packages produced by your build to:


Azure Artifacts
Other repositories such as https://pypi.org/

To publish Python packages produced by your build, you'll use twine, a widely used tool for publishing Python
packages. This guide covers how to do the following in your pipeline:
1. Install twine on your build agent
2. Authenticate twine with your Azure Artifacts feeds
3. Use a custom task that invokes twine to publish your Python packages

Install twine
First, you'll need to run pip install twine to ensure the build agent has twine installed.
YAML
Classic

- script: 'pip install twine'

Check out the script YAML task reference for the schema for this command.

Authenticate Azure Artifacts with twine


To use twine to publish Python packages, you first need to set up authentication. The Python Twine Authenticate
task stores your authentication credentials in an environment variable ( PYPIRC_PATH ). twine will reference this
variable later.
YAML
Classic
To authenticate with twine , add the following snippet to your azure-pipelines.yml file.
The example below will enable you to authenticate to a list of Azure Artifacts feeds as well as a list of service
connections from external organizations. If you need to authenticate to a single feed, you must replace the
following arguments: artifactFeeds and externalFeeds with artifactFeed and externalFeed and specify your
feed name accordingly.

- task: TwineAuthenticate@0
  inputs:
    artifactFeeds: 'feed_name1, feed_name2'
    externalFeeds: 'feed_name1, feed_name2'

artifactFeeds: a list of Azure Artifacts feeds within your organization.
externalFeeds: a list of service connections from external organizations including PyPI or feeds in other organizations in Azure DevOps.
TIP
The authentication credentials written into the PYPIRC_PATH environment variable supersede those in your .ini and .conf
files.
If you add multiple Python Twine Authenticate tasks at different times in your pipeline steps, each additional build task
execution will extend (not override) the existing PYPIRC_PATH environment variable.

Use a custom twine task to publish


After you've set up authentication with the preceding snippet, you can use twine to publish your Python packages.
The following example uses a custom command-line task.
YAML
Classic

- script: 'twine upload -r {feedName/EndpointName} --config-file $(PYPIRC_PATH) {package path to publish}'

Check out the YAML schema reference for more details on the script keyword.
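Putting the pieces together, a minimal end-to-end sketch might look like the following. It assumes a single feed, that your package has already been built into the dist folder, and that the feed value matches your feed's scope (project-scoped feeds may require a <projectName>/<feedName> value):

steps:
- script: 'pip install twine'
  displayName: Install twine
- task: TwineAuthenticate@0
  inputs:
    artifactFeed: '<feedName>'     # single Azure Artifacts feed
- script: 'twine upload -r <feedName> --config-file $(PYPIRC_PATH) dist/*'
  displayName: Publish package with twine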

WARNING
We strongly recommend NOT checking any credentials or tokens into source control.
Publish symbols for debugging
11/2/2020 • 4 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

NOTE
A symbol server is available with Azure Artifacts in Azure DevOps Services and works best with Visual Studio 2017 Update 4 or later. Team Foundation Server users and users without the Azure Artifacts extension can publish symbols to a file share using a build task.

Symbol servers enable debuggers to automatically retrieve the correct symbol files without knowing product
names, build numbers, or package names. To learn more about symbols, read the concept page. To consume
symbols, see this page for Visual Studio or this page for WinDbg.

Publish symbols
To publish symbols to the symbol server in Azure Artifacts, include the Index Sources and Publish Symbols task in
your build pipeline. Configure the task as follows:
For Version, select 2.* (or 1.* if 2.* is not available).
For Symbol Server Type, select Symbol Server in this organization/collection (requires Azure Artifacts).
Use the Path to symbols folder argument to specify the root directory that contains the .pdb files to be
published.
Use the Search pattern argument to specify search criteria to find the .pdb files in the folder that you specify
in Path to symbols folder . You can use a single-folder wildcard ( * ) and recursive wildcards ( ** ). For
example, **\bin\**\*.pdb searches for all .pdb files in all subdirectories named bin.
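If you define the pipeline in YAML rather than the classic editor, the same configuration can be expressed as a single task step. The following is a sketch assuming version 2 of the task and .pdb files produced under bin folders:

- task: PublishSymbols@2
  inputs:
    SymbolServerType: 'TeamServices'          # symbol server in this organization/collection
    SymbolsFolder: '$(Build.SourcesDirectory)'
    SearchPattern: '**/bin/**/*.pdb'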
Publish symbols for NuGet packages
To publish symbols for NuGet packages, include the preceding task in the build pipeline that produces the NuGet
packages. Then the symbols will be available to all users in the Azure DevOps organization.

Publish symbols to a file share


You can also publish symbols to a file share by using the Index Sources and Publish Symbols task. When you use
this method, the task will copy the .pdb files over and put them into a specific layout. When Visual Studio is
pointed to the UNC share, it can find the symbols related to the binaries that are currently loaded.
Add the task to your build pipeline and configure it as follows:
For Version, select 2.*.
For Symbol Server Type, select File share.
When you select File share as the Symbol Server Type, you get the Compress Symbols option. This option compresses your symbols to save space.
Use the Path to symbols folder argument to specify the root directory that contains the .pdb files to be
published.
Use the Search pattern argument to specify search criteria to find the .pdb files in the folder that you specify
in Path to symbols folder . You can use a single-folder wildcard ( * ) and recursive wildcards ( ** ). For
example, **\bin\**\*.pdb searches for all .pdb files in all subdirectories named bin.

Portable PDBs
If you're using portable PDBs, you still need to use the Index Sources and Publish Symbols task to publish
symbols. For portable PDBs, the build does the indexing, however you should use SourceLink to index the symbols
as part of the build. Note that Azure Artifacts doesn't presently support ingesting NuGet symbol packages and so
the task is used to publish the generated PDB files into the symbols service directly.

Use indexed symbols to debug your app


You can use your indexed symbols to debug an app on a different machine from where the sources were built.
Enable your development machine
In Visual Studio, you might need to enable the following two options in Debug > Options > Debugging >
General :
Enable source server support
Allow source server for partial trust assemblies (Managed only)
Advanced usage: overriding at debug time
The mapping information injected into the .pdb files contains variables that can be overridden at debugging time.
Overriding the variables might be required if the collection URL has changed. When you're overriding the mapping
information, the goals are to construct:
A command (SRCSRVCMD) that the debugger can use to retrieve the source file from the server.
A location (SRCSRVTRG) where the debugger can find the retrieved source file.
The mapping information might look something like the following:
SRCSRV: variables ------------------------------------------
TFS_EXTRACT_TARGET=%targ%\%var5%\%fnvar%(%var6%)%fnbksl%(%var7%)
TFS_EXTRACT_CMD=tf.exe git view /collection:%fnvar%(%var2%) /teamproject:"%fnvar%(%var3%)" /repository:"%fnvar%(%var4%)" /commitId:%fnvar%(%var5%) /path:"%var7%" /output:%SRCSRVTRG% %fnvar%(%var8%)
TFS_COLLECTION=http://SERVER:8080/tfs/DefaultCollection
TFS_TEAM_PROJECT=93fc2e4d-0f0f-4e40-9825-01326191395d
TFS_REPO=647ed0e6-43d2-4e3d-b8bf-2885476e9c44
TFS_COMMIT=3a9910862e22f442cd56ff280b43dd544d1ee8c9
TFS_SHORT_COMMIT=3a991086
TFS_APPLY_FILTERS=/applyfilters
SRCSRVVERCTRL=git
SRCSRVERRDESC=access
SRCSRVERRVAR=var2
SRCSRVTRG=%TFS_EXTRACT_TARGET%
SRCSRVCMD=%TFS_EXTRACT_CMD%
SRCSRV: source files ---------------------------------------
C:\BuildAgent\_work\1\src\MyApp\Program.cs*TFS_COLLECTION*TFS_TEAM_PROJECT*TFS_REPO*TFS_COMMIT*TFS_SHORT_COMMIT*/MyApp/Program.cs*TFS_APPLY_FILTERS
C:\BuildAgent\_work\1\src\MyApp\SomeHelper.cs*TFS_COLLECTION*TFS_TEAM_PROJECT*TFS_REPO*TFS_COMMIT*TFS_SHORT_COMMIT*/MyApp/SomeHelper.cs*TFS_APPLY_FILTERS

The preceding example contains two sections: the variables section and the source files section. The information in
the variables section can be overridden. The variables can use other variables, and can use information from the
source files section.
To override one or more of the variables while debugging with Visual Studio, create an .ini file
%LOCALAPPDATA%\SourceServer\srcsrv.ini . Set the content of the .ini file to override the variables. For example:

[variables]
TFS_COLLECTION=http://DIFFERENT_SERVER:8080/tfs/DifferentCollection

IMPORTANT
If you want to delete symbols that were published using the Index Sources & Publish Symbols task, you must first
remove the build that generated those symbols. This can be accomplished by using retention policies to clean up your build
or by manually deleting the run. For more information about debugging your app, see Debug with symbols in Visual Studio,
and Debug with symbols in WinDbg.

FAQ
Q: What's the retention policy for the symbols stored in the Azure Pipelines symbol server?
A: Symbols have the same retention as the build. When you delete a build, you also delete the symbols that the
build produced.
Q: Can I use source indexing on a portable .pdb file created from a .NET Core assembly?
A: No, source indexing is currently not enabled for portable .pdb files because SourceLink doesn't support
authenticated source repositories. The workaround at the moment is to configure the build to generate full .pdb
files.
Q: Is this available in TFS?
A: In TFS, you can bring your own file share and set it up as a symbol server, as described in this blog.
Publish and download Universal Packages in Azure
Pipelines
11/2/2020 • 6 minutes to read • Edit Online

Azure Pipelines
When you want to publish a set of related files from a pipeline as a single package, you can use Universal Packages
hosted in Azure Artifacts feeds.

Prepare your Universal Package


Universal Packages are created from a directory of files. By default, the Universal Packages task will publish all files
in $(Build.ArtifactStagingDirectory) .
To prepare your Universal Package for publishing, either configure preceding tasks to place output files in that
directory, or use the Copy Files utility task to assemble the files that you want to publish.

Publish your packages


YAML
Classic
To publish a Universal Package to your feed, add the following snippet to your azure-pipelines.yml file.

- task: UniversalPackages@0
  displayName: Universal Publish
  inputs:
    command: publish
    publishDirectory: '$(Build.ArtifactStagingDirectory)'
    vstsFeedPublish: '<projectName>/<feedName>'
    vstsFeedPackagePublish: '<Package name>'
    packagePublishDescription: '<Package description>'

publishDirectory: Location of the files to be published.
vstsFeedPublish: The project and feed name to publish to.
vstsFeedPackagePublish: The package name.
packagePublishDescription: Description of the content of the package.

NOTE
See Task control options to learn about the available control options for your task.

To publish to an Azure Artifacts feed, set the Project Collection Build Service identity to be a Contributor on the feed. To learn more about permissions to Package Management feeds, see Secure and share packages using feed permissions.
To publish to an external Universal Packages feed, you must first create a service connection to point to that feed. You can do this by going to Project settings, selecting Service connections, and then creating a New Service Connection. Select the Team Foundation Server/Team Services option for the service connection. Fill in the feed URL and a personal access token to connect to the feed.

Package versioning
In Universal Packages, a particular package is identified by its name and version number. Currently, Universal
Packages require Semantic Versioning. Semantic version numbers have three numeric components,
Major.Minor.Patch . When you fix a bug, you increment the patch ( 1.0.0 to 1.0.1 ). When you release a new
backward-compatible feature, you increment the minor version and reset the patch version to 0 ( 1.4.17 to 1.5.0
). When you make a backward-incompatible change, you increment the major version and reset the minor and
patch versions to 0 ( 2.6.5 to 3.0.0 ).
The Universal Packages task automatically selects the next major, minor, or patch version for you when you publish
a new package. Just set the appropriate option.
YAML
Classic
In the Universal Packages snippet that you added previously, add a versionOption . The options for publishing a
new package version are: major , minor , patch , or custom .
Selecting custom allows you to specify any SemVer2 compliant version number for your package. The other
options will get the latest version of the package from your feed and increment the chosen version segment by 1.
So if you have a testPackage v1.0.0, and you publish a new version of testPackage and select the major option, your
package version number will be 2.0.0. If you select the minor option, your package version will be 1.1.0, and if you
select the patch option, your package version will be 1.0.1.
One thing to keep in mind is that if you select the custom option, you must also provide a versionPublish .

- task: UniversalPackages@0
  displayName: Universal Publish
  inputs:
    command: publish
    publishDirectory: '$(Build.ArtifactStagingDirectory)'
    vstsFeedPublish: '<projectName>/<feedName>'
    vstsFeedPackagePublish: '<Package name>'
    versionOption: custom
    versionPublish: '<Package version>'
    packagePublishDescription: '<Package description>'

publishDirectory: Location of the files to be published.
vstsFeedPublish: The project and feed name to publish to.
vstsFeedPackagePublish: The package name.
versionOption: Select a version increment strategy. Options: major, minor, patch, custom.
versionPublish: The custom package version.
packagePublishDescription: Description of the content of the package.

NOTE
See Task control options to learn about the available control options for your task.

Download a Universal Package


You can also download a Universal Package from your pipeline.
YAML
Classic
To download a Universal Package from a feed in your organization to a specified destination, use the following
snippet:

steps:
- task: UniversalPackages@0
  displayName: 'Universal download'
  inputs:
    command: download
    vstsFeed: '<projectName>/<feedName>'
    vstsFeedPackage: '<packageName>'
    vstsPackageVersion: 1.0.0
    downloadDirectory: '$(Build.SourcesDirectory)\someFolder'

vstsFeed: The project and feed name that the package will be downloaded from.
vstsFeedPackage: Name of the package to be downloaded.
vstsPackageVersion: Version of the package to be downloaded.
downloadDirectory: Package destination directory. Default is $(System.DefaultWorkingDirectory).

NOTE
See Task control options to learn about the available control options for your task.

To download a Universal Package from an external source, use the following snippet:
steps:
- task: UniversalPackages@0
  displayName: 'Universal download'
  inputs:
    command: download
    feedsToUse: external
    externalFeedCredentials: MSENG2
    feedDownloadExternal: 'fabrikamFeedExternal'
    packageDownloadExternal: 'fabrikam-package'
    versionDownloadExternal: 1.0.0

feedsToUse: Value should be external when you're downloading from an external source.
externalFeedCredentials: Name of a service connection to another Azure DevOps organization or server. See service connections.
feedDownloadExternal: Feed that the package will be downloaded from.
packageDownloadExternal: Name of the package to be downloaded.
versionDownloadExternal: Version of the package to be downloaded.

NOTE
See Task control options to learn about the available control options for your task.

Downloading the latest version


You can use a wildcard expression as the version to get the latest (highest) version of a package. For more
information, see Downloading the latest version in the quickstart guide.

FAQ
Where can I learn more about Azure Artifacts and the TFS Package Management service?
Package Management in Azure Artifacts and TFS
In what versions of Azure DevOps/TFS are Universal Packages available?
Universal Packages are only available for Azure DevOps Services.
Restore NuGet packages in Azure Pipelines
11/2/2020 • 5 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

NuGet package restore allows you to have all your project's dependencies available without having to store them in
source control. This allows for a cleaner development environment and smaller repository size. You can restore
your NuGet packages using the NuGet restore build task, the NuGet CLI, or the .NET Core CLI. This article will show
you how to restore your NuGet packages using both YAML and the classic Azure pipelines.
Prerequisites
Set up your solution to consume packages from an Azure Artifacts feed.
Create your first pipeline for your repository.
Set up the build identity permissions for your feed.

Restore packages with NuGet restore build task


To build a solution that relies on NuGet packages from Azure Artifacts feeds, add the NuGet build task to your pipeline.
1. Navigate to your build pipeline and select Edit .
2. Under Tasks , Agent job , select the plus sign "+" to add a new task. Search for NuGet task and add it to your
agent job.
3. Fill out the following information:
Display name: NuGet restore.
Command: restore.
Path to solution, packages.config, or project.json: The path to the solution, packages.config, or
project.json file that references the packages to be restored.
4. If you've checked in a NuGet.config, select Feeds in my NuGet.config and specify the file from your repo. If
you're using a single Azure Artifacts feed, select the Feed(s) I select here option and select your feed from the
dropdown.
5. Check the Use packages from NuGet.org option if you want to include NuGet.org in the generated
NuGet.config.
6. Select Save & queue .
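If your pipeline is defined in YAML, the equivalent restore step can be expressed with the NuGet task. The following sketch assumes a single project-scoped feed selected directly in the task (the feed name is a placeholder):

- task: NuGetCommand@2
  displayName: NuGet restore
  inputs:
    command: restore
    restoreSolution: '**/*.sln'      # solution that references the packages to restore
    feedsToUse: select
    vstsFeed: '<projectName>/<feedName>'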
Restore your NuGet packages with the NuGet CLI
For your project to be set up properly, your NuGet.config must be in the same folder as your .csproj or .sln file. The NuGet.config file you check in should also list all the package sources you want to consume. The example below demonstrates how that might look.

<?xml version="1.0" encoding="utf-8"?>


<configuration>
<packageSources>
<!-- remove any machine-wide sources with <clear/> -->
<clear />
<!-- add an Azure Artifacts feed -->
<add key="FabrikamFiber"
value="https://pkgs.dev.azure.com/microsoftLearnModule/_packaging/FabrikamFiber/nuget/v3/index.json" />
<!-- also get packages from the NuGet Gallery -->
<add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />
</packageSources>
</configuration>

To restore your NuGet packages run the following command in your project directory:

nuget.exe restore

Restore NuGet packages with the .NET Core CLI task


To restore your package using YAML and the .NET Core CLI task, use the following example:
- task: DotNetCoreCLI@2
  displayName: dotnet restore
  inputs:
    command: restore
    projects: '**/*.csproj'
    feedsToUse: 'select'
    vstsFeed: '<projectName>/<feedName>'
    includeNuGetOrg: true

command : The dotnet command to run. Options: build , push , pack , restore , run , test , and custom .
projects : The path to the csproj file(s) to use. You can use wildcards (e.g. **/*.csproj for all .csproj files in all
subfolders).
feedsToUse : You can either choose to select a feed or commit a NuGet.config file to your source code repository
and set its path using nugetConfigPath . Options: select , config .
vstsFeed : This argument is required when feedsToUse == Select . Value format: <projectName>/<feedName> .
includeNuGetOrg : Use packages from NuGet.org.

Restore NuGet packages from feeds in a different organization


If your NuGet.config contains feeds in a different Azure DevOps organization than the one running the build, you'll
need to set up credentials for those feeds manually.
1. Select an account (either a service account (recommended) or a user account) that has access to the remote
feed.
2. In your browser, open a Private mode, Incognito mode, or a similar mode window and navigate to the Azure
DevOps organization that hosts the feed. Sign in with the credentials mentioned in step 1, select User
settings then Personal Access Tokens .

3. Create your PAT with the Packaging (read) scope and keep it handy.
4. In the Azure DevOps organization that contains the build, edit the build's NuGet step and ensure you're using
version 2 or greater of the task, using the version selector.
5. In the Feeds and authentication section, ensure you've selected the Feeds in my NuGet.config radio button.
6. Set the path to your NuGet.config in the Path to NuGet.config .
7. In Credentials for feeds outside this organization/collection , select the + New .

8. In the service connection dialog that appears, select the External Azure DevOps Server option and enter a connection name, the feed URL (make sure it matches what's in your NuGet.config), and the PAT you created in step 3.

9. Save & queue a new build.

FAQ
Why can't my build restore NuGet packages?
NuGet restore can fail due to a variety of issues. One of the most common issues is the introduction of a new
project in your solution that requires a target framework that isn't understood by the version of NuGet your build is
using. This issue generally doesn't present itself on a developer machine because Visual Studio updates the NuGet
restore mechanism at the same time it adds new project types. We're looking into similar features for Azure
Artifacts. In the meantime though, the first thing to try when you can't restore packages is to update to the latest
version of NuGet.
How do I use the latest version of NuGet?
If you're using Azure Pipelines or TFS 2018, new template-based builds will work automatically thanks to a new
"NuGet Tool Installer" task that's been added to the beginning of all build templates that use the NuGet task. We
periodically update the default version that's selected for new builds around the same time we install Visual Studio
updates on the Hosted build agents.
For existing builds, just add or update a NuGet Tool Installer task to select the version of NuGet for all the
subsequent tasks. You can see all available versions of NuGet on nuget.org.

TFS 2017 and earlier


Because the NuGet Tool Installer is not available in TFS versions prior to TFS 2018, there is a recommended
workaround to use versions of NuGet > 4.0.0 in Azure Pipelines.
1. Add the task, if you haven't already. If you have a "NuGet Restore" task in the catalog (it may be in the
Deprecated tasks section), insert it into your build. Otherwise, insert a "NuGet" task.
2. For your NuGet/NuGet Installer task, use the version selector under the task name to select version "0.*".
3. In the Advanced section, set the NuGet Version to "Custom" and the Path to NuGet.exe as
$(Build.BinariesDirectory)\nuget.exe
4. Before your NuGet task, add a "PowerShell" task, select "Inline Script" as the Type, enter this PowerShell script as
the Inline Script, and enter "4.3.0" (or any version of NuGet from this list) as the Arguments.
Our thanks to GitHub user leftler for creating the original version of the PowerShell script linked above.

Related articles
Publish to NuGet feeds (YAML/Classic)
Publish and consume build artifacts
Use Jenkins to restore and publish packages
11/2/2020 • 4 minutes to read • Edit Online

Azure Artifacts | TFS 2018 | TFS 2017

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Azure Artifacts works with the continuous integration tools your team already uses. In this Jenkins walkthrough,
you'll create a NuGet package and publish it to an Azure Artifacts feed. If you need help on Jenkins setup, you can
learn more on the Jenkins wiki.

Setup
This walkthrough uses Jenkins 1.635 running on Windows 10. The walkthrough is simple, so any recent Jenkins
and Windows versions should work.
Ensure the following Jenkins plugins are enabled:
MSBuild 1.24
Git 2.4.0
Git Client 1.19.0
Credentials Binding plugin 1.6
Some of these plugins are enabled by default. Others you will need to install by using Jenkins's "Manage Plugins"
feature.
The example project
The sample project is a simple shared library written in C#.
To follow along with this walkthrough, create a new C# Class Library solution in Visual Studio 2015.
Name the solution "FabrikamLibrary" and uncheck the Create director y for solution checkbox.
On the FabrikamLibrary project's context menu, choose Properties, then choose Assembly Information. Edit the description and company fields. Now generating a NuGet package is easier.
Check the new solution into a Git repo where your Jenkins server can access it later.

Add the Azure Artifacts NuGet tools to your repo


The easiest way to use the Azure Artifacts NuGet service is by adding the
Microsoft.VisualStudio.Services.NuGet.Bootstrap package to your project.

Create a package from your project


Whenever you work from a command line, run init.cmd first. This sets up your environment to allow you to work
with nuget.exe and the Azure Artifacts NuGet service.
Change into the directory containing FabrikamLibrary.csproj.
Run the command nuget spec to create the file FabrikamLibrary.nuspec, which defines how your NuGet
package builds.
Edit FabrikamLibrary.nuspec to remove the boilerplate tags <licenseUrl> , <projectUrl> , and <iconUrl> .
Change the tags from Tag1 Tag2 to fabrikam .
Ensure that you can build the package using the command nuget pack FabrikamLibrary.csproj . Note, you
should target the .csproj (project) file, not the NuSpec file.
A file called FabrikamLibrary.1.0.0.0.nupkg will be produced.

Set up a feed in Azure Artifacts and add it to your project


Create a feed in your Azure DevOps organization called MyGreatFeed. Since you're the owner of the feed, you
will automatically be able to push packages to it.
Add the URL for the feed you just generated to the nuget.config in the root of your repo.
Find the <packageSources> section of nuget.config.
Just before </packageSources> , add a line using this template:
<add key="MyGreatFeed" value="{feed_url}" /> . Change {feed_url} to the URL of your feed.
Commit this change to your repo.
Generate a PAT (personal access token) for your user account. This PAT will allow the Jenkins job to authenticate
to Azure Artifacts as you, so be sure to protect your PAT like a password.
Save your feed URL and PAT to a text file for use later in the walkthrough.

Create a build pipeline in Jenkins


Ensure you have the correct plugins installed in Jenkins.
This will be a Freestyle project. Call it "Fabrikam.Walkthrough".

Under Source Code Management, set the build to use Git and select your Git repo.
Under Build Environment, select the Use secret text(s) or file(s) option.
Add a new Username and password (separated) binding.
Set the Username Variable to "FEEDUSER" and the Password Variable to "FEEDPASS". These are the
environment variables Jenkins will fill in with your credentials when the build runs.
Choose the Add button to create a new username and password credential in Jenkins.
Set the username to "token" and the password to the PAT you generated earlier. Choose Add to save
these credentials.

Under Build (see screenshot below), follow these steps:


Choose Execute Windows batch command . In the Command box, type init.cmd .
Choose Build a Visual Studio project or solution using MSBuild . This task should point to
msbuild.exe and FabrikamLibrary.sln.
Choose Execute Windows batch command again, but this time, use this command:
.tools\VSS.NuGet\nuget pack FabrikamLibrary\FabrikamLibrary.csproj .
Save this build pipeline and queue a build.
The build's Workspace will now contain a .nupkg just like the one you built locally earlier.

Publish a package using Jenkins


These are the last walkthrough steps to publish the package to a feed:
Edit the build pipeline in Jenkins.
After the last build task (which runs nuget pack ), add a new Execute a Windows batch command build task.
In the new Command box, add these two lines:
The first line puts credentials where NuGet can find them:
.tools\VSS.NuGet\nuget sources update -Name "MyGreatFeed" -UserName "%FEEDUSER%" -Password "%FEEDPASS%"
The second line pushes your package using the credentials saved above:
.tools\VSS.NuGet\nuget push *.nupkg -Source "MyGreatFeed" -ApiKey VSS
Queue another build. This time, the build machine will authenticate to Azure Artifacts and push the package to
the feed you selected.
About pipeline resources
4/16/2020 • 2 minutes to read • Edit Online

A resource is anything used by a pipeline that lives outside the pipeline. Resources are defined at one place and can
be consumed anywhere in your pipeline. Resources can be protected or open.
Resources include:
agent pools
variable groups
secure files
service connections
environments
repositories
artifacts
pipelines
containers
Resources in YAML
11/2/2020 • 23 minutes to read • Edit Online

Azure Pipelines
A resource is anything used by a pipeline that lives outside the pipeline. Pipeline resources include:
CI/CD pipelines that produce artifacts (Azure Pipelines, Jenkins, etc.)
code repositories (Azure Repos Git repos, GitHub, GitHub Enterprise, Bitbucket Cloud)
container image registries (Azure Container Registry, Docker Hub, etc.)
package feeds (GitHub packages)

Why resources?
Resources are defined at one place and can be consumed anywhere in your pipeline. Resources provide you the full
traceability of the services consumed in your pipeline including the version, artifacts, associated commits, and
work-items. You can fully automate your DevOps workflow by subscribing to trigger events on your resources.
Resources in YAML represent sources of types pipelines, builds, repositories, containers, and packages.
Schema

resources:
  pipelines: [ pipeline ]
  builds: [ build ]
  repositories: [ repository ]
  containers: [ container ]
  packages: [ package ]

Variables
When a resource triggers a pipeline, the following variables are set:

resources.triggeringAlias
resources.triggeringCategory

Resources: pipelines
If you have an Azure Pipeline that produces artifacts, you can consume the artifacts by defining a pipelines
resource. pipelines is a dedicated resource only for Azure Pipelines. You can also set triggers on pipeline resource
for your CD workflows.
In your resource definition, pipeline is a unique value that you can use to reference the pipeline resource later on.
source is the name of the pipeline that produces an artifact.

For an alternative way to download pipelines, see tasks in Pipeline Artifacts.


Schema
Example
resources:        # types: pipelines | builds | repositories | containers | packages
  pipelines:
  - pipeline: string  # identifier for the resource used in pipeline resource variables
    project: string   # project for the source; optional for current project
    source: string    # name of the pipeline that produces an artifact
    version: string   # the pipeline run number to pick the artifact; defaults to latest pipeline successful across all stages; used only for manual or scheduled triggers
    branch: string    # branch to pick the artifact, optional; defaults to all branches; used only for manual or scheduled triggers
    tags: [ string ]  # list of tags required on the pipeline to pick up default artifacts, optional; tags are AND'ed; used only for manual or scheduled triggers
    trigger:          # triggers are not enabled by default unless you add a trigger section to the resource
      branches:       # branch conditions to filter the events, optional; defaults to all branches
        include: [ string ]  # branches to consider for the trigger events, optional; defaults to all branches
        exclude: [ string ]  # branches to discard the trigger events, optional; defaults to none
      tags: [ string ]    # list of tags to evaluate for trigger event, optional; tags are AND'ed
      stages: [ string ]  # list of stages to evaluate for trigger event, optional; stages are AND'ed

IMPORTANT
When you define a resource trigger, if its pipeline resource is from the same repo as the current pipeline, triggering follows
the same branch and commit on which the event is raised. But if the pipeline resource is from a different repo, the current
pipeline is triggered on the default branch.

Default branch for triggers


Triggers for resources are created based on the default branch configuration of your YAML, which is master.
However, if you want to configure resource triggers from a different branch, you need to change the default branch
for the pipeline.
1. Go to the edit view of the pipeline and click on the overflow menu on the top-right corner and choose Triggers .

2. Now select 'YAML' tab and go to 'Get sources'.


3. Now you can set the default branch for your pipeline.
Evaluation of artifact version
The pipeline version (CI build run) that gets picked in your pipeline run is controlled by how your pipeline run is
triggered.
If your pipeline run is created by a manual or scheduled trigger, the default version, branch, and tags are used to evaluate which CI pipeline version to pick.
If you provide a build version (number), that version runs.
If you provide a branch, the latest version from the given branch runs.
If you provide a list of tags, the latest run that has all the matching tags runs.
If you provide a branch and list of tags, the latest run from the branch provided and that has the matching tags
runs.
If you don’t provide anything, the latest version across all the branches runs.

resources:
  pipelines:
  - pipeline: MyCIAlias
    project: Fabrikam
    source: Farbrikam-CI
    branch: master   ### This branch input cannot have wild cards. It is used for evaluating the default version when the pipeline is triggered manually or scheduled.
    tags:            ### These tags are used for resolving the default version when the pipeline is triggered manually or scheduled
    - Production     ### Tags are AND'ed
    - PreProduction

If your pipeline is triggered automatically, the CI pipeline version will be picked based on the trigger event. The default version info provided is irrelevant.
If you provide branches, a new pipeline will be triggered whenever a CI run is successfully completed that
matches to the branches that are included.
If you provide tags, a new pipeline will be triggered whenever a CI run is successfully completed that matches
all the tags mentioned.
If you provide stages, new pipeline will be triggered whenever a CI run has all the stages mentioned are
completed successfully.
If you provide branches, tags and stages together, a new pipeline run is triggered whenever a CI run matches all
the conditions.
If you don't provide anything and just say trigger: true , a new pipeline run is triggered whenever a CI run is
successfully completed.
If you don't provide any trigger for the resource, no pipeline run will be triggered. Triggers are disabled by
default unless you specifically enable them.

resources:
  pipelines:
  - pipeline: SmartHotel
    project: DevOpsProject
    source: SmartHotel-CI
    trigger:
      branches:
        include:
        - releases/*
        - master
        exclude:
        - topic/*
      tags:
      - Verified
      - Signed
      stages:
      - Production
      - PreProduction

download for pipelines


All artifacts from the current pipeline and from all pipeline resources are automatically downloaded and made
available at the beginning of each deployment job. You can override this behavior. For more information, see
Pipeline Artifacts. Regular 'job' artifacts are not automatically downloaded. Use download explicitly when needed.
Schema
Example

steps:
- download: [ current | pipeline resource identifier | none ]  # disable automatic download if "none"
  artifact: string   # artifact name, optional; downloads all the available artifacts if not specified
  patterns: string   # patterns representing files to include; optional

Artifacts from the pipeline resource are downloaded to the $(PIPELINE.WORKSPACE)/<pipeline-identifier>/<artifact-identifier> folder.
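For example, a step that downloads a single artifact from a pipeline resource might look like the following sketch (the alias, artifact name, and pattern are placeholders):

steps:
- download: MyCIAlias      # pipeline resource identifier
  artifact: drop           # download only the 'drop' artifact
  patterns: '**/*.zip'     # optionally restrict to matching files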
Pipeline resource variables
In each run, the metadata for a pipeline resource is available to all jobs in the form of the predefined variables below. The <Alias> is the identifier that you gave for your pipeline resource. Pipeline resource variables are only available at runtime.
Schema
Example
resources.pipeline.<Alias>.projectID
resources.pipeline.<Alias>.pipelineName
resources.pipeline.<Alias>.pipelineID
resources.pipeline.<Alias>.runName
resources.pipeline.<Alias>.runID
resources.pipeline.<Alias>.runURI
resources.pipeline.<Alias>.sourceBranch
resources.pipeline.<Alias>.sourceCommit
resources.pipeline.<Alias>.sourceProvider
resources.pipeline.<Alias>.requestedFor
resources.pipeline.<Alias>.requestedForID
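Because these variables are only available at runtime, they can be read with macro syntax inside a script step. A minimal sketch, assuming a pipeline resource whose alias is myresource:

- script: |
    echo "Source pipeline: $(resources.pipeline.myresource.pipelineName)"
    echo "Run ID: $(resources.pipeline.myresource.runID)"
    echo "Source branch: $(resources.pipeline.myresource.sourceBranch)"
  displayName: Show pipeline resource metadata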

Resources: builds
If you have any external CI build system that produces artifacts, you can consume artifacts with a builds resource.
A builds resource can be any external CI systems like Jenkins, TeamCity, CircleCI etc.
Schema
Example

resources:        # types: pipelines | builds | repositories | containers | packages
  builds:
  - build: string       # identifier for the build resource
    type: string        # the type of your build service, like Jenkins, CircleCI etc.
    connection: string  # service connection for your build service
    source: string      # source definition of the build
    version: string     # the build number to pick the artifact; defaults to latest successful build
    trigger: boolean    # triggers are not enabled by default and should be explicitly set

builds is an extensible category. You can write an extension to consume artifacts from your builds service
(CircleCI, TeamCity etc.) and introduce a new type of service as part of builds . Jenkins is a type of resource in
builds .

IMPORTANT
Triggers are only supported for hosted Jenkins where Azure DevOps has line of sight with Jenkins server.

downloadBuild for builds


You can consume artifacts from the build resource as part of your jobs using downloadBuild task. Based on the
type of build resource defined (Jenkins, TeamCity etc.), this task automatically resolves to the corresponding
download task for the service during the run time.
Artifacts from the build resource are downloaded to $(PIPELINE.WORKSPACE)/<build-identifier>/ folder.

IMPORTANT
build resource artifacts are not automatically downloaded in your jobs/deploy-jobs. You need to explicitly add
downloadBuild task for consuming the artifacts.

Schema
Example
- downloadBuild: string  # identifier for the resource from which to download artifacts
  artifact: string       # artifact to download; if left blank, downloads all artifacts associated with the resource provided
  patterns: string | [ string ]  # a minimatch path or list of minimatch paths to download; if blank, the entire artifact is downloaded

Resources: repositories
If your pipeline has templates in another repository, or if you want to use multi-repo checkout with a repository
that requires a service connection, you must let the system know about that repository. The repository keyword
lets you specify an external repository.
Schema
Example

resources:
  repositories:
  - repository: string  # identifier (A-Z, a-z, 0-9, and underscore)
    type: enum          # see the following "Type" topic
    name: string        # repository name (format depends on `type`)
    ref: string         # ref name to use; defaults to 'refs/heads/master'
    endpoint: string    # name of the service connection to use (for types that aren't Azure Repos)
    trigger:            # CI trigger for this repository, no CI trigger if skipped (only works for Azure Repos)
      branches:
        include: [ string ]  # branch names which will trigger a build
        exclude: [ string ]  # branch names which will not
      tags:
        include: [ string ]  # tag names which will trigger a build
        exclude: [ string ]  # tag names which will not
      paths:
        include: [ string ]  # file paths which must match to trigger a build
        exclude: [ string ]  # file paths which will not trigger a build

Type
Pipelines support the following values for the repository type: git , github , githubenterprise , and bitbucket .
The git type refers to Azure Repos Git repos.
If you specify type: git, the name value refers to another repository in the same project. An example is
name: otherRepo . To refer to a repo in another project within the same organization, prefix the name with
that project's name. An example is name: OtherProject/otherRepo .
If you specify type: github , the name value is the full name of the GitHub repo and includes the user or
organization. An example is name: Microsoft/vscode . GitHub repos require a GitHub service connection for
authorization.
If you specify type: githubenterprise , the name value is the full name of the GitHub Enterprise repo and
includes the user or organization. An example is name: Microsoft/vscode . GitHub Enterprise repos require a
GitHub Enterprise service connection for authorization.
If you specify type: bitbucket , the name value is the full name of the Bitbucket Cloud repo and includes the
user or organization. An example is name: MyBitbucket/vscode . Bitbucket Cloud repos require a Bitbucket
Cloud service connection for authorization.
checkout your repository
Use checkout keyword to consume your repos defined as part of repository resource.
Schema
steps:
- checkout: string       # identifier for your repository resource
  clean: boolean         # if true, run `git clean -ffdx && git reset --hard HEAD` before fetching
  fetchDepth: number     # the depth of commits to ask Git to fetch; defaults to no limit
  lfs: boolean           # whether to download Git-LFS files; defaults to false
  submodules: true | recursive  # set to 'true' for a single level of submodules or 'recursive' to get submodules of submodules; defaults to not checking out submodules
  path: string           # path to check out source code, relative to the agent's build directory (e.g. \_work\1); defaults to a directory called `s`
  persistCredentials: boolean   # if 'true', leave the OAuth token in the Git config after the initial fetch; defaults to false

Repos from the repository resource are not automatically synced in your jobs. Use checkout to fetch your repos
as part of your jobs.
For more information, see Check out multiple repositories in your pipeline.
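As an illustration, a pipeline that declares a GitHub repository resource and checks out both it and the primary repository might look like the following sketch (the repository name and service connection name are placeholders):

resources:
  repositories:
  - repository: tools                  # identifier used by the checkout step
    type: github
    name: Contoso/BuildTools           # <user or organization>/<repo> on GitHub
    endpoint: MyGitHubServiceConnection

steps:
- checkout: self      # the repository that contains this pipeline
- checkout: tools     # the repository resource declared above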

Resources: containers
If you need to consume a container image as part of your CI/CD pipeline, you can achieve it using containers. A container resource can be a public or private Docker registry, or Azure Container Registry.
If you need to consume images from a Docker registry as part of your pipeline, you can define a generic container resource (no type keyword required).
Schema
Example

resources:
  containers:
  - container: string  # identifier (A-Z, a-z, 0-9, and underscore)
    image: string      # container image name
    options: string    # arguments to pass to container at startup
    endpoint: string   # reference to a service connection for the private registry
    env: { string: string }  # list of environment variables to add
    ports: [ string ]    # ports to expose on the container
    volumes: [ string ]  # volumes to mount on the container

A generic container resource can be used as an image consumed as part of your job or it can also be used for
Container jobs.
You can use a first-class container resource type for Azure Container Registry (ACR) to consume your ACR images. This resource type can be used as part of your jobs and also to enable automatic pipeline triggers.
Schema
Example
resources:          # types: pipelines | repositories | containers | builds | packages
  containers:
  - container: string          # identifier for the container resource
    type: string               # type of the registry, like ACR, GCR etc.
    azureSubscription: string  # Azure subscription (ARM service connection) for the container registry
    resourceGroup: string      # resource group for your ACR
    registry: string           # registry for container images
    repository: string         # name of the container image repository in ACR
    trigger:                   # triggers are not enabled by default and need to be set explicitly
      tags:
        include: [ string ]  # image tags to consider for the trigger events, optional; defaults to any new tag
        exclude: [ string ]  # image tags to discard for the trigger events, optional; defaults to none

Container resource variables


Once you define a container as a resource, container image metadata is passed to the pipeline in the form of
variables. Information like image, registry, and connection details are made accessible across all the jobs to be used
in your container deploy tasks.
Schema
Example

resources.container.<Alias>.type
resources.container.<Alias>.registry
resources.container.<Alias>.repository
resources.container.<Alias>.tag
resources.container.<Alias>.digest
resources.container.<Alias>.URI
resources.container.<Alias>.location

Note: The location variable is only applicable to the ACR type of container resources.
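Assuming these metadata values can be read like other pipeline variables with macro syntax (an assumption, not confirmed here), a job could echo them as follows for the hypothetical alias myACRImage:

steps:
- script: |
    echo $(resources.container.myACRImage.registry)
    echo $(resources.container.myACRImage.repository)
    echo $(resources.container.myACRImage.tag)
  displayName: Show container resource metadata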

Resources: packages
You can consume NuGet and npm GitHub packages as a resource in YAML pipelines.
When specifying package resources, set the package type to NuGet or npm. You can also enable automated pipeline triggers when a new package version is released.
To use GitHub packages, you will need to use PAT-based authentication and create a GitHub service connection that uses a PAT.
By default, packages are not automatically downloaded into jobs. To download, use getPackage.
Schema
Example

resources:
  packages:
  - package: myPackageAlias          # alias for the package resource
    type: Npm                        # type of the package: NuGet or npm
    connection: GitHubConnectionName # GitHub service connection of the PAT type
    name: nugetTest/nodeapp          # <Repository>/<Name of the package>
    version: 1.0.1                   # version of the package to consume; optional; defaults to latest
    trigger: true                    # enable automated triggers (true/false); optional; defaults to no triggers
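As a sketch, and assuming getPackage accepts the package alias declared above (per the note earlier in this section), downloading the package into a job might look like:

steps:
- getPackage: myPackageAlias   # download the package declared in the resources section into the job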

Resources: webhooks
With other resources (such as pipelines, containers, builds, and packages), you can consume artifacts and enable automated triggers. However, those resources cannot automate your deployment process based on other external events or services. The webhooks resource enables you to integrate your pipeline with any external service and automate the workflow. You can subscribe to any external events through webhooks (GitHub, GitHub Enterprise, Nexus, Artifactory, and so on) and trigger your pipelines.
Here are the steps to configure the webhook triggers:
1. Set up a webhook on your external service. When creating your webhook, you need to provide the following
info:
Request Url -
https://dev.azure.com/<ADO Organization>/_apis/public/distributedtask/webhooks/<WebHook Name>?api-
version=6.0-preview
Secret - This is optional. If you need to secure your JSON payload, provide the Secret value
2. Create a new "Incoming Webhook" service connection. This is a newly introduced Service Connection Type that
will allow you to define three important pieces of information:
Webhook Name : The name of the webhook must match the webhook created in your external service.
HTTP Header - The name of the HTTP header in the request that contains the payload hash value for request verification. For example, in the case of GitHub, the request header is "X-Hub-Signature".
Secret - The secret is used to verify the payload hash of the incoming request (this is optional). If you used a secret when creating your webhook, you must provide the same secret key here.
3. A new resource type called webhooks is introduced in YAML pipelines. For subscribing to a webhook event,
you need to define a webhook resource in your pipeline and point it to the Incoming webhook service
connection. You can also define additional filters on the webhook resource based on the JSON payload data
to further customize the triggers for each pipeline, and you can consume the payload data in the form of
variables in your jobs.
4. Whenever a webhook event is received by the Incoming Webhook service connection, a new run is triggered for all the pipelines subscribed to the webhook event. You can consume the JSON payload data in your jobs using the format ${{ parameters.<WebhookAlias>.<JSONPath> }}.
Schema
Example

resources:
  webhooks:
  - webhook: MyWebhookTriggerAlias          ### Webhook alias
    connection: IncomingWebhookConnection   ### Incoming webhook service connection
    filters:                                ### List of JSON parameters to filter; parameters are AND'ed
    - path: JSONParameterPath               ### JSON path in the payload
      value: JSONParameterExpectedValue     ### Expected value in the path provided
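For illustration, a pipeline that filters on a hypothetical repositoryName field in the payload and echoes it might look like the following; the JSON path, the filter value, and the ${{ parameters.<WebhookAlias>.<JSONPath> }} access pattern are assumptions based on the description above:

resources:
  webhooks:
  - webhook: MyWebhookTriggerAlias           # webhook alias
    connection: IncomingWebhookConnection    # Incoming Webhook service connection
    filters:
    - path: repositoryName                   # hypothetical JSON path in the payload
      value: maven-releases                  # only trigger when this value matches

steps:
- script: echo ${{ parameters.MyWebhookTriggerAlias.repositoryName }}
  displayName: Show payload data that triggered the run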

Webhooks are a great way to automate your workflow based on any external webhook event that is not supported by first-class resources like pipelines, builds, containers, and packages. Also, for on-premises services where Azure DevOps doesn't have visibility into the process, you can configure webhooks on the service to trigger your pipelines automatically.

Manual version picker for resources in the create run dialog


When you manually trigger a CD YAML pipeline, we automatically evaluate the default version for the resources defined in the pipeline based on the inputs provided. However, you can choose to pick a different version from the resource version picker in the create run dialog.
1. In the create run pane, you can see the resources section.
2. Selecting it shows the list of resources consumed in this pipeline.
3. You can select each of the resources and pick a specific version from the list of available versions. The resource version picker is supported for pipeline, build, repository, container, and package resources.

For pipeline resources, you can see all the available runs across all branches. You can search them based on the pipeline number or branch, and you can pick a run that is successful, failed, or in progress. This flexibility ensures you can run your CD pipeline when you are sure your CI pipeline produced all the artifacts you need, without waiting for the CI run to complete or rerunning it because an unrelated stage in the CI run failed. However, when we evaluate the default version for scheduled triggers, or if you don't use the manual version picker, we only consider successfully completed CI runs.
For resources where you can't fetch available versions (like GitHub packages), the version picker shows a text box so that you can provide the version to be picked for the run.

Troubleshooting authorization for a YAML pipeline


Resources must be authorized before they can be used. A resource owner controls the users and pipelines that can
access that resource. The pipeline must be authorized to use the resource. There are multiple ways to accomplish
this.
Navigate to the administration experience of the resource. For example, variable groups and secure files are managed in the Library page under Pipelines. Agent pools and service connections are managed in Project settings. Here you can authorize all pipelines to be able to access that resource. This is convenient if you do not need to restrict access to a resource - for example, test resources.
When you create a pipeline for the first time, all the resources that are referenced in the YAML file are
automatically authorized for use by the pipeline, provided that you are a member of the User role for that
resource. So, resources that are referenced in the YAML file at pipeline creation time are automatically
authorized.
When you make changes to the YAML file and add additional resources (assuming that these are not authorized for use in all pipelines, as explained above), the build fails with a resource authorization error similar to the following:
Could not find a <resource> with name <resource-name>. The <resource> does not exist or has not been
authorized for use.

In this case, you will see an option to authorize the resources on the failed build. If you are a member of
the User role for the resource, you can select this option. Once the resources are authorized, you can
start a new build.

If you continue to have problems authorizing resources, verify that the agent pool security roles for your
project are correct.

Set approval checks for resources


You can manually control when a resource runs with approval checks and templates. With the required template
approval check, you can require that any pipeline using a resource or environment also extends from a specific
YAML template. Setting a required template approval enhances security. You can make sure that your resource only
gets used under specific conditions with a template. Learn more about how to enhance pipeline security with
templates and resources.

Traceability
We provide full traceability for any resource consumed at a pipeline or deployment-job level.
Pipeline traceability
For every pipeline run, we show information about:
1. The resource that triggered the pipeline (if it was triggered by a resource).
2. The version of the resource and the artifacts consumed.
3. The commits associated with each resource.
4. The work items for each resource.

Environment traceability
Whenever a pipeline deploys to an environment, you can see a list of resources that are consumed in the
environments view. This view includes resources consumed as part of the deployment-jobs and their associated
commits and work-items.
Showing associated CD pipelines info in CI pipelines
To provide end-to-end traceability, you can track which CD pipelines are consuming a given CI pipeline. You can see the list of CD YAML pipeline runs in which a CI pipeline run is consumed through a pipeline resource. In your CI pipeline run view, if the run is consumed by other pipelines, you will see an 'Associated pipelines' tab where you can find all the pipeline runs that consume your pipeline and artifacts from it.

YAML resource trigger issues support and traceability


It can be confusing when pipeline triggers fail to execute. To help better understand this, we've added a new menu
item in the pipeline definition page called Trigger Issues where you can learn why triggers are not executing.
Resource triggers can fail to execute for two reasons.
If the source of the service connection provided is invalid, or if there are any syntax errors in the trigger, the
trigger will not be configured at all. These are surfaced as errors.
If trigger conditions are not matched, the trigger will not execute. Whenever this occurs, a warning will be
surfaced so you can understand why the conditions were not matched.
FAQ
Why should I use pipelines resources instead of the download shortcut?
Using a pipelines resource is a first class way to consume artifacts from a CI pipeline and also configure
automated triggers. It gives you full visibility into the process by displaying the version consumed, artifacts,
commits, and work-items. When you define a pipeline resource, the associated artifacts are automatically
downloaded in deployment jobs.
You can choose to download the artifacts in build jobs or to override the download behavior in deployment jobs
with download . The download task internally uses the Download Pipeline Artifacts task.
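For example, a minimal sketch that overrides the default download behavior for a hypothetical pipeline resource alias myPipelineResource and artifact drop:

steps:
- download: myPipelineResource   # pipeline resource alias declared under resources.pipelines
  artifact: drop                 # download only the 'drop' artifact
  patterns: '**/*.zip'           # restrict the download to matching files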
Why should I use resources instead of the Download Pipeline Artifacts task?
When you use the Download Pipeline Artifacts task directly, you miss traceability and triggers. At the same time,
there are times when it makes sense to use the Download Pipeline Artifacts task directly. For example, you might
have a script task stored in a different template and the script task requires artifacts from a build to be
downloaded. Or, you may not know if someone using a template will add a pipeline resource. To avoid
dependencies, you can use the Download Pipeline Artifacts task to pass all the build info to a task.
Add & use variable groups
11/2/2020 • 19 minutes to read • Edit Online

Use a variable group to store values that you want to control and make available across multiple pipelines. You
can also use variable groups to store secrets and other values that might need to be passed into a YAML
pipeline. Variable groups are defined and managed in the Library page under Pipelines.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are
called phases.

NOTE
Variable groups can be used in a build pipeline in only Azure DevOps and TFS 2018. They cannot be used in a build
pipeline in earlier versions of TFS.

Create a variable group


YAML
Classic
Azure DevOps CLI
Variable groups can't be created in YAML, but they can be used as described in Use a variable group.

Use a variable group


YAML
Classic
Azure DevOps CLI
To use a variable from a variable group, you need to add a reference to the group in your YAML file:

variables:
- group: my-variable-group

Thereafter variables from the variable group can be used in your YAML file.
If you use both variables and variable groups, you'll have to use name / value syntax for the individual (non-
grouped) variables:

variables:
- group: my-variable-group
- name: my-bare-variable
value: 'value of my-bare-variable'

To reference a variable group, you can use macro syntax or a runtime expression. In this example, the group
my-variable-group has a variable named myhello .
variables:
- group: my-variable-group
- name: my-passed-variable
value: $[variables.myhello] # uses runtime expression

steps:
- script: echo $(myhello) # uses macro syntax
- script: echo $(my-passed-variable)

You can reference multiple variable groups in the same pipeline. If multiple variable groups include the same
variable, the variable group included last in your YAML file will set the variable's value.

variables:
- group: my-first-variable-group
- group: my-second-variable-group

You can also reference a variable group in a template. In the template variables.yml , the group
my-variable-group is referenced. The variable group includes a variable named myhello .

# variables.yml
variables:
- group: my-variable-group

In this pipeline, the variable $(myhello) from the variable group my-variable-group included in
variables.yml is referenced.

# azure-pipeline.yml
stages:
- stage: MyStage
variables:
- template: variables.yml
jobs:
- job: Test
steps:
- script: echo $(myhello)

To work with variable groups, you must authorize the group. This is a security feature: if you only had to name
the variable group in YAML, then anyone who can push code to your repository could extract the contents of
secrets in the variable group. To do this, or if you encounter a resource authorization error in your build, use
one of the following techniques:
If you want to authorize any pipeline to use the variable group, which may be a suitable option if you do not have any secrets in the group, go to Azure Pipelines, open the Library page, choose Variable groups, select the variable group in question, and enable the setting Allow access to all pipelines.
If you want to authorize a variable group for a specific pipeline, open the pipeline by selecting Edit and queue a build manually. You will see a resource authorization error and an "Authorize resources" action on the error. Choose this action to explicitly add the pipeline as an authorized user of the variable group.

NOTE
If you added a variable group to a pipeline and did not get a resource authorization error in your build when you
expected one, turn off the Allow access to all pipelines setting described above.

YAML builds are not yet available on TFS.


You access the value of the variables in a linked variable group in exactly the same way as variables you define
within the pipeline itself. For example, to access the value of a variable named customer in a variable group
linked to the pipeline, use $(customer) in a task parameter or a script. However, secret variables (encrypted
variables and key vault variables) cannot be accessed directly in scripts - instead they must be passed as
arguments to a task. For more information, see Secrets.
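As a sketch, assuming the group contains a non-secret variable customer and a secret variable apiKey (both hypothetical names):

variables:
- group: my-variable-group

steps:
- script: echo "Deploying for $(customer)"   # non-secret variables can be read directly
- script: ./notify.sh "$MY_API_KEY"          # read the secret from the environment inside the script
  env:
    MY_API_KEY: $(apiKey)                    # secrets must be mapped explicitly, for example as environment variables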
Any changes made centrally to a variable group, such as a change in the value of a variable or the addition of
new variables, will automatically be made available to all the definitions or stages to which the variable group
is linked.

Manage a variable group


Using the Azure DevOps CLI, you can list the variable groups in your project and show details for each one. You can also delete variable groups if you no longer need them.
List variable groups | Show details for a variable group | Delete a variable group
List variable groups
You can list the variable groups in your project with the az pipelines variable-group list command. To get
started, see Get started with Azure DevOps CLI.

az pipelines variable-group list [--action {manage, none, use}]
                                 [--continuation-token]
                                 [--group-name]
                                 [--org]
                                 [--project]
                                 [--query-order {Asc, Desc}]
                                 [--top]

Optional parameters
action : Specifies the action that can be performed on the variable groups. Accepted values are manage,
none and use.
continuation-token : Lists the variable groups after a continuation token is provided.
group-name : Name of the variable group. Wildcards are accepted, such as new-var* .
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .
query-order : Lists the results in either ascending or descending (the default) order. Accepted values are
Asc and Desc.
top : Number of variable groups to list.
Example
The following command lists the top 3 variable groups in ascending order and returns the results in table
format.
az pipelines variable-group list --top 3 --query-order Asc --output table

ID    Name               Type    Number of Variables
----  -----------------  ------  ---------------------
1     myvariables        Vsts    2
2     newvariables       Vsts    4
3     new-app-variables  Vsts    3

Show details for a variable group


You can display the details of a variable group in your project with the az pipelines variable-group show
command. To get started, see Get started with Azure DevOps CLI.

az pipelines variable-group show --group-id
                                 [--org]
                                 [--project]

Parameters
group-id : Required. ID of the variable group. To find the variable group ID, see List variable groups.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .

Example
The following command shows details for the variable group with the ID 4 and returns the results in YAML
format.

az pipelines variable-group show --group-id 4 --output yaml

authorized: false
description: Variables for my new app
id: 4
name: MyNewAppVariables
providerData: null
type: Vsts
variables:
app-location:
isSecret: null
value: Head_Office
app-name:
isSecret: null
value: Fabrikam

Delete a variable group


You can delete a variable group in your project with the az pipelines variable-group delete command. To get
started, see Get started with Azure DevOps CLI.

az pipelines variable-group delete --group-id
                                   [--org]
                                   [--project]
                                   [--yes]

Parameters
group-id : Required. ID of the variable group. To find the variable group ID, see List variable groups.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .
yes : Optional. Doesn't prompt for confirmation.
Example
The following command deletes the variable group with the ID 1 and doesn't prompt for confirmation.

az pipelines variable-group delete --group-id 1 --yes

Deleted variable group successfully.

Manage variables in a variable group


Using the Azure DevOps CLI, you can add and delete variables from a variable group in a pipeline run. You can
also list the variables in the variable group and make updates to them as needed.
Add variables to a variable group | List variables in a variable group | Update variables in a variable group |
Delete variables from a variable group
Add variables to a variable group
You can add a variable to a variable group with the az pipelines variable-group variable create command. To
get started, see Get started with Azure DevOps CLI.

az pipelines variable-group variable create --group-id
                                            --name
                                            [--org]
                                            [--project]
                                            [--secret {false, true}]
                                            [--value]

Parameters
group-id : Required. ID of the variable group. To find the variable group ID, see List variable groups.
name : Required. Name of the variable you are adding.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .
secret : Optional. Indicates whether the variable's value is a secret. Accepted values are false and true.
value : Required for non-secret variables. Value of the variable. For secret variables, if the value parameter is not provided, it is picked up from an environment variable prefixed with AZURE_DEVOPS_EXT_PIPELINE_VAR_ , or the user is prompted to enter it via standard input. For example, a variable named MySecret can be input using the environment variable AZURE_DEVOPS_EXT_PIPELINE_VAR_MySecret.
Example
The following command creates a variable in the variable group with ID of 4 . The new variable is named
requires-login and has a value of True , and the result is shown in table format.
az pipelines variable-group variable create --group-id 4 --name requires-login --value True --output table

Name            Is Secret    Value
--------------  -----------  -------
requires-login  False        True

List variables in a variable group


You can list the variables in a variable group with the az pipelines variable-group variable list command. To get
started, see Get started with Azure DevOps CLI.

az pipelines variable-group variable list --group-id
                                          [--org]
                                          [--project]

Parameters
group-id : Required. ID of the variable group. To find the variable group ID, see List variable groups.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .

Example
The following command lists all of the variables in the variable group with ID of 4 and shows the result in
table format.

az pipelines variable-group variable list --group-id 4 --output table

Name            Is Secret    Value
--------------  -----------  -----------
app-location    False        Head_Office
app-name        False        Fabrikam
requires-login  False        True

Update variables in a variable group


You can update a variable in a variable group with the az pipelines variable-group variable update command.
To get started, see Get started with Azure DevOps CLI.

az pipelines variable-group variable update --group-id
                                            --name
                                            [--new-name]
                                            [--org]
                                            [--project]
                                            [--prompt-value {false, true}]
                                            [--secret {false, true}]
                                            [--value]

Parameters
group-id : Required. ID of the variable group. To find the variable group ID, see List variable groups.
name : Required. Name of the variable you are updating.
new-name : Optional. Specify to change the name of the variable.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .
prompt-value : Set to true to update the value of a secret variable using environment variable or prompt
via standard input. Accepted values are false and true.
secret : Optional. Indicates whether the variable's value is kept secret. Accepted values are false and true.
value : Updates the value of the variable. For secret variables, use the prompt-value parameter to be
prompted to enter it via standard input. For non-interactive consoles, it can be picked from environment
variable prefixed with AZURE_DEVOPS_EXT_PIPELINE_VAR_ . For example, a variable named MySecret can be
input using the environment variable AZURE_DEVOPS_EXT_PIPELINE_VAR_MySecret .
Example
The following command updates the requires-login variable with the new value False in the variable group
with ID of 4 . It specifies that the variable is a secret and shows the result in YAML format. Notice that the
output shows the value as null instead of False since it is a secret value (hidden).

az pipelines variable-group variable update --group-id 4 --name requires-login --value False --secret true
--output yaml

requires-login:
isSecret: true
value: null

Delete variables from a variable group


You can delete a variable from a variable group with the az pipelines variable-group variable delete command.
To get started, see Get started with Azure DevOps CLI.

az pipelines variable-group variable delete --group-id
                                            --name
                                            [--org]
                                            [--project]
                                            [--yes]

Parameters
group-id : Required. ID of the variable group. To find the variable group ID, see List variable groups.
name : Required. Name of the variable you are deleting.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .
yes : Optional. Doesn't prompt for confirmation.
Example
The following command deletes the requires-login variable from the variable group with ID of 4 and
prompts for confirmation.

az pipelines variable-group variable delete --group-id 4 --name requires-login

Are you sure you want to delete this variable? (y/n): y


Deleted variable 'requires-login' successfully.
Link secrets from an Azure key vault
Link an existing Azure key vault to a variable group and map selective vault secrets to the variable group.
1. In the Variable groups page, enable Link secrets from an Azure key vault as variables . You'll
need an existing key vault containing your secrets. You can create a key vault using the Azure portal.

2. Specify your Azure subscription endpoint and the name of the vault containing your secrets.
Ensure the Azure service connection has at least Get and List management permissions on the vault
for secrets. You can enable Azure Pipelines to set these permissions by choosing Authorize next to the
vault name. Alternatively, you can set the permissions manually in the Azure portal:
Open the Settings blade for the vault, choose Access policies , then Add new .
In the Add access policy blade, choose Select principal and select the service principal for your
client account.
In the Add access policy blade, choose Secret permissions and ensure that Get and List are
checked (ticked).
Choose OK to save the changes.
3. In the Variable groups page, choose + Add to select specific secrets from your vault that will be
mapped to this variable group.
Secrets management notes
Only the secret names are mapped to the variable group, not the secret values. The latest version of the
value of each secret is fetched from the vault and used in the pipeline linked to the variable group
during the run.
Any changes made to existing secrets in the key vault, such as a change in the value of a secret, will be
made available automatically to all the pipelines in which the variable group is used.
When new secrets are added to the vault, or a secret is deleted from the vault, the associated variable
groups are not updated automatically. The secrets included in the variable group must be explicitly
updated in order for the pipelines using the variable group to execute correctly.
Azure Key Vault supports storing and managing cryptographic keys and secrets in Azure. Currently,
Azure Pipelines variable group integration supports mapping only secrets from the Azure key vault.
Cryptographic keys and certificates are not supported.
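Once the variable group is linked, the mapped secrets are consumed in a pipeline like any other variable group. A minimal sketch, assuming a group named my-keyvault-group with a mapped vault secret DatabasePassword (both hypothetical names):

variables:
- group: my-keyvault-group                    # variable group linked to the key vault

steps:
- script: ./migrate-db.sh
  env:
    DATABASE_PASSWORD: $(DatabasePassword)    # secret fetched from the vault at run time and passed explicitly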

Expansion of variables in a group


YAML
Classic
Azure DevOps CLI
When you set a variable in a group and use it in a YAML file, it has the same precedence as any other variable
defined within the YAML file. For more information about precedence of variables, see the topic on variables.
YAML is not supported in TFS.
Secure files
11/2/2020 • 2 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

Use the Secure Files library to store files such as signing certificates, Apple Provisioning Profiles, Android
Keystore files, and SSH keys on the server without having to commit them to your source repository. Secure files
are defined and managed in the Library tab in Azure Pipelines.
The contents of the secure files are encrypted and can only be used during the build or release pipeline by
referencing them from a task. The secure files are available across multiple build and release pipelines in the
project based on the security settings. Secure files follow the library security model.
There's a size limit of 10 MB for each secure file.

FAQ
How can I consume secure files in a Build or Release Pipeline?
Use the Download Secure File Utility task to consume secure files within a Build or Release Pipeline.
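For example, a hedged sketch that downloads a secure file named deploy_rsa (a placeholder) and uses its path in a later script:

steps:
- task: DownloadSecureFile@1
  name: sshKey                    # reference name for the output variable
  inputs:
    secureFile: 'deploy_rsa'      # hypothetical secure file name from the Library
- script: |
    chmod 400 $(sshKey.secureFilePath)
    ssh -i $(sshKey.secureFilePath) user@example.com 'echo connected'
  displayName: Use the downloaded secure file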
How can I create a custom task using secure files?
You can build your own tasks that use secure files by using inputs with type secureFile in the task.json . Learn
how to build a custom task.
The Install Apple Provisioning Profile task is a simple example of a task using a secure file. See the reference
documentation and source code.
To handle secure files during build or release, you can refer to the common module available here.
My task can't access the secure files. What do I do?
Make sure your agent is running version 2.116.0 or higher. See Agent version and upgrades.
Why do I see an Invalid Resource error when downloading a secure file with Azure DevOps Server/TFS on-
premises?
Make sure IIS Basic Authentication is disabled on the TFS or Azure DevOps Server.
How do I authorize a secure file for use in all pipelines?
1. In Azure Pipelines, select the Library tab.
2. Select the Secure files tab at the top.
3. Select the secure file you want to authorize.
4. In the details view under Properties, select Authorize for use in all pipelines, and then select Save.
Service connections
11/2/2020 • 27 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called
definitions, runs are called builds, service connections are called service endpoints, stages are called environments,
and jobs are called phases.

You will typically need to connect to external and remote services to execute tasks in a job. For example,
you may need to connect to your Microsoft Azure subscription, to a different build server or file server, to
an online continuous integration environment, or to services you install on remote computers.
You can define service connections in Azure Pipelines or Team Foundation Server (TFS) that are available
for use in all your tasks. For example, you can create a service connection for your Azure subscription and
use this service connection name in an Azure Web Site Deployment task in a release pipeline.
You define and manage service connections from the Admin settings of your project:
Azure DevOps: https://dev.azure.com/{organization}/{project}/adminservices
TFS: https://{tfsserver}/{collection}/{project}/_admin/_services

Create a service connection


1. In Azure DevOps, open the Service connections page from the project settings page. In TFS, open the Services page from the "settings" icon in the top menu bar.
2. Choose + New ser vice connection and select the type of service connection you need.
3. Fill in the parameters for the service connection. The list of parameters differs for each type of
service connection - see the following list.
4. Decide if you want the service connection to be accessible for any pipeline by setting the Allow all
pipelines to use this connection option. This option allows pipelines defined in YAML, which are
not automatically authorized for service connections, to use this service connection. See Use a
service connection.
5. Choose OK to create the connection. For example, this is the default Azure Resource Manager
connection dialog:
NOTE
The connection dialog may appear different for the different types of service connections, and have
different parameters. See the list of parameters in Common service connection types for each service
connection type.

Manage a service connection


1. In Azure DevOps, open the Service connections page from the project settings page. Or, in TFS, open the Services page from the "settings" icon in the top menu bar.
2. Select the service connection you want to manage.
3. You will land on the Overview tab of the service connection, where you can see details such as its type, creator, and authentication type (for example, Token, Username/Password, or OAuth).
4. Next to the Overview tab, you can see Usage history, which shows the list of pipelines using the service connection.

5. To update the service connection, click on Edit at the top-right corner of the page.
6. Approvals and checks , Security and Delete are part of the more options at the top-right corner.

Secure a service connection


To manage the security for a connection:
1. In Azure DevOps, open the Service connections page from the project settings page. In TFS, open the Services page from the "settings" icon in the top menu bar.
2. To manage user permissions at hub level, go to the more options at the top-right corner and
choose Security .

3. To manage security for a service connection, open the service connection and go to more options at
top-right corner and choose Security .

A service connection is a critical resource for various workflows in Azure DevOps, such as Classic build and release pipelines, YAML pipelines, and Key Vault variable groups. Based on usage patterns, service connection security is divided into three categories in the new service connections UI.
User permissions
Pipeline permissions
Project permissions
User permissions
You can control who can create, view, use, and manage a service connection with user permissions. There are four roles - Creator, Reader, User, and Administrator - to manage each of these actions. In the service connections tab, you can set hub-level permissions, which are inherited, and you can override the roles for each service connection.

ROLE ON A SERVICE CONNECTION    PURPOSE

Creator          Members of this role can create the service connection in the project. Contributors are added as members by default.

Reader           Members of this role can view the service connection.

User             Members of this role can use the service connection when authoring build or release pipelines or authorize YAML pipelines.

Administrator    In addition to using the service connection, members of this role can manage membership of all other roles for the service connection in the project. Project administrators are added as members by default.

Previously, two special groups, Endpoint Creators and Endpoint Administrators, were used to control who can create and manage service connections. Now, as part of the new service connections UI, we are moving to a pure RBAC model, that is, using roles. For backward compatibility, in existing projects the Endpoint Administrators group is added to the Administrator role and the Endpoint Creators group is assigned the Creator role, which ensures there is no change in behavior for existing service connections.

NOTE
This change is applicable only in Azure DevOps Services where new UI is available. Azure DevOps Server 2019 and
older versions still follow the previous security model.

Along with the new service connections UI, we are introducing sharing of service connections across projects. With this feature, service connections become an organization-level object, although they are scoped to the current project by default. In the User permissions section, you can see Project-level and Organization-level permissions, and the functionality of the Administrator role is split between the two levels.
Project level permissions
The project-level permissions are the user permissions with the Reader, User, Creator, and Administrator roles, as explained above, within the project scope. Inheritance applies, and you can set the roles at the hub level as well as for each service connection.
The project-level administrator has limited administrative capabilities as below:
A project-level administrator can manage other users and roles at project scope.
A project-level administrator can rename a service connection, update description and enable/disable
"Allow pipeline access" flag.
A project-level administrator can delete a service connection which removes the existence of service
connection from the project.
The user that created the service connection is automatically added to the project-level Administrator role for that service connection. Users and groups assigned the Administrator role at the hub level are inherited if inheritance is turned on.
Organization level permissions
Organization-level permissions are introduced along with the cross-project sharing feature. Any permissions set at this level are reflected across all the projects where the service connection is shared. There is no inheritance for organization-level permissions. Today, only the Administrator role exists at the organization level.
The organization-level administrator has all the administrative capabilities that include:
An organization-level administrator can manage organization level users.
An organization-level administrator can edit all the fields of a service connection.
An organization-level administrator can share/un-share a service connection with other projects.

The user that created the service connection is automatically added to the organization-level Administrator role for that service connection. In all existing service connections, for backward compatibility, all the connection administrators are made organization-level administrators to ensure there is no change in behavior.
Pipeline permissions
Pipeline permissions control which YAML pipelines are authorized to use this service connection. This is interlinked with the 'Allow pipeline access' checkbox you find in the service connection creation dialog.
You can either choose to open access for all pipelines to consume this service connection, from the more options at the top-right corner of the Pipeline permissions section in the Security tab of a service connection, or you can choose to lock down the service connection and only allow selected YAML pipelines to consume it. If any other YAML pipeline refers to this service connection, an authorization request is raised that has to be approved by the connection administrators.
Project permissions - Cross project sharing of service connections
Only the organization-level administrators from User permissions can share the service connection
with other projects.
The user who is sharing the service connection with a project should have at least create service
connection permission in the target project.
The user who shares the service connection with a project becomes the project-level administrator for
that service connection and the project-level inheritance is turned on in the target project.
The service connection name is appended with the project name and it can be renamed in the target
project scope.
Organization level administrator can unshare a service connection from any shared project.

NOTE
The sharing feature is still under preview and is not yet rolled out. If you want this feature enabled, you can reach
out to us. Project permissions feature is dependent on the new service connections UI and once we enable this
feature, the old service connections UI is no longer usable.

Use a service connection


After the new service connection is created:
YAML
Classic
Copy the connection name into your code as the azureSubscription (or the equivalent connection name)
value.
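For example, a minimal sketch that passes the service connection name to an Azure CLI task; the connection name MyAzureServiceConnection is a placeholder:

steps:
- task: AzureCLI@2
  inputs:
    azureSubscription: 'MyAzureServiceConnection'   # the service connection name
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: az group list --output table      # runs authenticated with the connection's credentials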
Next you must authorize the service connection. To do this, or if you encounter a resource authorization
error in your build, use one of the following techniques:
If you want to authorize any pipeline to use the service connection, go to Azure Pipelines, open the
Settings page, select Service connections, and enable the setting Allow all pipelines to use this
connection option for the connection.
If you want to authorize a service connection for a specific pipeline, open the pipeline by selecting
Edit and queue a build manually. You will see a resource authorization error and an "Authorize
resources" action on the error. Choose this action to explicitly add the pipeline as an authorized user
of the service connection.

You can also create your own custom service connections.

NOTE
A service connection cannot be specified by a variable.

Common service connection types


Azure Pipelines and TFS support a variety of service connection types by default. Some of these are
described below:
Azure Classic
Azure Resource Manager
Azure Service Bus
Bitbucket Cloud
Chef
Docker Host
Docker Registry
External Git
Generic
GitHub
GitHub Enterprise Server
Jenkins
Kubernetes
Maven
npm
NuGet
Python package download
Python package upload
Service Fabric
SSH
Subversion
Team Foundation Server/Azure Pipelines
Visual Studio App Center
After you enter the parameters when creating a service connection, validate the connection. The validation
link uses a REST call to the external service with the information you entered, and indicates if the call
succeeded.
Azure Classic service connection
Defines and secures a connection to a Microsoft Azure subscription using Azure credentials or an Azure
management certificate. How do I create a new service connection?

PARAMETER    DESCRIPTION

[authentication type]    Required. Select Credentials or Certificate based.

Connection Name    Required. The name you will use to refer to this service connection in task properties. This is not the name of your Azure account or subscription. If you are using YAML, use this name as the azureSubscription or the equivalent subscription name value in the script.

Environment    Required. Select Azure Cloud, Azure Stack, or one of the pre-defined Azure Government Clouds where your subscription is defined.

Subscription ID    Required. The GUID-like identifier for your Azure subscription (not the subscription name). You can copy this from the Azure portal.

Subscription Name    Required. The name of your Microsoft Azure subscription (account).

User name    Required for Credentials authentication. User name of a work or school account (for example @fabrikam.com). Microsoft accounts (for example @live or @hotmail) are not supported.

Password    Required for Credentials authentication. Password for the user specified above.

Management Certificate    Required for Certificate-based authentication. Copy the value of the management certificate key from your publish settings XML file or the Azure portal.

If your subscription is defined in an Azure Government Cloud, ensure your application meets the
relevant compliance requirements before you configure a service connection.
Azure Resource Manager service connection
Defines and secures a connection to a Microsoft Azure subscription using Service Principal Authentication
(SPA) or an Azure-Managed Service Identity. The dialog offers two main modes:
Automated subscription detection . In this mode, Azure Pipelines and TFS will attempt to query
Azure for all of the subscriptions and instances to which you have access using the credentials you
are currently logged on with in Azure Pipelines or TFS (including Microsoft accounts and School or
Work accounts). If no subscriptions are shown, or subscriptions other than the one you want to use,
you must sign out of Azure Pipelines or TFS and sign in again using the appropriate account
credentials.
Manual subscription pipeline . In this mode, you must specify the service principal you want to
use to connect to Azure. The service principal specifies the resources and the access levels that will
be available over the connection. Use this approach when you need to connect to an Azure account
using different credentials from those you are currently logged on with in Azure Pipelines or TFS.
This is also a useful way to maximize security and limit access.
For more information, see Connect to Microsoft Azure

NOTE
If you don't see any Azure subscriptions or instances, or you have problems validating the connection, see
Troubleshoot Azure Resource Manager service connections.

Azure Service Bus service connection


Defines and secures a connection to a Microsoft Azure Service Bus queue.

PARAMETER    DESCRIPTION

Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.

Service Bus ConnectionString The URL of your Azure Service Bus instance. More
information.

Service Bus Queue Name The name of an existing Azure Service Bus queue.

Bitbucket Cloud service connection


Defines a connection to Bitbucket Cloud.

PARAMETER    DESCRIPTION

Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.

User name Required. The username to connect to the service.



Password Required. The password for the specified username.

Chef service connection


Defines and secures a connection to a Chef automation server.

PARAMETER    DESCRIPTION

Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.

Server URL Required. The URL of the Chef automation server.

Node Name (Username) Required. The name of the node to connect to. Typically
this is your username.

Client Key Required. The key specified in the Chef .pem file.

Docker Host service connection


Defines and secures a connection to a Docker host.

PARAMETER    DESCRIPTION

Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.

Server URL Required. The URL of the Docker host.

CA Certificate Required. A trusted certificate authority certificate to use


to authenticate with the host.

Certificate Required. A client certificate to use to authenticate with


the host.

Key Required. The key specified in the Docker key.pem file.

Ensure you protect your connection to the Docker host. Learn more.

Docker Registry service connection


Defines a connection to a container registry.
Azure Container Registry

PARAMETER    DESCRIPTION

Connection Name    Required. The name you will use to refer to this service connection in task inputs.

Azure subscription    Required. The Azure subscription containing the container registry to be used for service connection creation.

Azure Container Registry    Required. The Azure Container Registry to be used for creation of the service connection.

Docker Hub or Others

PARAMETER    DESCRIPTION

Connection Name Required. The name you will use to refer to this service
connection in task inputs.

Docker Registry Required. The URL of the Docker registry.

Docker ID Required. The identifier of the Docker account user.

Password Required. The password for the account user identified


above.

Email Optional. An email address to receive notifications.

External Git service connection


Defines and secures a connection to a Git repository server. Note that there is a specific service connection
for GitHub and GitHub Enterprise Server connections.

PARAMETER    DESCRIPTION

Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.

Server URL Required. The URL of the Git repository server.

User name Required. The username to connect to the Git repository


server.

Password/Token Key Required. The password or access token for the specified
username.

Also see Artifact sources.

Generic service connection


Defines and secures a connection to any other type of service or application.
PARAMETER    DESCRIPTION

Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.

Server URL Required. The URL of the service.

User name Required. The username to connect to the service.

Password/Token Key Required. The password or access token for the specified
username.

GitHub service connection


Defines a connection to a GitHub repository. Note that there is a specific service connection for External
Git servers and GitHub Enterprise Server connections.

PARAMETER    DESCRIPTION

Choose authorization Required. Either Grant authorization or Personal


access token . See notes below.

Token Required for Personal access token authorization. See


notes below.

Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.

NOTE
If you select Grant authorization for the Choose authorization option, the dialog shows an Authorize
button that opens the GitHub login page. If you select Personal access token you must obtain a suitable token
and paste it into the Token textbox. The dialog shows the recommended scopes for the token: repo, user,
admin:repo_hook . See this page on GitHub for information about obtaining an access token. Then register your
GitHub account in your profile:

Open your profile from your organization name at the right of the Azure Pipelines page heading.
At the top of the left column, under DETAILS , choose Security .
In the Security tab, in the right column, choose Personal access tokens .
Choose the Add link and enter the information required to create the token.
Also see Artifact sources.

GitHub Enterprise Server service connection


Defines a connection to a GitHub repository. Note that there is a specific service connection for External
Git servers and standard GitHub service connections.
PARAMETER    DESCRIPTION

Choose authorization Required. Either Personal access token , Username


and Password , or OAuth2 . See notes below.

Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.

Server URL Required. The URL of the service.

Accept untrusted SSL certificates Set this option to allow clients to accept a self-signed
certificate instead of installing the certificate in the TFS
service role or the computers hosting the agent.

Token Required for Personal access token authorization. See


notes below.

User name Required for Username and Password authentication. The


username to connect to the service.

Password Required for Username and Password authentication. The


password for the specified username.

OAuth configuration Required for OAuth2 authorization. The OAuth


configuration specified in your account.

GitHub Enterprise Server configuration URL The URL is fetched from OAuth configuration.

NOTE
If you select Personal access token you must obtain a suitable token and paste it into the Token textbox. The
dialog shows the recommended scopes for the token: repo, user, admin:repo_hook . See this page on GitHub
for information about obtaining an access token. Then register your GitHub account in your profile:

Open your profile from your account name at the right of the Azure Pipelines page heading.
At the top of the left column, under DETAILS , choose Security .
In the Security tab, in the right column, choose Personal access tokens .
Choose the Add link and enter the information required to create the token.

Jenkins service connection


Defines a connection to the Jenkins service.

PARAMETER    DESCRIPTION

Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.

Server URL Required. The URL of the service.

Accept untrusted SSL certificates Set this option to allow clients to accept a self-signed
certificate instead of installing the certificate in the TFS
service role or the computers hosting the agent.

User name Required. The username to connect to the service.

Password Required. The password for the specified username.

Also see Azure Pipelines Integration with Jenkins and Artifact sources.

Kubernetes service connection


Defines a connection to a Kubernetes cluster.
Azure subscription option

PARAMETER    DESCRIPTION

Connection Name    Required. The name you will use to refer to this service connection in task inputs.

Azure subscription    Required. The Azure subscription containing the cluster to be used for service connection creation.

Cluster    Name of the Azure Kubernetes Service cluster.

Namespace    Namespace within the cluster.

For an RBAC enabled cluster, a ServiceAccount is created in the chosen namespace along with RoleBinding
object so that the created ServiceAccount is able to perform actions only on the chosen namespace.
For an RBAC disabled cluster, a ServiceAccount is created in the chosen namespace. But the created
ServiceAccount has cluster-wide privileges (across namespaces).

NOTE
This option lists all the subscriptions the service connection creator has access to across different Azure tenants. If
you are unable to see subscriptions from other Azure tenants, please check your AAD permissions in those tenants.

Service account option

PARAMETER DESCRIPTION

Connection Name Required. The name you will use to refer to this service
connection in task inputs.

Server URL Required. Cluster's API server URL.

Secret Secret associated with the service account to be used for
deployment.
Use the following command to fetch the Server URL:

kubectl config view --minify -o 'jsonpath={.clusters[0].cluster.server}'

To fetch the Secret object required to connect and authenticate with the cluster, run the following sequence of
commands:

kubectl get serviceAccounts <service-account-name> -n <namespace> -o 'jsonpath={.secrets[*].name}'

The above command fetches the name of the secret associated with the ServiceAccount. Substitute its output
into the following command to fetch the Secret object:

kubectl get secret <service-account-secret-name> -n <namespace> -o yaml

Copy and paste the Secret object fetched in YAML form into the Secret text-field.

NOTE
When using the service account option, ensure that a RoleBinding exists, which grants permissions in the edit
ClusterRole to the desired service account. This is needed so that the service account can be used by Azure
Pipelines for creating objects in the chosen namespace.

Kubeconfig option

PARAMETER DESCRIPTION

Connection Name Required. The name you will use to refer to this service
connection in task inputs.

Kubeconfig Required. Contents of the kubeconfig file.

Context Context within the kubeconfig file that is to be used for
identifying the cluster.

Maven service connection


Defines and secures a connection to a Maven repository.

PARAMETER DESCRIPTION

Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.

Registry URL Required. The URL of the Maven repository.

Registry Id Required. This is the ID of the server that matches the id
element of the repository/mirror that Maven tries to
connect to.

Username Required when connection type is Username and
Password . The username for authentication.

Password Required when connection type is Username and
Password . The password for the username.

Personal Access Token Required when connection type is Authentication
Token . The token to use to authenticate with the service.
Learn more.
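For illustration, the Maven service connection is typically consumed through the Maven Authenticate task before a Maven build or deploy runs. The following is a minimal sketch; the connection name is a placeholder, and the Registry Id above must match the repository id referenced by your pom.xml:

steps:
- task: MavenAuthenticate@0
  inputs:
    mavenServiceConnections: 'MyMavenConnection'   # the Connection Name defined above (placeholder)
- task: Maven@3
  inputs:
    mavenPomFile: 'pom.xml'
    goals: 'deploy'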

npm service connection


Defines and secures a connection to an npm server.

PARAMETER DESCRIPTION

Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.

Registry URL Required. The URL of the npm server.

Username Required when connection type is Username and
Password . The username for authentication.

Password Required when connection type is Username and
Password . The password for the username.

Personal Access Token Required when connection type is External Azure
Pipelines . The token to use to authenticate with the
service. Learn more.
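As an illustration, an npm service connection is typically used with the npm Authenticate task to inject credentials into an .npmrc file before running npm commands. This is a minimal sketch; the connection name is a placeholder, and the .npmrc is assumed to point at the registry URL above:

steps:
- task: npmAuthenticate@0
  inputs:
    workingFile: '.npmrc'                 # .npmrc that references the registry URL above (assumption)
    customEndpoint: 'MyNpmConnection'     # the Connection Name defined above (placeholder)
- script: npm publish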

NuGet service connection


Defines and secures a connection to a NuGet server.

PARAMETER DESCRIPTION

Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.

Feed URL Required. The URL of the NuGet server.

ApiKey Required when connection type is ApiKey . The
authentication key.

Personal Access Token Required when connection type is External Azure
Pipelines . The token to use to authenticate with the
service. Learn more.

Username Required when connection type is Basic
authentication . The username for authentication.

Password Required when connection type is Basic
authentication . The password for the username.
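For illustration, a NuGet service connection is commonly used to push packages to an external feed with the NuGet task. The following is a minimal sketch; the connection name and package path are placeholders:

steps:
- task: NuGetCommand@2
  inputs:
    command: push
    nuGetFeedType: external
    publishFeedCredentials: 'MyNuGetConnection'   # the Connection Name defined above (placeholder)
    packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg'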

Python package download service connection


Defines and secures a connection to a Python repository for downloading Python packages.

PARAMETER DESCRIPTION

Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.

Python repository url for download Required. The URL of the Python repository.

Personal Access Token Required when connection type is Authentication
Token . The token to use to authenticate with the service.
Learn more.

Username Required when connection type is Username and
Password . The username for authentication.

Password Required when connection type is Username and
Password . The password for the username.
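As an illustration, the download connection is typically consumed through the pip Authenticate task so that a subsequent pip install can pull packages from the configured repository. This is a minimal sketch; the connection name is a placeholder:

steps:
- task: PipAuthenticate@1
  inputs:
    pythonDownloadServiceConnections: 'MyPythonDownloadConnection'   # the Connection Name defined above (placeholder)
- script: pip install -r requirements.txt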

Python package upload service connection


Defines and secures a connection to a Python repository for uploading Python packages.

PARAMETER DESCRIPTION

Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.

Python repository url for upload Required. The URL of the Python repository.

EndpointName Required. Unique repository name used for twine upload.
Spaces and special characters are not allowed.

Personal Access Token Required when connection type is Authentication
Token . The token to use to authenticate with the service.
Learn more.

Username Required when connection type is Username and
Password . The username for authentication.

Password Required when connection type is Username and
Password . The password for the username.
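For illustration, the upload connection is typically consumed through the Twine Authenticate task, after which twine uploads using the EndpointName defined above. This is a minimal sketch; the connection name and feed name are placeholders:

steps:
- task: TwineAuthenticate@1
  inputs:
    pythonUploadServiceConnection: 'MyPythonUploadConnection'   # the Connection Name defined above (placeholder)
- script: |
    pip install twine
    twine upload -r MyFeedName --config-file $(PYPIRC_PATH) dist/*
    # MyFeedName is the EndpointName defined above (placeholder)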

Service Fabric service connection


Defines and secures a connection to a Service Fabric cluster.

PARAMETER DESCRIPTION

Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.

Cluster Endpoint Required. The TCP endpoint of the cluster.

Server Certificate Thumbprint Required when connection type is Certificate based or
Azure Active Directory .

Client Certificate Required when connection type is Certificate based .

Password Required when connection type is Certificate based .
The certificate password.

Username Required when connection type is Azure Active
Directory . The username for authentication.

Password Required when connection type is Azure Active
Directory . The password for the username.

Use Windows security Required when connection type is Others .

Cluster SPN Required when connection type is Others and using
Windows security.
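As an illustration, a Service Fabric service connection is typically referenced from the Service Fabric application deployment task. The following is a minimal sketch, not a complete deployment; the connection name and package path are placeholders:

steps:
- task: ServiceFabricDeploy@1
  inputs:
    serviceConnectionName: 'MyServiceFabricConnection'   # the Connection Name defined above (placeholder)
    applicationPackagePath: '$(Build.ArtifactStagingDirectory)/drop/applicationpackage'   # placeholder path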

SSH service connection


Defines and secures a connection to a remote host using Secure Shell (SSH).

PARAMETER DESCRIPTION

Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.

Host name Required. The name of the remote host machine or the IP
address.

Port number Required. The port number of the remote host machine
to which you want to connect. The default is port 22.

User name Required. The username to use when connecting to the
remote host machine.

Password or passphrase The password or passphrase for the specified username if
using a keypair as credentials.

Private key The entire contents of the private key file if using this
type of authentication.

Also see SSH task and Copy Files Over SSH.
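For illustration, an SSH service connection is typically referenced from those tasks through their sshEndpoint input. The following is a minimal sketch; the connection name and target folder are placeholders:

steps:
- task: CopyFilesOverSSH@0
  inputs:
    sshEndpoint: 'MySshConnection'             # the Connection Name defined above (placeholder)
    sourceFolder: '$(Build.ArtifactStagingDirectory)'
    contents: '**'
    targetFolder: '/home/deploy/app'           # placeholder path on the remote host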

Subversion service connection


Defines and secures a connection to the Subversion repository.

PARAMETER DESCRIPTION

Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.

Server repository URL Required. The URL of the repository.

Accept untrusted SSL certificates Set this option to allow the client to accept self-signed
certificates installed on the agent computer(s).

Realm name Optional. If you use multiple credentials in a build or
release pipeline, use this parameter to specify the realm
containing the credentials specified for this service
connection.

User name Required. The username to connect to the service.

Password Required. The password for the specified username.

Team Foundation Server / Azure Pipelines service connection


Defines and secures a connection to another TFS or Azure DevOps organization.

PARAMETER DESCRIPTION

(authentication) Select Basic or Token Based authentication.

Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.

Connection URL Required. The URL of the TFS or Azure Pipelines instance.

User name Required for Basic authentication. The username to
connect to the service.

Password Required for Basic authentication. The password for the
specified username.

Personal Access Token Required for Token Based authentication (TFS 2017 and
newer and Azure Pipelines only). The token to use to
authenticate with the service. Learn more.

Use the Verify connection link to validate your connection information.


See also Authenticate access with personal access tokens for Azure DevOps and TFS.

Visual Studio App Center service connection


Defines and secures a connection to Visual Studio App Center.

PARAMETER DESCRIPTION

Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.

API Token Required. The token to use to authenticate with the
service. Learn more.
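As an illustration, an App Center service connection is typically referenced from the App Center distribute task. The following is a minimal sketch; the connection name, app slug, and app file path are placeholders:

steps:
- task: AppCenterDistribute@3
  inputs:
    serverEndpoint: 'MyAppCenterConnection'    # the Connection Name defined above (placeholder)
    appSlug: 'my-org/my-app'                   # App Center owner/app (placeholder)
    appFile: '$(Build.ArtifactStagingDirectory)/MyApp.apk'   # placeholder path to the app package
    releaseNotesOption: input
    releaseNotesInput: 'New build from Azure Pipelines'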

Extensions for other service connections


Other service connection types and tasks can be installed in Azure Pipelines and Team Foundation Server
as extensions. Some examples of service connections currently available through extensions are:
TFS artifacts for Azure Pipelines. Deploy on-premises TFS builds with Azure Pipelines through a TFS
service connection and the Team Build (external) artifact, even when the TFS machine is not
reachable directly from Azure Pipelines. For more information, see External TFS and this blog post.
TeamCity artifacts for Azure Pipelines. This extension provides integration with TeamCity through a
TeamCity service connection, enabling artifacts produced in TeamCity to be deployed by using
Azure Pipelines. See TeamCity for more details.
SCVMM Integration. Connect to a System Center Virtual Machine Manager (SCVMM) server to
easily provision virtual machines and perform actions on them such as managing checkpoints,
starting and stopping VMs, and running PowerShell scripts.
VMware Resource Deployment. Connect to a VMware vCenter Server from Visual Studio Team
Services or Team Foundation Server to provision, start, stop, or snapshot VMware virtual machines.

You can also create your own custom service connections.

Help and support


See our troubleshooting page.
Get advice on Stack Overflow, and feel free to post your questions, search for answers, or suggest a
feature on our Azure DevOps Developer Community. See the Support page.
Azure Pipelines agents
11/2/2020 • 23 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are
called definitions, runs are called builds, service connections are called service endpoints, stages are called
environments, and jobs are called phases.

To build your code or deploy your software using Azure Pipelines, you need at least one agent. As
you add more code and people, you'll eventually need more.
When your pipeline runs, the system begins one or more jobs. An agent is computing
infrastructure with installed agent software that runs one job at a time.
Jobs can be run directly on the host machine of the agent or in a container.

Microsoft-hosted agents
If your pipelines are in Azure Pipelines, then you've got a convenient option to run your jobs
using a Microsoft-hosted agent . With Microsoft-hosted agents, maintenance and upgrades are
taken care of for you. Each time you run a pipeline, you get a fresh virtual machine. The virtual
machine is discarded after one use. Microsoft-hosted agents can run jobs directly on the VM or in
a container.
Azure Pipelines provides a pre-defined agent pool named Azure Pipelines with Microsoft-
hosted agents.
For many teams this is the simplest way to run your jobs. You can try it first and see if it works for
your build or deployment. If not, you can use a self-hosted agent.

TIP
You can try a Microsoft-hosted agent for no charge.

Learn more about Microsoft-hosted agents.

Self-hosted agents
An agent that you set up and manage on your own to run jobs is a self-hosted agent . You can
use self-hosted agents in Azure Pipelines or Team Foundation Server (TFS). Self-hosted agents
give you more control to install dependent software needed for your builds and deployments.
Also, machine-level caches and configuration persist from run to run, which can boost speed.
TIP
Before you install a self-hosted agent you might want to see if a Microsoft-hosted agent pool will work
for you. In many cases this is the simplest way to get going. Give it a try.

You can install the agent on Linux, macOS, or Windows machines. You can also install an agent on
a Docker container. For more information about installing a self-hosted agent, see:
macOS agent
Linux agent (x64, ARM, RHEL6)
Windows agent (x64, x86)
Docker agent
You can install the agent on Linux, macOS, or Windows machines. For more information about
installing a self-hosted agent, see:
macOS agent
Red Hat agent
Ubuntu 14.04 agent
Ubuntu 16.04 agent
Windows agent v1

NOTE
On macOS, you need to clear the special attribute on the download archive to prevent Gatekeeper
protection from displaying for each assembly in the tar file when ./config.sh is run. The following
command clears the extended attribute on the file:

xattr -c vsts-agent-osx-x64-V.v.v.tar.gz  ## replace V.v.v with the version in the filename downloaded.

# then unpack the gzip tar file normally:
tar xvfz vsts-agent-osx-x64-V.v.v.tar.gz

After you've installed the agent on a machine, you can install any other software on that machine
as required by your jobs.

Azure virtual machine scale set agents


Azure virtual machine scale set agents are a form of self-hosted agents that can be auto-scaled to
meet your demands. This elasticity reduces your need to run dedicated agents all the time. Unlike
Microsoft-hosted agents, you have flexibility over the size and the image of machines on which
agents run.
You specify a virtual machine scale set, a number of agents to keep on standby, a maximum
number of virtual machines in the scale set, and Azure Pipelines manages the scaling of your
agents for you.
For more information, see Azure virtual machine scale set agents.

Parallel jobs
You can use a parallel job in Azure Pipelines to run a single job at a time in your organization. In
Azure Pipelines, you can run parallel jobs on Microsoft-hosted infrastructure or on your own
(self-hosted) infrastructure.
Microsoft provides a free tier of service by default in every organization that includes at least one
parallel job. Depending on the number of concurrent pipelines you need to run, you might need
more parallel jobs to use multiple Microsoft-hosted or self-hosted agents at the same time. For
more information on parallel jobs and different free tiers of service, see Parallel jobs in Azure
Pipelines.
You might need more parallel jobs to use multiple agents at the same time:
Parallel jobs in TFS

IMPORTANT
Starting with Azure DevOps Server 2019, you do not have to pay for self-hosted concurrent jobs in
releases. You are only limited by the number of agents that you have.

Capabilities
Every self-hosted agent has a set of capabilities that indicate what it can do. Capabilities are
name-value pairs that are either automatically discovered by the agent software, in which case
they are called system capabilities , or those that you define, in which case they are called user
capabilities .
The agent software automatically determines various system capabilities such as the name of the
machine, type of operating system, and versions of certain software installed on the machine.
Also, environment variables defined in the machine automatically appear in the list of system
capabilities.
When you author a pipeline you specify certain demands of the agent. The system sends the job
only to agents that have capabilities matching the demands specified in the pipeline. As a result,
agent capabilities allow you to direct jobs to specific agents.
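For example, a YAML pipeline can specify demands on a self-hosted pool so the job is routed only to agents that satisfy them. The pool name and demands below are placeholders for illustration:

pool:
  name: MyPool                       # a self-hosted agent pool (placeholder name)
  demands:
  - npm                              # the agent must have an npm capability
  - Agent.OS -equals Windows_NT      # the agent must be running Windows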

NOTE
Demands and capabilities are designed for use with self-hosted agents so that jobs can be matched with
an agent that meets the requirements of the job. When using Microsoft-hosted agents, you select an
image for the agent that matches the requirements of the job, so although it is possible to add
capabilities to a Microsoft-hosted agent, you don't need to use capabilities with Microsoft-hosted agents.

View agent details


Browser
Azure DevOps CLI
You can view the details of an agent, including its version and system capabilities, and manage its
user capabilities, by navigating to Agent pools and selecting the Capabilities tab for the
desired agent.
1. In your web browser, navigate to Agent pools:
1. Choose Azure DevOps , Organization settings .
2. Choose Agent pools .

1. Choose Azure DevOps , Collection settings .

2. Choose Agent pools .


1. Choose Azure DevOps , Collection settings .

2. Choose Agent pools .

1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
2. Choose Manage pools .

1. Navigate to your project and choose Settings (gear icon) > Agent Queues .

2. Choose Manage pools .

1. Navigate to your project and choose Manage project (gear icon).

2. Choose Control panel .

3. Select Agent pools .


2. Navigate to the capabilities tab:
1. From the Agent pools tab, select the desired agent pool.

2. Select Agents and choose the desired agent.

3. Choose the Capabilities tab.


NOTE
Microsoft-hosted agents don't display system capabilities. For a list of software installed
on Microsoft-hosted agents, see Use a Microsoft-hosted agent.

1. From the Agent pools tab, select the desired pool.

2. Select Agents and choose the desired agent.


3. Choose the Capabilities tab.

1. From the Agent pools tab, select the desired pool.

2. Select Agents and choose the desired agent.


3. Choose the Capabilities tab.

Select the desired agent, and choose the Capabilities tab.

Select the desired agent, and choose the Capabilities tab.


From the Agent pools tab, select the desired agent, and choose the Capabilities tab.

TIP
After you install new software on a self-hosted agent, you must restart the agent for the new capability
to show up. For more information, see Restart Windows agent, Restart Linux agent, and Restart Mac
agent.

Communication
Communication with Azure Pipelines
Communication with TFS
The agent communicates with Azure Pipelines or TFS to determine which job it needs to run, and
to report the logs and job status. This communication is always initiated by the agent. All the
messages from the agent to Azure Pipelines or TFS happen over HTTP or HTTPS, depending on
how you configure the agent. This pull model allows the agent to be configured in different
topologies as shown below.
Here is a common communication pattern between the agent and Azure Pipelines or TFS.
1. The user registers an agent with Azure Pipelines or TFS by adding it to an agent pool. You
need to be an agent pool administrator to register an agent in that agent pool. The identity
of agent pool administrator is needed only at the time of registration and is not persisted
on the agent, nor is it used in any further communication between the agent and Azure
Pipelines or TFS. Once the registration is complete, the agent downloads a listener OAuth
token and uses it to listen to the job queue.
2. The agent listens to see if a new job request has been posted for it in the job queue in
Azure Pipelines/TFS using an HTTP long poll. When a job is available, the agent downloads
the job as well as a job-specific OAuth token. This token is generated by Azure
Pipelines/TFS for the scoped identity specified in the pipeline. That token is short lived and
is used by the agent to access resources (for example, source code) or modify resources
(for example, upload test results) on Azure Pipelines or TFS within that job.
3. After the job is completed, the agent discards the job-specific OAuth token and goes back
to checking if there is a new job request using the listener OAuth token.
The payload of the messages exchanged between the agent and Azure Pipelines/TFS are secured
using asymmetric encryption. Each agent has a public-private key pair, and the public key is
exchanged with the server during registration. The server uses the public key to encrypt the
payload of the job before sending it to the agent. The agent decrypts the job content using its
private key. This is how secrets stored in pipelines or variable groups are secured as they are
exchanged with the agent.
Here is a common communication pattern between the agent and TFS.
An agent pool administrator joins the agent to an agent pool, and the credentials of the
service account (for Windows) or the saved user name and password (for Linux and
macOS) are used to initiate communication with TFS. The agent uses these credentials to
listen to the job queue.
The agent does not use asymmetric key encryption while communicating with the server.
However, you can use HTTPS to secure the communication between the agent and TFS.
Communication to deploy to target servers
When you use the agent to deploy artifacts to a set of servers, it must have "line of sight"
connectivity to those servers. The Microsoft-hosted agent pools, by default, have connectivity to
Azure websites and servers running in Azure.

NOTE
If your Azure resources are running in an Azure Virtual Network, you can get the Agent IP ranges where
Microsoft-hosted agents are deployed so you can configure the firewall rules for your Azure VNet to
allow access by the agent.

If your on-premises environments do not have connectivity to a Microsoft-hosted agent pool


(which is typically the case due to intermediate firewalls), you'll need to manually configure a self-
hosted agent on on-premises computer(s). The agents must have connectivity to the target on-
premises environments, and access to the Internet to connect to Azure Pipelines or Team
Foundation Server, as shown in the following schematic.

Authentication
To register an agent, you need to be a member of the administrator role in the agent pool. The
identity of agent pool administrator is needed only at the time of registration and is not persisted
on the agent, and is not used in any subsequent communication between the agent and Azure
Pipelines or TFS. In addition, you must be a local administrator on the server in order to configure
the agent.
Your agent can authenticate to Azure Pipelines using the following method:
Your agent can authenticate to Azure DevOps Server or TFS using one of the following methods:
Personal Access Token (PAT ):
Generate and use a PAT to connect an agent with Azure Pipelines or TFS 2017 and newer. PAT is
the only scheme that works with Azure Pipelines. The PAT must have Agent Pools (read,
manage) scope (for a deployment group agent, the PAT must have Deployment group (read,
manage) scope), and while a single PAT can be used for registering multiple agents, the PAT is
used only at the time of registering the agent, and not for subsequent communication. For more
information, see the Authenticate with a personal access token (PAT) section in the Windows,
Linux, or macOS self-hosted agents articles.
To use a PAT with TFS, your server must be configured with HTTPS. See Web site settings and
security.
Integrated
Connect a Windows agent to TFS using the credentials of the signed-in user through a Windows
authentication scheme such as NTLM or Kerberos.
To use this method of authentication, you must first configure your TFS server.
1. Sign into the machine where you are running TFS.
2. Start Internet Information Services (IIS) Manager. Select your TFS site and make sure
Windows Authentication is enabled with a valid provider such as NTLM or Kerberos.

Negotiate
Connect to TFS as a user other than the signed-in user through a Windows authentication
scheme such as NTLM or Kerberos.
To use this method of authentication, you must first configure your TFS server.
1. Log on to the machine where you are running TFS.
2. Start Internet Information Services (IIS) Manager. Select your TFS site and make sure
Windows Authentication is enabled with the Negotiate provider and with another method
such as NTLM or Kerberos.

Alternate
Connect to TFS using Basic authentication. To use this method, you must first configure HTTPS on
TFS.
To use this method of authentication, you must configure your TFS server as follows:
1. Sign in to the machine where you are running TFS.
2. Configure basic authentication. See Using tfx against Team Foundation Server 2015
using Basic Authentication.

Interactive vs. service


You can run your self-hosted agent as either a service or an interactive process. After you've
configured the agent, we recommend you first try it in interactive mode to make sure it works.
Then, for production use, we recommend you run the agent in one of the following modes so that
it reliably remains in a running state. These modes also ensure that the agent starts automatically
if the machine is restarted.
1. As a ser vice . You can leverage the service manager of the operating system to manage
the lifecycle of the agent. In addition, the experience for auto-upgrading the agent is better
when it is run as a service.
2. As an interactive process with auto-logon enabled . In some cases, you might need
to run the agent interactively for production use - such as to run UI tests. When the agent
is configured to run in this mode, the screen saver is also disabled. Some domain policies
may prevent you from enabling auto-logon or disabling the screen saver. In such cases,
you may need to seek an exemption from the domain policy, or run the agent on a
workgroup computer where the domain policies do not apply.

NOTE
There are security risks when you enable automatic logon or disable the screen saver because
you enable other users to walk up to the computer and use the account that automatically logs
on. If you configure the agent to run in this way, you must ensure the computer is physically
protected; for example, located in a secure facility. If you use Remote Desktop to access the
computer on which an agent is running with auto-logon, simply closing the Remote Desktop
causes the computer to be locked and any UI tests that run on this agent may fail. To avoid this,
use the tscon command to disconnect from Remote Desktop. For example:
%windir%\System32\tscon.exe 1 /dest:console

Agent account
Whether you run an agent as a service or interactively, you can choose which computer account
you use to run the agent. (Note that this is different from the credentials that you use when you
register the agent with Azure Pipelines or TFS.) The choice of agent account depends solely on the
needs of the tasks running in your build and deployment jobs.
For example, to run tasks that use Windows authentication to access an external service, you
must run the agent using an account that has access to that service. However, if you are running
UI tests such as Selenium or Coded UI tests that require a browser, the browser is launched in the
context of the agent account.
On Windows, you should consider using a service account such as Network Service or Local
Service. These accounts have restricted permissions and their passwords don't expire, meaning
the agent requires less management over time.

Agent version and upgrades


We update the agent software every few weeks in Azure Pipelines. We indicate the agent version
in the format {major}.{minor} . For instance, if the agent version is 2.1 , then the major version is
2 and the minor version is 1.
Microsoft-hosted agents are always kept up-to-date. If the newer version of the agent is only
different in minor version, self-hosted agents can usually be updated automatically (configure
this setting in Agent pools , select your agent, Settings - the default is enabled) by Azure
Pipelines. An upgrade is requested when a platform feature or one of the tasks used in the
pipeline requires a newer version of the agent.
If you run a self-hosted agent interactively, or if there is a newer major version of the agent
available, then you may have to manually upgrade the agents. You can do this easily from the
Agent pools tab under your organization. Your pipelines won't run until they can target a
compatible agent.
To update self-hosted agents
1. Navigate to Project settings , Agent pools .
2. Select your agent pool and choose Update all agents .

You can also update agents individually by choosing Update agent from the ... menu.

3. Select Update to confirm the update.

4. An update request is queued for each agent in the pool and runs when any currently
running jobs complete. Upgrading typically only takes a few moments - long enough to
download the latest version of the agent software (approximately 200 MB), unzip it, and
restart the agent with the new version. You can monitor the status of your agents on the
Agents tab.
We update the agent software with every update in Azure DevOps Server and TFS. We indicate
the agent version in the format {major}.{minor} . For instance, if the agent version is 2.1 , then
the major version is 2 and the minor version is 1.
When your Azure DevOps Server or TFS server has a newer version of the agent, and that newer
agent is only different in minor version, it can usually be automatically upgraded. An upgrade is
requested when a platform feature or one of the tasks used in the pipeline requires a newer
version of the agent. Starting with Azure DevOps Server 2019, you don't have to wait for a new
server release. You can upload a new version of the agent to your application tier, and that
version will be offered as an upgrade.
If you run the agent interactively, or if there is a newer major version of the agent available, then
you may have to manually upgrade the agents. You can do this easily from the Agent pools tab
under your project collection. Your pipelines won't run until they can target a compatible agent.
You can view the version of an agent by navigating to Agent pools and selecting the
Capabilities tab for the desired agent, as described in View agent details.

NOTE
For servers with no internet access, manually copy the agent zip file to
C:\ProgramData\Microsoft\Azure DevOps\Agents\ to use as a local file.

FAQ
How do I make sure I have the latest v2 agent version?
1. Navigate to the Agent pools tab:
1. Choose Azure DevOps , Organization settings .

2. Choose Agent pools .


1. Choose Azure DevOps , Collection settings .

2. Choose Agent pools .

1. Choose Azure DevOps , Collection settings .


2. Choose Agent pools .

1. Navigate to your project and choose Settings (gear icon) > Agent Queues .

2. Choose Manage pools .

1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
2. Choose Manage pools .

1. Navigate to your project and choose Manage project (gear icon).

2. Choose Control panel .

3. Select Agent pools .

2. Click the pool that contains the agent.


3. Make sure the agent is enabled.
4. Navigate to the capabilities tab:
1. From the Agent pools tab, select the desired agent pool.
2. Select Agents and choose the desired agent.

3. Choose the Capabilities tab.


NOTE
Microsoft-hosted agents don't display system capabilities. For a list of software installed
on Microsoft-hosted agents, see Use a Microsoft-hosted agent.

1. From the Agent pools tab, select the desired pool.

2. Select Agents and choose the desired agent.

3. Choose the Capabilities tab.


1. From the Agent pools tab, select the desired pool.

2. Select Agents and choose the desired agent.

3. Choose the Capabilities tab.


Select the desired agent, and choose the Capabilities tab.

Select the desired agent, and choose the Capabilities tab.

From the Agent pools tab, select the desired agent, and choose the Capabilities tab.
5. Look for the Agent.Version capability. You can check this value against the latest published
agent version. See Azure Pipelines Agent and check the page for the highest version
number listed.
6. Each agent automatically updates itself when it runs a task that requires a newer version of
the agent. If you want to manually update some agents, right-click the pool, and select
Update all agents .
Can I update my v2 agents that are part of an Azure DevOps Server pool?
Yes. Beginning with Azure DevOps Server 2019, you can configure your server to look for the
agent package files on a local disk. This configuration will override the default version that came
with the server at the time of its release. This scenario also applies when the server doesn't have
access to the internet.
1. From a computer with Internet access, download the latest version of the agent package
files (in .zip or .tar.gz form) from the Azure Pipelines Agent GitHub Releases page.
2. Transfer the downloaded package files to each Azure DevOps Server Application Tier by
using a method of your choice (such as USB drive, Network transfer, and so on). Place the
agent files under the %ProgramData%\Microsoft\Azure DevOps\Agents folder.
3. You're all set! Your Azure DevOps Server will now use the local files whenever the agents
are updated. Each agent automatically updates itself when it runs a task that requires a
newer version of the agent. But if you want to manually update some agents, right-click
the pool, and then choose Update all agents .
Do self-hosted agents have any performance advantages over Microsoft-hosted agents?
In many cases, yes. Specifically:
If you use a self-hosted agent, you can run incremental builds. For example, if you define a
pipeline that does not clean the repo and does not perform a clean build, your builds will
typically run faster. When you use a Microsoft-hosted agent, you don't get these benefits
because the agent is destroyed after the build or release pipeline is completed.
A Microsoft-hosted agent can take longer to start your build. While it often takes just a few
seconds for your job to be assigned to a Microsoft-hosted agent, it can sometimes take
several minutes for an agent to be allocated depending on the load on our system.
Can I install multiple self-hosted agents on the same machine?
Yes. This approach can work well for agents that run jobs that don't consume many shared
resources. For example, you could try it for agents that run releases that mostly orchestrate
deployments and don't do much work on the agent itself.
You might find that in other cases you don't gain much efficiency by running multiple agents on
the same machine. For example, it might not be worthwhile for agents that run builds that
consume much disk and I/O resources.
You might also run into problems if parallel build jobs are using the same singleton tool
deployment, such as npm packages. For example, one build might update a dependency while
another build is in the middle of using it, which could cause unreliable results and errors.

Learn more
For more information about agents, see the following modules from the Build applications with
Azure DevOps learning path.
Choose a Microsoft-hosted or self-hosted build agent
Host your own build agent in Azure Pipelines
Agent pools
11/2/2020 • 18 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called
definitions, runs are called builds, service connections are called service endpoints, stages are called environments,
and jobs are called phases.

Instead of managing each agent individually, you organize agents into agent pools . In TFS, pools are
scoped to the entire server; so you can share an agent pool across project collections and projects.
An agent queue provides access to an agent pool within a project. When you create a build or release
pipeline, you specify which queue it uses. Queues are scoped to your project in TFS 2017 and newer, so
you can only use them across build and release pipelines within a project.
To share an agent pool with multiple projects, in each of those projects, you create an agent queue
pointing to the same agent pool. While multiple queues across projects can use the same agent pool,
multiple queues within a project cannot use the same agent pool. Also, each agent queue can use only one
agent pool.

Agent pools are scoped to project collections.


Instead of managing each agent individually, you organize agents into agent pools . In Azure Pipelines,
pools are scoped to the entire organization; so you can share the agent machines across projects. In
Azure DevOps Server, agent pools are scoped to the entire server; so you can share the agent machines
across projects and collections.
When you configure an agent, it is registered with a single pool, and when you create a pipeline, you
specify which pool the pipeline uses. When you run the pipeline, it runs on an agent from that pool that
meets the demands of the pipeline.
You create and manage agent pools from the agent pools tab in admin settings.
If you are an organization administrator, you create and manage agent pools from the agent pools tab in
admin settings.
1. Choose Azure DevOps , Organization settings .

2. Choose Agent pools .


1. Choose Azure DevOps , Collection settings .

2. Choose Agent pools .

1. Choose Azure DevOps , Collection settings .


2. Choose Agent pools .

1. Navigate to your project and choose Settings (gear icon) > Agent Queues .

2. Choose Manage pools .


1. Navigate to your project and choose Settings (gear icon) > Agent Queues .

2. Choose Manage pools .

1. Navigate to your project and choose Manage project (gear icon).

2. Choose Control panel .

3. Select Agent pools .

You create and manage agent queues from the agent queues tab in project settings.
If you are a project team member, you create and manage agent queues from the agent pools tab in
project settings.
Navigate to your project and choose Project settings , Agent pools .
Navigate to your project and choose Project settings , Agent pools .

Navigate to your project and choose Project settings , Agent pools .

Navigate to your project and choose Settings (gear icon) > Agent Queues .

Navigate to your project and choose Settings (gear icon) > Agent Queues .

1. Navigate to your project and choose Manage project (gear icon).


2. Choose Control panel .

3. Select the desired project collection, and choose View the collection administration page .

a. Select Agent Queues (For TFS 2015, Select Build and then Queues ).

Default agent pools


The following agent pools are provided by default:
Default pool: Use it to register self-hosted agents that you've set up.
Azure Pipelines hosted pool with various Windows, Linux, and macOS images. For a complete
list of the available images and their installed software, see Microsoft-hosted agents.
NOTE
The Azure Pipelines hosted pool replaces the previous hosted pools that had names that mapped to the
corresponding images. Any jobs you had in the previous hosted pools are automatically redirected to the
correct image in the new Azure Pipelines hosted pool. In some circumstances, you may still see the old
pool names, but behind the scenes the hosted jobs are run using the Azure Pipelines pool. For more
information, see the Single hosted pool release notes from the July 1 2019 - Sprint 154 release notes.

By default, all contributors in a project are members of the User role on hosted pools. This allows every
contributor in a project to author and run pipelines using Microsoft-hosted agents.
Choosing a pool and agent in your pipeline
YAML
Classic
To choose a Microsoft-hosted agent from the Azure Pipelines pool in your Azure DevOps Services YAML
pipeline, specify the name of the image, using the YAML VM Image Label from this table.

pool:
vmImage: ubuntu-16.04

To use a private pool with no demands:

pool: MyPool

For more information, see the YAML schema for pools.


Managing pools and queues
Browser
Azure DevOps CLI
You create and manage agent pools from the agent pools tab in admin settings.
If you are an organization administrator, you create and manage agent pools from the agent pools tab in
admin settings.
1. Choose Azure DevOps , Organization settings .

2. Choose Agent pools .


1. Choose Azure DevOps , Collection settings .

2. Choose Agent pools .

1. Choose Azure DevOps , Collection settings .


2. Choose Agent pools .

1. Navigate to your project and choose Settings (gear icon) > Agent Queues .

2. Choose Manage pools .


1. Navigate to your project and choose Settings (gear icon) > Agent Queues .

2. Choose Manage pools .

1. Navigate to your project and choose Manage project (gear icon).

2. Choose Control panel .

3. Select Agent pools .

You create and manage agent queues from the agent queues tab in project settings.
If you are a project team member, you create and manage agent queues from the agent pools tab in
project settings.
Navigate to your project and choose Project settings , Agent pools .
Navigate to your project and choose Project settings , Agent pools .

Navigate to your project and choose Project settings , Agent pools .

Navigate to your project and choose Settings (gear icon) > Agent Queues .

Navigate to your project and choose Settings (gear icon) > Agent Queues .

1. Navigate to your project and choose Manage project (gear icon).


2. Choose Control panel .

3. Select the desired project collection, and choose View the collection administration page .

a. Select Agent Queues (For TFS 2015, Select Build and then Queues ).

Pools are used to run jobs. Learn about specifying pools for jobs.
If you've got a lot of self-hosted agents intended for different teams or purposes, you might want to
create additional pools as explained below.

Creating agent pools


Here are some typical situations when you might want to create self-hosted agent pools:
You're a member of a project and you want to use a set of machines owned by your team for
running build and deployment jobs. First, make sure you have permission to create pools in
your project by selecting Security on the agent pools page in your project settings. You must have
the Administrator role to be able to create new pools. Next, select Add pool and select the option to
create a new pool at the organization level. Finally install and configure agents to be part of that
agent pool.
You're a member of the infrastructure team and would like to set up a pool of agents for use in all
projects. First make sure you're a member of a group in All agent pools with the Administrator
role by navigating to agent pools page in your organization settings. Next create a New agent
pool and select the option to Auto-provision corresponding agent pools in all projects
while creating the pool. This setting ensures all projects have access to this agent pool. Finally
install and configure agents to be part of that agent pool.
You want to share a set of agent machines with multiple projects, but not all of them. First,
navigate to the settings for one of the projects, add an agent pool, and select the option to create a
new pool at the organization level. Next, go to each of the other projects, and create a pool in each
of them while selecting the option to Use an existing agent pool from the organization .
Finally, install and configure agents to be part of the shared agent pool.
You're a member of a project and you want to use a set of machines owned by your team for
running build and deployment jobs. First, make sure you're a member of a group in All Pools with
the Administrator role. Next create a New project agent pool in your project settings and
select the option to Create a new organization agent pool . As a result, both an organization
and project-level agent pool will be created. Finally install and configure agents to be part of that
agent pool.
You're a member of the infrastructure team and would like to set up a pool of agents for use in all
projects. First make sure you're a member of a group in All Pools with the Administrator role.
Next create a New organization agent pool in your admin settings and select the option to
Auto-provision corresponding project agent pools in all projects while creating the pool.
This setting ensures all projects have a pool pointing to the organization agent pool. The system
creates a pool for existing projects, and in the future it will do so whenever a new project is
created. Finally install and configure agents to be part of that agent pool.
You want to share a set of agent machines with multiple projects, but not all of them. First create a
project agent pool in one of the projects and select the option to Create a new organization
agent pool while creating that pool. Next, go to each of the other projects, and create a pool in
each of them while selecting the option to Use an existing organization agent pool . Finally,
install and configure agents to be part of the shared agent pool.

Security of agent pools


Understanding how security works for agent pools helps you control sharing and use of agents.
Roles are defined on each agent pool, and membership in these roles governs what operations you can
perform on an agent pool.

ROLE ON AN AGENT POOL IN ORGANIZATION SETTINGS PURPOSE

Reader Members of this role can view the agent pool as well as
agents. You typically use this to add operators that are
responsible for monitoring the agents and their health.

Service Account Members of this role can use the organization agent
pool to create a project agent pool in a project. If you
follow the guidelines above for creating new project
agent pools, you typically do not have to add any
members here.

Administrator In addition to all the above permissions, members of this
role can register or unregister agents from the
organization agent pool. They can also refer to the
organization agent pool when creating a project agent
pool in a project. Finally, they can also manage
membership for all roles of the organization agent pool.
The user that created the organization agent pool is
automatically added to the Administrator role for that
pool.

The All agent pools node in the Agent Pools tab is used to control the security of all organization agent
pools. Role memberships for individual organization agent pools are automatically inherited from those
of the 'All agent pools' node. When using TFS or Azure DevOps Server, by default, TFS and Azure DevOps
Server administrators are also administrators of the 'All agent pools' node.
Roles are also defined on each project agent pool, and memberships in these roles govern what
operations you can perform on an agent pool at the project level.

ROLE ON AN AGENT POOL IN PROJECT SETTINGS PURPOSE

Reader Members of this role can view the project agent pool.
You typically use this to add operators that are
responsible for monitoring the build and deployment
jobs in that project agent pool.

User Members of this role can use the project agent pool
when authoring pipelines.

Administrator In addition to all the above operations, members of this
role can manage membership for all roles of the project
agent pool. The user that created the pool is
automatically added to the Administrator role for that
pool.

The All agent pools node in the Agent pools tab is used to control the security of all project agent pools
in a project. Role memberships for individual project agent pools are automatically inherited from those
of the 'All agent pools' node. By default, the following groups are added to the Administrator role of 'All
agent pools': Build Administrators, Release Administrators, Project Administrators.
The Security action in the Agent pools tab is used to control the security of all project agent pools in a
project. Role memberships for individual project agent pools are automatically inherited from what you
define here. By default, the following groups are added to the Administrator role of 'All agent pools':
Build Administrators, Release Administrators, Project Administrators.
TFS 2015
In TFS 2015, special groups are defined on agent pools, and membership in these groups governs what
operations you can perform.
Members of Agent Pool Administrators can register new agents in the pool and add additional users
as administrators or service accounts.
Add people to the Agent Pool Administrators group to grant them permission to manage all the agent
pools. This enables people to create new pools and modify all existing pools. Members of Team
Foundation Administrators group can also perform all these operations.
Users in the Agent Pool Ser vice Accounts group have permission to listen to the message queue for
the specific pool to receive work. In most cases you should not have to manage members of this group.
The agent registration process takes care of it for you. The service account you specify for the agent
(commonly Network Service) is automatically added when you register the agent.

FAQ
If I don't schedule a maintenance window, when will the agents run maintenance?
If no window is scheduled, then the agents in that pool will not run the maintenance job.
What is a maintenance job?
You can configure agent pools to periodically clean up stale working directories and repositories. This
should reduce the potential for the agents to run out of disk space. Maintenance jobs are configured at
the project collection or organization level in agent pool settings.
To configure maintenance job settings:
1. Choose Azure DevOps , Organization settings .

2. Choose Agent pools .

1. Choose Azure DevOps , Collection settings .


2. Choose Agent pools .

1. Choose Azure DevOps , Collection settings .

2. Choose Agent pools .


1. Navigate to your project and choose Settings (gear icon) > Agent Queues .

2. Choose Manage pools .

1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
2. Choose Manage pools .

1. Navigate to your project and choose Manage project (gear icon).

2. Choose Control panel .

3. Select Agent pools .

Choose the desired pool and choose Settings to configure maintenance job settings for that agent pool.

IMPORTANT
You must have the Manage build queues permission to configure maintenance job settings. If you don't see the
Settings tab or the Maintenance Histor y tab, you don't have that permission, which is granted by default to
the Administrator role. For more information, see Security of agent pools.
Configure your desired settings and choose Save .
Select Maintenance Histor y to see the maintenance job history for the current agent pool. You can
download and review logs to see the cleaning steps and actions taken.
The maintenance is done per agent, not per machine; so if you have multiple agents on a single machine,
you may still run into disk space issues.
I'm trying to create a project agent pool that uses an existing organization agent pool, but the controls
are grayed out. Why?
On the 'Create a project agent pool' dialog box, you can't use an existing organization agent pool if it is
already referenced by another project agent pool. Each organization agent pool can be referenced by
only one project agent pool within a given project collection.
I can't select a Microsoft-hosted pool and I can't queue my build. How do I fix this?
Ask the owner of your Azure DevOps organization to grant you permission to use the pool. See Security
of agent pools.
I need more hosted build resources. What can I do?
The Azure Pipelines pool provides all Azure DevOps organizations with cloud-hosted build agents and
free build minutes each month. If you need more Microsoft-hosted build resources, or need to run more
jobs in parallel, then you can either:
Host your own agents on infrastructure that you manage.
Buy additional parallel jobs.
Microsoft-hosted agents
11/2/2020 • 18 minutes to read

Azure Pipelines
Microsoft-hosted agents are only available with Azure DevOps Services, which is hosted in the cloud.
You cannot use Microsoft-hosted agents or the Azure Pipelines agent pool with on-premises TFS or
Azure DevOps Server. With these on-premises versions, you must use self-hosted agents.

IMPORTANT

To view the content available for your platform, make sure that you select the correct version of this article from
the version selector which is located above the table of contents. Feature support differs depending on whether
you are working from Azure DevOps Services or an on-premises version of Azure DevOps Server, renamed from
Team Foundation Server (TFS).
To learn which on-premises version you are using, see What platform/version am I using?

If your pipelines are in Azure Pipelines, then you've got a convenient option to run your jobs using a
Microsoft-hosted agent . With Microsoft-hosted agents, maintenance and upgrades are taken care of
for you. Each time you run a pipeline, you get a fresh virtual machine. The virtual machine is discarded
after one use. Microsoft-hosted agents can run jobs directly on the VM or in a container.
Azure Pipelines provides a pre-defined agent pool named Azure Pipelines with Microsoft-hosted
agents.
For many teams this is the simplest way to run your jobs. You can try it first and see if it works for your
build or deployment. If not, you can use a self-hosted agent.

TIP
You can try a Microsoft-hosted agent for no charge.

Software
The Azure Pipelines agent pool offers several virtual machine images to choose from, each including a
broad range of tools and software.

IMAGE | CLASSIC EDITOR AGENT SPECIFICATION | YAML VM IMAGE LABEL | INCLUDED SOFTWARE

Windows Server 2019 with Visual Studio 2019 | windows-2019 | windows-latest OR windows-2019 | Link

Windows Server 2016 with Visual Studio 2017 | vs2017-win2016 | vs2017-win2016 | Link

Ubuntu 20.04 (preview) | ubuntu-20.04 | ubuntu-20.04 | Link

Ubuntu 18.04 | ubuntu-18.04 | ubuntu-latest OR ubuntu-18.04 | Link

Ubuntu 16.04 | ubuntu-16.04 | ubuntu-16.04 | Link

macOS X Mojave 10.14 | macOS-10.14 | macOS-10.14 | Link

macOS X Catalina 10.15 | macOS-10.15 | macOS-latest OR macOS-10.15 | Link

You can see the installed software for each hosted agent by choosing the Included Software link in the
table. When using macOS images, you can manually select from tool versions. See below.

NOTE
In March 2020, we removed the following Azure Pipelines hosted images:
Windows Server 2012R2 with Visual Studio 2015 ( vs2015-win2012r2 )
macOS X High Sierra 10.13 ( macOS-10.13 )
Windows Server Core 1803 - ( win1803 )

Customers are encouraged to migrate to vs2017-win2016 , macOS-10.14 , or a self-hosted agent respectively.


For more information and instructions on how to update your pipelines that use those images, see Removing
older images in Azure Pipelines hosted pools.

NOTE
The Azure Pipelines hosted pool replaces the previous hosted pools that had names that mapped to the
corresponding images. Any jobs you had in the previous hosted pools are automatically redirected to the correct
image in the new Azure Pipelines hosted pool. In some circumstances, you may still see the old pool names, but
behind the scenes the hosted jobs are run using the Azure Pipelines pool. For more information about this
update, see the Single hosted pool release notes from the July 1 2019 - Sprint 154 release notes.

IMPORTANT
To request additional software to be installed on Microsoft-hosted agents, don't create a feedback request on
this document or open a support ticket. Instead, open an issue on our repository, where we manage the scripts
to generate various images.

Use a Microsoft-hosted agent


YAML
Classic
In YAML pipelines, if you do not specify a pool, pipelines will default to the Azure Pipelines agent pool.
You simply need to specify which virtual machine image you want to use.
jobs:
- job: Linux
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - script: echo hello from Linux
- job: macOS
  pool:
    vmImage: 'macOS-latest'
  steps:
  - script: echo hello from macOS
- job: Windows
  pool:
    vmImage: 'windows-latest'
  steps:
  - script: echo hello from Windows

NOTE
The specification of a pool can be done at multiple levels in a YAML file. If you notice that your pipeline is not
running on the expected image, make sure that you verify the pool specification at the pipeline, stage, and job
levels.
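For example, here is a minimal sketch (the job names are placeholders) of how a job-level pool overrides a
pipeline-level pool. The first job runs on the pipeline-level image, while the second job runs on the image
specified at the job level:

# The pipeline-level pool is the default for every job that doesn't specify its own.
pool:
  vmImage: 'ubuntu-18.04'

jobs:
- job: DefaultImage          # placeholder name; uses the pipeline-level pool (ubuntu-18.04)
  steps:
  - script: echo running on the pipeline-level image

- job: WindowsImage          # placeholder name; the job-level pool overrides the pipeline-level pool
  pool:
    vmImage: 'windows-2019'
  steps:
  - script: echo running on the job-level image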

Avoid hard-coded references


When you use a Microsoft-hosted agent, always use variables to refer to the build environment and
agent resources. For example, don't hard-code the drive letter or folder that contains the repository. The
precise layout of the hosted agents is subject to change without warning.
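For example, a minimal sketch that refers to agent locations through predefined variables instead of
hard-coded drive letters or folders (the echoed messages are only illustrative):

steps:
- script: |
    echo "Repository is checked out at: $(Build.SourcesDirectory)"
    echo "Build outputs can be staged in: $(Build.ArtifactStagingDirectory)"
    echo "Scratch space is available at: $(Agent.TempDirectory)"
  displayName: Show agent paths via predefined variables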

Hardware
Microsoft-hosted agents that run Windows and Linux images are provisioned on Azure general purpose
virtual machines Standard_DS2_v2. These virtual machines are colocated in the same geography as
your Azure DevOps organization.
Agents that run macOS images are provisioned on Mac Pros. These agents always run in the US and Europe,
irrespective of the location of your Azure DevOps organization. If data sovereignty is important to you
and your organization is not in one of these geographies, then you should not use macOS images.
Learn more.
All of these machines have 10 GB of free disk space available for your pipelines to run. This free space is
consumed when your pipeline checks out source code, downloads packages, pulls docker images, or
generates intermediate files.

IMPORTANT
We cannot honor requests to increase disk space on Microsoft-hosted agents, or to provision more powerful
machines. If the specifications of Microsoft-hosted agents do not meet your needs, then you should consider
self-hosted agents or scale set agents.

Networking
In some setups, you may need to know the range of IP addresses where agents are deployed. For
instance, if you need to grant the hosted agents access through a firewall, you may wish to restrict that
access by IP address. Because Azure DevOps uses the Azure global network, IP ranges vary over time.
We publish a weekly JSON file listing IP ranges for Azure datacenters, broken out by region. This file is
updated weekly with new planned IP ranges. The new IP ranges become effective the following week.
We recommend that you check back frequently (at least once every week) to ensure you keep an up-to-
date list. If agent jobs begin to fail, a key first troubleshooting step is to make sure your configuration
matches the latest list of IP addresses. The IP address ranges for the hosted agents are listed in the
weekly file under AzureCloud.<region> , such as AzureCloud.westus for the West US region.
Your hosted agents run in the same Azure geography as your organization. Each geography contains
one or more regions. While your agent may run in the same region as your organization, it is not
guaranteed to do so. To obtain the complete list of possible IP ranges for your agent, you must use the
IP ranges from all of the regions that are contained in your geography. For example, if your organization
is located in the United States geography, you must use the IP ranges for all of the regions in that
geography.
To determine your geography, navigate to
https://dev.azure.com/<your_organization>/_settings/organizationOverview , get your region, and find
the associated geography from the Azure geography table. Once you have identified your geography,
use the IP ranges from the weekly file for all regions in that geography.

IMPORTANT
You cannot use private connections such as ExpressRoute or VPN to connect Microsoft-hosted agents to your
corporate network. The traffic between Microsoft-hosted agents and your servers will be over public network.

To identify the possible IP ranges for Microsoft-hosted agents


1. Identify the region for your organization in Organization settings .
2. Identify the Azure Geography for your organization's region.
3. Map the names of the regions in your geography to the format used in the weekly file, following the
format of AzureCloud.<region> , such as AzureCloud.westus . You can map the names of the regions
from the Azure Geography list to the format used in the weekly file by reviewing the region names
passed to the constructor of the regions defined in the source code for the Region class, from the
Azure Management Libraries for .NET.

NOTE
Since there is no API in the Azure Management Libraries for .NET to list the regions for a geography, you
must list them manually as shown in the following example.

4. Retrieve the IP addresses for all regions in your geography from the weekly file. If your region is
Brazil South or West Europe , you must include additional IP ranges based on your fallback
geography, as described in the following note.

NOTE
Due to capacity restrictions, some organizations in the Brazil South or West Europe regions may occasionally
see their hosted agents located outside their expected geography. In these cases, in addition to including the IP
ranges as described in the previous section, additional IP ranges must be included for the regions in the capacity
fallback geography.
If your organization is in the Brazil South region, your capacity fallback geography is United States .
If your organization is in the West Europe region, the capacity fallback geography is France .
Our Mac IP ranges are not included in the Azure IPs above, though we are investigating options to publish these
in the future.
Example
In the following example, the hosted agent IP address ranges for an organization in the West US region
are retrieved from the weekly file. Since the West US region is in the United States geography, the IP
addresses for all regions in the United States geography are included. In this example, the IP addresses
are written to the console.

using Newtonsoft.Json.Linq;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

namespace WeeklyFileIPRanges
{
    class Program
    {
        // Path to the locally saved weekly file
        const string weeklyFilePath = @"C:\MyPath\ServiceTags_Public_20200504.json";

        static void Main(string[] args)
        {
            // United States geography has the following regions:
            // Central US, East US 2, East US, North Central US,
            // South Central US, West Central US, West US, West US 2
            List<string> USGeographyRegions = new List<string>
            {
                "centralus",
                "eastus",
                "eastus2",
                "northcentralus",
                "southcentralus",
                "westcentralus",
                "westus",
                "westus2"
            };

            // Load the weekly file
            JObject weeklyFile = JObject.Parse(File.ReadAllText(weeklyFilePath));
            JArray values = (JArray)weeklyFile["values"];

            foreach (string region in USGeographyRegions)
            {
                string azureCloudRegion = $"AzureCloud.{region}";
                Console.WriteLine(azureCloudRegion);

                var ipList =
                    from v in values
                    where (string)v["name"] == azureCloudRegion
                    select v["properties"]["addressPrefixes"];

                foreach (var ip in ipList.Children())
                {
                    Console.WriteLine(ip);
                }
            }
        }
    }
}

Service tags
Microsoft-hosted agents can't be listed by service tags. If you're trying to grant hosted agents access to
your resources, you'll need to follow the IP range allow listing method.
Security
Microsoft-hosted agents run on a secure Azure platform. However, you must be aware of the following
security considerations.
Although Microsoft-hosted agents run on the Azure public network, they are not assigned public IP
addresses. So, external entities cannot target Microsoft-hosted agents.
Microsoft-hosted agents are run in individual VMs, which are re-imaged after each run. Each agent is
dedicated to a single organization, and each VM hosts only a single agent.
There are several benefits to running your pipeline on Microsoft-hosted agents, from a security
perspective. If you run untrusted code in your pipeline, such as contributions from forks, it is safer to
run the pipeline on Microsoft-hosted agents than on self-hosted agents that reside in your corporate
network.
When a pipeline needs to access your corporate resources behind a firewall, you have to allow the IP
address range for the Azure geography. This may increase your exposure, because the range of IP
addresses is rather large and machines in this range can belong to other customers as well. The
best way to prevent this is to avoid the need to access internal resources.
Hosted images do not conform to CIS hardening benchmarks. To use CIS-hardened images, you
must create either self-hosted agents or scale-set agents.

Capabilities and limitations


Microsoft-hosted agents:
Have the above software. You can also add software during your build or release using tool installer
tasks.
Provide 10 GB of storage for your source and build outputs.
Provide a free tier:
Public project: 10 free Microsoft-hosted parallel jobs that can run for up to 360 minutes (6
hours) each time, with no overall time limit per month. Contact us to get your free tier limits
increased.
Private project: One free parallel job that can run for up to 60 minutes each time, until you've
used 1,800 minutes (30 hours) per month. You can pay for additional capacity per parallel job.
Paid parallel jobs remove the monthly time limit and allow you to run each job for up to 360
minutes (6 hours). Buy Microsoft-hosted parallel jobs.
Run on Microsoft Azure general purpose virtual machines Standard_DS2_v2
Run as an administrator on Windows and a passwordless sudo user on Linux
(Linux only) Run steps in a cgroup that offers 6 GB of physical memory and 13 GB of total memory

Microsoft-hosted agents do not offer:


The ability to remotely connect.
The ability to drop artifacts to a UNC file share.
The ability to join machines directly to your corporate network.
The ability to get bigger or more powerful build machines.
The ability to pre-install custom software (other than through tool installer tasks in your pipeline).
Potential performance advantages that you might get by using self-hosted agents, which might start
and run builds faster. Learn more
The ability to run XAML builds.
If Microsoft-hosted agents don't meet your needs, then you can deploy your own self-hosted agents or
use scale set agents.
FAQ
How can I see what software is included in an image?
You can see the installed software for each hosted agent by choosing the Included Software link in the
Use a Microsoft-hosted agent table.
How does Microsoft choose the software and versions to put on the image?
More information about the versions of software included on the images can be found at Guidelines for
what's installed.
When are the images updated?
Images are typically updated weekly. You can check the status badges which are in the format
20200113.x where the first part indicates the date the image was updated.

What can I do if software I need is removed or replaced with a newer version?


You can let us know by filing a GitHub issue by choosing the Included Software links in the Use a
Microsoft-hosted agent table.
You can also use a self-hosted agent that includes the exact versions of software that you need. For more
information, see Self-hosted agents.
What if I need a bigger machine with more processing power, memory, or disk space?
We can't increase the memory, processing power, or disk space for Microsoft-hosted agents, but you can
use self-hosted agents or scale set agents hosted on machines with your desired specifications.
I can't select a Microsoft-hosted agent and I can't queue my build or deployment. What should I do?
Microsoft-hosted agents are only available in Azure Pipelines and not in TFS or Azure DevOps Server.
By default, all project contributors in an organization have access to the Microsoft-hosted agents. But,
your organization administrator may limit the access of Microsoft-hosted agents to select users or
projects. Ask the owner of your Azure DevOps organization to grant you permission to use a Microsoft-
hosted agent. See agent pool security.
My pipelines running on Microsoft-hosted agents take more time to complete. How can I speed
them up?
If your pipeline has recently become slower, review our status page for any outages. We could be having
issues with our service. Otherwise, review any changes that you made in your application code or pipeline.
Your repository size during check-out might have increased, you may be uploading larger artifacts, or
you may be running more tests.
If you are just setting up a pipeline and are comparing the performance of Microsoft-hosted agents to
your local machine or a self-hosted agent, then note the specifications of the hardware that we use to
run your jobs. We are unable to provide you with bigger or more powerful machines. You can consider using
self-hosted agents or scale set agents if this performance is not acceptable.
I need more agents. What can I do?
All Azure DevOps organizations are provided with several free parallel jobs for open-source projects,
and one free parallel job and limited minutes each month for private projects. If you need additional
minutes or parallel jobs for your open-source project, contact support. If you need additional minutes or
parallel jobs for your private project, then you can buy more.
My pipeline succeeds on self-hosted agent, but fails on Microsoft-hosted agents. What should I do?
Your self-hosted agent probably has all the right dependencies installed on it, whereas the same
dependencies, tools, and software are not installed on Microsoft-hosted agents. First, carefully review
the list of software that is installed on Microsoft-hosted agents by following the link to Included
software in the table above. Then, compare that with the software installed on your self-hosted agent.
In some cases, Microsoft-hosted agents may have the tools that you need (for example, Visual Studio),
but all of the necessary optional components may not have been installed. If you find differences, then
you have two options:
You can create a new issue on the repository, where we track requests for additional software.
Contacting support will not help you with setting up new software on Microsoft-hosted agents.
You can use self-hosted agents or scale set agents. With these agents, you are fully in control of
the images that are used to run your pipelines.
My build succeeds on my local machine, but fails on Microsoft-hosted agents. What should I do?
Your local machine probably has all the right dependencies installed on it, whereas the same
dependencies, tools, and software are not installed on Microsoft-hosted agents. First, carefully review
the list of software that is installed on Microsoft-hosted agents by following the link to Included
software in the table above. Then, compare that with the software installed on your local machine. In
some cases, Microsoft-hosted agents may have the tools that you need (e.g., Visual Studio), but all of the
necessary optional components may not have been installed. If you find differences, then you have two
options:
You can create a new issue on the repository, where we track requests for additional software.
This is your best bet for getting new software installed. Contacting support will not help you with
setting up new software on Microsoft-hosted agents.
You can use self-hosted agents or scale set agents. With these agents, you are fully in control of
the images that are used to run your pipelines.
My pipeline fails with the error: "no space left on device".
Microsoft-hosted agents only have 10 GB of disk space available for running your job. This space is
consumed when you check out source code, when you download packages, when you download docker
images, or when you produce intermediate files. Unfortunately, we cannot increase the free space
available on Microsoft-hosted images. You can restructure your pipeline so that it can fit into this space.
Or, you can consider using self-hosted agents or scale set agents.
My pipeline running on Microsoft-hosted agents requires access to servers on our corporate
network. How do we get a list of IP addresses to allow in our firewall?
See the section Agent IP ranges.
Our pipeline running on Microsoft-hosted agents is unable to resolve the name of a server on our
corporate network. How can we fix this?
If you refer to the server by its DNS name, then make sure that your server is publicly accessible on the
Internet through its DNS name. If you refer to your server by its IP address, make sure that the IP
address is publicly accessible on the Internet. In both cases, ensure that any firewall in between the
agents and your corporate network has the agent IP ranges allowed.
How can I manually select versions of tools on the Hosted macOS agent?
Xamarin
To manually select a Xamarin SDK version to use on the Hosted macOS agent, before your Xamarin
build task, execute this command line as part of your build, replacing the Mono version number 5.4.1 as
needed (also replacing '.' characters with underscores: '_'). Choose the Mono version that is associated
with the Xamarin SDK version that you need.
/bin/bash -c "sudo $AGENT_HOMEDIRECTORY/scripts/select-xamarin-sdk.sh 5_4_1"

Mono versions associated with Xamarin SDK versions on the Hosted macOS agent can be found here.
This command does not select the Mono version beyond the Xamarin SDK. To manually select a Mono
version, see instructions below.
In case you are using a non-default version of Xcode for building your Xamarin.iOS or Xamarin.Mac
apps, you should additionally execute this command line:
/bin/bash -c "echo '##vso[task.setvariable variable=MD_APPLE_SDK_ROOT;]'$(xcodeRoot);sudo xcode-
select --switch $(xcodeRoot)/Contents/Developer"

where $(xcodeRoot) = /Applications/Xcode_10.1.app

Xcode versions on the Hosted macOS agent pool can be found here.
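If you author your pipeline in YAML instead of the Classic editor, a minimal sketch of the same selection is a
script step placed before your Xamarin build task; 5_4_1 is the example Mono-style version number from above
and should be replaced as needed:

steps:
# Select the Xamarin SDK (by its associated Mono version) before the Xamarin build task runs.
- script: /bin/bash -c "sudo $AGENT_HOMEDIRECTORY/scripts/select-xamarin-sdk.sh 5_4_1"
  displayName: Select Xamarin SDK version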
Xcode
If you use the Xcode task included with Azure Pipelines and TFS, you can select a version of Xcode in
that task's properties. Otherwise, to manually set the Xcode version to use on the Hosted macOS
agent pool, before your xcodebuild build task, execute this command line as part of your build,
replacing the Xcode version number 8.3.3 as needed:
/bin/bash -c "sudo xcode-select -s /Applications/Xcode_8.3.3.app/Contents/Developer"

Xcode versions on the Hosted macOS agent pool can be found here.
This command does not work for Xamarin apps. To manually select an Xcode version for building
Xamarin apps, see instructions above.
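In a YAML pipeline, a minimal sketch of the same selection is a script step placed before any xcodebuild step;
8.3.3 is the example Xcode version from above and should be replaced as needed:

steps:
# Point xcode-select at the Xcode version you need before running xcodebuild.
- script: sudo xcode-select -s /Applications/Xcode_8.3.3.app/Contents/Developer
  displayName: Select Xcode version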
Mono
To manually select a Mono version to use on the Hosted macOS agent pool, before your Mono build
task, execute this script in each job of your build, replacing the Mono version number 5.4.1 as needed:

SYMLINK=5_4_1
MONOPREFIX=/Library/Frameworks/Mono.framework/Versions/$SYMLINK
echo "##vso[task.setvariable variable=DYLD_FALLBACK_LIBRARY_PATH;]$MONOPREFIX/lib:/lib:/usr/lib:$DYLD_LIBRARY_FALLBACK_PATH"
echo "##vso[task.setvariable variable=PKG_CONFIG_PATH;]$MONOPREFIX/lib/pkgconfig:$MONOPREFIX/share/pkgconfig:$PKG_CONFIG_PATH"
echo "##vso[task.setvariable variable=PATH;]$MONOPREFIX/bin:$PATH"

.NET Core
.NET Core 2.2.105 is the default on the VM images, but Mono version 6.0 or greater requires .NET Core
2.2.300+. If you use Mono 6.0 or greater, you will have to override the .NET Core version by using the .NET
Core Tool Installer task.
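A minimal sketch of that override, using the UseDotNet task (the YAML form of the .NET Core Tool Installer)
with an SDK version that satisfies the Mono 6.0 requirement:

steps:
# Install a .NET Core SDK that is new enough for Mono 6.0 or greater.
- task: UseDotNet@2
  inputs:
    packageType: 'sdk'
    version: '2.2.300'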
Boost
The VM images contain prebuilt Boost libraries, with their headers in the directory designated by the
BOOST_ROOT environment variable. To include the Boost headers, add the path $BOOST_ROOT/include to your
compiler's include search paths.
Example of g++ invocation with Boost libraries:

g++ -I "$BOOST_ROOT/include" ...
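In a pipeline, the same invocation can run as a script step; main.cpp and the output name are placeholders
for your own sources:

steps:
# Compile against the prebuilt Boost headers shipped on the image.
- script: g++ -I "$BOOST_ROOT/include" main.cpp -o main
  displayName: Build with Boost headers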

Self-hosted Linux agents

Azure Pipelines | Azure DevOps Ser ver 2020 | Azure DevOps Ser ver 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

To run your jobs, you'll need at least one agent. A Linux agent can build and deploy different kinds of apps,
including Java and Android apps. We support Ubuntu, Red Hat, and CentOS.

Before you begin:


If your pipelines are in Azure Pipelines and a Microsoft-hosted agent meets your needs, you can skip
setting up a private Linux agent.
Otherwise, you've come to the right place to set up an agent on Linux. Continue to the next section.

Learn about agents


If you already know what an agent is and how it works, feel free to jump right in to the following sections. But if
you'd like some more background about what they do and how they work, see Azure Pipelines agents.

Check prerequisites
The agent is based on .NET Core 2.1. You can run this agent on several Linux distributions. We support the
following subset of .NET Core supported distributions:
x64
CentOS 7, 6 (see note 1)
Debian 9
Fedora 30, 29
Linux Mint 18, 17
openSUSE 42.3 or later
Oracle Linux 7
Red Hat Enterprise Linux 8, 7, 6 (see note 1)
SUSE Enterprise Linux 12 SP2 or later
Ubuntu 18.04, 16.04
ARM32 (see note 2)
Debian 9
Ubuntu 18.04

NOTE
Note 1: RHEL 6 and CentOS 6 require installing the specialized rhel.6-x64 version of the agent.
NOTE
Note 2: ARM instruction set ARMv7 or above is required. Run uname -a to see your Linux distro's instruction set.

Regardless of your platform, you will need to install Git 2.9.0 or higher. We strongly recommend installing the
latest version of Git.
If you'll be using TFVC, you will also need the Oracle Java JDK 1.6 or higher. (The Oracle JRE and OpenJDK are not
sufficient for this purpose.)
The agent installer knows how to check for other dependencies. You can install those dependencies on supported
Linux platforms by running ./bin/installdependencies.sh in the agent directory.
TFS 2018 RTM and older : The shipped agent is based on CoreCLR 1.0. We recommend that, if able, you
should upgrade to a later agent version (2.125.0 or higher). See Azure Pipelines agent prereqs for more
about what's required to run a newer agent.
If you must stay on the older agent, make sure your machine is prepared with our prerequisites for either of the
supported distributions:
Ubuntu systems
Red Hat/CentOS systems
Subversion
If you're building from a Subversion repo, you must install the Subversion client on the machine.
You should run agent setup manually the first time. After you get a feel for how agents work, or if you want to
automate setting up many agents, consider using unattended config.

Prepare permissions
Decide which user you'll use
As a one-time step, you must register the agent. Someone with permission to administer the agent queue must
complete these steps. The agent will not use this person's credentials in everyday operation, but they're required
to complete registration. Learn more about how agents communicate.
Authenticate with a personal access token (PAT)
1. Sign in with the user account you plan to use in your Team Foundation Server web portal (
https://{your-server}:8080/tfs/ ).

1. Sign in with the user account you plan to use in your Azure DevOps Server web portal (
https://{your-server}/DefaultCollection/ ).

1. Sign in with the user account you plan to use in your Azure DevOps organization (
https://dev.azure.com/{your_organization} ).

2. From your home page, open your profile. Go to your security details.

3. Create a personal access token.

4. For the scope select Agent Pools (read, manage) and make sure all the other boxes are cleared. If it's a
deployment group agent, for the scope select Deployment group (read, manage) and make sure all
the other boxes are cleared.
Select Show all scopes at the bottom of the Create a new personal access token window window
to see the complete list of scopes.
5. Copy the token. You'll use this token when you configure the agent.
Authenticate as a Windows user (TFS 2015 and TFS 2017)
As an alternative, on TFS 2017, you can use either a domain user or a local Windows user on each of your TFS
application tiers.
On TFS 2015, for macOS and Linux only, we recommend that you create a local Windows user on each of your
TFS application tiers and dedicate that user for the purpose of deploying build agents.
Confirm the user has permission
Make sure the user account that you're going to use has permission to register the agent.
Is the user an Azure DevOps organization owner or TFS or Azure DevOps Server administrator? Stop here , you
have permission.
Otherwise:
1. Open a browser and navigate to the Agent pools tab for your Azure Pipelines organization or Azure
DevOps Server or TFS server:
1. Choose Azure DevOps , Organization settings .

2. Choose Agent pools .


1. Choose Azure DevOps , Collection settings .

2. Choose Agent pools .

1. Choose Azure DevOps , Collection settings .

2. Choose Agent pools .


1. Navigate to your project and choose Settings (gear icon) > Agent Queues .

2. Choose Manage pools .

1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
2. Choose Manage pools .

1. Navigate to your project and choose Manage project (gear icon).

2. Choose Control panel .

3. Select Agent pools .

2. Click the pool on the left side of the page and then click Security .
3. If the user account you're going to use is not shown, then get an administrator to add it. The administrator
can be an agent pool administrator, an Azure DevOps organization owner, or a TFS or Azure DevOps
Server administrator.
If it's a deployment group agent, the administrator can be a deployment group administrator, an Azure
DevOps organization owner, or a TFS or Azure DevOps Server administrator.
You can add a user to the deployment group administrator role in the Security tab on the Deployment
Groups page in Azure Pipelines .

NOTE
If you see a message like this: "Sorry, we couldn't add the identity. Please try a different identity.", you probably
followed the above steps for an organization owner or TFS or Azure DevOps Server administrator. You don't need to do
anything; you already have permission to administer the agent queue.

Download and configure the agent


Azure Pipelines
1. Log on to the machine using the account for which you've prepared permissions as explained above.
2. In your web browser, sign in to Azure Pipelines, and navigate to the Agent pools tab:
1. Choose Azure DevOps , Organization settings .
2. Choose Agent pools .

1. Choose Azure DevOps , Collection settings .

2. Choose Agent pools .


1. Choose Azure DevOps , Collection settings .

2. Choose Agent pools .

1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
2. Choose Manage pools .

1. Navigate to your project and choose Settings (gear icon) > Agent Queues .

2. Choose Manage pools .

1. Navigate to your project and choose Manage project (gear icon).

2. Choose Control panel .

3. Select Agent pools .


3. Select the Default pool, select the Agents tab, and choose New agent .
4. On the Get the agent dialog box, click Linux .
5. On the left pane, select the specific flavor. We offer x64 or ARM for most Linux distributions. We also offer
a specific build for Red Hat Enterprise Linux 6.
6. On the right pane, click the Download button.
7. Follow the instructions on the page.
8. Unpack the agent into the directory of your choice. cd to that directory and run ./config.sh .
Azure DevOps Server 2019 and Azure DevOps Server 2020
1. Log on to the machine using the account for which you've prepared permissions as explained above.
2. In your web browser, sign in to Azure DevOps Server 2019, and navigate to the Agent pools tab:
1. Choose Azure DevOps , Organization settings .

2. Choose Agent pools .


1. Choose Azure DevOps , Collection settings .

2. Choose Agent pools .

1. Choose Azure DevOps , Collection settings .

2. Choose Agent pools .


1. Navigate to your project and choose Settings (gear icon) > Agent Queues .

2. Choose Manage pools .

1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
2. Choose Manage pools .

1. Navigate to your project and choose Manage project (gear icon).

2. Choose Control panel .

3. Select Agent pools .

3. Click Download agent .


4. On the Get agent dialog box, click Linux .
5. On the left pane, select the specific flavor. We offer x64 or ARM for most Linux distributions. We also offer
a specific build for Red Hat Enterprise Linux 6.
6. On the right pane, click the Download button.
7. Follow the instructions on the page.
8. Unpack the agent into the directory of your choice. cd to that directory and run ./config.sh .
TFS 2017 and TFS 2018
1. Log on to the machine using the account for which you've prepared permissions as explained above.
2. In your web browser, sign in to TFS, and navigate to the Agent pools tab:
a. Navigate to your project and choose Settings (gear icon) > Agent Queues .
b. Choose Manage pools .

3. Click Download agent .


4. On the Get agent dialog box, click Linux .
5. Click the Download button.
6. Follow the instructions on the page.
7. Unpack the agent into the directory of your choice. cd to that directory and run ./config.sh . Make sure
that the path to the directory contains no spaces because tools and scripts don't always properly escape
spaces.
TFS 2015
1. Browse to the latest release on GitHub.
2. Follow the instructions on that page to download the agent.
3. Configure the agent.

./config.sh

Server URL
Azure Pipelines: https://dev.azure.com/{your-organization}

Azure DevOps Server 2019: https://{your_server}/DefaultCollection

TFS 2017 and newer: https://{your_server}/tfs

TFS 2015: http://{your_server}:8080/tfs

Authentication type
Azure Pipelines
Choose PAT , and then paste the PAT token you created into the command prompt window.

NOTE
When using PAT as the authentication method, the PAT token is used only for the initial configuration of the agent. Learn
more at Communication with Azure Pipelines or TFS.

TFS or Azure DevOps Server

IMPORTANT
Make sure your server is configured to support the authentication method you want to use.

When you configure your agent to connect to TFS, you've got the following options:
Alternate Connect to TFS or Azure DevOps Server using Basic authentication. After you select Alternate
you'll be prompted for your credentials.
Integrated Not supported on macOS or Linux.
Negotiate (Default) Connect to TFS or Azure DevOps Server as a user other than the signed-in user via a
Windows authentication scheme such as NTLM or Kerberos. After you select Negotiate you'll be prompted
for credentials.
PAT Supported only on Azure Pipelines and TFS 2017 and newer. After you choose PAT, paste the PAT
token you created into the command prompt window. Use a personal access token (PAT) if your Azure
DevOps Server or TFS instance and the agent machine are not in a trusted domain. PAT authentication is
handled by your Azure DevOps Server or TFS instance instead of the domain controller.

NOTE
When using PAT as the authentication method, the PAT token is used only for the initial configuration of the agent on
Azure DevOps Server and the newer versions of TFS. Learn more at Communication with Azure Pipelines or TFS.

Run interactively
For guidance on whether to run the agent in interactive mode or as a service, see Agents: Interactive vs. service.
To run the agent interactively:
1. If you have been running the agent as a service, uninstall the service.
2. Run the agent.

./run.sh

To restart the agent, press Ctrl+C and then run run.sh again.
To use your agent, run a job using the agent's pool. If you didn't choose a different pool, your agent will be in the
Default pool.
Run once
For agents configured to run interactively, you can choose to have the agent accept only one job. To run in this
configuration:

./run.sh --once

Agents in this mode will accept only one job and then spin down gracefully (useful for running in Docker on a
service like Azure Container Instances).

Run as a systemd service


If your agent is running on these operating systems you can run the agent as a systemd service:
Ubuntu 16 LTS or newer
Red Hat 7.1 or newer
We provide an example ./svc.sh script for you to run and manage your agent as a systemd service. This script
will be generated after you configure the agent. We encourage you to review, and if needed, update the script
before running it.
Some important caveats:
If you run your agent as a service, you cannot run the agent service as root user.
Users running SELinux have reported difficulties with the provided svc.sh script. Refer to this agent issue as
a starting point. SELinux is not an officially supported configuration.

NOTE
If you have a different distribution, or if you prefer other approaches, you can use whatever kind of service mechanism you
prefer. See Service files.

Commands
Change to the agent directory
For example, if you installed in the myagent subfolder of your home directory:

cd ~/myagent

Install
Command:

sudo ./svc.sh install

This command creates a service file that points to ./runsvc.sh . This script sets up the environment (more details
below) and starts the agent host.
Start

sudo ./svc.sh start

Status

sudo ./svc.sh status

Stop

sudo ./svc.sh stop

Uninstall

You should stop before you uninstall.

sudo ./svc.sh uninstall

Update environment variables


When you configure the service, it takes a snapshot of some useful environment variables for your current logon
user such as PATH, LANG, JAVA_HOME, ANT_HOME, and MYSQL_PATH. If you need to update the variables (for
example, after installing some new software):
./env.sh
sudo ./svc.sh stop
sudo ./svc.sh start

The snapshot of the environment variables is stored in the .env file ( PATH is stored in .path ) under the agent
root directory. You can also change these files directly to apply environment variable changes.
Run instructions before the service starts
You can also run your own instructions and commands when the service starts. For example, you could
set up the environment or call scripts.
1. Edit runsvc.sh .
2. Replace the following line with your instructions:

# insert anything to setup env when running as a service

Service files
When you install the service, some service files are put in place.
systemd service file
A systemd service file is created:
/etc/systemd/system/vsts.agent.{tfs-name}.{agent-name}.service

For example, you have configured an agent (see above) with the name our-linux-agent . The service file will be
either:
Azure Pipelines : the name of your organization. For example if you connect to
https://dev.azure.com/fabrikam , then the service name would be
/etc/systemd/system/vsts.agent.fabrikam.our-linux-agent.service

TFS or Azure DevOps Ser ver : the name of your on-premises server. For example if you connect to
http://our-server:8080/tfs , then the service name would be
/etc/systemd/system/vsts.agent.our-server.our-linux-agent.service

sudo ./svc.sh install generates this file from this template: ./bin/vsts.agent.service.template

.service file
sudo ./svc.sh start finds the service by reading the .service file, which contains the name of systemd service
file described above.
Alternative service mechanisms
We provide the ./svc.sh script as a convenient way for you to run and manage your agent as a systemd
service. But you can use whatever kind of service mechanism you prefer (for example: initd or upstart).
You can use the template described above to facilitate generating other kinds of service files.

Use a cgroup to avoid agent failure


It's important to avoid situations in which the agent fails or becomes unusable, because otherwise the agent can't
stream pipeline logs or report pipeline status back to the server. You can mitigate the risk of this kind of problem
being caused by high memory pressure by using cgroups and a lower oom_score_adj . After you've done this,
Linux reclaims system memory from pipeline job processes before reclaiming memory from the agent process.
Learn how to configure cgroups and OOM score.
Replace an agent
To replace an agent, follow the Download and configure the agent steps again.
When you configure an agent using the same name as an agent that already exists, you're asked if you want to
replace the existing agent. If you answer Y , then make sure you remove the agent (see below) that you're
replacing. Otherwise, after a few minutes of conflicts, one of the agents will shut down.

Remove and re-configure an agent


To remove the agent:
1. Stop and uninstall the service as explained above.
2. Remove the agent.

./config.sh remove

3. Enter your credentials.


After you've removed the agent, you can configure it again.

Unattended config
The agent can be set up from a script with no human intervention. You must pass --unattended and the answers
to all questions.
To configure an agent, it must know the URL to your organization or collection and credentials of someone
authorized to set up agents. All other responses are optional. Any command-line parameter can be specified
using an environment variable instead: put its name in upper case and prepend VSTS_AGENT_INPUT_ . For example,
VSTS_AGENT_INPUT_PASSWORD instead of specifying --password .

Required options
--unattended - agent setup will not prompt for information, and all settings must be provided on the
command line
--url <url> - URL of the server. For example: https://dev.azure.com/myorganization or http://my-azure-
devops-server:8080/tfs
--auth <type> - authentication type. Valid values are:
pat (Personal access token)
negotiate (Kerberos or NTLM)
alt (Basic authentication)
integrated (Windows default credentials)

Authentication options
If you chose --auth pat :
--token <token> - specifies your personal access token
If you chose --auth negotiate or --auth alt :
--userName <userName> - specifies a Windows username in the format domain\userName or
[email protected]
--password <password> - specifies a password
Pool and agent names
--pool <pool> - pool name for the agent to join
--agent <agent> - agent name
--replace - replace the agent in a pool. If another agent is listening by the same name, it will start failing
with a conflict
Agent setup
--work <workDirectory> - work directory where job data is stored. Defaults to _work under the root of the
agent directory. The work directory is owned by a given agent and should not be shared between multiple agents.
--acceptTeeEula - accept the Team Explorer Everywhere End User License Agreement (macOS and Linux
only)
--disableloguploads - don't stream or send console log output to the server. Instead, you may retrieve them
from the agent host's filesystem after the job completes.
Windows-only startup
--runAsService - configure the agent to run as a Windows service (requires administrator permission)
--runAsAutoLogon - configure auto-logon and run the agent on startup (requires administrator permission)
--windowsLogonAccount <account> - used with --runAsService or --runAsAutoLogon to specify the Windows
user name in the format domain\userName or [email protected]
--windowsLogonPassword <password> - used with --runAsService or --runAsAutoLogon to specify Windows
logon password
--overwriteAutoLogon - used with --runAsAutoLogon to overwrite the existing auto logon on the machine
--noRestart - used with --runAsAutoLogon to stop the host from restarting after agent configuration
completes
Deployment group only
--deploymentGroup - configure the agent as a deployment group agent
--deploymentGroupName <name> - used with --deploymentGroup to specify the deployment group for the agent
to join
--projectName <name> - used with --deploymentGroup to set the project name
--addDeploymentGroupTags - used with --deploymentGroup to indicate that deployment group tags should be
added
--deploymentGroupTags <tags> - used with --addDeploymentGroupTags to specify the comma separated list of
tags for the deployment group agent - for example "web, db"
./config.sh --help always lists the latest required and optional responses.

Diagnostics
If you're having trouble with your self-hosted agent, you can try running diagnostics. After configuring the agent:

./run.sh --diagnostics

This will run through a diagnostic suite that may help you troubleshoot the problem. The diagnostics feature is
available starting with agent version 2.165.0.

Help on other options


To learn about other options:

./config.sh --help
The help provides information on authentication alternatives and unattended configuration.

Capabilities
Your agent's capabilities are cataloged and advertised in the pool so that only the builds and releases it can
handle are assigned to it. See Build and release agent capabilities.
In many cases, after you deploy an agent, you'll need to install software or utilities. Generally you should install
on your agents whatever software and tools you use on your development machine.
For example, if your build includes the npm task, then the build won't run unless there's a build agent in the pool
that has npm installed.
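For example, a minimal YAML sketch that requests an agent with the npm capability from a self-hosted pool (the
pool name Default is an assumption; use the name of your own pool):

pool:
  name: Default       # assumption: the self-hosted pool that contains your agent
  demands: npm        # only agents that advertise the npm capability can run this job

steps:
- script: npm --version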

IMPORTANT
After you install new software on an agent, you must restart the agent for the new capability to show up in the pool so
that the build can run.

FAQ
How do I make sure I have the latest v2 agent version?
1. Navigate to the Agent pools tab:
1. Choose Azure DevOps , Organization settings .

2. Choose Agent pools .


1. Choose Azure DevOps , Collection settings .

2. Choose Agent pools .

1. Choose Azure DevOps , Collection settings .


2. Choose Agent pools .

1. Navigate to your project and choose Settings (gear icon) > Agent Queues .

2. Choose Manage pools .


1. Navigate to your project and choose Settings (gear icon) > Agent Queues .

2. Choose Manage pools .

1. Navigate to your project and choose Manage project (gear icon).

2. Choose Control panel .

3. Select Agent pools .

2. Click the pool that contains the agent.


3. Make sure the agent is enabled.
4. Navigate to the capabilities tab:
1. From the Agent pools tab, select the desired agent pool.
2. Select Agents and choose the desired agent.

3. Choose the Capabilities tab.


NOTE
Microsoft-hosted agents don't display system capabilities. For a list of software installed on Microsoft-
hosted agents, see Use a Microsoft-hosted agent.


5. Look for the Agent.Version capability. You can check this value against the latest published agent version.
See Azure Pipelines Agent and check the page for the highest version number listed.
6. Each agent automatically updates itself when it runs a task that requires a newer version of the agent. If
you want to manually update some agents, right-click the pool, and select Update all agents .
Can I update my v2 agents that are part of an Azure DevOps Server pool?
Yes. Beginning with Azure DevOps Server 2019, you can configure your server to look for the agent package files
on a local disk. This configuration will override the default version that came with the server at the time of its
release. This scenario also applies when the server doesn't have access to the internet.
1. From a computer with Internet access, download the latest version of the agent package files (in .zip or
.tar.gz form) from the Azure Pipelines Agent GitHub Releases page.
2. Transfer the downloaded package files to each Azure DevOps Server Application Tier by using a method
of your choice (such as USB drive, Network transfer, and so on). Place the agent files under the
%ProgramData%\Microsoft\Azure DevOps\Agents folder.

3. You're all set! Your Azure DevOps Server will now use the local files whenever the agents are updated.
Each agent automatically updates itself when it runs a task that requires a newer version of the agent. But
if you want to manually update some agents, right-click the pool, and then choose Update all agents .
Why is sudo needed to run the service commands?
./svc.sh uses systemctl , which requires sudo .
Source code: systemd.svc.sh.template on GitHub
I'm running a firewall and my code is in Azure Repos. What URLs does the agent need to communicate with?
If you're running an agent in a secure network behind a firewall, make sure the agent can initiate communication
with the following URLs and IP addresses.
For organizations using the *.visualstudio.com domain:

https://login.microsoftonline.com
https://app.vssps.visualstudio.com
https://{organization_name}.visualstudio.com
https://{organization_name}.vsrm.visualstudio.com
https://{organization_name}.vstmr.visualstudio.com
https://{organization_name}.pkgs.visualstudio.com
https://{organization_name}.vssps.visualstudio.com

For organizations using the dev.azure.com domain:

https://dev.azure.com
https://*.dev.azure.com
https://login.microsoftonline.com
https://management.core.windows.net
https://vstsagentpackage.azureedge.net

To ensure your organization works with any existing firewall or IP restrictions, ensure that dev.azure.com and
*.dev.azure.com are open and update your allow-listed IPs to include the following IP addresses, based on your
IP version. If you're currently allow-listing the 13.107.6.183 and 13.107.9.183 IP addresses, leave them in place,
as you don't need to remove them.
IPv4 ranges
13.107.6.0/24
13.107.9.0/24
13.107.42.0/24
13.107.43.0/24

IPv6 ranges
2620:1ec:4::/48
2620:1ec:a92::/48
2620:1ec:21::/48
2620:1ec:22::/48

How do I run the agent with self-signed certificate?


Run the agent with self-signed certificate
How do I run the agent behind a web proxy?
Run the agent behind a web proxy
How do I restart the agent?
If you are running the agent interactively, see the restart instructions in Run interactively. If you are running the
agent as a systemd service, follow the steps to Stop and then Start the agent.
How do I configure the agent to bypass a web proxy and connect to Azure Pipelines?
If you want the agent to bypass your proxy and connect to Azure Pipelines directly, then you should configure
your web proxy to enable the agent to access the following URLs.
For organizations using the *.visualstudio.com domain:

https://login.microsoftonline.com
https://app.vssps.visualstudio.com
https://{organization_name}.visualstudio.com
https://{organization_name}.vsrm.visualstudio.com
https://{organization_name}.vstmr.visualstudio.com
https://{organization_name}.pkgs.visualstudio.com
https://{organization_name}.vssps.visualstudio.com

For organizations using the dev.azure.com domain:

https://dev.azure.com
https://*.dev.azure.com
https://login.microsoftonline.com
https://management.core.windows.net
https://vstsagentpackage.azureedge.net

To ensure your organization works with any existing firewall or IP restrictions, ensure that dev.azure.com and
*.dev.azure.com are open and update your allow-listed IPs to include the following IP addresses, based on your
IP version. If you're currently allow-listing the 13.107.6.183 and 13.107.9.183 IP addresses, leave them in place,
as you don't need to remove them.
IPv4 ranges
13.107.6.0/24
13.107.9.0/24
13.107.42.0/24
13.107.43.0/24

IPv6 ranges
2620:1ec:4::/48
2620:1ec:a92::/48
2620:1ec:21::/48
2620:1ec:22::/48

NOTE
This procedure enables the agent to bypass a web proxy. Your build pipeline and scripts must still handle bypassing your
web proxy for each task and tool you run in your build.
For example, if you are using a NuGet task, you must configure your web proxy to support bypassing the URL for the
server that hosts the NuGet feed you're using.

I'm using TFS and the URLs in the sections above don't work for me. Where can I get help?
Web site settings and security
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
Self-hosted macOS agents

Azure Pipelines | Azure DevOps Ser ver 2020 | Azure DevOps Ser ver 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

To build and deploy Xcode apps or Xamarin.iOS projects, you'll need at least one macOS agent. This agent can
also build and deploy Java and Android apps.

Before you begin:


If your pipelines are in Azure Pipelines and a Microsoft-hosted agent meets your needs, you can skip
setting up a self-hosted macOS agent.
Otherwise, you've come to the right place to set up an agent on macOS. Continue to the next section.

Learn about agents


If you already know what an agent is and how it works, feel free to jump right in to the following sections. But if
you'd like some more background about what they do and how they work, see Azure Pipelines agents.

Check prerequisites
Make sure your machine has these prerequisites:
macOS Sierra (10.12) or higher
Git 2.9.0 or higher (latest version strongly recommended - you can easily install with Homebrew)
These prereqs are required for agent version 2.125.0 and higher.
These prereqs are required for agent version 2.124.0 and below. If you're able, we recommend upgrading to
a newer macOS (10.12+) and upgrading to the newest agent.
Make sure your machine has these prerequisites:
OS X Yosemite (10.10), El Capitan (10.11), or macOS Sierra (10.12)
Git 2.9.0 or higher (latest version strongly recommended)
Meets all prereqs for .NET Core 1.x
If you'll be using TFVC, you will also need the Oracle Java JDK 1.6 or higher. (The Oracle JRE and OpenJDK are not
sufficient for this purpose.)

Prepare permissions
If you're building from a Subversion repo, you must install the Subversion client on the machine.
You should run agent setup manually the first time. After you get a feel for how agents work, or if you want to
automate setting up many agents, consider using unattended config.
Decide which user you'll use
As a one-time step, you must register the agent. Someone with permission to administer the agent queue must
complete these steps. The agent will not use this person's credentials in everyday operation, but they're required
to complete registration. Learn more about how agents communicate.
Authenticate with a personal access token (PAT)
1. Sign in with the user account you plan to use in your Team Foundation Server web portal (
https://{your-server}:8080/tfs/ ).

1. Sign in with the user account you plan to use in your Azure DevOps Server web portal (
https://{your-server}/DefaultCollection/ ).

1. Sign in with the user account you plan to use in your Azure DevOps organization (
https://dev.azure.com/{your_organization} ).

2. From your home page, open your profile. Go to your security details.

3. Create a personal access token.

4. For the scope select Agent Pools (read, manage) and make sure all the other boxes are cleared. If it's a
deployment group agent, for the scope select Deployment group (read, manage) and make sure all
the other boxes are cleared.
Select Show all scopes at the bottom of the Create a new personal access token window window
to see the complete list of scopes.
5. Copy the token. You'll use this token when you configure the agent.
Authenticate as a Windows user (TFS 2015 and TFS 2017)
As an alternative, on TFS 2017, you can use either a domain user or a local Windows user on each of your TFS
application tiers.
On TFS 2015, for macOS and Linux only, we recommend that you create a local Windows user on each of your
TFS application tiers and dedicate that user for the purpose of deploying build agents.
Confirm the user has permission
Make sure the user account that you're going to use has permission to register the agent.
Is the user an Azure DevOps organization owner or TFS or Azure DevOps Server administrator? Stop here , you
have permission.
Otherwise:
1. Open a browser and navigate to the Agent pools tab for your Azure Pipelines organization or Azure
DevOps Server or TFS server:
1. Choose Azure DevOps , Organization settings .
2. Choose Agent pools .

1. Choose Azure DevOps , Collection settings .

2. Choose Agent pools .


1. Choose Azure DevOps , Collection settings .

2. Choose Agent pools .

1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
2. Choose Manage pools .

1. Navigate to your project and choose Settings (gear icon) > Agent Queues .

2. Choose Manage pools .

1. Navigate to your project and choose Manage project (gear icon).

2. Choose Control panel .

3. Select Agent pools .


2. Click the pool on the left side of the page and then click Security .
3. If the user account you're going to use is not shown, then get an administrator to add it. The administrator
can be an agent pool administrator, an Azure DevOps organization owner, or a TFS or Azure DevOps
Server administrator.
If it's a deployment group agent, the administrator can be a deployment group administrator, an Azure
DevOps organization owner, or a TFS or Azure DevOps Server administrator.
You can add a user to the deployment group administrator role in the Security tab on the Deployment
Groups page in Azure Pipelines .

NOTE
If you see a message like this: "Sorry, we couldn't add the identity. Please try a different identity.", you probably
followed the above steps for an organization owner or TFS or Azure DevOps Server administrator. You don't need to do
anything; you already have permission to administer the agent queue.

Download and configure the agent


Azure Pipelines
1. Log on to the machine using the account for which you've prepared permissions as explained above.
2. In your web browser, sign in to Azure Pipelines, and navigate to the Agent pools tab:
1. Choose Azure DevOps , Organization settings .

2. Choose Agent pools .


1. Choose Azure DevOps , Collection settings .

2. Choose Agent pools .

1. Choose Azure DevOps , Collection settings .


2. Choose Agent pools .

1. Navigate to your project and choose Settings (gear icon) > Agent Queues .

2. Choose Manage pools .


1. Navigate to your project and choose Settings (gear icon) > Agent Queues .

2. Choose Manage pools .

1. Navigate to your project and choose Manage project (gear icon).

2. Choose Control panel .

3. Select Agent pools .

3. Select the Default pool, select the Agents tab, and choose New agent .
4. On the Get the agent dialog box, click macOS .
5. Click the Download button.
6. Follow the instructions on the page.
7. Clear the extended attribute on the tar file: xattr -c vsts-agent-osx-x64-V.v.v.tar.gz .
8. Unpack the agent into the directory of your choice. cd to that directory and run ./config.sh . Make sure
that the path to the directory contains no spaces because tools and scripts don't always properly escape
spaces.
Azure DevOps Server 2019 and Azure DevOps Server 2020
1. Log on to the machine using the account for which you've prepared permissions as explained above.
2. In your web browser, sign in to Azure DevOps Server, and navigate to the Agent pools tab:
1. Choose Azure DevOps , Organization settings .

2. Choose Agent pools .

1. Choose Azure DevOps , Collection settings .

2. Choose Agent pools .


1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
2. Choose Manage pools .

1. Navigate to your project and choose Manage project (gear icon).

2. Choose Control panel .

3. Select Agent pools .


3. Click Download agent .
4. On the Get agent dialog box, click macOS .
5. Click the Download button.
6. Follow the instructions on the page.
7. Clear the extended attribute on the tar file: xattr -c vsts-agent-osx-x64-V.v.v.tar.gz .
8. Unpack the agent into the directory of your choice. cd to that directory and run ./config.sh . Make sure
that the path to the directory contains no spaces because tools and scripts don't always properly escape
spaces.
TFS 2017 and TFS 2018
1. Log on to the machine using the account for which you've prepared permissions as explained above.
2. In your web browser, sign in to Azure Pipelines or TFS, and navigate to the Agent pools tab:
a. Navigate to your project and choose Settings (gear icon) > Agent Queues .

b. Choose Manage pools .

3. Click Download agent .


4. On the Get agent dialog box, click macOS .
5. Click the Download button.
6. Follow the instructions on the page.
7. Clear the extended attribute on the tar file: xattr -c vsts-agent-osx-x64-V.v.v.tar.gz .
8. Unpack the agent into the directory of your choice. cd to that directory and run ./config.sh . Make sure
that the path to the directory contains no spaces because tools and scripts don't always properly escape
spaces.
TFS 2015
1. Browse to the latest release on GitHub.
2. Follow the instructions on that page to download the agent.
3. Configure the agent.

./config.sh

Server URL
Azure Pipelines: https://dev.azure.com/{your-organization}

TFS 2017 and newer: https://{your_server}/tfs

TFS 2015: http://{your_server}:8080/tfs

Authentication type
Azure Pipelines
Choose PAT , and then paste the PAT token you created into the command prompt window.

NOTE
When using PAT as the authentication method, the PAT token is used only for the initial configuration of the agent. Learn
more at Communication with Azure Pipelines or TFS.

TFS or Azure DevOps Server

IMPORTANT
Make sure your server is configured to support the authentication method you want to use.

When you configure your agent to connect to TFS, you've got the following options:
Alternate Connect to TFS or Azure DevOps Server using Basic authentication. After you select Alternate
you'll be prompted for your credentials.
Integrated Not supported on macOS or Linux.
Negotiate (Default) Connect to TFS or Azure DevOps Server as a user other than the signed-in user via a
Windows authentication scheme such as NTLM or Kerberos. After you select Negotiate you'll be prompted
for credentials.
PAT Supported only on Azure Pipelines and TFS 2017 and newer. After you choose PAT, paste the PAT token
you created into the command prompt window. Use a personal access token (PAT) if your Azure DevOps
Server or TFS instance and the agent machine are not in a trusted domain. PAT authentication is handled
by your Azure DevOps Server or TFS instance instead of the domain controller.

NOTE
When using PAT as the authentication method, the PAT token is used only for the initial configuration of the agent on
Azure DevOps Server and the newer versions of TFS. Learn more at Communication with Azure Pipelines or TFS.

Run interactively
For guidance on whether to run the agent in interactive mode or as a service, see Agents: Interactive vs. service.
To run the agent interactively:
1. If you have been running the agent as a service, uninstall the service.
2. Run the agent.

./run.sh

To restart the agent, press Ctrl+C to stop it and then run run.sh again.
To use your agent, run a job using the agent's pool. If you didn't choose a different pool, your agent will be in the
Default pool.
Run once
For agents configured to run interactively, you can choose to have the agent accept only one job. To run in this
configuration:

./run.sh --once

Agents in this mode will accept only one job and then spin down gracefully (useful for running on a service like
Azure Container Instances).

Run as a launchd service


We provide the ./svc.sh script for you to run and manage your agent as a launchd LaunchAgent service. This
script will be generated after you configure the agent. The service has access to the UI to run your UI tests.

NOTE
If you prefer other approaches, you can use whatever kind of service mechanism you prefer. See Service files.

Tokens
In the section below, these tokens are replaced:
{agent-name}

{tfs-name}

For example, you have configured an agent (see above) with the name our-osx-agent . In the following examples,
{tfs-name} will be either:

Azure Pipelines: the name of your organization. For example if you connect to
https://dev.azure.com/fabrikam , then the service name would be vsts.agent.fabrikam.our-osx-agent

TFS: the name of your on-premises TFS AT server. For example if you connect to
http://our-server:8080/tfs , then the service name would be vsts.agent.our-server.our-osx-agent

Commands
Change to the agent directory
For example, if you installed in the myagent subfolder of your home directory:

cd ~/myagent
Install
Command:

./svc.sh install

This command creates a launchd plist that points to ./runsvc.sh . This script sets up the environment (more
details below) and starts the agent's host.
Start
Command:

./svc.sh start

Output:

starting vsts.agent.{tfs-name}.{agent-name}
status vsts.agent.{tfs-name}.{agent-name}:

/Users/{your-name}/Library/LaunchAgents/vsts.agent.{tfs-name}.{agent-name}.plist

Started:
13472 0 vsts.agent.{tfs-name}.{agent-name}

The left number is the pid if the service is running. If the second number is not zero, then a problem occurred.
Status
Command:

./svc.sh status

Output:

status vsts.agent.{tfs-name}.{agent-name}:

/Users/{your-name}/Library/LaunchAgents/vsts.{tfs-name}.{agent-name}.testsvc.plist

Started:
13472 0 vsts.agent.{tfs-name}.{agent-name}

The left number is the pid if the service is running. If the second number is not zero, then a problem occurred.
Stop
Command:

./svc.sh stop

Output:

stopping vsts.agent.{tfs-name}.{agent-name}
status vsts.agent.{tfs-name}.{agent-name}:

/Users/{your-name}/Library/LaunchAgents/vsts.{tfs-name}.{agent-name}.testsvc.plist

Stopped
Uninstall

You should stop before you uninstall.

Command:

./svc.sh uninstall

Automatic login and lock


Normally, the agent service runs only after the user logs in. If you want the agent service to automatically start
when the machine restarts, you can configure the machine to automatically log in and lock on startup. See Set
your Mac to automatically log in during startup - Apple Support.

NOTE
For more information, see the Terminally Geeky: use automatic login more securely blog. The .plist file mentioned in that
blog may no longer be available at the source, but a copy can be found here: Lifehacker - Make OS X load your desktop
before you log in.

Update environment variables


When you configure the service, it takes a snapshot of some useful environment variables for your current logon
user such as PATH, LANG, JAVA_HOME, ANT_HOME, and MYSQL_PATH. If you need to update the variables (for
example, after installing some new software):

./env.sh
./svc.sh stop
./svc.sh start

The snapshot of the environment variables is stored in the .env file under the agent root directory. You can also
change that file directly to apply environment variable changes.
Run instructions before the service starts
You can also provide your own instructions and commands to run when the service starts. For example, you could
set up the environment or call scripts.
1. Edit runsvc.sh .
2. Replace the following line with your instructions:

# insert anything to setup env when running as a service
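
For example, to make a specific JDK available to jobs when the agent runs as a service, you might replace that line with something like the following (the JDK path is hypothetical; use the locations of the tools on your machine):

# insert anything to setup env when running as a service
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk-11.jdk/Contents/Home
export PATH="$JAVA_HOME/bin:$PATH"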

Service Files
When you install the service, some service files are put in place.
.plist service file
A .plist service file is created:

~/Library/LaunchAgents/vsts.agent.{tfs-name}.{agent-name}.plist

For example:
~/Library/LaunchAgents/vsts.agent.fabrikam.our-osx-agent.plist

sudo ./svc.sh install generates this file from this template: ./bin/vsts.agent.plist.template

.service file
./svc.sh start finds the service by reading the .service file, which contains the path to the plist service file
described above.
Alternative service mechanisms
We provide the ./svc.sh script as a convenient way for you to run and manage your agent as a launchd
LaunchAgent service. But you can use whatever kind of service mechanism you prefer.
You can use the template described above to facilitate generating other kinds of service files. For example, you can
modify the template to generate a service that runs as a launch daemon if you don't need UI tests and don't want
to configure automatic login and lock. See Apple Developer Library: Creating Launch Daemons and Agents.
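
As a rough sketch of that approach (the file names and destination are illustrative, not prescriptive), you could copy the template, edit it, and load the result with launchctl:

cp ./bin/vsts.agent.plist.template ~/vsts.agent.fabrikam.our-osx-agent.plist
# edit the copied file to point at your agent directory and runsvc.sh, then:
sudo cp ~/vsts.agent.fabrikam.our-osx-agent.plist /Library/LaunchDaemons/
sudo launchctl load /Library/LaunchDaemons/vsts.agent.fabrikam.our-osx-agent.plist
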

Replace an agent
To replace an agent, follow the Download and configure the agent steps again.
When you configure an agent using the same name as an agent that already exists, you're asked if you want to
replace the existing agent. If you answer Y , then make sure you remove the agent (see below) that you're
replacing. Otherwise, after a few minutes of conflicts, one of the agents will shut down.

Remove and re-configure an agent


To remove the agent:
1. Stop and uninstall the service as explained above.
2. Remove the agent.

./config.sh remove

3. Enter your credentials.


After you've removed the agent, you can configure it again.

Unattended config
The agent can be set up from a script with no human intervention. You must pass --unattended and the answers
to all questions.
To configure an agent, it must know the URL to your organization or collection and credentials of someone
authorized to set up agents. All other responses are optional. Any command-line parameter can be specified
using an environment variable instead: put its name in upper case and prepend VSTS_AGENT_INPUT_ . For example,
VSTS_AGENT_INPUT_PASSWORD instead of specifying --password .

Required options
--unattended - agent setup will not prompt for information, and all settings must be provided on the
command line
--url <url> - URL of the server. For example: https://dev.azure.com/myorganization or http://my-azure-
devops-server:8080/tfs
--auth <type> - authentication type. Valid values are:
pat (Personal access token)
negotiate (Kerberos or NTLM)
alt (Basic authentication)
integrated (Windows default credentials)

Authentication options
If you chose --auth pat :
--token <token> - specifies your personal access token
If you chose --auth negotiate or --auth alt :
--userName <userName> - specifies a Windows username in the format domain\userName or
[email protected]
--password <password> - specifies a password
Pool and agent names
--pool <pool> - pool name for the agent to join
--agent <agent> - agent name
--replace - replace the agent in a pool. If another agent is listening with the same name, it will start failing with
a conflict
Agent setup
--work <workDirectory> - work directory where job data is stored. Defaults to _work under the root of the
agent directory. The work directory is owned by a given agent and should not be shared between multiple agents.
--acceptTeeEula - accept the Team Explorer Everywhere End User License Agreement (macOS and Linux only)
--disableloguploads - don't stream or send console log output to the server. Instead, you can retrieve the logs
from the agent host's filesystem after the job completes.
Windows-only startup
--runAsService - configure the agent to run as a Windows service (requires administrator permission)
--runAsAutoLogon - configure auto-logon and run the agent on startup (requires administrator permission)
--windowsLogonAccount <account> - used with --runAsService or --runAsAutoLogon to specify the Windows
user name in the format domain\userName or [email protected]
--windowsLogonPassword <password> - used with --runAsService or --runAsAutoLogon to specify Windows
logon password
--overwriteAutoLogon - used with --runAsAutoLogon to overwrite the existing auto logon on the machine
--noRestart - used with --runAsAutoLogon to stop the host from restarting after agent configuration
completes
Deployment group only
--deploymentGroup - configure the agent as a deployment group agent
--deploymentGroupName <name> - used with --deploymentGroup to specify the deployment group for the agent
to join
--projectName <name> - used with --deploymentGroup to set the project name
--addDeploymentGroupTags - used with --deploymentGroup to indicate that deployment group tags should be
added
--deploymentGroupTags <tags> - used with --addDeploymentGroupTags to specify the comma separated list of
tags for the deployment group agent - for example "web, db"
./config.sh --help always lists the latest required and optional responses.
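
For example, a minimal unattended configuration against Azure Pipelines using PAT authentication might look like this (the URL, token, pool, and agent name are placeholders for your own values):

./config.sh --unattended \
  --url https://dev.azure.com/myorganization \
  --auth pat \
  --token <your-PAT> \
  --pool Default \
  --agent our-osx-agent \
  --acceptTeeEula

Equivalently, you could omit --token and supply the value through the VSTS_AGENT_INPUT_TOKEN environment variable instead.
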
Diagnostics
If you're having trouble with your self-hosted agent, you can try running diagnostics. After configuring the agent:

./run.sh --diagnostics

This will run through a diagnostic suite that may help you troubleshoot the problem. The diagnostics feature is
available starting with agent version 2.165.0.

Help on other options


To learn about other options:

./config.sh --help

The help provides information on authentication alternatives and unattended configuration.

Capabilities
Your agent's capabilities are cataloged and advertised in the pool so that only the builds and releases it can
handle are assigned to it. See Build and release agent capabilities.
In many cases, after you deploy an agent, you'll need to install software or utilities. Generally you should install on
your agents whatever software and tools you use on your development machine.
For example, if your build includes the npm task, then the build won't run unless there's a build agent in the pool
that has npm installed.

IMPORTANT
After you install new software on an agent, you must restart the agent for the new capability to show up in the pool so
that the build can run.

FAQ
How do I make sure I have the latest v2 agent version?
1. Navigate to the Agent pools tab:
1. Choose Azure DevOps , Organization settings .

2. Choose Agent pools .


1. Choose Azure DevOps , Collection settings .

2. Choose Agent pools .

1. Navigate to your project and choose Settings (gear icon) > Agent Queues .

2. Choose Manage pools .


1. Navigate to your project and choose Manage project (gear icon).

2. Choose Control panel .

3. Select Agent pools .

2. Click the pool that contains the agent.


3. Make sure the agent is enabled.
4. Navigate to the capabilities tab:
1. From the Agent pools tab, select the desired agent pool.
2. Select Agents and choose the desired agent.

3. Choose the Capabilities tab.


NOTE
Microsoft-hosted agents don't display system capabilities. For a list of software installed on Microsoft-
hosted agents, see Use a Microsoft-hosted agent.

1. From the Agent pools tab, select the desired pool.

2. Select Agents and choose the desired agent.


3. Choose the Capabilities tab.

Select the desired agent, and choose the Capabilities tab.


From the Agent pools tab, select the desired agent, and choose the Capabilities tab.

5. Look for the Agent.Version capability. You can check this value against the latest published agent version.
See Azure Pipelines Agent and check the page for the highest version number listed.
6. Each agent automatically updates itself when it runs a task that requires a newer version of the agent. If
you want to manually update some agents, right-click the pool, and select Update all agents .
Can I update my v2 agents that are part of an Azure DevOps Server pool?
Yes. Beginning with Azure DevOps Server 2019, you can configure your server to look for the agent package files
on a local disk. This configuration will override the default version that came with the server at the time of its
release. This scenario also applies when the server doesn't have access to the internet.
1. From a computer with Internet access, download the latest version of the agent package files (in .zip or
.tar.gz form) from the Azure Pipelines Agent GitHub Releases page.
2. Transfer the downloaded package files to each Azure DevOps Server Application Tier by using a method of
your choice (such as USB drive, Network transfer, and so on). Place the agent files under the
%ProgramData%\Microsoft\Azure DevOps\Agents folder.

3. You're all set! Your Azure DevOps Server will now use the local files whenever the agents are updated. Each
agent automatically updates itself when it runs a task that requires a newer version of the agent. But if you
want to manually update some agents, right-click the pool, and then choose Update all agents .
Where can I learn more about how the launchd service works?
Apple Developer Library: Creating Launch Daemons and Agents
I'm running a firewall and my code is in Azure Repos. What URLs does the agent need to communicate with?
If you're running an agent in a secure network behind a firewall, make sure the agent can initiate communication
with the following URLs and IP addresses.
For organizations using the *.visualstudio.com domain:

https://login.microsoftonline.com
https://app.vssps.visualstudio.com
https://{organization_name}.visualstudio.com
https://{organization_name}.vsrm.visualstudio.com
https://{organization_name}.vstmr.visualstudio.com
https://{organization_name}.pkgs.visualstudio.com
https://{organization_name}.vssps.visualstudio.com

For organizations using the dev.azure.com domain:

https://dev.azure.com
https://*.dev.azure.com
https://login.microsoftonline.com
https://management.core.windows.net
https://vstsagentpackage.azureedge.net

To ensure your organization works with any existing firewall or IP restrictions, ensure that dev.azure.com and
*dev.azure.com are open and update your allow-listed IPs to include the following IP addresses, based on your IP
version. If you're currently allow-listing the 13.107.6.183 and 13.107.9.183 IP addresses, leave them in place, as
you don't need to remove them.
IPv4 ranges
13.107.6.0/24
13.107.9.0/24
13.107.42.0/24
13.107.43.0/24

IPv6 ranges
2620:1ec:4::/48
2620:1ec:a92::/48
2620:1ec:21::/48
2620:1ec:22::/48

How do I run the agent with self-signed certificate?


Run the agent with self-signed certificate
How do I run the agent behind a web proxy?
Run the agent behind a web proxy
How do I restart the agent
If you are running the agent interactively, see the restart instructions in Run interactively. If you are running the
agent as a service, follow the steps to Stop and then Start the agent.
How do I configure the agent to bypass a web proxy and connect to Azure Pipelines?
If you want the agent to bypass your proxy and connect to Azure Pipelines directly, then you should configure
your web proxy to enable the agent to access the following URLs.
For organizations using the *.visualstudio.com domain:
https://login.microsoftonline.com
https://app.vssps.visualstudio.com
https://{organization_name}.visualstudio.com
https://{organization_name}.vsrm.visualstudio.com
https://{organization_name}.vstmr.visualstudio.com
https://{organization_name}.pkgs.visualstudio.com
https://{organization_name}.vssps.visualstudio.com

For organizations using the dev.azure.com domain:

https://dev.azure.com
https://*.dev.azure.com
https://login.microsoftonline.com
https://management.core.windows.net
https://vstsagentpackage.azureedge.net

To ensure your organization works with any existing firewall or IP restrictions, ensure that dev.azure.com and
*dev.azure.com are open and update your allow-listed IPs to include the following IP addresses, based on your IP
version. If you're currently allow-listing the 13.107.6.183 and 13.107.9.183 IP addresses, leave them in place, as
you don't need to remove them.
IPv4 ranges
13.107.6.0/24
13.107.9.0/24
13.107.42.0/24
13.107.43.0/24

IPv6 ranges
2620:1ec:4::/48
2620:1ec:a92::/48
2620:1ec:21::/48
2620:1ec:22::/48

NOTE
This procedure enables the agent to bypass a web proxy. Your build pipeline and scripts must still handle bypassing your
web proxy for each task and tool you run in your build.
For example, if you are using a NuGet task, you must configure your web proxy to support bypassing the URL for the
server that hosts the NuGet feed you're using.

I'm using TFS and the URLs in the sections above don't work for me. Where can I get help?
Web site settings and security
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
Self-hosted Windows agents
11/2/2020 • 19 minutes to read

Azure Pipelines | TFS 2018 | TFS 2017 | TFS 2015 | Previous versions (XAML builds)

IMPORTANT
For TFS 2015, see Self-hosted Windows agents - TFS 2015.

To build and deploy Windows, Azure, and other Visual Studio solutions you'll need at least one Windows agent.
Windows agents can also build Java and Android apps.

Before you begin:


If your code is in Azure Pipelines and a Microsoft-hosted agent meets your needs, you can skip setting
up a self-hosted Windows agent.
If your code is in an on-premises Team Foundation Server (TFS) 2015 server, see Deploy an agent on
Windows for on-premises TFS 2015.
Otherwise, you've come to the right place to set up an agent on Windows. Continue to the next section.

Learn about agents


If you already know what an agent is and how it works, feel free to jump right in to the following sections. But if
you'd like some more background about what they do and how they work, see Azure Pipelines agents.

Check prerequisites
Make sure your machine has these prerequisites:
Windows 7, 8.1, or 10 (if using a client OS)
Windows Server 2008 R2 SP1 or higher (if using a server OS)
PowerShell 3.0 or higher
.NET Framework 4.6.2 or higher

IMPORTANT
Starting December 2019, the minimum required .NET version for build agents is 4.6.2 or higher.

Recommended:
Visual Studio build tools (2015 or higher)
If you're building from a Subversion repo, you must install the Subversion client on the machine.
You should run agent setup manually the first time. After you get a feel for how agents work, or if you want to
automate setting up many agents, consider using unattended config.
Hardware specs
The hardware specs for your agents will vary with your needs, team size, etc. It's not possible to make a general
recommendation that will apply to everyone. As a point of reference, the Azure DevOps team builds the hosted
agents code using pipelines that utilize hosted agents. On the other hand, the bulk of the Azure DevOps code is
built by 24-core server class machines running 4 self-hosted agents apiece.

Prepare permissions
Decide which user you'll use
As a one-time step, you must register the agent. Someone with permission to administer the agent queue must
complete these steps. The agent will not use this person's credentials in everyday operation, but they're
required to complete registration. Learn more about how agents communicate.
Authenticate with a personal access token (PAT)
1. Sign in with the user account you plan to use in your Team Foundation Server web portal (
https://{your-server}:8080/tfs/ ).

1. Sign in with the user account you plan to use in your Azure DevOps Server web portal (
https://{your-server}/DefaultCollection/ ).

1. Sign in with the user account you plan to use in your Azure DevOps organization (
https://dev.azure.com/{your_organization} ).

2. From your home page, open your profile. Go to your security details.

3. Create a personal access token.

4. For the scope select Agent Pools (read, manage) and make sure all the other boxes are cleared. If it's
a deployment group agent, for the scope select Deployment group (read, manage) and make sure
all the other boxes are cleared.
Select Show all scopes at the bottom of the Create a new personal access token window
to see the complete list of scopes.
5. Copy the token. You'll use this token when you configure the agent.
Authenticate as a Windows user (TFS 2015 and TFS 2017)
As an alternative, on TFS 2017, you can use either a domain user or a local Windows user on each of your TFS
application tiers.
On TFS 2015, for macOS and Linux only, we recommend that you create a local Windows user on each of your
TFS application tiers and dedicate that user for the purpose of deploying build agents.
Confirm the user has permission
Make sure the user account that you're going to use has permission to register the agent.
Is the user an Azure DevOps organization owner or TFS or Azure DevOps Server administrator? Stop here ,
you have permission.
Otherwise:
1. Open a browser and navigate to the Agent pools tab for your Azure Pipelines organization or Azure
DevOps Server or TFS server:
1. Choose Azure DevOps , Organization settings .
2. Choose Agent pools .

1. Choose Azure DevOps , Collection settings .

2. Choose Agent pools .


1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
2. Choose Manage pools .

1. Navigate to your project and choose Manage project (gear icon).

2. Choose Control panel .

3. Select Agent pools .


2. Click the pool on the left side of the page and then click Security .
3. If the user account you're going to use is not shown, then get an administrator to add it. The
administrator can be an agent pool administrator, an Azure DevOps organization owner, or a TFS or
Azure DevOps Server administrator.
If it's a deployment group agent, the administrator can be a deployment group administrator, an Azure
DevOps organization owner, or a TFS or Azure DevOps Server administrator.
You can add a user to the deployment group administrator role in the Security tab on the Deployment
Groups page in Azure Pipelines .

NOTE
If you see a message like this: "Sorry, we couldn't add the identity. Please try a different identity.", you probably
followed the above steps for an organization owner or TFS or Azure DevOps Server administrator. You don't need to do
anything; you already have permission to administer the agent queue.

Download and configure the agent


Azure Pipelines
1. Log on to the machine using the account for which you've prepared permissions as explained above.
2. In your web browser, sign in to Azure Pipelines, and navigate to the Agent pools tab:
1. Choose Azure DevOps , Organization settings .

2. Choose Agent pools .


1. Choose Azure DevOps , Collection settings .

2. Choose Agent pools .

1. Navigate to your project and choose Settings (gear icon) > Agent Queues .

2. Choose Manage pools .


1. Navigate to your project and choose Manage project (gear icon).

2. Choose Control panel .

3. Select Agent pools .

3. Select the Default pool, select the Agents tab, and choose New agent .
4. On the Get the agent dialog box, choose Windows .
5. On the left pane, select the processor architecture of the installed Windows OS version on your machine.
The x64 agent version is intended for 64-bit Windows, whereas the x86 version is intended for 32-bit
Windows. If you aren't sure which version of Windows is installed, follow these instructions to find out.
6. On the right pane, click the Download button.
7. Follow the instructions on the page to download the agent.
8. Unpack the agent into the directory of your choice. Then run config.cmd . This will ask you a series of
questions to configure the agent.
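
For example, from an elevated PowerShell window, steps 7 and 8 might look like the following, assuming the ZIP file was downloaded to your Downloads folder (the version number is a placeholder, and Expand-Archive requires PowerShell 5 or later):

mkdir C:\agents\myagent ; cd C:\agents\myagent
Expand-Archive -Path $HOME\Downloads\vsts-agent-win-x64-V.v.v.zip -DestinationPath .
.\config.cmd
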
Azure DevOps Server 2019 and Azure DevOps Server 2020
1. Log on to the machine using the account for which you've prepared permissions as explained above.
2. In your web browser, sign in to Azure DevOps Server 2019, and navigate to the Agent pools tab:
1. Choose Azure DevOps , Organization settings .

2. Choose Agent pools .

1. Choose Azure DevOps , Collection settings .

2. Choose Agent pools .


1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
2. Choose Manage pools .

1. Navigate to your project and choose Manage project (gear icon).

2. Choose Control panel .

3. Select Agent pools .


3. Click Download agent .
4. On the Get agent dialog box, click Windows .
5. On the left pane, select the processor architecture of the installed Windows OS version on your machine.
The x64 agent version is intended for 64-bit Windows, whereas the x86 version is intended for 32-bit
Windows. If you aren't sure which version of Windows is installed, follow these instructions to find out.
6. On the right pane, click the Download button.
7. Follow the instructions on the page to download the agent.
8. Unpack the agent into the directory of your choice. Then run config.cmd . This will ask you a series of
questions to configure the agent.
TFS 2017 and TFS 2018
1. Log on to the machine using the account for which you've prepared permissions as explained above.
2. In your web browser, sign in to TFS, and navigate to the Agent pools tab:
a. Navigate to your project and choose Settings (gear icon) > Agent Queues .

b. Choose Manage pools .

3. Click Download agent .


4. On the Get agent dialog box, click Windows .
5. Click the Download button.
6. Follow the instructions on the page to download the agent.
7. Unpack the agent into the directory of your choice. Then run config.cmd . Make sure that the path to the
directory contains no spaces because tools and scripts don't always properly escape spaces.
NOTE
We strongly recommend you configure the agent from an elevated PowerShell window. If you want to configure as a
service, this is required .

Server URL and authentication


When setup asks for your server URL, for Azure DevOps Services, answer
https://dev.azure.com/{your-organization} .

When setup asks for your server URL, for TFS, answer https://{your_server}/tfs .
When setup asks for your authentication type, choose PAT . Then paste the PAT token you created into the
command prompt window.

NOTE
When using PAT as the authentication method, the PAT token is only used during the initial configuration of the agent.
Later, if the PAT expires or needs to be renewed, no further changes are required by the agent.

IMPORTANT
Make sure your server is configured to support the authentication method you want to use.

When you configure your agent to connect to TFS, you've got the following options:
Alternate Connect to TFS using Basic authentication. After you select Alternate you'll be prompted for
your credentials.
Negotiate Connect to TFS as a user other than the signed-in user via a Windows authentication scheme
such as NTLM or Kerberos. After you select Negotiate you'll be prompted for credentials.
Integrated (Default) Connect a Windows agent to TFS using the credentials of the signed-in user via a
Windows authentication scheme such as NTLM or Kerberos. You won't be prompted for credentials after
you choose this method.
PAT Supported only on Azure Pipelines and TFS 2017 and newer. After you choose PAT, paste the PAT
token you created into the command prompt window. Use a personal access token (PAT) if your TFS
instance and the agent machine are not in a trusted domain. PAT authentication is handled by your TFS
instance instead of the domain controller.

NOTE
When using PAT as the authentication method, the PAT token is used only for the initial configuration of the agent. If the
PAT needs to be regenerated, no further changes are needed to the agent.

Learn more at Communication with Azure Pipelines or TFS.


Choose interactive or service mode
For guidance on whether to run the agent in interactive mode or as a service, see Agents: Interactive vs. service.
If you choose to run as a service (which we recommend), the username you run as should be 20 characters or
fewer.
Run the agent
Run interactively
If you configured the agent to run interactively, to run it:

.\run.cmd

To restart the agent, press Ctrl+C to stop it and then run run.cmd again.
Run once
For agents configured to run interactively, you can choose to have the agent accept only one job. To run in this
configuration:

.\run.cmd --once

Agents in this mode will accept only one job and then spin down gracefully (useful for running in Docker on a
service like Azure Container Instances).
Run as a service
If you configured the agent to run as a service, it starts automatically. You can view and control the agent
running status from the services snap-in. Run services.msc and look for one of:
"Azure Pipelines Agent (name of your agent)".
"VSTS Agent (name of your agent)".
"vstsagent.(organization name).(name of your agent)".
To restart the agent, right-click the entry and choose Restart .
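
If you prefer the command line, and your agent's service name follows the vstsagent.* pattern shown above, you can also restart it from an elevated PowerShell window:

Get-Service "vstsagent*" | Restart-Service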

NOTE
If you need to change the agent's logon account, don't do it from the Services snap-in. Instead, see the information
below to re-configure the agent.

To use your agent, run a job using the agent's pool. If you didn't choose a different pool, your agent will be in
the Default pool.

Replace an agent
To replace an agent, follow the Download and configure the agent steps again.
When you configure an agent using the same name as an agent that already exists, you're asked if you want to
replace the existing agent. If you answer Y , then make sure you remove the agent (see below) that you're
replacing. Otherwise, after a few minutes of conflicts, one of the agents will shut down.

Remove and re-configure an agent


To remove the agent:

.\config remove

After you've removed the agent, you can configure it again.

Unattended config
The agent can be set up from a script with no human intervention. You must pass --unattended and the
answers to all questions.
To configure an agent, it must know the URL to your organization or collection and credentials of someone
authorized to set up agents. All other responses are optional. Any command-line parameter can be specified
using an environment variable instead: put its name in upper case and prepend VSTS_AGENT_INPUT_ . For
example, VSTS_AGENT_INPUT_PASSWORD instead of specifying --password .
Required options
--unattended - agent setup will not prompt for information, and all settings must be provided on the
command line
--url <url> - URL of the server. For example: https://dev.azure.com/myorganization or http://my-azure-
devops-server:8080/tfs
--auth <type> - authentication type. Valid values are:
pat (Personal access token)
negotiate (Kerberos or NTLM)
alt (Basic authentication)
integrated (Windows default credentials)

Authentication options
If you chose --auth pat :
--token <token> - specifies your personal access token
If you chose --auth negotiate or --auth alt :
--userName <userName> - specifies a Windows username in the format domain\userName or
[email protected]
--password <password> - specifies a password
Pool and agent names
--pool <pool> - pool name for the agent to join
--agent <agent> - agent name
--replace - replace the agent in a pool. If another agent is listening with the same name, it will start failing
with a conflict
Agent setup
--work <workDirectory> - work directory where job data is stored. Defaults to _work under the root of the
agent directory. The work directory is owned by a given agent and should not be shared between multiple
agents.
--acceptTeeEula - accept the Team Explorer Everywhere End User License Agreement (macOS and Linux
only)
--disableloguploads - don't stream or send console log output to the server. Instead, you can retrieve the logs
from the agent host's filesystem after the job completes.
Windows-only startup
--runAsService - configure the agent to run as a Windows service (requires administrator permission)
--runAsAutoLogon - configure auto-logon and run the agent on startup (requires administrator permission)
--windowsLogonAccount <account> - used with --runAsService or --runAsAutoLogon to specify the Windows
user name in the format domain\userName or [email protected]
--windowsLogonPassword <password> - used with --runAsService or --runAsAutoLogon to specify Windows
logon password
--overwriteAutoLogon - used with --runAsAutoLogon to overwrite the existing auto logon on the machine
--noRestart - used with --runAsAutoLogon to stop the host from restarting after agent configuration
completes
Deployment group only
--deploymentGroup - configure the agent as a deployment group agent
--deploymentGroupName <name> - used with --deploymentGroup to specify the deployment group for the agent
to join
--projectName <name> - used with --deploymentGroup to set the project name
--addDeploymentGroupTags - used with --deploymentGroup to indicate that deployment group tags should be
added
--deploymentGroupTags <tags> - used with --addDeploymentGroupTags to specify the comma separated list of
tags for the deployment group agent - for example "web, db"
.\config --help always lists the latest required and optional responses.
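
For example, an unattended configuration that registers the agent with Azure Pipelines and installs it as a Windows service might look like this when run from an elevated PowerShell window (the URL, token, names, and logon account are placeholders for your own values):

.\config.cmd --unattended `
  --url https://dev.azure.com/myorganization `
  --auth pat --token <your-PAT> `
  --pool Default --agent our-windows-agent `
  --runAsService `
  --windowsLogonAccount mydomain\buildsvc `
  --windowsLogonPassword <password>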

Diagnostics
If you're having trouble with your self-hosted agent, you can try running diagnostics. After configuring the
agent:

.\run --diagnostics

This will run through a diagnostic suite that may help you troubleshoot the problem. The diagnostics feature is
available starting with agent version 2.165.0.

Help on other options


To learn about other options:

.\config --help

The help provides information on authentication alternatives and unattended configuration.

Capabilities
Your agent's capabilities are cataloged and advertised in the pool so that only the builds and releases it can
handle are assigned to it. See Build and release agent capabilities.
In many cases, after you deploy an agent, you'll need to install software or utilities. Generally you should install
on your agents whatever software and tools you use on your development machine.
For example, if your build includes the npm task, then the build won't run unless there's a build agent in the
pool that has npm installed.

IMPORTANT
After you install new software on an agent, you must restart the agent for the new capability to show up in the pool so
that the build can run.

FAQ
How do I make sure I have the latest v2 agent version?
1. Navigate to the Agent pools tab:
1. Choose Azure DevOps , Organization settings .

2. Choose Agent pools .

1. Choose Azure DevOps , Collection settings .

2. Choose Agent pools .


1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
2. Choose Manage pools .

1. Navigate to your project and choose Manage project (gear icon).

2. Choose Control panel .

3. Select Agent pools .


2. Click the pool that contains the agent.
3. Make sure the agent is enabled.
4. Navigate to the capabilities tab:
1. From the Agent pools tab, select the desired agent pool.

2. Select Agents and choose the desired agent.

3. Choose the Capabilities tab.


NOTE
Microsoft-hosted agents don't display system capabilities. For a list of software installed on Microsoft-
hosted agents, see Use a Microsoft-hosted agent.

1. From the Agent pools tab, select the desired pool.

2. Select Agents and choose the desired agent.


3. Choose the Capabilities tab.

Select the desired agent, and choose the Capabilities tab.


From the Agent pools tab, select the desired agent, and choose the Capabilities tab.

5. Look for the Agent.Version capability. You can check this value against the latest published agent
version. See Azure Pipelines Agent and check the page for the highest version number listed.
6. Each agent automatically updates itself when it runs a task that requires a newer version of the agent. If
you want to manually update some agents, right-click the pool, and select Update all agents .
Can I update my v2 agents that are part of an Azure DevOps Server pool?
Yes. Beginning with Azure DevOps Server 2019, you can configure your server to look for the agent package
files on a local disk. This configuration will override the default version that came with the server at the time of
its release. This scenario also applies when the server doesn't have access to the internet.
1. From a computer with Internet access, download the latest version of the agent package files (in .zip or
.tar.gz form) from the Azure Pipelines Agent GitHub Releases page.
2. Transfer the downloaded package files to each Azure DevOps Server Application Tier by using a method
of your choice (such as USB drive, Network transfer, and so on). Place the agent files under the
%ProgramData%\Microsoft\Azure DevOps\Agents folder.

3. You're all set! Your Azure DevOps Server will now use the local files whenever the agents are updated.
Each agent automatically updates itself when it runs a task that requires a newer version of the agent.
But if you want to manually update some agents, right-click the pool, and then choose Update all
agents .
What version of the agent runs with TFS 2017?
TFS VERSION    MINIMUM AGENT VERSION

2017 RTM       2.105.7
2017.3         2.112.0

I'm running a firewall and my code is in Azure Repos. What URLs does the agent need to communicate with?
If you're running an agent in a secure network behind a firewall, make sure the agent can initiate
communication with the following URLs and IP addresses.
For organizations using the *.visualstudio.com domain:
https://login.microsoftonline.com
https://app.vssps.visualstudio.com
https://{organization_name}.visualstudio.com
https://{organization_name}.vsrm.visualstudio.com
https://{organization_name}.vstmr.visualstudio.com
https://{organization_name}.pkgs.visualstudio.com
https://{organization_name}.vssps.visualstudio.com

For organizations using the dev.azure.com domain:

https://dev.azure.com
https://*.dev.azure.com
https://login.microsoftonline.com
https://management.core.windows.net
https://vstsagentpackage.azureedge.net

To ensure your organization works with any existing firewall or IP restrictions, ensure that dev.azure.com and
*dev.azure.com are open and update your allow-listed IPs to include the following IP addresses, based on your
IP version. If you're currently allow-listing the 13.107.6.183 and 13.107.9.183 IP addresses, leave them in
place, as you don't need to remove them.
IPv4 ranges
13.107.6.0/24
13.107.9.0/24
13.107.42.0/24
13.107.43.0/24

IPv6 ranges
2620:1ec:4::/48
2620:1ec:a92::/48
2620:1ec:21::/48
2620:1ec:22::/48

How do I run the agent with self-signed certificate?


Run the agent with self-signed certificate
How do I run the agent behind a web proxy?
Run the agent behind a web proxy
How do I restart the agent
If you are running the agent interactively, see the restart instructions in Run interactively. If you are running the
agent as a service, restart the agent by following the steps in Run as a service.
How do I set different environment variables for each individual agent?
Create a .env file under the agent's root directory and put the environment variables you want to set into the file
in the following format:

MyEnv0=MyEnvValue0
MyEnv1=MyEnvValue1
MyEnv2=MyEnvValue2
MyEnv3=MyEnvValue3
MyEnv4=MyEnvValue4
How do I configure the agent to bypass a web proxy and connect to Azure Pipelines?
If you want the agent to bypass your proxy and connect to Azure Pipelines directly, then you should configure
your web proxy to enable the agent to access the following URLs.
For organizations using the *.visualstudio.com domain:

https://login.microsoftonline.com
https://app.vssps.visualstudio.com
https://{organization_name}.visualstudio.com
https://{organization_name}.vsrm.visualstudio.com
https://{organization_name}.vstmr.visualstudio.com
https://{organization_name}.pkgs.visualstudio.com
https://{organization_name}.vssps.visualstudio.com

For organizations using the dev.azure.com domain:

https://dev.azure.com
https://*.dev.azure.com
https://login.microsoftonline.com
https://management.core.windows.net
https://vstsagentpackage.azureedge.net

To ensure your organization works with any existing firewall or IP restrictions, ensure that dev.azure.com and
*dev.azure.com are open and update your allow-listed IPs to include the following IP addresses, based on your
IP version. If you're currently allow-listing the 13.107.6.183 and 13.107.9.183 IP addresses, leave them in
place, as you don't need to remove them.
IPv4 ranges
13.107.6.0/24
13.107.9.0/24
13.107.42.0/24
13.107.43.0/24

IPv6 ranges
2620:1ec:4::/48
2620:1ec:a92::/48
2620:1ec:21::/48
2620:1ec:22::/48

NOTE
This procedure enables the agent to bypass a web proxy. Your build pipeline and scripts must still handle bypassing your
web proxy for each task and tool you run in your build.
For example, if you are using a NuGet task, you must configure your web proxy to support bypassing the URL for the
server that hosts the NuGet feed you're using.

I'm using TFS and the URLs in the sections above don't work for me. Where can I get help?
Web site settings and security
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
Deploy an agent on Windows for TFS 2015
11/2/2020 • 6 minutes to read

Azure Pipelines | TFS 2018 | TFS 2017 | TFS 2015 | Previous versions (XAML builds)
To build and deploy Windows, Azure, and other Visual Studio solutions you may need a Windows agent. Windows
agents can also build and deploy Java and Android apps.

Before you begin:


If you use Azure Pipelines or TFS 2017 and newer, then you need to use a newer agent. See Deploy an
agent on Windows.
If you use TFS, you might already have a build and release agent running. An agent is automatically or
optionally deployed in some cases when you set up Team Foundation Server.
Otherwise, you've come to the right place to set up an agent on Windows for TFS 2015. Continue to the
next section.

Learn about agents


If you already know what an agent is and how it works, feel free to jump right in to the following sections. But if
you'd like some more background about what they do and how they work, see Azure Pipelines agents.

Check prerequisites
Before you begin, make sure your agent machine is prepared with these prerequisites:
An operating system that is supported by Visual Studio 2013 or newer
Visual Studio 2013 or Visual Studio 2015
PowerShell 3 or newer (Where can I get a newer version of PowerShell?)

Download and configure the agent


1. Make sure you're logged on the machine as an agent pool administrator. See Agent pools.
2. Navigate to the Agent pools tab: http://{your_server}:8080/tfs/_admin/_AgentPool

3. Click Download agent .


4. Unzip the .zip file into the folder on disk from which you would like to run the agent. To avoid "path too
long" issues on the file system, keep the path short. For example: C:\Agent\
5. Run Command Prompt as Administrator, and then run:

ConfigureAgent.cmd

6. Respond to the prompts.


Choose interactive or service mode
For guidance on whether to run the agent in interactive mode or as a service, see Agents: Interactive vs. service.
Run as a service
If you chose to run the agent as a Windows service, then the agent running status can be controlled from the
Services snap-in. Run services.msc and look for "VSO Agent (<name of your agent>)".
If you need to change the logon account, don't do it from the services snap-in. Instead, from an elevated
Command Prompt, run:

C:\Agent\Agent\VsoAgent.exe /ChangeWindowsServiceAccount

Run interactively
If you chose to run interactively, then to run the agent:

C:\Agent\Agent\VsoAgent.exe

Command-line parameters
You can use command-line parameters when you configure the agent ( ConfigureAgent.cmd ) and when you run the
agent ( Agent\VsoAgent.exe ). These are useful to avoid being prompted during unattended installation scripts and
for power users.
Common parameters
/Login:UserName,Password[;AuthType=(AAD|Basic|PAT)]
Used for configuration commands against an Azure DevOps organization. The parameter is used to specify the
pool administrator credentials. The credentials are used to perform the pool administration changes and are not
used later by the agent.
When using the personal access token (PAT) authentication type, specify anything for the user name and specify the
PAT as the password.
If passing the parameter from PowerShell, be sure to escape the semicolon or encapsulate the entire argument in
quotes. For example: '/Login:user,password;AuthType=PAT'. Otherwise the semicolon will be interpreted by
PowerShell to indicate the end of one statement and the beginning of another.
/NoPrompt
Indicates not to prompt and to accept the default for any values not provided on the command-line.
/WindowsServiceLogonAccount:WindowsServiceLogonAccount
Used for configuration commands to specify the identity to use for the Windows service. To specify a domain
account, use the form Domain\SAMAccountName or the user principal name (for example [email protected]).
Alternatively a built-in account can be provided, for example /WindowsServiceLogonAccount:"NT
AUTHORITY\NETWORK SERVICE".
/WindowsServiceLogonPassword:WindowsServiceLogonPassword
Required if the /WindowsServiceLogonAccount parameter is provided.

/Configure
Configure supports the /NoPrompt switch for automated installation scenarios and will return a non-zero exit
code on failure.
For troubleshooting configuration errors, detailed logs can be found in the _diag folder under the agent
installation directory.
/ServerUrl:ServerUrl
The server URL should not contain the collection name. For example, http://example:8080/tfs or
https://dev.azure.com/example
/Name:AgentName
The friendly name to identify the agent on the server.
/PoolName:PoolName
The pool that will contain the agent, for example: /PoolName:Default
/WorkFolder:WorkFolder
The default work folder location is a _work folder directly under the agent installation directory. You can change
the location to be outside of the agent installation directory, for example: /WorkFolder:C:\_work. One reason you
may want to do this is to avoid "path too long" issues on the file system.
/Force
Replaces the server registration if a conflicting agent exists on the server. A conflict could be encountered based on
the name, or when a previously configured agent (matched by its ID) is being reconfigured in place without
unconfiguring it first.
/NoStart
Used when configuring an interactive agent to indicate the agent should not be started after the configuration
completes.
/RunningAsService
Used to indicate the agent should be configured to run as a Windows service.
/StartMode:(Automatic|Manual|Disabled)

/ChangeWindowsServiceAccount
Change Windows service account supports the /NoPrompt switch for automated installation scenarios and will
return a non-zero exit code on failure.
For troubleshooting errors, detailed logs can be found in the _diag folder under the agent installation directory.

/Unconfigure

/Version
Prints the version number.

/?
Prints usage information.
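
Putting several of these parameters together, an unattended configuration that registers a TFS 2015 agent as a Windows service might look like the following when run from an elevated Command Prompt (the server URL, names, and credentials are placeholders for your own values):

ConfigureAgent.cmd /ServerUrl:http://our-server:8080/tfs ^
  /Name:our-windows-agent /PoolName:Default /WorkFolder:C:\Agent\_work ^
  /RunningAsService /WindowsServiceLogonAccount:OURDOMAIN\buildsvc ^
  /WindowsServiceLogonPassword:MyPassword /NoPrompt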

Capabilities
Your agent's capabilities are cataloged and advertised in the pool so that only the builds and releases it can handle
are assigned to it. See Build and release agent capabilities.
In many cases, after you deploy an agent, you'll need to install software or utilities. Generally you should install on
your agents whatever software and tools you use on your development machine.
For example, if your build includes the npm task, then the build won't run unless there's a build agent in the pool
that has npm installed.

IMPORTANT
After you install new software on an agent, you must restart the agent for the new capability to show up in the pool so that
the build can run.
FAQ
What version of PowerShell do I need? Where can I get a newer version?
The Windows Agent requires PowerShell version 3 or later. To check your PowerShell version:

$PSVersionTable.PSVersion

If you need a newer version of PowerShell, you can download it:


PowerShell 3: Windows Management Framework 3.0
PowerShell 5: Windows Management Framework 5.0
Why do I get this message when I try to queue my build?
When you try to queue a build or a deployment, you might get a message such as: "No agent could be found with
the following capabilities: msbuild, visualstudio, vstest." One common cause of this problem is that you need to
install prerequisite software such as Visual Studio on the machine where the build agent is running.
What version of the agent runs with my version of TFS?
TFS VERSION    AGENT VERSION

2015 RTM       1.83.2
2015.1         1.89.0
2015.2         1.95.1
2015.3         1.95.3

Can I still configure and use XAML build controllers and agents?
Yes. If you are an existing customer with custom build processes you are not yet ready to migrate, you can
continue to use XAML builds, controllers, and agents.

I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Azure virtual machine scale set agents
11/7/2020 • 21 minutes to read

Azure Pipelines
Azure virtual machine scale set agents, hereafter referred to as scale set agents, are a form of self-hosted agents
that can be autoscaled to meet your demands. This elasticity reduces your need to run dedicated agents all the
time. Unlike Microsoft-hosted agents, you have flexibility over the size and the image of machines on which
agents run.
If you like Microsoft-hosted agents but are limited by what they offer, you should consider scale set agents. Here
are some examples:
You need more memory, more processor, more storage, or more IO than what we offer in native Microsoft-
hosted agents.
You need an NCv2 VM with particular instruction sets for machine learning.
You need to deploy to a private Azure App Service in a private VNET with no inbound connectivity.
You need to open corporate firewall to specific IP addresses so that Microsoft-hosted agents can
communicate with your servers.
You need to restrict network connectivity of agent machines and allow them to reach only approved sites.
You can't get enough agents from Microsoft to meet your needs.
Your jobs exceed the Microsoft-hosted agent timeout.
You can't partition Microsoft-hosted parallel jobs to individual projects or teams in your organization.
You want to run several consecutive jobs on an agent to take advantage of incremental source and machine-
level package caches.
You want to run additional configuration or cache warmup before an agent begins accepting jobs.
If you like self-hosted agents but wish that you could simplify managing them, you should consider scale set
agents. Here are some examples:
You don't want to run dedicated agents around the clock. You want to de-provision agent machines that are
not being used to run jobs.
You run untrusted code in your pipeline and want to re-image agent machines after each job.
You want to simplify periodically updating the base image for your agents.

NOTE
You cannot run Mac agents using scale sets. You can only run Windows or Linux agents this way.

Create the scale set


In preparation for creating scale set agents, you must first create a virtual machine scale set in the Azure portal.
You must create the virtual machine scale set in a certain way so that Azure Pipelines can manage it. In particular,
you must disable Azure's autoscaling so that Azure Pipelines can determine how to perform scaling based on
number of incoming pipeline jobs. We recommend that you use the following steps to create the scale set.
In the following example, a new resource group and virtual machine scale set are created with Azure Cloud Shell
using the UbuntuLTS VM image.
NOTE
In this example, the UbuntuLTS VM image is used for the scale set. If you require a customized VM image as the basis for
your agent, create the customized image before creating the scale set, by following the steps in Create a scale set with
custom image, software, or disk size.

1. Browse to Azure Cloud Shell at https://shell.azure.com/ .


2. Run the following command to verify your default Azure subscription.

az account list -o table

If your desired subscription isn't listed as the default, select your desired subscription.

az account set -s <your subscription ID>

3. Create a resource group for your virtual machine scale set.

az group create \
--location westus \
--name vmssagents

4. Create a virtual machine scale set in your resource group. In this example the UbuntuLTS VM image is
specified.

az vmss create \
--name vmssagentspool \
--resource-group vmssagents \
--image UbuntuLTS \
--vm-sku Standard_D2_v3 \
--storage-sku StandardSSD_LRS \
--authentication-type SSH \
--instance-count 2 \
--disable-overprovision \
--upgrade-policy-mode manual \
--single-placement-group false \
--platform-fault-domain-count 1 \
--load-balancer ""

Because Azure Pipelines manages the scale set, the following settings are required or recommended:
--disable-overprovision - required
--upgrade-policy-mode manual - required
--load-balancer "" - Azure pipelines doesn't require a load balancer to route jobs to the agents in the
scale set agent pool, but configuring a load balancer is one way to get an IP address for your scale set
agents that you could use for firewall rules. Another option for getting an IP address for your scale set
agents is to create your scale set using the --public-ip-address options. For more information about
configuring your scale set with a load balancer or public IP address, see the Virtual Machine Scale Sets
documentation and az vmss create.
--instance-count 2 - this setting is not required, but it will give you an opportunity to verify that the
scale set is fully functional before you create an agent pool. Creation of the two VMs can take several
minutes. Later, when you create the agent pool, Azure Pipelines will delete these two VMs and create
new ones.
IMPORTANT
If you run this script using Azure CLI on Windows, you must enclose the "" in --load-balancer "" with single
quotes like this: --load-balancer '""'

If your VM size supports Ephemeral OS disks, the following parameters to enable Ephemeral OS disks are
optional but recommended to improve virtual machine reimage times.
--ephemeral-os-disk true
--os-disk-caching readonly

IMPORTANT
Ephemeral OS disks are not supported on all VM sizes. For list of supported VM sizes, see Ephemeral OS disks for
Azure VMs.

Select any Linux or Windows image - either from Azure Marketplace or your own custom image - to
create the scale set. Do not pre-install Azure Pipelines agent in the image. Azure Pipelines automatically
installs the agent as it provisions new virtual machines. In the above example, we used a plain UbuntuLTS
image. For instructions on creating and using a custom image, see FAQ.
Select any VM SKU and storage SKU.

NOTE
Licensing considerations limit us from distributing Microsoft-hosted images. We are unable to provide these
images for you to use in your scale set agents. But, the scripts that we use to generate these images are open
source. You are free to use these scripts and create your own custom images.

5. After creating your scale set, navigate to your scale set in the Azure portal and verify the following
settings:
Upgrade policy - Manual

You can also verify this setting by running the following Azure CLI command.
az vmss show --resource-group vmssagents --name vmssagentspool --output table

Name            ResourceGroup    Location    Zones    Capacity    Overprovision    UpgradePolicy
--------------  ---------------  ----------  -------  ----------  ---------------  ---------------
vmssagentspool  vmssagents       westus               0           False            Manual

Scaling - Manual scale

Create the scale set agent pool


1. Navigate to your Azure DevOps Project settings , select Agent pools under Pipelines , and select Add
pool to create a new agent pool.
IMPORTANT
You may create your scale set pool in Project settings or Organization settings , but when you delete a scale
set pool, you must delete it from Organization settings , and not Project settings .

2. Select Azure virtual machine scale set for the pool type. Select the Azure subscription that contains
the scale set, choose Authorize , and choose the desired virtual machine scale set from that subscription.
If you have an existing service connection you can choose that from the list instead of the subscription.

IMPORTANT
To configure a scale set agent pool, you must have either Owner or User Access Administrator permissions on the
selected subscription. If you have one of these permissions but get an error when you choose Authorize , see
troubleshooting.

3. Choose the desired virtual machine scale set from that subscription.
4. Specify a name for your agent pool.
5. Configure the following options:
Automatically tear down virtual machines after every use - A new VM instance is used for
every job. After running a job, the VM will go offline and be reimaged before it picks up another job.
Save an unhealthy agent for investigation - Whether to save unhealthy agent VMs for
troubleshooting instead of deleting them.
Maximum number of virtual machines in the scale set - Azure Pipelines will automatically
scale-up the number of agents, but won't exceed this limit.
Number of agents to keep on standby - Azure Pipelines will automatically scale-down the
number of agents, but will ensure that there are always this many agents available to run new jobs. If
you set this to 0 , for example to conserve cost for a low volume of jobs, Azure Pipelines will start a VM
only when it has a job.
Delay in minutes before deleting excess idle agents - To account for the variability in build load
throughout the day, Azure Pipelines will wait this long before deleting an excess idle agent.
Configure VMs to run interactive tests (Windows Server OS Only) - Windows agents can either
be configured to run unelevated with autologon and with interactive UI, or they can be configured to
run with elevated permissions. Check this box to run unelevated with interactive UI. In either case, the
agent user is a member of the Administrators group.
6. When your settings are configured, choose Create to create the agent pool.

Use scale set agent pool


Using a scale set agent pool is similar to any other agent pool. You can use it in classic build, release, or YAML
pipelines. User permissions, pipeline permissions, approvals, and other checks work the same way as in any
other agent pool. For more information, see Agent pools.
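For example, a minimal YAML pipeline can target the pool by the name you gave it when creating the agent pool. The pool name below is a placeholder, not the name of any pool created above.

# Minimal sketch of a YAML pipeline that runs on a scale set agent pool.
# "MyScaleSetPool" is a placeholder for the agent pool name you chose earlier.
pool: MyScaleSetPool

steps:
- script: echo "Hello from a scale set agent"
  displayName: Verify the scale set agent pool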

IMPORTANT
Caution must be exercised when making changes directly to the scale set in the Azure portal.
You may not change many of the scale set configuration settings in the Azure portal. Azure Pipelines updates the
configuration of the scale set. Any manual changes you make to the scale set may interfere with the operation of Azure
Pipelines.
You may not rename or delete a scale set without first deleting the scale set pool in Azure Pipelines.

How Azure Pipelines manages the scale set


Once the scale set agent pool is created, Azure Pipelines automatically scales the agent machines.
Azure Pipelines samples the state of the agents in the pool and virtual machines in the scale set every 5 minutes.
The decision to scale up or down is based on the number of idle agents at that time. An agent is considered idle
if it is online and is not running a pipeline job. Azure Pipelines performs a scale up operation if either of the
following conditions is satisfied:
The number of idle agents falls below the number of standby agents you specify
There are no idle agents to service pipeline jobs waiting in the queue
If one of these conditions is met, Azure Pipelines grows the number of VMs. Scaling up is done in increments of
a certain percentage of the maximum pool size. Allow 20 minutes for machines to be created for each step.
Azure Pipelines scales down the agents when the number of idle agents exceeds the standby count for more
than 30 minutes (configurable using Delay in minutes before deleting excess idle agents ).
To put all of this into an example, consider a scale set agent pool that is configured with 2 standby agents and 4
maximum agents. Let us say that you want to tear down the VM after each use. Also, let us assume that there are
no VMs to start with in the scale set.
Since the number of idle agents is 0, and since the number of idle agents is below the standby count of 2,
Azure Pipelines scales up and adds two VMs to the scale set. Once these agents come online, there will be
2 idle agents.
Let us say that 1 pipeline job arrives and is allocated to one of the agents.
At this time, the number of idle agents is 1, and that is less than the standby count of 2. So, Azure
Pipelines scales up and adds 2 more VMs (the increment size used in this example). At this time, the pool
has 3 idle agents and 1 busy agent.
Let us say that the job on the first agent completes. Azure Pipelines takes that agent offline to re-image
that machine. After a few minutes, it comes back with a fresh image. At this time, we'll have 4 idle agents.
If no other jobs arrive for 30 minutes (configurable using Delay in minutes before deleting excess
idle agents ), Azure Pipelines determines that there are more idle agents than are necessary. So, it scales
down the pool to two agents.
Throughout this operation, the goal for Azure Pipelines is to reach the desired number of idle agents on standby.
Pools scale up and down slowly. Over the course of a day, the pool will scale up as requests are queued in the
morning and scale down as the load subsides in the evening. You may observe more idle agents than you desire
at various times. This is expected as Azure Pipelines converges gradually to the constraints that you specify.

NOTE
It can take an hour or more for Azure Pipelines to scale up or scale down the virtual machines. Azure Pipelines will scale up
in steps, monitor the operations for errors, and react by deleting unusable machines and by creating new ones in the
course of time. This corrective operation can take over an hour.

To achieve maximum stability, scale set operations are done sequentially. For example if the pool needs to scale
up and there are also unhealthy machines to delete, Azure Pipelines will first scale up the pool. Once the pool
has scaled up to reach the desired number of idle agents on standby, the unhealthy machines will be deleted,
depending on the Save an unhealthy agent for investigation setting. For more information, see Unhealthy
agents.
Due to the sampling size of 5 minutes, it is possible that all agents can be running pipelines for a short period of
time and no scaling up will occur.

Customizing Pipeline Agent Configuration


You can customize the configuration of the Azure Pipeline Agent by defining environment variables in your
operating system custom image for your scale set. For example, the scale set agent working directory defaults to
C:\a for Windows and /agent/_work for Linux. If you want to change the working directory, set an environment
variable named VSTS_AGENT_INPUT_WORK with the desired working directory. More information can be found
in the Pipelines Agent Unattended Configuration documentation. Some examples include:
VSTS_AGENT_INPUT_WORK
VSTS_AGENT_INPUT_PROXYURL
VSTS_AGENT_INPUT_PROXYUSERNAME
VSTS_AGENT_INPUT_PROXYPASSWORD
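For example, on a Windows custom image, one way to define such a variable machine-wide, so that it is present when the agent is configured on a new VM, is with setx. This is only a sketch; the directory path is illustrative.

rem Sketch: override the agent working directory machine-wide in the custom image.
setx VSTS_AGENT_INPUT_WORK "D:\agent-work" /M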
IMPORTANT
Caution must be exercised when customizing the Pipelines agent. Some settings will conflict with other required settings,
causing the agent to fail to register, and the VM to be deleted. These settings should not be set or altered:
VSTS_AGENT_INPUT_URL
VSTS_AGENT_INPUT_AUTH
VSTS_AGENT_INPUT_TOKEN
VSTS_AGENT_INPUT_USERNAME
VSTS_AGENT_INPUT_PASSWORD
VSTS_AGENT_INPUT_POOL
VSTS_AGENT_INPUT_AGENT
VSTS_AGENT_INPUT_RUNASSERVICE
... and anything related to Deployment Groups.

Customizing Virtual Machine Startup via the Custom Script Extension


Users may want to execute startup scripts on their scaleset agent machines before those machines start running
pipeline jobs. Some common use cases for startup scripts include installing software, warming caches, or
fetching repos. You can execute startup scripts by installing the Custom Script Extension for Windows or Custom
Script Extension for Linux. This extension will be executed on every virtual machine in the scaleset immediately
after it is created or reimaged. The custom script extension will be executed before the Azure Pipelines agent
extension is executed.
Here is an example to create a custom script extension for Linux.

az vmss extension set \
  --vmss-name <scaleset name> \
  --resource-group <resource group> \
  --name CustomScript \
  --version 2.0 \
  --publisher Microsoft.Azure.Extensions \
  --settings '{ \"FileUris\":[\"https://<myGitHubRepoUrl>/myScript.sh\"], \"commandToExecute\": \"bash /myScript.sh /myArgs \" }'

Here is an example to create a custom script extension for Windows.

az vmss extension set \
  --vmss-name <scaleset name> \
  --resource-group <resource group> \
  --name CustomScriptExtension \
  --version 1.9 \
  --publisher Microsoft.Compute \
  --settings '{ \"FileUris\":[\"https://<myGitHubRepoUrl>/myscript.ps1\"], \"commandToExecute\": \"Powershell.exe -ExecutionPolicy Unrestricted -File myscript.ps1 -myargs 0 \" }'

IMPORTANT
The scripts executed in the Custom Script Extension must return with exit code 0 in order for the VM creation process to
finish. If the custom script extension throws an exception or returns a non-zero exit code, the Azure Pipelines agent
extension will not be executed and the VM will not register with the Azure DevOps agent pool.

Lifecycle of a Scale Set Agent


Here is the flow of operations for an Azure Pipelines Virtual Machine Scale Set Agent
1. The Azure DevOps Scale Set Agent Pool sizing job determines the pool has too few idle agents and needs
to scale up. Azure Pipelines makes a call to Azure Scale Sets to increase the scale set capacity.
2. The Azure Scale Set begins creating the new virtual machines. Once the virtual machines are running,
Azure Scale Sets sequentially executes any installed VM extensions.
3. If the Custom Script Extension is installed, it is executed before the Azure Pipelines Agent extension. If the
Custom Script Extension returns a non-zero exit code, the VM creation process is aborted and will be
deleted.
4. The Azure Pipelines Agent extension is executed. This extension downloads the latest version of the Azure
Pipelines Agent along with a configuration script which can be found here.
https://vstsagenttools.blob.core.windows.net/tools/ElasticPools/Linux/6/enableagent.sh
https://vstsagenttools.blob.core.windows.net/tools/ElasticPools/Windows/6/enableagent.ps1

NOTE
These URLs may change.

5. The configuration script creates a local user named AzDevOps if the operating system is Windows Server
or Linux. For Windows 10 Client OS, the agent runs as LocalSystem. The script then unzips, installs, and
configures the Azure Pipelines Agent. As part of configuration, the agent registers with the Azure DevOps
agent pool and appears in the agent pool list in the Offline state.
6. For most scenarios, the configuration script then immediately starts the agent to run as the local user
AzDevOps . The agent goes Online and is ready to run pipeline jobs.

If the pool is configured for interactive UI, the virtual machine reboots after the agent is configured. After
reboot, the local user will auto-login and immediately start the pipelines agent. The agent then goes
Online and is ready to run pipeline jobs.

Create a scale set with custom image, software, or disk size


If you just want to create a scale set with the default 128 GB OS disk using a publicly available Azure image, then
skip straight to step 6 and use the public image name (UbuntuLTS, Win2019DataCenter, etc.) to create the scale
set. Otherwise follow these steps to customize your VM image.
1. Create a VM with your desired OS image and optionally expand the OS disk size from 128 GB to
<myDiskSizeGb> .

If starting with an available Azure Image, for example = (Win2019DataCenter, UbuntuLTS):

az vm create --resource-group <myResourceGroup> --name <MyVM> --image <myBaseImage> --os-disk-size-gb <myDiskSize> --admin-username myUserName --admin-password myPassword

If starting with a generalized VHD:


a. First create the VM with an unmanaged disk of the desired size and then convert to a
managed disk:

az vm create --resource-group <myResourceGroup> --name <MyVM> --image <myVhdUrl> --os-type windows --os-disk-size-gb <myDiskSizeGb> --use-unmanaged-disk --admin-username <myUserName> --admin-password <myPassword> --storage-account <myVhdStorageAccount>
b. Shut down the VM

az vm stop --resource-group <myResourceGroup> --name <MyVM>

c. Deallocate the VM

az vm deallocate --resource-group <myResourceGroup> --name <MyVM>

d. Convert to a managed disk

az vm convert --resource-group <myResourceGroup> --name <MyVM>

e. Restart the VM

az vm start --resource-group <myResourceGroup> --name <MyVM>

2. Remote Desktop (or SSH) to the VM's public IP address to customize the image. You may need to open
ports in the firewall to unblock the RDP (3389) or SSH (22) ports.
a. Windows - If <MyDiskSizeGb> is greater than 128 GB, extend the OS disk size to fill the disk size
you declared above.
Open DiskPart tool as administrator and run these DiskPart commands:
a. list volume (to see the volumes)
b. select volume 2 (depends on which volume is the OS drive)
c. extend size 72000 (to extend the drive by 72 GB, from 128 GB to 200 GB)
3. Install any additional software on the VM.
4. To customize the permissions of the pipeline agent user, you can create a user named AzDevOps , and
grant that user the permissions you require. This user will be created by the scaleset agent startup script if
it does not already exist.
5. Reboot the VM when finished with customizations
6. Generalize the VM.
Windows - From an admin console window:

C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown

Linux :

sudo waagent -deprovision+user -force

IMPORTANT
Wait for the VM to finish generalization and shutdown. Do not proceed until the VM has stopped. Allow 60
minutes.

7. Deallocate the VM
az vm deallocate --resource-group <myResourceGroup> --name <MyVM>

8. Mark the VM as Generalized

az vm generalize --resource-group <myResourceGroup> --name <MyVM>

9. Create a VM Image based on the generalized image. When performing these steps to update an existing
scaleset image, make note of the image ID url in the output.

az image create --resource-group <myResourceGroup> --name <MyImage> --source <MyVM>

10. Create the scale set based on the custom VM image

az vmss create --resource-group <myResourceGroup> --name <myScaleSet> --image <MyImage> --admin-username <myUsername> --admin-password <myPassword> --instance-count 2 --disable-overprovision --upgrade-policy-mode manual --load-balancer '""'

11. Verify that both VMs created in the scale set come online, have different names, and reach the Succeeded
state
You are now ready to create an agent pool using this scale set.

Update an existing scale set with a new custom image


To update the image on an existing scaleset, follow the steps in the previous Create a scale set with custom
image, software, or disk size section up through the az image create step to generate the custom OS image.
Make note of the ID property URL that is output from the az image create command. Then update the scaleset
with the new image as shown in the following example. After the scaleset image has been updated, all future
VMs in the scaleset will be created with the new image.

az vmss update --resource-group <myResourceGroup> --name <myScaleSet> --set virtualMachineProfile.storageProfile.imageReference.id=<id url>

Supported Operating Systems


Scale set agents currently support Ubuntu Linux, Windows Server/Datacenter 2016/2019, and Windows 10
client.
Known issues
Debian or RedHat Linux are not supported. Only Ubuntu is.
Windows 10 client does not support running the pipeline agent as a local user and therefore the agent
cannot interact with the UI. The agent will run as Local Service instead.

Troubleshooting issues
Navigate to your Azure DevOps Project settings , select Agent pools under Pipelines , and select your agent
pool. Click the tab labeled Diagnostics .
The Diagnostic tab shows all actions executed by Azure DevOps to Create, Delete, or Reimage VMs in your Azure
Scale Set. Diagnostics also logs any errors encountered while trying to perform these actions. Review the errors
to make sure your scaleset has sufficient resources to scale up. If your Azure subscription has reached the
resource limit in VMs, CPU cores, disks, or IP Addresses, those errors will show up here.
Unhealthy Agents
When agents or virtual machines are failing to start, not connecting to Azure DevOps, or going offline
unexpectedly, Azure DevOps logs the failures to the Agent Pool's Diagnostics tab and tries to delete the
associated virtual machine. Networking configuration, image customization, and pending reboots can cause
these issues. Connecting to the VM to debug and gather logs can help with the investigation.
If you would like Azure DevOps to save an unhealthy agent VM for investigation and not automatically delete it
when it detects the unhealthy state, navigate to your Azure DevOps Project settings , select Agent pools
under Pipelines , and select your agent pool. Choose Settings , select the option Save an unhealthy agent
for investigation , and choose Save .

Now, when an unhealthy agent is detected in the scale set, Azure DevOps saves that agent and associated virtual
machine. The saved agent will be visible on the Diagnostics tab of the Agent pool UI. Navigate to your Azure
DevOps Project settings , select Agent pools under Pipelines , select your agent pool, choose Diagnostics ,
and make note of the agent name.

Find the associated virtual machine in your Azure virtual machine scale set via the Azure portal, in the
Instances list.

Select the instance, choose Connect , and perform your investigation.

To delete the saved agent when you are done with your investigation, navigate to your Azure DevOps Project
settings , select Agent pools under Pipelines , and select your agent pool. Choose the tab labeled
Diagnostics . Find the agent on the Agents saved for investigation card, and choose Delete . This removes
the agent from the pool and deletes the associated virtual machine.

FAQ
Where can I find the images used for Microsoft-hosted agents?
How do I configure scale set agents to run UI tests?
How can I delete agents?
Can I configure the scale set agent pool to have zero agents on standby?
Where can I find the images used for Microsoft-hosted agents?
Licensing considerations limit us from distributing Microsoft-hosted images. We are unable to provide these
images for you to use in your scale set agents. But, the scripts that we use to generate these images are open
source. You are free to use these scripts and create your own custom images.
How do I configure scale set agents to run UI tests?
Create a Scale Set with a Windows Server OS and when creating the Agent Pool select the "Configure VMs to
run interactive tests" option.
How can I delete agents?
Navigate to your Azure DevOps Project settings , select Agent pools under Pipelines , and select your agent
pool. Click the tab labeled Agents . Click the 'Enabled' toggle button to disable the agent. The disabled agent will
complete the pipeline it is currently running and will not pick up additional work. Within a few minutes after
completing its current pipeline job, the agent will be deleted.
Can I configure the scale set agent pool to have zero agents on standby?
Yes, if you set Number of agents to keep on standby to zero, for example to conserve cost for a low volume
of jobs, Azure Pipelines starts a VM only when it has a job.
Run a self-hosted agent behind a web proxy
2/26/2020 • 3 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

When your self-hosted agent requires a web proxy, you can inform the agent about the proxy during
configuration. This allows your agent to connect to Azure Pipelines or TFS through the proxy. This in turn allows
the agent to get sources and download artifacts. Finally, it passes the proxy details through to tasks which also
need proxy settings in order to reach the web.

Azure Pipelines, TFS 2018 RTM and newer


(Applies to agent version 2.122 and newer.)
To enable the agent to run behind a web proxy, pass --proxyurl , --proxyusername and --proxypassword during
agent configuration.
For example:
Windows:

./config.cmd --proxyurl http://127.0.0.1:8888 --proxyusername "myuser" --proxypassword "mypass"
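On macOS and Linux, pass the same flags to config.sh:

./config.sh --proxyurl http://127.0.0.1:8888 --proxyusername "myuser" --proxypassword "mypass"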

We store your proxy credential responsibly on each platform to prevent accidental leakage. On Linux, the
credential is encrypted with a symmetric key based on the machine ID. On macOS, we use the Keychain. On
Windows, we use the Credential Store.

NOTE
Agent version 122.0, which shipped with TFS 2018 RTM, has a known issue configuring as a service on Windows. Because
the Windows Credential Store is per user, you must configure the agent using the same user the service is going to run as.
For example, in order to configure the agent service run as mydomain\buildadmin , you must launch config.cmd as
mydomain\buildadmin . You can do that by logging into the machine with that user or using Run as a different user
in the Windows shell.

How the agent handles the proxy within a build or release job
The agent will talk to Azure DevOps/TFS service through the web proxy specified in the .proxy file.
Since the code for the Get Source task in builds and Download Artifact task in releases are also baked into the
agent, those tasks will follow the agent proxy configuration from the .proxy file.
The agent exposes proxy configuration via environment variables for every task execution. Task authors need to
use azure-pipelines-task-lib methods to retrieve proxy configuration and handle the proxy within their task.
Note that many tools do not automatically use the agent configured proxy settings. For example, tools such as
curl and dotnet may require proxy environment variables such as http_proxy to also be set on the machine.
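A sketch of setting those variables on a Linux agent machine, reusing the proxy URL from the configuration example above:

# Expose the proxy to tools that read the conventional environment variables.
export http_proxy=http://127.0.0.1:8888
export https_proxy=http://127.0.0.1:8888
export no_proxy=localhost,127.0.0.1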

TFS 2017.2 and older


IMPORTANT
You also can use this method for Azure Pipelines and newer versions of TFS. We strongly recommend the more modern
method, which you can access by switching to the TFS 2018 or Azure Pipelines docs.

In the agent root directory, create a .proxy file with your proxy server url.
Windows:

echo http://name-of-your-proxy-server:8888 | Out-File .proxy

If your proxy doesn't require authentication, then you're ready to configure and run the agent. See Deploy an
agent on Windows.

NOTE
For backwards compatibility, if the proxy is not specified as described above, the agent also checks for a proxy URL from the
VSTS_HTTP_PROXY environment variable.

Proxy authentication
If your proxy requires authentication, the simplest way to handle it is to grant permissions to the user under
which the agent runs. Otherwise, you can provide credentials through environment variables. When you provide
credentials through environment variables, the agent keeps the credentials secret by masking them in job and
diagnostic logs. To grant credentials through environment variables, set the following variables:
Windows:

$env:VSTS_HTTP_PROXY_USERNAME = "proxyuser"
$env:VSTS_HTTP_PROXY_PASSWORD = "proxypassword"
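On macOS and Linux, export the equivalent variables in the shell that runs the agent:

export VSTS_HTTP_PROXY_USERNAME=proxyuser
export VSTS_HTTP_PROXY_PASSWORD=proxypassword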

NOTE
This procedure enables the agent infrastructure to operate behind a web proxy. Your build pipeline and scripts must still
handle proxy configuration for each task and tool you run in your build. For example, if you are using a task that makes a
REST API call, you must configure the proxy for that task.

Specify proxy bypass URLs


Create a .proxybypass file in the agent's root directory that specifies regular expressions (in ECMAScript syntax)
to match URLs that should bypass the proxy. For example:

github\.com
bitbucket\.com
Run a self-hosted agent in Docker
11/2/2020 • 9 minutes to read • Edit Online

This article provides instructions for running your Azure Pipelines agent in Docker. You can set up a self-hosted
agent in Azure Pipelines to run inside a Windows Server Core (for Windows hosts), or Ubuntu container (for Linux
hosts) with Docker. This is useful when you want to run agents with outer orchestration, such as Azure Container
Instances. In this article, you'll walk through a complete container example, including handling agent self-update.
Both Windows and Linux are supported as container hosts. You pass a few environment variables to docker run ,
which configures the agent to connect to Azure Pipelines or Azure DevOps Server. Finally, you customize the
container to suit your needs. Tasks and scripts might depend on specific tools being available on the container's
PATH , and it's your responsibility to ensure that these tools are available.

This feature requires agent version 2.149 or later. Azure DevOps 2019 didn't ship with a compatible agent version.
However, you can upload the correct agent package to your application tier if you want to run Docker agents.

Windows
Enable Hyper-V
Hyper-V isn't enabled by default on Windows. If you want to provide isolation between containers, you must
enable Hyper-V. Otherwise, Docker for Windows won't start.
Enable Hyper-V on Windows 10
Enable Hyper-V on Windows Server 2016

NOTE
You must enable virtualization on your machine. It's typically enabled by default. However, if Hyper-V installation fails, refer
to your system documentation for how to enable virtualization.

Install Docker for Windows


If you're using Windows 10, you can install the Docker Community Edition. For Windows Server 2016, install the
Docker Enterprise Edition.
Switch Docker to use Windows containers
By default, Docker for Windows is configured to use Linux containers. To allow running the Windows container,
confirm that Docker for Windows is running the Windows daemon.
Create and build the Dockerfile
Next, create the Dockerfile.
1. Open a command prompt.
2. Create a new directory:

mkdir C:\dockeragent

3. Change directories to this new directory:


cd C:\dockeragent

4. Save the following content to a file called C:\dockeragent\Dockerfile (no file extension):

FROM mcr.microsoft.com/windows/servercore:ltsc2019

WORKDIR /azp

COPY start.ps1 .

CMD powershell .\start.ps1

5. Save the following content to C:\dockeragent\start.ps1 :

if (-not (Test-Path Env:AZP_URL)) {


Write-Error "error: missing AZP_URL environment variable"
exit 1
}

if (-not (Test-Path Env:AZP_TOKEN_FILE)) {


if (-not (Test-Path Env:AZP_TOKEN)) {
Write-Error "error: missing AZP_TOKEN environment variable"
exit 1
}

$Env:AZP_TOKEN_FILE = "\azp\.token"
$Env:AZP_TOKEN | Out-File -FilePath $Env:AZP_TOKEN_FILE
}

Remove-Item Env:AZP_TOKEN

if ($Env:AZP_WORK -and -not (Test-Path Env:AZP_WORK)) {


New-Item $Env:AZP_WORK -ItemType directory | Out-Null
}

New-Item "\azp\agent" -ItemType directory | Out-Null

# Let the agent ignore the token env variables


$Env:VSO_AGENT_IGNORE = "AZP_TOKEN,AZP_TOKEN_FILE"

Set-Location agent

Write-Host "1. Determining matching Azure Pipelines agent..." -ForegroundColor Cyan

$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$(Get-Content ${Env:AZP_TOKEN_FILE})"))
$package = Invoke-RestMethod -Headers @{Authorization=("Basic $base64AuthInfo")} "$(${Env:AZP_URL})/_apis/distributedtask/packages/agent?platform=win-x64&`$top=1"
$packageUrl = $package[0].Value.downloadUrl

Write-Host $packageUrl

Write-Host "2. Downloading and installing Azure Pipelines agent..." -ForegroundColor Cyan

$wc = New-Object System.Net.WebClient


$wc.DownloadFile($packageUrl, "$(Get-Location)\agent.zip")

Expand-Archive -Path "agent.zip" -DestinationPath "\azp\agent"

try
{
Write-Host "3. Configuring Azure Pipelines agent..." -ForegroundColor Cyan

.\config.cmd --unattended `
  --agent "$(if (Test-Path Env:AZP_AGENT_NAME) { ${Env:AZP_AGENT_NAME} } else { ${Env:computername} })" `
  --url "$(${Env:AZP_URL})" `
  --auth PAT `
  --token "$(Get-Content ${Env:AZP_TOKEN_FILE})" `
  --pool "$(if (Test-Path Env:AZP_POOL) { ${Env:AZP_POOL} } else { 'Default' })" `
  --work "$(if (Test-Path Env:AZP_WORK) { ${Env:AZP_WORK} } else { '_work' })" `
  --replace

# remove the administrative token before accepting work


Remove-Item $Env:AZP_TOKEN_FILE

Write-Host "4. Running Azure Pipelines agent..." -ForegroundColor Cyan

.\run.cmd
}
finally
{
Write-Host "Cleanup. Removing Azure Pipelines agent..." -ForegroundColor Cyan

.\config.cmd remove --unattended `
  --auth PAT `
  --token "$(Get-Content ${Env:AZP_TOKEN_FILE})"
}

6. Run the following command within that directory:

docker build -t dockeragent:latest .

This command builds the Dockerfile in the current directory.


The final image is tagged dockeragent:latest . You can easily run it in a container as dockeragent , because
the latest tag is the default if no tag is specified.
Start the image
Now that you have created an image, you can run a container.
1. Open a command prompt.
2. Run the container. This installs the latest version of the agent, configures it, and runs the agent. It targets the
Default pool of a specified Azure DevOps or Azure DevOps Server instance of your choice:

docker run -e AZP_URL=<Azure DevOps instance> -e AZP_TOKEN=<PAT token> -e AZP_AGENT_NAME=mydockeragent dockeragent:latest

Optionally, you can control the pool and agent work directory by using additional environment variables.
If you want a fresh agent container for every pipeline run, pass the --once flag to the run command. You must
also use a container orchestration system, like Kubernetes or Azure Container Instances, to start new copies of the
container when the work completes.

Linux
Install Docker
Depending on your Linux Distribution, you can either install Docker Community Edition or Docker Enterprise
Edition.
Create and build the Dockerfile
Next, create the Dockerfile.
1. Open a terminal.
2. Create a new directory (recommended):

mkdir ~/dockeragent

3. Change directories to this new directory:

cd ~/dockeragent

4. Save the following content to ~/dockeragent/Dockerfile :

FROM ubuntu:18.04

# To make it easier for build and release pipelines to run apt-get,


# configure apt to not require confirmation (assume the -y argument by default)
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes

RUN apt-get update \


&& apt-get install -y --no-install-recommends \
ca-certificates \
curl \
jq \
git \
iputils-ping \
libcurl4 \
libicu60 \
libunwind8 \
netcat \
libssl1.0

WORKDIR /azp

COPY ./start.sh .
RUN chmod +x start.sh

CMD ["./start.sh"]

NOTE
Tasks might depend on executables that your container is expected to provide. For instance, you must add the zip
and unzip packages to the RUN apt-get command in order to run the ArchiveFiles and ExtractFiles
tasks.

5. Save the following content to ~/dockeragent/start.sh , making sure to use Unix-style (LF) line endings:

#!/bin/bash
set -e

if [ -z "$AZP_URL" ]; then
echo 1>&2 "error: missing AZP_URL environment variable"
exit 1
fi

if [ -z "$AZP_TOKEN_FILE" ]; then
if [ -z "$AZP_TOKEN" ]; then
echo 1>&2 "error: missing AZP_TOKEN environment variable"
exit 1
fi
fi

AZP_TOKEN_FILE=/azp/.token
echo -n $AZP_TOKEN > "$AZP_TOKEN_FILE"
fi

unset AZP_TOKEN

if [ -n "$AZP_WORK" ]; then
mkdir -p "$AZP_WORK"
fi

rm -rf /azp/agent
mkdir /azp/agent
cd /azp/agent

export AGENT_ALLOW_RUNASROOT="1"

cleanup() {
if [ -e config.sh ]; then
print_header "Cleanup. Removing Azure Pipelines agent..."

  ./config.sh remove --unattended \
    --auth PAT \
    --token $(cat "$AZP_TOKEN_FILE")
fi
}

print_header() {
lightcyan='\033[1;36m'
nocolor='\033[0m'
echo -e "${lightcyan}$1${nocolor}"
}

# Let the agent ignore the token env variables


export VSO_AGENT_IGNORE=AZP_TOKEN,AZP_TOKEN_FILE

print_header "1. Determining matching Azure Pipelines agent..."

AZP_AGENT_RESPONSE=$(curl -LsS \
-u user:$(cat "$AZP_TOKEN_FILE") \
-H 'Accept:application/json;api-version=3.0-preview' \
"$AZP_URL/_apis/distributedtask/packages/agent?platform=linux-x64")

if echo "$AZP_AGENT_RESPONSE" | jq . >/dev/null 2>&1; then


AZP_AGENTPACKAGE_URL=$(echo "$AZP_AGENT_RESPONSE" \
| jq -r '.value | map([.version.major,.version.minor,.version.patch,.downloadUrl]) | sort | .
[length-1] | .[3]')
fi

if [ -z "$AZP_AGENTPACKAGE_URL" -o "$AZP_AGENTPACKAGE_URL" == "null" ]; then


echo 1>&2 "error: could not determine a matching Azure Pipelines agent - check that account
'$AZP_URL' is correct and the token is valid for that account"
exit 1
fi

print_header "2. Downloading and installing Azure Pipelines agent..."

curl -LsS $AZP_AGENTPACKAGE_URL | tar -xz & wait $!

source ./env.sh

trap 'cleanup; exit 130' INT


trap 'cleanup; exit 143' TERM

print_header "3. Configuring Azure Pipelines agent..."

./config.sh --unattended \
--agent "${AZP_AGENT_NAME:-$(hostname)}" \
--url "$AZP_URL" \
--auth PAT \
--token $(cat "$AZP_TOKEN_FILE") \
--pool "${AZP_POOL:-Default}" \
--work "${AZP_WORK:-_work}" \
--replace \
--acceptTeeEula & wait $!

# remove the administrative token before accepting work


rm $AZP_TOKEN_FILE

print_header "4. Running Azure Pipelines agent..."

# `exec` the node runtime so it's aware of TERM and INT signals
# AgentService.js understands how to handle agent self-update and restart
# Running it with the --once flag at the end will shut down the agent after the build is executed
exec ./externals/node/bin/node ./bin/AgentService.js interactive

6. Run the following command within that directory:

docker build -t dockeragent:latest .

This command builds the Dockerfile in the current directory.


The final image is tagged dockeragent:latest . You can easily run it in a container as dockeragent , because
the latest tag is the default if no tag is specified.
Start the image
Now that you have created an image, you can run a container.
1. Open a terminal.
2. Run the container. This installs the latest version of the agent, configures it, and runs the agent. It targets the
Default pool of a specified Azure DevOps or Azure DevOps Server instance of your choice:

docker run -e AZP_URL=<Azure DevOps instance> -e AZP_TOKEN=<PAT token> -e AZP_AGENT_NAME=mydockeragent dockeragent:latest

Optionally, you can control the pool and agent work directory by using additional environment variables.
If you want a fresh agent container for every pipeline run, pass the --once flag to the run command. You must
also use a container orchestration system, like Kubernetes or Azure Container Instances, to start new copies of the
container when the work completes.

Environment variables
ENVIRONMENT VARIABLE    DESCRIPTION

AZP_URL                 The URL of the Azure DevOps or Azure DevOps Server instance.

AZP_TOKEN               Personal Access Token (PAT) with Agent Pools (read, manage) scope, created by a user who has permission to configure agents, at AZP_URL.

AZP_AGENT_NAME          Agent name (default value: the container hostname).

AZP_POOL                Agent pool name (default value: Default).

AZP_WORK                Work directory (default value: _work).

Add tools and customize the container


You have created a basic build agent. You can extend the Dockerfile to include additional tools and their
dependencies, or build your own container by using this one as a base layer. Just make sure that the following are
left untouched:
The start.sh script is called by the Dockerfile.
The start.sh script is the last command in the Dockerfile.
Ensure that derivative containers don't remove any of the dependencies stated by the Dockerfile.

Use Docker within a Docker container


In order to use Docker from within a Docker container, you bind-mount the Docker socket.
Caution

Doing this has serious security implications. The code inside the container can now run as root on your Docker
host.
If you're sure you want to do this, see the bind mount documentation on Docker.com.
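A sketch of such a bind mount, extending the docker run command used earlier in this article:

# Sketch: give the containerized agent access to the host's Docker daemon.
docker run -e AZP_URL=<Azure DevOps instance> -e AZP_TOKEN=<PAT token> -e AZP_AGENT_NAME=mydockeragent \
  -v /var/run/docker.sock:/var/run/docker.sock \
  dockeragent:latest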

Use Azure Kubernetes Service cluster


Deploy and configure Azure Kubernetes Service
Follow the steps in Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster by using the Azure portal. After
this, your PowerShell or Shell console can use the kubectl command line.
Deploy and configure Azure Container Registry
Follow the steps in Quickstart: Create an Azure container registry by using the Azure portal. After this, you can
push and pull containers from Azure Container Registry.
Configure secrets and deploy a replica set
1. Create the secrets on the AKS cluster.

kubectl create secret generic azdevops \
  --from-literal=AZP_URL=https://dev.azure.com/yourOrg \
  --from-literal=AZP_TOKEN=YourPAT \
  --from-literal=AZP_POOL=NameOfYourPool

2. Run this command to push your container to Container Registry:

docker push <acr-server>/dockeragent:latest
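If the image was built locally as dockeragent:latest in the earlier steps, the push typically requires signing in to the registry and tagging the image with the registry name first. A sketch, where <acr-name> and <acr-server> are placeholders for your registry:

# Sign in to the registry and give the local image the fully qualified name expected by the push.
az acr login --name <acr-name>
docker tag dockeragent:latest <acr-server>/dockeragent:latest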

3. Configure Container Registry integration for existing AKS clusters.

az aks update -n myAKSCluster -g myResourceGroup --attach-acr <acr-name>

4. Save the following content to ~/AKS/ReplicationController.yaml :


apiVersion: apps/v1
kind: Deployment
metadata:
  name: azdevops-deployment
  labels:
    app: azdevops-agent
spec:
  replicas: 1 #here is the configuration for the actual agent always running
  selector:
    matchLabels:
      app: azdevops-agent
  template:
    metadata:
      labels:
        app: azdevops-agent
    spec:
      containers:
      - name: kubepodcreation
        image: AKRTestcase.azurecr.io/kubepodcreation:5306
        env:
        - name: AZP_URL
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_URL
        - name: AZP_TOKEN
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_TOKEN
        - name: AZP_POOL
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_POOL
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-volume
      volumes:
      - name: docker-volume
        hostPath:
          path: /var/run/docker.sock

This Kubernetes YAML creates a deployment (and its underlying replica set), where replicas: 1 indicates the number of
agents that run on the cluster.
5. Run this command:

kubectl apply -f ReplicationController.yaml

Now your agents will run in the AKS cluster.
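To confirm that an agent pod started, you can query pods by the label used in the deployment (a sketch):

kubectl get pods -l app=azdevops-agent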

Common errors
If you're using Windows, and you get the following error:

standard_init_linux.go:178: exec user process caused "no such file or directory"

Install Git Bash by downloading and installing git-scm.


Run this command:
dos2unix ~/dockeragent/Dockerfile
dos2unix ~/dockeragent/start.sh
git add .
git commit -m 'Fixed CR'
git push

Try again. You no longer get the error.


Run the agent with a self-signed certificate
2/26/2020 • 3 minutes to read • Edit Online

Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
This topic explains how to run a v2 self-hosted agent with self-signed certificate.

Work with SSL server certificate


Enter server URL > https://corp.tfs.com/tfs
Enter authentication type (press enter for Integrated) >
Connecting to server ...
An error occurred while sending the request.

Agent diagnostic log shows:

[2017-11-06 20:55:33Z ERR AgentServer] System.Net.Http.HttpRequestException: An error occurred while sending


the request. ---> System.Net.Http.WinHttpException: A security error occurred

This error may indicate the server certificate you used on your TFS server is not trusted by the build machine.
Make sure you install your self-signed ssl server certificate into the OS certificate store.

Windows: Windows certificate store


Linux: OpenSSL certificate store
macOS: OpenSSL certificate store for agent version 2.124.0 or below
Keychain for agent version 2.125.0 or above

You can verify whether the certificate has been installed correctly by running a few commands. You should be
fine as long as the SSL handshake finishes correctly, even if you get a 401 for the request.

Windows: PowerShell Invoke-WebRequest -Uri https://corp.tfs.com/tfs -UseDefaultCredentials


Linux: curl -v https://corp.tfs.com/tfs
macOS: curl -v https://corp.tfs.com/tfs (agent version 2.124.0 or below, curl needs to be built for OpenSSL)
curl -v https://corp.tfs.com/tfs (agent version 2.125.0 or above, curl needs to be built for Secure
Transport)

If you can't install the certificate into your machine's certificate store for some reason, such as not having
permission or being on a customized Linux machine, agent version 2.125.0 or above can
ignore SSL server certificate validation errors.

IMPORTANT
This is not secure and is not recommended. We strongly suggest that you install the certificate into your machine's certificate store.

Pass --sslskipcertvalidation during agent configuration

./config.cmd/sh --sslskipcertvalidation
NOTE
There is a limitation when using this flag on Linux and macOS:
the libcurl library on your Linux or macOS machine needs to be built with OpenSSL. More Detail

Git get sources fails with SSL certificate problem (Windows agent only)
We ship command-line Git as part of the Windows agent. We use this copy of Git for all Git related operations.
When you have a self-signed SSL certificate for your on-premises TFS server, make sure to configure the Git we
shipped to allow that self-signed SSL certificate. There are 2 approaches to solve the problem.
1. Set the following git config in global level by the agent's run as user.

git config --global http."https://tfs.com/".sslCAInfo certificate.pem

NOTE
Setting system level Git config is not reliable on Windows. The system .gitconfig file is stored with the copy of Git we
packaged, which will get replaced whenever the agent is upgraded to a new version.

2. Enable Git to use SChannel during configuration with agent version 2.129.0 or higher. Pass --gituseschannel
during agent configuration.

./config.cmd --gituseschannel

NOTE
Git SChannel has stricter requirements for your self-signed certificate. A self-signed certificate generated by
IIS or a PowerShell command may not be compatible with SChannel.

Work with SSL client certificate


IIS has an SSL setting that requires all incoming requests to TFS to present a client certificate in addition to the
regular credential.
When that IIS SSL setting is enabled, you need to use agent version 2.125.0 or above and follow these extra steps in
order to configure the build machine against your TFS server.
Prepare all required certificate information
CA certificate(s) in .pem format (this should contain the public key and signature of the CA certificate;
put the root CA certificate and all of your intermediate CA certificates into one .pem file)
Client certificate in .pem format (this should contain the public key and signature of the client
certificate)
Client certificate private key in .pem format (this should contain only the private key of the client
certificate)
Client certificate archive package in .pfx format (this should contain the signature, public key, and
private key of the client certificate)
Use the SAME password to protect the client certificate private key and the client certificate archive package, since
they both contain the client certificate's private key
Install CA certificate(s) into machine certificate store
Linux: OpenSSL certificate store
macOS: System or User Keychain
Windows: Windows certificate store
Pass --sslcacert, --sslclientcert, --sslclientcertkey, --sslclientcertarchive, and
--sslclientcertpassword during agent configuration.

.\config.cmd/sh --sslcacert ca.pem --sslclientcert clientcert.pem --sslclientcertkey clientcert-key-pass.pem --sslclientcertarchive clientcert-archive.pfx --sslclientcertpassword "mypassword"

Your client certificate private key password is securely stored on each platform.

Linux: Encrypted with a symmetric key based on the machine ID


macOS: macOS Keychain
Windows: Windows Credential Store

Learn more about agent client certificate support.


Provision deployment groups
11/2/2020 • 4 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are
called phases.

A deployment group is a logical set of deployment target machines that have agents installed on each one.
Deployment groups represent the physical environments; for example, "Dev", "Test", "UAT", and "Production". In
effect, a deployment group is just another grouping of agents, much like an agent pool.
When authoring an Azure Pipelines or TFS Release pipeline, you can specify the deployment targets for a job
using a deployment group. This makes it easy to define parallel execution of deployment tasks.
Deployment groups:
Specify the security context and runtime targets for the agents. As you create a deployment group, you
add users and give them appropriate permissions to administer, manage, view, and use the group.
Let you view live logs for each server as a deployment takes place, and download logs for all servers to
track your deployments down to individual machines.
Enable you to use machine tags to limit deployment to specific sets of target servers.

Create a deployment group


You define groups on the Deployment Groups tab of the Azure Pipelines section, and install the agent on
each server in the group. After you prepare your target servers, they appear in the Deployment Groups tab.
The list indicates if a server is available, the tags you assigned to each server, and the latest deployment to each
server.
The tags you assign allow you to limit deployment to specific servers when the deployment group is used in a
Deployment group job. Tags are each limited to 256 characters, but there is no limit to the number of tags you
can use. You manage the security for a deployment group by assigning security roles.

Deploy agents to a deployment group


Every target machine in the deployment group requires the build and release agent to be installed. You can do
this using the script that is generated in the Deployment Groups tab of Azure Pipelines . You can choose the
type of agent to suit the target operating system and platform; such as Windows and Linux.
If the target machines are Azure VMs, you can quickly and easily prepare them by installing the Azure
Pipelines Agent Azure VM extension on each of the VMs, or by using the Azure Resource Group
Deployment task in your release pipeline to create a deployment group dynamically.
You can force the agents on the target machines to be upgraded to the latest version without needing to
redeploy them by choosing the Upgrade targets command on the shortcut menu for a deployment group.
For more information, see Provision agents for deployment groups.
Monitor releases for deployment groups
When a release is executing, you see an entry in the live logs page for each server in the deployment group. After
a release has completed, you can download the log files for every server to examine the deployments and
resolve issues. To navigate quickly to a release pipeline or a release, use the links in the Releases tab.

Share a deployment group


Each deployment group is a member of a deployment pool , and you can share the deployment pool and
groups across projects provided that:
The user sharing the deployment pool has User permission for the pool containing the group.
The user sharing the deployment pool has permission to create a deployment group in the project where it
is being shared.
The project does not already contain a deployment group that is a member of the same deployment pool.
The tags you assign to each machine in the pool are scoped at project level, so you can specify a different tag
for the same machine in each deployment group.
Add a deployment pool and group to another project
To manage a deployment pool, or to add an existing deployment pool and the groups it contains to another
project, choose the Manage link in the Agent Pool section of the Deployment Group page. In the
Deployment Pools page, select the projects for which you want the deployment group to be available, then
save the changes.
When you navigate to the Deployment Groups page in the target project(s), you will see the deployment
group you added and you can assign project-specific machine tags as required.
Create a new deployment pool
You can add a new deployment pool, share it amongst your projects, and then add deployment groups to it. In
the Deployment Pools page, choose + New . In the New deployment pool panel, enter a name for the pool
and then select the projects for which you want it to be available.
When you navigate to the Deployment Groups page in the target project(s), you will see the deployment
group you added and you can assign project-specific machine tags as required.
Automatically deploy to new targets in a deployment group
When new targets are added to a deployment group, you can configure the environment to automatically
deploy the last successful release to the new targets.
Related topics
Run on machine group job
Deploy an agent on Windows
Deploy an agent on macOS
Deploy an agent on Linux

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and feel free to post your questions, search for answers, or suggest a feature
on our Azure DevOps Developer Community. See the Support page.
Provision agents for deployment groups
11/2/2020 • 6 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Deployment groups make it easy to define logical groups of target machines for deployment, and install the
required agent on each machine. This topic explains how to create a deployment group, and install and provision
the agent on each virtual or physical machine in your deployment group.
You can install the agent in any one of these ways:
Run the script that is generated automatically when you create a deployment group.
Install the Azure Pipelines Agent Azure VM extension on each of the VMs.
Use the Azure Resource Group Deployment task in your release pipeline.
For information about agents and pipelines, see:
Parallel jobs in Team Foundation Server.
Parallel jobs in Azure Pipelines.
Pricing for Azure Pipelines features

Run the installation script on the target servers


1. In the Deployment groups tab of Azure Pipelines , choose +New to create a new group.
2. Enter a name for the group, and optionally a description, then choose Create .
3. In the Register machines using command line section of the next page, select the target machine
operating system.
4. Choose Use a personal access token in the script for authentication . Learn more.
5. Choose Copy the script to clipboard .
6. Log onto each target machine in turn using the account with the appropriate permissions and:
Open an Administrator PowerShell command prompt, paste in the script you copied, then execute it
to register the machine with this group.
If you get an error when running the script that a secure channel could not be created, execute this
command at the Administrator PowerShell prompt:
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

When prompted to configure tags for the agent, press Y and enter any tags you will use to identify
subsets of the machines in the group for partial deployments.

Tags you assign allow you to limit deployment to specific servers when the deployment group is
used in a Run on machine group job.
When prompted for the user account, press Return to accept the defaults.
Wait for the script to finish with the message
Service vstsagent.{organization-name}.{computer-name} started successfully .
7. In the Deployment groups page of Azure Pipelines , open the Machines tab and verify that the agents
are running. If the tags you configured are not visible, refresh the page.

Install the Azure Pipelines Agent Azure VM extension


1. In the Deployment groups tab of Azure Pipelines , choose +New to create a new group.
2. Enter a name for the group, and optionally a description, then choose Create .
3. In the Azure portal, for each VM that will be included in the deployment group open the Extension blade,
choose + Add to open the New resource list, and select Azure Pipelines Agent .

4. In the Install extension blade, specify the name of the Azure Pipelines subscription to use. For example, if
the URL is https://dev.azure.com/contoso , just specify contoso .
5. Specify the project name and the deployment group name.
6. Optionally, specify a name for the agent. If not specified, it uses the VM name appended with -DG .
7. Enter the Personal Access Token (PAT) to use for authentication against Azure Pipelines.
8. Optionally, specify a comma-separated list of tags that will be configured on the agent. Tags are not case-
sensitive, and each must be no more than 256 characters.
9. Choose OK to begin installation of the agent on this VM.
10. Add the extension to any other VMs you want to include in this deployment group.

Use the Azure Resource Group Deployment task


You can use the Azure Resource Group Deployment task to deploy an Azure Resource Manager (ARM) template
that installs the Azure Pipelines Agent Azure VM extension as you create a virtual machine, or to update the
resource group to apply the extension after the virtual machine has been created. Alternatively, you can use the
advanced deployment options of the Azure Resource Group Deployment task to deploy the agent to deployment
groups.
Install the "Azure Pipelines Agent" Azure VM extension using an ARM template
An ARM template is a JSON file that declaratively defines a set of Azure resources. The template can be
automatically read and the resources provisioned by Azure. In a single template, you can deploy multiple services
along with their dependencies.
For a Windows VM, create an ARM template and add a resources element under the
Microsoft.Compute/virtualMachine resource as shown here:

"resources": [
{
"name": "[concat(parameters('vmNamePrefix'),copyIndex(),'/TeamServicesAgent')]",
"type": "Microsoft.Compute/virtualMachines/extensions",
"location": "[parameters('location')]",
"apiVersion": "2015-06-15",
"dependsOn": [
"[resourceId('Microsoft.Compute/virtualMachines/',
concat(parameters('vmNamePrefix'),copyindex()))]"
],
"properties": {
"publisher": "Microsoft.VisualStudio.Services",
"type": "TeamServicesAgent",
"typeHandlerVersion": "1.0",
"autoUpgradeMinorVersion": true,
"settings": {
"VSTSAccountName": "[parameters('VSTSAccountName')]",
"TeamProject": "[parameters('TeamProject')]",
"DeploymentGroup": "[parameters('DeploymentGroup')]",
"AgentName": "[parameters('AgentName')]",
"Tags": "[parameters('Tags')]"
},
"protectedSettings": {
"PATToken": "[parameters('PATToken')]"
}
}
}
]

where:
VSTSAccountName is required. The Azure Pipelines subscription to use. Example: If your URL is
https://dev.azure.com/contoso , just specify contoso
TeamProject is required. The project that has the deployment group defined within it
DeploymentGroup is required. The deployment group against which deployment agent will be registered
AgentName is optional. If not specified, the VM name with -DG appended will be used
Tags is optional. A comma-separated list of tags that will be set on the agent. Tags are not case sensitive and
each must be no more than 256 characters
PATToken is required. The Personal Access Token that will be used to authenticate against Azure Pipelines to
download and configure the agent

NOTE
If you are deploying to a Linux VM, ensure that the type parameter in the code is TeamServicesAgentLinux .
For more information about ARM templates, see Define resources in Azure Resource Manager templates.
To use the template:
1. In the Deployment groups tab of Azure Pipelines , choose +New to create a new group.
2. Enter a name for the group, and optionally a description, then choose Create .
3. In the Releases tab of Azure Pipelines , create a release pipeline with a stage that contains the Azure
Resource Group Deployment task.
4. Provide the parameters required for the task such as the Azure subscription, resource group name, location,
and template information, then save the release pipeline.
5. Create a release from the release pipeline to install the agents.
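If you drive the deployment from a YAML pipeline instead of a classic release, a roughly equivalent step might look like the following sketch. The service connection, resource group, and template file names are placeholders, not part of this article.

steps:
- task: AzureResourceGroupDeployment@2
  displayName: 'Install deployment group agents via ARM template'
  inputs:
    azureSubscription: 'my-azure-service-connection'   # placeholder service connection name
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'my-resource-group'             # placeholder resource group
    location: 'West US 2'
    templateLocation: 'Linked artifact'
    csmFile: 'templates/vm-with-agent.json'            # template containing the extension resource shown above
    deploymentMode: 'Incremental'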
Install agents using the advanced deployment options
1. In the Deployment groups tab of Azure Pipelines , choose +New to create a new group.
2. Enter a name for the group, and optionally a description, then choose Create .
3. In the Releases tab of Azure Pipelines , create a release pipeline with a stage that contains the Azure
Resource Group Deployment task.
4. Select the task and expand the Advanced deployment options for virtual machines section. Configure
the parameters in this section as follows:
Enable Prerequisites : select Configure with Deployment Group Agent .
Azure Pipelines/TFS endpoint : Select an existing Team Foundation Server/TFS service connection
that points to your target. Agent registration for deployment groups requires access to your Visual
Studio project. If you do not have an existing service connection, choose Add and create one now.
Configure it to use a Personal Access Token (PAT) with scope restricted to Deployment Group .
Project : Specify the project containing the deployment group.
Deployment Group : Specify the name of the deployment group against which the agents will be
registered.
Copy Azure VM tags to agents : When set (ticked), any tags already configured on the Azure VM
will be copied to the corresponding deployment group agent. By default, all Azure tags are copied
using the format Key: Value . For example, Role: Web .
5. Provide the other parameters required for the task such as the Azure subscription, resource group name,
and location, then save the release pipeline.
6. Create a release from the release pipeline to install the agents.

Related topics
Run on machine group job
Deploy an agent on Windows
Deploy an agent on macOS
Deploy an agent on Linux

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and feel free to post your questions, search for answers, or suggest a feature on
our Azure DevOps Developer Community. For additional options, see the Support page.
Deployment group jobs
11/2/2020 • 2 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

Deployment groups make it easy to define groups of target servers for deployment. Tasks that you define in a
deployment group job run on some or all of the target servers, depending on the arguments you specify for the
tasks and the job itself.
You can select specific sets of servers from a deployment group to receive the deployment by specifying the
machine tags that you have defined for each server in the deployment group. You can also specify the proportion
of the target servers that the pipeline should deploy to at the same time. This ensures that the app running on
these servers is capable of handling requests while the deployment is taking place.
YAML
Classic

NOTE
Deployment group jobs are not yet supported in YAML. You can use Virtual machine resources in Environments to do a
rolling deployment to VMs in YAML pipelines.

Rolling deployments can be configured by specifying the rolling: keyword under the strategy: node of a
deployment job.

strategy:
rolling:
maxParallel: [ number or percentage as x% ]
preDeploy:
steps:
- script: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
deploy:
steps:
...
routeTraffic:
steps:
...
postRouteTraffic:
steps:
...
on:
failure:
steps:
...
success:
steps:
...
YAML builds are not yet available on TFS.
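For reference, a filled-in rolling deployment against the virtual machines in an environment might look like the following sketch. The environment name, tag, and script step are illustrative only.

jobs:
- deployment: DeployWeb
  displayName: Deploy to web servers
  environment:
    name: production             # illustrative environment containing VM resources
    resourceType: VirtualMachine
    tags: web                    # only target VMs registered with this tag
  strategy:
    rolling:
      maxParallel: 2             # update at most two targets at a time
      deploy:
        steps:
        - script: echo Deploying to $(Agent.MachineName)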

Timeouts
Use the job timeout setting to specify the timeout in minutes for the job. A zero value for this option means that
the timeout is effectively infinite, so by default jobs run until they complete or fail. You can also set the
timeout for each task individually - see task control options. Jobs targeting Microsoft-hosted agents have
additional restrictions on how long they may run.
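In a YAML pipeline, the equivalent job-level setting is timeoutInMinutes; the values in this sketch are examples only.

jobs:
- job: Build
  timeoutInMinutes: 90         # cancel the job if it runs longer than 90 minutes; 0 means no limit on self-hosted agents
  cancelTimeoutInMinutes: 5    # allow up to 5 minutes for cleanup after a cancellation request
  steps:
  - script: echo Building...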

Related topics
Jobs
Conditions
Deploy to Azure VMs using deployment groups in
Azure Pipelines
11/2/2020 • 7 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

In earlier versions of Azure Pipelines, applications that needed to be deployed to multiple servers required a
significant amount of planning and maintenance. Windows PowerShell remoting had to be enabled manually,
required ports opened, and deployment agents installed on each of the servers. The pipelines then had to be
managed manually if a roll-out deployment was required.
Deployment groups address all of these challenges.
A deployment group installs a deployment agent on each of the target servers in the configured group and
instructs the release pipeline to gradually deploy the application to those servers. Multiple pipelines can be created
for the roll-out deployments so that the latest version of an application can be delivered in a phased manner to
multiple user groups for validation of newly introduced features.

NOTE
Deployment groups are a concept used in Classic pipelines. If you are using YAML pipelines, see Environments.

In this tutorial, you learn about:


Provisioning VM infrastructure to Azure using a template
Creating an Azure Pipelines deployment group
Creating and running a CI/CD pipeline to deploy the solution with a deployment group

Prerequisites
A Microsoft Azure account.
An Azure DevOps organization.
Use the Azure DevOps Demo Generator to provision the tutorial project on your Azure DevOps organization.

Setting up the Azure deployment environment


The following resources are provisioned in Azure using an ARM template:
Six Virtual Machines (VM) web servers with IIS configured
SQL server VM (DB server)
Azure Network Load Balancer
1. Click the Deploy to Azure button below to initiate resource provisioning. Provide all the necessary
information and select Purchase . You may use any combination of allowed administrative usernames and
passwords as they are not used again in this tutorial. The Env Prefix Name is prefixed to all of the resource
names in order to ensure that those resources are generated with globally unique names. Try to use
something personal or random, but if you see a naming conflict error during validation or creation, try
changing this parameter and running again.

NOTE
It takes approximately 10-15 minutes to complete the deployment. If you receive any naming conflict errors, try
changing the parameter you provide for Env Prefix Name .

2. Once the deployment completes, you can review all of the resources generated in the specified resource
group using the Azure portal. Select the DB server VM with sqlSrv in its name to view its details.
3. Make a note of the DNS name . This value is required in a later step. You can use the copy button to copy it
to the clipboard.

Creating and configuring a deployment group


Azure Pipelines makes it easier to organize servers required for deploying applications. A deployment group is a
collection of machines with deployment agents. Each of the machines interacts with Azure Pipelines to coordinate
the deployment of the app.
Since there is no configuration change required for the build pipeline, the build is triggered automatically after the
project is provisioned. When you queue a release later on, this build is used.
1. Navigate to the Azure DevOps project created by the demo generator.
2. From under Pipelines , navigate to Deployment groups .

3. Select Add a deployment group .


4. Enter the Deployment group name of Release and select Create . A registration script is generated. You
can register the target servers using the script provided if working on your own. However, in this tutorial, the
target servers are automatically registered as part of the release pipeline. The release definition uses stages
to deploy the application to the target servers. A stage is a logical grouping of the tasks that defines the
runtime target on which the tasks will execute. Each deployment group stage executes tasks on the machines
defined in the deployment group.
5. From under Pipelines , navigate to Releases . Select the release pipeline named Deployment Groups and
select Edit .
6. Select the Tasks tab to view the deployment tasks in pipeline. The tasks are organized as three stages called
Agent phase , Deployment group phase , and IIS Deployment phase .
7. Select the Agent phase . In this stage, the target servers are associated with the deployment group using the
Azure Resource Group Deployment task. To run, an agent pool and specification must be defined. Select the
Azure Pipelines pool and vs2017-win2016 specification.

8. Select the Azure Resource Group Deployment task. Configure a service connection to the Azure
subscription used earlier to create infrastructure. After authorizing the connection, select the resource group
created for this tutorial.

9. This task will run on the virtual machines hosted in Azure, and will need to be able to connect back to this
pipeline in order to complete the deployment group requirements. To secure the connection, they will need a
personal access token (PAT) . From the User settings dropdown, open Personal access tokens in a
new tab. Most browsers support opening a link in a new tab via right-click context menu or Ctrl+Click .
10. In the new tab, select New Token .
11. Enter a name and select the Full access scope. Select Create to create the token. Once created, copy the
token and close the browser tab. You return to the Azure Pipeline editor.

12. Under Azure Pipelines service connection , select New .

13. Enter the Connection URL to the current instance of Azure DevOps. This URL is something like
https://dev.azure.com/[Your account] . Paste in the Personal Access Token created earlier and specify a
Service connection name . Select Verify and save .
14. Select the current Team project and the Deployment group created earlier.

15. Select the Deployment group phase stage. This stage executes tasks on the machines defined in the
deployment group. This stage is linked to the SQL-Svr-DB tag. Choose the Deployment Group from the
dropdown.
16. Select the IIS Deployment phase stage. This stage deploys the application to the web servers using the
specified tasks. This stage is linked to the WebSrv tag. Choose the Deployment Group from the
dropdown.
17. Select the Disconnect Azure Network Load Balancer task. As the target machines are connected to the
NLB, this task will disconnect the machines from the NLB prior to the deployment and reconnect them back
to the NLB after the deployment. Configure the task to use the Azure connection, resource group, and load
balancer (there should only be one).
18. Select the IIS Web App Manage task. This task runs on the deployment target machines registered with
the deployment group configured for the task/stage. It creates a web app and application pool locally with
the name PartsUnlimited running on port 80.
19. Select the IIS Web App Deploy task. This task runs on the deployment target machines registered with the
deployment group configured for the task/stage. It deploys the application to the IIS server using Web
Deploy .
20. Select the Connect Azure Network Load Balancer task. Configure the task to use the Azure connection,
resource group, and load balancer (there should only be one).
21. Select the Variables tab and enter the variable values as below.

VARIABLE NAME              VARIABLE VALUE

DatabaseName               PartsUnlimited-Dev
DBPassword                 P2ssw0rd@123
DBUserName                 sqladmin
DefaultConnectionString    Data Source=[YOUR_DNS_NAME];Initial Catalog=PartsUnlimited-Dev;User ID=sqladmin;Password=P2ssw0rd@123;MultipleActiveResultSets=False;Connection Timeout=30;
ServerName                 localhost

IMPORTANT
Make sure to replace your SQL server DNS name (which you noted from the Azure portal earlier) in the
DefaultConnectionString variable.

Your DefaultConnectionString should be similar to this string after replacing the SQL DNS:
Data Source=cust1sqljo5zndv53idtw.westus2.cloudapp.azure.com;Initial Catalog=PartsUnlimited-Dev;User ID=sqladmin;Password=P2ssw0rd@123;MultipleActiveResultSets=False;Connection Timeout=30;

The final variable list should look something like this:


NOTE
You may receive an error that the DefaultConnectionString variable must be saved as a secret. If that happens,
select the variable and click the padlock icon that appears next to its value to protect it.

Queuing a release and reviewing the deployment


1. Select Save and confirm.
2. Select Create release and confirm. Follow the release through to completion. The deployment is then ready
for review.
3. In the Azure portal, open one of the web VMs in your resource group. You can select any that have websrv
in the name.

4. Copy the DNS of the VM. The Azure Load Balancer will distribute incoming traffic among healthy
instances of servers defined in a load-balanced set. As a result, the DNS of all web server instances is the
same.

5. Open a new browser tab to the DNS of the VM. Confirm the deployed app is running.
Summary
In this tutorial, you deployed a web application to a set of Azure VMs using Azure Pipelines and Deployment
Groups. While this scenario covered a handful of machines, you can easily scale the process up to support
hundreds, or even thousands, of machines using virtually any configuration.

Cleaning up resources
This tutorial created an Azure DevOps project and some resources in Azure. If you're not going to continue to use
these resources, delete them with the following steps:
1. Delete the Azure DevOps project created by the Azure DevOps Demo Generator.
2. All Azure resources created during this tutorial were assigned to the resource group specified during
creation. Deleting that group will delete the resources they contain. This deletion can be done via the CLI or
portal.

Next steps
Provision agents for deployment groups
Set retention policies for builds, tests, and releases
11/2/2020 • 17 minutes to read

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

In this article, learn how to manage the retention policies for your project.
Retention policies let you set how long to keep runs, tests, and releases stored in the system. To save storage
space, you want to delete older runs, tests, and releases.
The following retention policies are available in Azure DevOps in your Project settings :
Pipeline - Set how long to keep artifacts, symbols, attachments, runs, and pull request runs.
Release (classic) - Set whether to save builds and view the default and maximum retention settings.
Test - Set how long to keep automated and manual test runs, results, and attachments.
If you are using an on-premises server, you can also specify retention policy defaults for a project and when
releases are permanently destroyed. Learn more about release retention.

Prerequisites
By default, members of the Contributors, Build Admins, Project Admins, and Release Admins groups can manage
retention policies.
To manage test results, you must have one of the following subscriptions:
Enterprise
Test Professional
MSDN Platforms
You can also buy monthly access to Azure Test Plans and assign the Basic + Test Plans access level. See Testing
access by user role.

Configure retention policies


1. Sign in to your project ( https://dev.azure.com/{yourorganization}/{yourproject} ).
2. Go to the Settings tab of your project's settings.
3. Select Settings or Release retention in Pipelines or Retention in Test .
In Pipelines , use Settings to configure retention for artifacts, symbols, attachments, runs, and pull
request runs.
In Pipelines , use Release retention to set when to keep builds consumed by releases.
In Test , use Retention to set how long to keep test runs.
Set run retention policies
In most cases, you don't need to retain completed runs longer than a certain number of days. Using retention
policies, you can control how many days you want to keep each run before deleting it.
Along with defining how many days to retain runs, you can also decide the minimum number of runs that
should be kept for each pipeline.
1. Go to the Settings tab of your project's settings.
2. Select Settings in the Pipelines section.
Set the number of days to keep artifacts, symbols, and attachments.
Set the number of days to keep runs
Set the number of days to keep pull request runs
Set the number of recent runs to keep for each pipeline
NOTE
The only way to configure retention policies for YAML and classic pipelines is through the project settings as described
above. You cannot configure per-pipeline retention policies any more.

The setting for number of recent runs to keep for each pipeline requires a little more explanation. The
interpretation of this setting varies based on the type of repository you build in your pipeline.
Azure Repos: Azure Pipelines always retains the configured number of latest runs for the default branch
and for each protected branch of the repository. A branch that has any branch policies configured is
considered to be a protected branch. As an example, consider a repository with the default branch called
main . Also, let us assume that the release branch in this repository has a branch policy. In this case, if
you configured the policy to retain 3 runs, then the latest 3 runs of main as well as the latest 3 runs of
release branch are retained. In addition, the latest 3 runs of this pipeline (irrespective of the branch) are
also retained.
To clarify this logic further, let us say that the list of runs for this pipeline is as follows with the most recent
run at the top. The table shows which runs will be retained if you have configured to retain the latest 3
runs (ignoring the effect of the number of days setting):

RUN #     BRANCH     RETAINED / NOT RETAINED    WHY?

Run 10    main       Retained                   Latest 3 for main
Run 9     branch1    Retained                   Latest 3 for pipeline
Run 8     branch2    Retained                   Latest 3 for pipeline
Run 7     main       Retained                   Latest 3 for main
Run 6     main       Retained                   Latest 3 for main
Run 5     main       Not retained               Neither latest 3 for main, nor for pipeline
Run 4     main       Not retained               Neither latest 3 for main, nor for pipeline
Run 3     branch1    Not retained               Neither latest 3 for main, nor for pipeline
Run 2     release    Retained                   Latest 3 for release
Run 1     main       Not retained               Neither latest 3 for main, nor for pipeline

All other Git repositories: Azure Pipelines retains the configured number of latest runs for the default
branch of the repository and for the whole pipeline.
TFVC: Azure Pipelines retains the configured number of latest runs for the whole pipeline, irrespective of
the branch.
What parts of the run get deleted
When the retention policies mark a build for deletion, you can control which information related to the build is
deleted:
Build record: You can choose to delete the entire build record or keep basic information about the build even
after the build is deleted.
Source label: If you label sources as part of the build, then you can choose to delete the tag (for Git) or the
label (for TFVC) created by a build.
Automated test results: You can choose to delete the automated test results associated with the build (for
example, results published by the Publish Test Results build task).
The following information is deleted when a build is deleted:
Logs
Published artifacts
Published symbols
The following information is deleted when a run is deleted:
Logs
All artifacts
All symbols
Binaries
Test results
Run metadata
When are runs deleted
Your retention policies are processed once a day. The time that the policies get processed varies because we
spread the work throughout the day for load-balancing purposes. There is no option to change this process.
A run is deleted if all of the following conditions are true:
It exceeds the number of days configured in the retention settings
It is not one of the recent runs as configured in the retention settings
It is not marked to be retained indefinitely
It is not retained by a release
Your retention policies run every day at 3:00 A.M. UTC. There is no option to change the time the policies run.

Delete a run
You can delete runs using the context menu on the Pipeline run details page.

NOTE
If any retention policies currently apply to the run, they must be removed before the run can be deleted. For instructions,
see Pipeline run details - delete a run.
Set release retention policies
NOTE
If you are using Azure Pipelines, you can view but not change the global release retention policies for your project.

The release retention policies for a classic release pipeline determine how long a release and the run linked to it
are retained. Using these policies, you can control how many days you want to keep each release after it has
been last modified or deployed and the minimum number of releases that should be retained for each
pipeline.
The retention timer on a release is reset every time a release is modified or deployed to a stage. The minimum
number of releases to retain setting takes precedence over the number of days. For example, if you specify to
retain a minimum of three releases, the most recent three will be retained indefinitely - irrespective of the
number of days specified. However, you can manually delete these releases when you no longer require them.
See FAQ below for more details about how release retention works.
As an author of a release pipeline, you can customize retention policies for releases of your pipeline on the
Retention tab.
The retention policy for YAML and build pipelines is the same. You can see your pipeline's retention settings in
Project Settings for Pipelines in the Settings section.
You can also customize these policies on a stage-by-stage basis.
Global release retention policy
If you are using an on-premises Team Foundation Server, you can specify release retention policy defaults and
maximums for a project. You can also specify when releases are permanently destroyed (removed from the
Deleted tab in the build explorer).
If you are using Azure Pipelines, you can view but not change these settings for your project.
Global release retention policy settings can be managed from the Release settings of your project:
Azure Pipelines:
https://dev.azure.com/{organization}/{project}/_settings/release?app=ms.vss-build-web.build-release-hub-group
On-premises:
https://{your_server}/tfs/{collection_name}/{project}/_admin/_apps/hub/ms.vss-releaseManagement-web.release-project-admin-hub

The maximum retention policy sets the upper limit for how long releases can be retained for all release
pipelines. Authors of release pipelines cannot configure settings for their definitions beyond the values specified
here.
The default retention policy sets the default retention values for all the release pipelines. Authors of build
pipelines can override these values.
The destruction policy helps you keep the releases for a certain period of time after they are deleted. This
policy cannot be overridden in individual release pipelines.

NOTE
In TFS, release retention management is restricted to specifying the number of days, and this is available only in TFS
2015.3 and newer.

Stage-specific retention policies


You may want to retain more releases that have been deployed to specific stages. For example, your team may
want to keep:
Releases deployed to Production stage for 60 days, with a minimum of three last deployed releases.
Releases deployed to Pre-production stage for 15 days, with a minimum of one last deployed release.
Releases deployed to QA stage for 30 days, with a minimum of two last deployed releases.
Releases deployed to Dev stage for 10 days, with a minimum of one last deployed release.
The following example retention policy for a release pipeline meets the above requirements:

In this example, if a release that is deployed to Dev is not promoted to QA for 10 days, it is a potential candidate
for deletion. However, if that same release is deployed to QA eight days after being deployed to Dev, its retention
timer is reset, and it is retained in the system for another 30 days.
When specifying custom policies per pipeline, you cannot exceed the maximum limits set by the administrator.
Interaction between build and release retention policies
The build linked to a release has its own retention policy, which may be shorter than that of the release. If you
want to retain the build for the same period as the release, set the Retain associated artifacts checkbox for
the appropriate stages. This overrides the retention policy for the build, and ensures that the artifacts are
available if you need to redeploy that release.
When you delete a release pipeline, delete a release, or when the retention policy deletes a release automatically,
the retention policy for the associated build will determine when that build is deleted.

NOTE
In TFS, interaction between build and release retention is available in TFS 2017 and newer.

Set test retention policies


You can set manual and automated test run policies.
Manual test-runs retention policies
To delete manual test results after a specific number of days, set the retention limit at the project level. Azure
DevOps keeps manual test results related to builds, even after you delete those builds. That way, build policies
don't delete your test results before you can analyze the data.
1. Sign into your Azure DevOps. You'll need at least project administrator permissions.
2. Go to your project and then select project settings at the bottom of the page.

3. In the Retention page under the Test section, select a limit for how long you want to keep manual test data.
Automated test-runs retention policies
By default, Azure DevOps keeps automated test results related to builds only as long as you keep those builds. To
keep test results after you delete your builds, edit the build retention policy. If you use Git for version control, you
can specify how long to keep automated test results based on the branch.
1. Sign into Azure DevOps. You'll need at least build level permissions to edit build pipelines.
2. Go to your project and then select project settings at the bottom of the page.

3. Select Settings under Pipelines and modify your retention policies.


Other automated test results
To clean up automated test results that are left over from deleted builds or test results that aren't related to
builds, for example, results published from external test systems, set the retention limits at the project level as
shown in the Manual test-runs retention policies section above.

Set artifact retention policies


You can set artifact retention policies for pipeline runs in the Pipeline settings.
1. Sign in to your project ( https://dev.azure.com/{yourorganization}/{yourproject} ).
2. Go to the Settings tab of your project's settings.
3. Select Settings in Pipelines .
4. Edit Days to keep artifacts, symbols, and attachments .

Use the Copy Files task to save data longer


You can use the Copy Files task to save your build and artifact data for longer than what is set in the retention
policies. The Copy Files task is preferable to the Publish Build Artifacts task because data saved with the
Publish Build Artifacts task will get periodically cleaned up and deleted.
YAML
Classic
- task: CopyFiles@2
displayName: 'Copy Files to: \\mypath\storage\$(Build.BuildNumber)'
inputs:
SourceFolder: '$(Build.SourcesDirectory)'
Contents: '_buildOutput/**'
TargetFolder: '\\mypath\storage\$(Build.BuildNumber)'

You can also customize these policies on a branch-by-branch basis if you are building from Git repositories.

Global build retention policy


You can specify build retention policy defaults and maximums for a project collection. You can also specify when
builds are permanently destroyed (removed from the Deleted tab in the build explorer).
TFS 2017 and newer: https://{your_server}/tfs/DefaultCollection/_admin/_buildQueue

TFS 2015.3: http://{your_server}:8080/tfs/DefaultCollection/_admin/_buildQueue

TFS 2015 RTM: http://{your_server}:8080/tfs/DefaultCollection/_admin/_buildQueue#_a=settings

The maximum retention policy sets the upper limit for how long runs can be retained for all build pipelines.
Authors of build pipelines cannot configure settings for their definitions beyond the values specified here.
The default retention policy sets the default retention values for all the build pipelines. Authors of build
pipelines can override these values.
The Permanently destroy runs setting helps you keep the runs for a certain period of time after they are deleted.
This policy cannot be overridden in individual build pipelines.

Git repositories
If your repository type is one of the following, you can define multiple retention policies with branch filters:
Azure Repos Git or TFS Git.
GitHub.
Other/external Git.
For example, your team may want to keep:
User branch builds for five days, with a minimum of a single successful or partially successful build for each
branch.
Main and feature branch builds for 10 days, with a minimum of three successful or partially successful builds
for each of these branches. You exclude a special feature branch that you want to keep for a longer period of
time.
Builds from the special feature branch and all other branches for 15 days, with a minimum of a single
successful or partially successful build for each branch.
The following example retention policy for a build pipeline meets the above requirements:
When specifying custom policies for each pipeline, you cannot exceed the maximum limits set by the administrator.
Clean up pull request builds
If you protect your Git branches with pull request builds, then you can use retention policies to automatically
delete the completed builds. To do it, add a policy that keeps a minimum of 0 builds with the following branch
filter:

refs/pull/*

TFVC and Subversion repositories


For TFVC and Subversion repository types you can modify a single policy with the same options shown above.
Policy order
When the system is purging old builds, it evaluates each build against the policies in the order you have
specified. You can drag and drop a policy lower or higher in the list to change this order.
The "All" branches policy is automatically added as the last policy in the evaluation order to enforce the
maximum limits for all other branches.
FAQ
If I mark a run or a release to be retained indefinitely, does the retention policy still apply?
No. Neither the pipeline's retention policy nor the maximum limits set by the administrator are applied when you
mark an individual run or release to be retained indefinitely. It will remain until you stop retaining it indefinitely.
How do I specify that runs deployed to production will be retained longer?
If you use classic releases to deploy to production, then customize the retention policy on the release pipeline.
Specify the number of days that releases deployed to production must be retained. In addition, indicate that runs
associated with that release are to be retained. This will override the run retention policy.
If you use multi-stage YAML pipelines to production, the only retention policy you can configure is in the project
settings. You cannot customize retention based on the environment to which the build is deployed.
I did not mark runs to be retained indefinitely. However, I see a large number of runs being retained. How can
I prevent this?
This could be for one of the following reasons:
The runs are marked by someone in your project to be retained indefinitely.
The runs are consumed by a release, and the release holds a retention lock on these runs. Customize the
release retention policy as explained above.
If you believe that the runs are no longer needed or if the releases have already been deleted, then you can
manually delete the runs.
How does 'minimum releases to keep' setting work?
Minimum releases to keep is defined at the stage level. It means that Azure DevOps always retains the given
number of last deployed releases for a stage, even if those releases are outside the retention period. A release is
counted toward minimum releases to keep for a stage only when the deployment to that stage has started. Both
successful and failed deployments are counted; releases that are waiting for approval are not.
How is retention period decided when release is deployed to multiple stages having different retention
period?
The final retention period is decided by taking the days to retain setting of every stage to which the release is
deployed and using the maximum among them. Minimum releases to keep is governed at the stage level and
does not change based on whether the release is deployed to one stage or several. Retain associated artifacts
applies when the release is deployed to a stage for which it is set to true.
I deleted a stage for which I have some old releases. What retention will be considered for this case?
Because the stage is deleted, its stage-level retention settings no longer apply. Azure DevOps falls back to the
project-level default retention in this case.
My organization requires us to retain builds and releases longer than what is allowed in the settings. How can I
request a longer retention?
The only way to retain a run or a release longer than what is allowed through retention settings is to manually
mark it to be retained indefinitely. There is no way to configure a longer retention setting. You can also explore
the possibility of using the REST APIs in order to download information and artifacts about the runs and upload
them to your own storage or artifact repository.
I lost some of the runs. Is there any way to get them back?
If you believe that you have lost the runs due to a bug in the service, then create a support ticket immediately to
recover the lost information. If the runs have been deleted as expected due to a retention policy or if the runs
have been deleted longer than a week ago, then it is not possible to recover the lost runs.
How do I use the Build.Cleanup capability of agents?
Setting a Build.Cleanup capability on agents will cause the pool's cleanup jobs to be directed to just those
agents, leaving the rest free to do regular work. When a pipeline run is deleted, artifacts stored outside of Azure
DevOps are cleaned up through a job run on the agents. When the agent pool gets saturated with cleanup jobs,
this can cause a problem. The solution to that is to designate a subset of agents in the pool that are the cleanup
agents. If any agents have Build.Cleanup set, only those agents will run the cleanup jobs, leaving the rest of the
agents free to continue running pipeline jobs.
Are automated test results that are published as part of a release retained until the release is deleted?
Test results published within a stage of a release are associated with both the release and the run. These test
results are retained as specified by the retention policy configured for the run and for the test results. If you are
not deploying Team Foundation or Azure Pipelines Build, and are still publishing test results, the retention of
these results is governed by the retention settings of the release they belong to.
Are manual test results deleted?
No. Manual test results are not deleted.
Configure and pay for parallel jobs
11/2/2020 • 13 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
This article describes the licensing model for Azure Pipelines in Team Foundation Server 2017 (TFS 2017) or
newer. We don't charge you for Team Foundation Build (TFBuild) so long as you have a TFS Client Access License
(CAL).
A TFS parallel job gives you the ability to run a single release at a time in a project collection. You can keep
hundreds or even thousands of release jobs in your collection. But, to run more than one release at a time, you
need additional parallel jobs.
One free parallel job is included with every collection in a Team Foundation Server. Every Visual Studio
Enterprise subscriber in a Team Foundation Server contributes one additional parallel job.
You can buy additional private jobs from the Visual Studio Marketplace.

IMPORTANT
Starting with Azure DevOps Server 2019, you do not have to pay for self-hosted concurrent jobs in releases. You are only
limited by the number of agents that you have.

Learn how to estimate how many parallel jobs you need and buy more parallel jobs for your organization.

What is a parallel job?


When you define a pipeline, you can define it as a collection of jobs. When a pipeline runs, you can run multiple
jobs as part of that pipeline. Each running job consumes a parallel job that runs on an agent. When there aren't
enough parallel jobs available for your organization, the jobs are queued up and run one after the other.
In Azure Pipelines, you can run parallel jobs on Microsoft-hosted infrastructure or your own (self-hosted)
infrastructure. Each parallel job allows you to run a single job at a time in your organization. You do not need to
pay for parallel jobs if you are using an on-premises server. The concept of parallel jobs only applies to Azure
DevOps Services.
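As an illustration, the following sketch defines two independent jobs. With two parallel jobs available they can run at the same time; with only one, they run one after the other. The pool images and steps are examples only.

jobs:
- job: Windows
  pool:
    vmImage: 'windows-latest'
  steps:
  - script: echo Building on Windows
- job: Linux
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - script: echo Building on Linux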
Microsoft-hosted vs. self-hosted parallel jobs
If you want to run your jobs on machines that Microsoft manages, use Microsoft-hosted parallel jobs. Your jobs
will run on Microsoft-hosted agents.
If you want Azure Pipelines to orchestrate your builds and releases, but use your own machines to run them, use
self-hosted parallel jobs. For self-hosted parallel jobs, you'll start by deploying our self-hosted agents on your
machines. You can register any number of these self-hosted agents in your organization.

How much do parallel jobs cost?


We provide a free tier of service by default in every organization for both hosted and self-hosted parallel jobs.
Parallel jobs are purchased at the organization level, and they are shared by all projects in an organization.
Microsoft-hosted
Self-hosted
For Microsoft-hosted parallel jobs, you get 10 free Microsoft-hosted parallel jobs that can run for up to 360
minutes (6 hours) each time for public projects. For private projects, you get one free job that can run for up to
60 minutes each time. There is no time limit on parallel jobs for public projects and a 30 hour time limit per
month for private projects.

                   NUMBER OF PARALLEL JOBS                                        TIME LIMIT

Public project     10 free Microsoft-hosted parallel jobs that can run            No overall time limit per month
                   for up to 360 minutes (6 hours) each time

Private project    One free job that can run for up to 60 minutes each time      1,800 minutes (30 hours) per month

When the free tier is no longer sufficient, you can pay for additional capacity per parallel job. Paid parallel jobs
remove the monthly time limit and allow you to run each job for up to 360 minutes (6 hours). Buy Microsoft-
hosted parallel jobs.
When you purchase your first Microsoft-hosted parallel job, the number of parallel jobs you have in the
organization is still 1. To be able to run two jobs concurrently, you will need to purchase two parallel jobs if you
are currently on the free tier. The first purchase only removes the time limits on the first job.

TIP
If your pipeline exceeds the maximum job timeout, try splitting your pipeline into multiple jobs. For more information on
jobs, see Specify jobs in your pipeline.

Do I need parallel jobs in TFS 2015? Short answer: no. More details

How many parallel jobs do I need?


As the number of queued builds and releases exceeds the number of parallel jobs you have, your build and
release queues will grow longer. When you find the queue delays are too long, you can purchase additional
parallel jobs as needed.
Figure out how many parallel jobs you need by first seeing how many parallel jobs your organization currently
uses:
1. Browse to Organization settings > Pipelines > Retention and parallel jobs > Parallel jobs .
URL example: https://{your_organization}/_admin/_buildQueue?_a=resourceLimits

2. View the maximum number of parallel jobs that are available in your organization.
3. Select View in-progress jobs to display all the builds and releases that are actively consuming an
available parallel job or that are queued waiting for a parallel job to be available.
Estimate costs
A simple rule of thumb: Estimate that you'll need one parallel job for every four to five users in your
organization.
In the following scenarios, you might need multiple parallel jobs:
If you have multiple teams, and if each of them require CI, you'll likely need a parallel job for each team.
If your CI trigger applies to multiple branches, you'll likely need a parallel job for each active branch.
If you develop multiple applications by using one organization or server, you'll likely need additional parallel
jobs: one to deploy each application at the same time.

How do I buy more parallel jobs?


To buy more parallel jobs:
Billing must be set up for your organization
You need Project Collection Administrator or organization Owner permissions
Buy parallel jobs
Buy more parallel jobs within your organization settings:
1. Sign in to your organization ( https://dev.azure.com/{yourorganization} ).
2. Select Organization settings .
3. Select Parallel jobs under Pipelines, and then select either Purchase parallel jobs for Microsoft-hosted
jobs or Change for self-hosted jobs.
4. Enter your desired amount, and then Save .

How do I change the quantity of parallel jobs for my organization?


1. Sign in to your organization ( https://dev.azure.com/{yourorganization} ).
2. Select Organization settings .
3. Select Parallel jobs under Pipelines, and then select either Purchase parallel jobs or Change for
Microsoft-hosted jobs or Change for self-hosted jobs.
4. Enter a lesser or greater quantity of Microsoft-hosted or self-hosted jobs, and then select Save .

IMPORTANT
Hosted XAML build controller isn't supported. If you have an organization where you need to run XAML builds, set up an
on-premises build server and switch to an on-premises build controller. For more information about the hosted XAML
model, see Get started with XAML.

How is a parallel job consumed in DevOps Services?


Consider an organization that has only one Microsoft-hosted parallel job. This job allows users in that
organization to collectively run only one job at a time. When additional jobs are triggered, they are queued and
will wait for the previous job to finish.
If you use release or YAML pipelines, then a run consumes a parallel job only when it's being actively deployed to
a stage. While the release is waiting for an approval or a manual intervention, it does not consume a parallel job.
When you run a server job or deploy to a deployment group using release pipelines, you don't consume any
parallel jobs.
1. FabrikamFiber CI Build 102 (master branch) starts first.
2. Deployment of FabrikamFiber Release 11 is triggered by completion of FabrikamFiber CI Build 102.
3. FabrikamFiber CI Build 101 (feature branch) is triggered. The build can't start yet because Release 11's
deployment is active. So the build stays queued.
4. Release 11 waits for approvals. Fabrikam CI Build 101 starts because a release that's waiting for approvals
does not consume a parallel job.
5. Release 11 is approved. It resumes only after Fabrikam CI Build 101 is completed.

How is a parallel job consumed?


For example, a collection in a Team Foundation Server has one parallel job. This allows users in that collection to
run only one release at a time. When additional releases are triggered, they are queued and will wait for the
previous one to complete.
A release requires a parallel job only when it is being actively deployed to a stage. Waiting for an approval does
not consume a parallel job. However, waiting for a manual intervention in the middle of a deployment does
consume a parallel job.

1. FabrikamFiber Release 10 is first to be deployed.


2. Deployment of FabrikamFiber Release 11 starts after Release 10's deployment is complete.
3. Release 12 is queued because Release 11's deployment is active.
4. Release 11 waits for an approval. Release 12's deployment starts because a release waiting for approvals
does not consume a parallel job.
5. Even though Release 11 is approved, it resumes only after Release 12's deployment is completed.
6. Release 11 is waiting for manual intervention. Release 13 cannot start because the manual intervention state
consumes a parallel job.
Manual intervention does not consume a job in TFS 2017.1 and newer.

Parallel processing within a single release


Parallel processing within a single release does not require additional parallel jobs. So long as you have enough
agents, you can deploy to multiple stages in a release at the same time.
For example, suppose your collection has three parallel jobs. You can have more than three agents running at the
same time to perform parallel operations within releases. For instance, four or five agents might be actively
running jobs that originate from those three parallel jobs.

Parallel jobs in an organization


For example, here's an organization that has multiple Team Foundation Servers. Two of their users have Visual
Studio Enterprise subscriptions that they can use at the same time across all their on-premises servers and in
each collection so long as the customer adds them as users to both the servers as explained below.
Determine the number of parallel jobs you need
You can begin by seeing if your teams can get by with the parallel jobs you've got by default. As the number of
queued releases exceeds the number of parallel jobs you have, your release queues will grow longer. When you
find the queue delays are too long, you can purchase additional parallel jobs as needed.
Simple estimate
A simple rule of thumb: Estimate that you'll need one parallel job for every 10 users in your server.
Detailed estimate
In the following scenarios you might need multiple parallel jobs:
If you have multiple teams, if each of them require a CI build, and if each of the CI builds is configured to
trigger a release, then you'll likely need a parallel job for each team.
If you develop multiple applications in one collection, then you'll likely need additional parallel jobs: one
to deploy each application at the same time.

Use your Visual Studio Enterprise subscription benefit


Users who have Visual Studio Enterprise subscriptions are assigned to VS Enterprise access level in the Users
hub of TFS instance. Each of these users contributes one additional parallel job to each collection. You can use
this benefit on all Team Foundation Servers in your organization.
1. Browse to Server settings , Access levels .
URL example: http://{your_server}:8080/tfs/_admin/_licenses

2. On the left side of the page, click VS Enterprise .


3. Add your users who have Visual Studio Enterprise subscriptions.
After you've added these users, additional licenses will appear on the resource limits page described below.

Purchase additional parallel jobs


If you need to run more parallel releases, you can buy additional private jobs from the Visual Studio
marketplace. Since there is no way to directly purchase parallel jobs from Marketplace for a TFS instance at
present, you must first buy parallel jobs for an Azure DevOps organization. After you buy the private jobs for an
Azure DevOps organization, you enter the number of purchased parallel jobs manually on the resource limits
page described below.

View and manage parallel jobs


1. Browse to Collection settings , Pipelines , Resource limits .

URL example: http://{your_server}:8080/tfs/DefaultCollection/_admin/_buildQueue?_a=resourceLimits

2. View or edit the number of purchased parallel jobs.

FAQ
How do I qualify for the free tier of public projects?
We'll automatically apply the free tier limits for public projects if you meet both of these conditions:
Your pipeline is part of an Azure Pipelines public project.
Your pipeline builds a public repository from GitHub or from the same public project in your Azure DevOps
organization.
Can I assign a parallel job to a specific project or agent pool?
Currently, there isn't a way to partition or dedicate parallel job capacity to a specific project or agent pool. For
example:
You purchase two parallel jobs in your organization.
You start two runs in the first project, and both the parallel jobs are consumed.
You start a run in the second project. That run won't start until one of the runs in your first project is
completed.
Are there limits on who can use Azure Pipelines?
You can have as many users as you want when you're using Azure Pipelines. There is no per-user charge for
using Azure Pipelines. Users with both basic and stakeholder access can author as many builds and releases as
they want.
Are there any limits on the number of builds and release pipelines that I can create?
No. You can create hundreds or even thousands of pipelines for no charge. You can register any number of self-
hosted agents for no charge.
As a Visual Studio Enterprise subscriber, do I get additional parallel jobs for TFS and Azure Pipelines?
Yes. Visual Studio Enterprise subscribers get one parallel job in Team Foundation Server 2017 or later and one
self-hosted parallel job in each Azure DevOps Services organization where they are a member.
What about the option to pay for hosted agents by the minute?
Some of our earlier customers are still on a per-minute plan for the hosted agents. In this plan, you pay
$0.05/minute for the first 20 hours after the free tier, and $0.01/minute after 20 hours. Because of the following
limitations in this plan, you might want to consider moving to the parallel jobs model:
When you're using the per-minute plan, you can run only one job at a time.
If you run builds for more than 14 paid hours in a month, the per-minute plan might be less cost-effective
than the parallel jobs model.
I use XAML build controllers with my organization. How am I charged for those?
You can register one XAML build controller for each self-hosted parallel job in your organization. Your
organization gets at least one free self-hosted parallel job, so you can register one XAML build controller for no
additional charge. For each additional XAML build controller, you'll need an additional self-hosted parallel job.
Who can use the system?
TFS users with a TFS CAL can author as many releases as they want.
To approve releases, a TFS CAL is not necessary; any user with stakeholder access can approve or reject releases.
Do I need parallel jobs to run builds on TFS?
No, on TFS you don't need parallel jobs to run builds. You can run as many builds as you want at the same time
for no additional charge.
Do I need parallel jobs to manage releases in versions before TFS 2017?
No.
In TFS 2015, so long as your users have a TFS CAL, they can manage releases for no additional charge in trial
mode. We called it "trial mode" to indicate that we would eventually charge for managing releases. Despite this
label, we fully support managing releases in TFS 2015.
Pipeline permissions and security roles
11/2/2020 • 11 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

To support security of your pipeline operations, you can add users to a built-in security group, set individual
permissions for a user or group, or add users to pre-defined roles. You manage security for the following objects
from Azure Pipelines in the web portal, either from the user or admin context.
This topic provides a description of the permissions and roles used to secure operations. To learn how to add a
user or group to Azure Pipelines, see Users.
For permissions, you grant or restrict permissions by setting the permission state to Allow or Deny, either for a
security group or an individual user. For a role, you add a user or group to the role. To learn more about how
permissions are set, including inheritance, see About permissions and inheritance. To learn how inheritance is
supported for role-based membership, see About security roles.

Default permissions assigned to built-in security groups


Once you have been added as a team member, you are a member of the Contributors group. This allows you to
define and manage builds and releases. The most common built-in groups include Readers, Contributors, and
Project Administrators. These groups are assigned the default permissions as listed below.

NOTE
When the Free access to Pipelines for Stakeholders preview feature is enabled for the organization, Stakeholders get
access to all Build and Release features. Without this feature enabled, stakeholders can only view and approve
releases. To learn more, see Provide Stakeholders access to edit build and release pipelines.

Default permission assignments are defined for the Stakeholders, Readers, Contributors, Build Administrators,
Project Administrators, and Release Administrators groups across the following tasks:
View release pipelines
Define builds with continuous integration
Define releases and manage deployments
Approve releases
Azure Artifacts (5 users free)
Queue builds, edit build quality
Manage build queues and build qualities
Manage build retention policies, delete and destroy builds
Administer build permissions
Manage release permissions
Create and edit task groups
Manage task group permissions
View library items such as variable groups
Use and manage library items such as variable groups

Build
For build pipelines, default assignments are defined for the Stakeholders, Readers, Contributors, Build
Administrators, and Project Administrators groups across the following tasks:
View builds
View build pipeline
Administer build permissions
Delete or edit build pipeline
Delete or destroy builds
Edit build quality
Manage build qualities
Manage build queue
Override check-in validation by build
Queue builds
Retain indefinitely
Stop builds
Update build information

Release
For release pipelines, default assignments are defined for the Stakeholders, Readers, Contributors, Project
Administrators, and Release Administrators groups across the following tasks:
Approve releases
View releases
View release pipeline
Administer release permissions
Delete release pipeline or release stage
Delete releases
Edit release pipeline
Edit release stage
Manage deployments
Manage release approvers
Manage releases

Task groups
For task groups, default assignments are defined for the Stakeholders, Readers, Contributors, Build Administrators,
Project Administrators, and Release Administrators groups across the following tasks:
Administer task group permissions
Delete task group
Edit task group

The same defaults apply in TFS 2018 and earlier versions, where build and release pipelines are called build and
release definitions.

Security of agents and library entities


You use pre-defined roles and manage membership in those roles to configure security on agent pools. You can
configure this in a hierarchical manner either for all pools, or for an individual pool.
Roles are also defined to help you configure security on shared library entities such as variable groups and service
connections. Membership of these roles can be configured hierarchically, as well as at either the project level or the
individual entity level.

Pipeline permissions
Build and YAML pipeline permissions follow a hierarchical model. Defaults for all the permissions can be set at the
project level and can be overridden on an individual build pipeline.
To set the permissions at project level for all pipelines in a project, choose Security from the action bar on the
main page of Builds hub.
To set or override the permissions for a specific pipeline, choose Security from the context menu of the pipeline.
The following permissions are defined for pipelines. All of them can be set at both levels.

Administer build permissions: Can change any of the other permissions listed here.
Queue builds: Can queue new builds.
Delete build pipeline: Can delete build pipeline(s).
Delete builds: Can delete builds for a pipeline. Builds that are deleted are retained in the Deleted tab for a period of time before they are destroyed.
Destroy builds: Can delete builds from the Deleted tab.
Edit build pipeline: Can create pipelines and save any changes to a build pipeline, including configuration variables, triggers, repositories, and retention policy.
Edit build quality: Can add tags to a build.
Override check-in validation by build: Applies to TFVC gated check-in builds. This does not apply to PR builds.
Retain indefinitely: Can toggle the retain indefinitely flag on a build.
Stop builds: Can stop builds queued by other team members or by the system.
View build pipeline: Can view build pipeline(s).
View builds: Can view builds belonging to build pipeline(s).
Update build information: It is recommended to leave this alone. It's intended to enable service accounts, not team members.
Manage build qualities: Only applies to XAML builds.
Manage build queue: Only applies to XAML builds.

Default values for all of these permissions are set for team project collections and project groups. For example,
Project Collection Administrators , Project Administrators , and Build Administrators are given all of the
above permissions by default.
When it comes to security, there are different best practices and levels of permissiveness. While there's no one
right way to handle permissions, we hope these guidelines help you empower your team to work securely with builds:
In many cases you probably also want to set Delete build pipeline to Allow for your contributors. Otherwise these
team members can't delete even their own build pipelines.
Without Delete builds permission, users cannot delete even their own completed builds. However, keep in
mind that they can automatically delete old unneeded builds using retention policies.
We recommend that you do not grant these permissions directly to a person. A better practice is to add the
person to the build administrator group or another group, and manage permissions on that group.

Release permissions
Permissions for release pipelines follow a hierarchical model. Defaults for all the permissions can be set at the
project level and can be overridden on an individual release pipeline. Some of the permissions can also be
overridden on a specific stage within a pipeline. The hierarchical model helps you define default permissions for all
definitions at one extreme, and to lock down the production stage for an application at the other extreme.
To set permissions at project level for all release definitions in a project, open the shortcut menu from the icon
next to All release pipelines and choose Security .
To set or override the permissions for a specific release pipeline, open the shortcut menu from the icon next to
that pipeline name. Then choose Security to open the Permissions dialog.
To specify security settings for individual stages in a release pipeline, open the Permissions dialog by choosing
Security on the shortcut menu that opens from the ellipses (...) on a stage in the release pipeline editor.
The following permissions are defined for releases. The scope shown in parentheses for each permission indicates
whether it can be set at the project, release pipeline, or stage level.

Administer release permissions (Project, Release pipeline, Stage): Can change any of the other permissions listed here.
Create releases (Project, Release pipeline): Can create new releases.
Delete release pipeline (Project, Release pipeline): Can delete release pipeline(s).
Delete release stage (Project, Release pipeline, Stage): Can delete stage(s) in release pipeline(s).
Delete releases (Project, Release pipeline): Can delete releases for a pipeline.
Edit release pipeline (Project, Release pipeline): Can save any changes to a release pipeline, including configuration variables, triggers, artifacts, and retention policy, as well as configuration within a stage of the release pipeline. To make changes to a specific stage in a release pipeline, the user also needs Edit release stage permission.
Edit release stage (Project, Release pipeline, Stage): Can edit stage(s) in release pipeline(s). To save the changes to the release pipeline, the user also needs Edit release pipeline permission. This permission also controls whether a user can edit the configuration inside the stage of a specific release instance. The user also needs Manage releases permission to save the modified release.
Manage deployments (Project, Release pipeline, Stage): Can initiate a deployment of a release to a stage. This permission is only for deployments that are manually initiated by selecting the Deploy or Redeploy actions in a release. If the condition on a stage is set to any type of automatic deployment, the system automatically initiates deployment without checking the permission of the user that created the release. If the condition is set to start after some stage, manually initiated deployments do not wait for those stages to be successful.
Manage release approvers (Project, Release pipeline, Stage): Can add or edit approvers for stage(s) in release pipeline(s). This permission also controls whether a user can edit the approvers inside the stage of a specific release instance.
Manage releases (Project, Release pipeline): Can edit the configuration in releases. To edit the configuration of a specific stage in a release instance (including variables marked as settable at release time), the user also needs Edit release stage permission.
View release pipeline (Project, Release pipeline): Can view release pipeline(s).
View releases (Project, Release pipeline): Can view releases belonging to release pipeline(s).

Default values for all of these permissions are set for team project collections and project groups. For example,
Project Collection Administrators , Project Administrators , and Release Administrators are given all of
the above permissions by default. Contributors are given all permissions except Administer release
permissions . Readers , by default, are denied all permissions except View release pipeline and View releases .
Task group permissions
Task group permissions follow a hierarchical model. Defaults for all the permissions can be set at the project level
and can be overridden on an individual task group.
You use task groups to encapsulate a sequence of tasks already defined in a build or a release pipeline into a single
reusable task. You define and manage task groups in the Task groups tab in Azure Pipelines.

Administer task group permissions: Can add and remove users or groups to task group security.
Delete task group: Can delete a task group.
Edit task group: Can create, modify, or delete a task group.

Library roles and permissions


Permissions for library artifacts, such as variable groups and secure files, are managed by roles. You use a variable
group to store values that you want to make available across multiple build and release pipelines. You define and
manage variable groups and secure files in the Library tab in Azure Pipelines.

Administrator: Can edit/delete and manage security for library items.
Creator: Can create library items.
Reader: Can only read library items.
User: Can consume library items in pipelines.

Service connection security roles


You add users to the following roles from the project-level admin context, Services page. To create and manage
these resources, see Service connections for build and release.

User: Can use the endpoint when authoring build or release pipelines.
Administrator: Can manage membership of all other roles for the service connection as well as use the endpoint to author build or release pipelines. The system automatically adds the user that created the service connection to the Administrator role for that service connection.

Deployment pool security roles


You add users to the following roles from the collection-level admin context, Deployment Pools page. To create
and manage deployment pools, see Deployment groups.
Reader: Can only view deployment pools.
Service Account: Can view agents, create sessions, and listen for jobs from the agent pool.
User: Can view and use the deployment pool for creating deployment groups.
Administrator: Can administer, manage, view, and use deployment pools.

Environment permissions
You can use roles to control who can create, view, and manage environments. When you create an environment in
a YAML pipeline, contributors and project administrators are granted the administrator role. When you create an
environment through the UI, only the creator has the administrator role.
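
For example, a deployment job in a YAML pipeline that targets an environment creates that environment automatically if it doesn't already exist, and the roles described above are then assigned. A minimal sketch (the environment name shown is illustrative):

jobs:
- deployment: DeployWeb
  displayName: Deploy the web app
  pool:
    vmImage: ubuntu-latest
  # Referencing the environment creates it if it doesn't exist yet.
  environment: smarthotel-prod
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo Deploying to $(Environment.Name)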

Related notes
Set build and release permissions
Default permissions and access
Permissions and groups reference
Add users to Azure Pipelines
11/2/2020 • 6 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

If your teammates want to edit pipelines, then have an administrator add them to your project:
1. Make sure you are a member of the Project Administrators group (learn more).
2. Go to your project summary: https://dev.azure.com/{your-organization}/{your-project}

3. Invite the teammates to join the project.

4. After the teammates accept the invitation, ask them to verify that they can create and edit pipelines.

Confirm that contributors have pipeline permissions


NOTE
A security best practice is to only allow required users and groups for pipeline permissions. The contributors group may be
too broad in a given project.
If you created your project after about October 2018, then the above procedure is probably sufficient. However, in
some cases your team members might see errors or grayed-out controls when they try to work with pipelines. In
these cases, make sure that your project contributors have the necessary permissions:
1. Make sure you are a member of the Build Administrators group or the Project Administrators group (learn
more).
2. Open the build security dialog box.

3. On the permissions dialog box, make sure the following permissions are set to Allow.

Permissions for build and release functions are primarily set at the object-level for a specific build or release, or for
select tasks, at the collection level. For a simplified view of permissions assigned to built-in groups, see
Permissions and access.
In addition to permission assignments, you manage security for several resources—such as variable groups,
secure files, and deployment groups—by adding users or groups to a role. You grant or restrict permissions by
setting the permission state to Allow or Deny, either for a security group or an individual user. For definitions of
each build and release permission and role, see Build and release permissions.

Set permissions for build pipelines


1. From the web portal Build-Release hub, Builds page, click Security to set the permissions for all build
pipelines.

To set the permissions for a specific build pipeline, open the context menu for the build and click Security.

2. Choose the group you want to set permissions for, and then change the permission setting to Allow or
Deny.
For example, here we change the permission for Edit build pipeline for the Contributors group to Allow.

3. Save your changes.

Set permissions for release pipelines


1. From the web portal Build-Release hub, Releases page, open the Security dialog for all release pipelines.

If you want to manage the permissions for a specific release, then open the Security dialog for that release.
2. Choose the group you want to set permissions for, and then change the permission setting to Allow or
Deny.
For example, here we deny access to several permissions for the Contributors group.

3. Save your changes.

Manage Library roles for variable groups, secure files, and deployment
groups
Permissions for variable groups, secure files, and deployment groups are managed by roles. For a description of
the roles, see About security roles.

NOTE
Feature availability : These features are available on Azure Pipelines and TFS 2017 and later versions.

You can set the security for all artifacts for a project, as well as set the security for individual artifacts. The method
is similar for all three artifact types. You set the security for variable groups and secure files from Azure
Pipelines, Library page, and for deployment groups, from the Deployment groups page.
For example, here we show how to set the security for variable groups.
1. From the Build-Release hub, Library page, open the Security dialog for all variable groups.
If you want to manage the permissions for a specific variable group, then open the Security dialog for that
group.

2. Add the user or group and choose the role you want them to have.
For example, here we deny access to several permissions for the Contributors group.

3. Click Add .

Manage task group permissions


Permissions for task groups are subject to a hierarchical model. You use task groups to encapsulate a sequence of
tasks already defined in a build or a release pipeline into a single reusable task. You define and manage task
groups in the Task groups tab of Azure Pipelines .

NOTE
Feature availability : These features are available on Azure Pipelines and TFS 2017 and later versions.

1. From the web portal Build-Release hub, Task groups page, open the Security dialog for all task groups.

If you want to manage the permissions for a specific task group, then open the Security dialog for that
group.
2. Add the user or group and then set the permissions you want them to have.
For example, here we add Raisa and set her permissions to Administer all task groups.

3. Click Add .

Set collection-level permissions to administer build resources


Set collection-level permissions to administer build resources
1. From the web portal user context, open the admin context by clicking the gear Settings icon and
choosing Organization settings or Collection settings .
2. Click Security , and then choose the group whose permissions you want to modify.
Here we choose the Build Administrators group and change the Use build resources permission. For a
description of each permission, see Permissions and groups reference, Collection-level permissions.

3. Save your changes.

Manage permissions for agent pools and service connections


You manage the security for agent pools and service connections by adding users or groups to a role. The method
is similar for both agent pools and service connections. You will need to be a member of the Project Administrator
group to manage the security for these resources.
NOTE
Feature availability : These features are available on Azure Pipelines and TFS 2015 and later versions.

For example, here we show how to add a user to the Administrator role for a service connection.
1. From the web portal, click the gear Settings icon to open the project settings admin context.
2. Click Services, click the service connection that you want to manage, and then click Roles.

3. Add the user or group and choose the role you want them to have. For a description of each role, see About
security roles.
For example, here we add Raisa to the Administrator role.

4. Click Add .

Manage permissions for agent pools and deployment pools


You manage the security for agent pools and deployment pools by adding users or groups to a role. The method is
similar for both types of pools.
NOTE
Feature availability : These features are available on Azure Pipelines and TFS 2018 and later versions.

You will need to be a member of the Project Collection Administrator group to manage the security for a pool.
Once you've been added to the Administrator role, you can then manage the pool. For a description of each role,
see About security roles.
1. From the web portal, click the gear Settings icon and choose Organization settings or Collection settings
to open the collection-level settings admin context.
2. Click Deployment Pools , and then open the Security dialog for all deployment pools.

If you want to manage the permissions for a specific deployment group, then open the Security dialog for
that group.
3. Add the user or group and choose the role you want them to have.
For example, here we add Raisa to the Administrator role.

4. Click Add .

Related notes
Default build and release permissions
Default permissions and access
Permissions and groups reference
Run Git commands in a script
11/2/2020 • 4 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

For some workflows you need your build pipeline to run Git commands. For example, after a CI build on a feature
branch is done, the team might want to merge the branch to master.
Git is available on Microsoft-hosted agents and on on-premises agents.

Enable scripts to run Git commands


NOTE
Before you begin, be sure your account's default identity is set with:

git config --global user.email "[email protected]"


git config --global user.name "Your Name"

Grant version control permissions to the build service


Go to the Version Control control panel tab:
Azure Repos: https://dev.azure.com/{your-organization}/{your-project}/_admin/_versioncontrol
On-premises: https://{your-server}:8080/tfs/DefaultCollection/{your-project}/_admin/_versioncontrol

If you see this page, select the repo, and then click the link:
On the Version Control tab, select the repository in which you want to run Git commands, and then select
Project Collection Build Ser vice . By default, this identity can read from the repo but cannot push any changes
back to it.

Grant permissions needed for the Git commands you want to run. Typically you'll want to grant:
Create branch: Allow
Contribute: Allow
Read: Allow
Create tag: Allow
When you're done granting the permissions, make sure to click Save changes .
Enable your pipeline to run command-line Git
On the variables tab set this variable:

Name: system.prefergit
Value: true

Allow scripts to access the system token


YAML
Classic
Add a checkout section with persistCredentials set to true .

steps:
- checkout: self
  persistCredentials: true

Learn more about checkout .


On the options tab select Allow scripts to access OAuth token .
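
For example, in a YAML pipeline the persisted credentials let later script steps push back to the repository, as long as the build service identity has been granted the permissions described above. A minimal sketch (the user name, email, and tag name are illustrative):

steps:
- checkout: self
  persistCredentials: true
- script: |
    git config user.email "[email protected]"
    git config user.name "Azure Pipelines"
    # The checkout step left an authentication header in the local git
    # config, so this push runs as the build service identity.
    git tag build-$(Build.BuildId)
    git push origin build-$(Build.BuildId)
  displayName: Tag the build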

Make sure to clean up the local repo


Certain kinds of changes to the local repository are not automatically cleaned up by the build pipeline. So make
sure to:
Delete local branches you create.
Undo git config changes.
If you run into problems using an on-premises agent, make sure the repo is clean:
YAML
Classic
Make sure checkout has clean set to true .

steps:
- checkout: self
  clean: true

On the repository tab set Clean to true.


On the variables tab create or modify the Build.Clean variable and set it to source

Examples
List the files in your repo
Make sure to follow the above steps to enable Git.
On the build tab add this task:
Task: Utility: Command Line
Tool: git
Arguments: ls-files

This lists the files in the Git repo.
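
In a YAML pipeline, the same thing is a single script step; no extra permissions are needed because the command only reads the local clone. A minimal sketch:

steps:
- script: git ls-files
  displayName: List the files in the repo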

Merge a feature branch to master


You want a CI build to merge to master if the build succeeds.
Make sure to follow the above steps to enable Git.
On the Triggers tab select Continuous integration (CI) and include the branches you want to build.
Create merge.bat at the root of your repo:

@echo off
ECHO SOURCE BRANCH IS %BUILD_SOURCEBRANCH%
IF %BUILD_SOURCEBRANCH% == refs/heads/master (
ECHO Building master branch so no merge is needed.
EXIT
)
SET sourceBranch=origin/%BUILD_SOURCEBRANCH:refs/heads/=%
ECHO GIT CHECKOUT MASTER
git checkout master
ECHO GIT STATUS
git status
ECHO GIT MERGE
git merge %sourceBranch% -m "Merge to master"
ECHO GIT STATUS
git status
ECHO GIT PUSH
git push origin
ECHO GIT STATUS
git status

On the build tab add this as the last task:

Task: Utility: Batch Script
Path: merge.bat

This runs merge.bat.
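
If you build with a YAML pipeline instead of a classic build, the same flow can be expressed as a script step on a Linux or macOS agent. A sketch, assuming credentials are persisted as described above and the build service identity has Contribute permission on the repository:

trigger:
  branches:
    include:
    - features/*

steps:
- checkout: self
  persistCredentials: true
- script: |
    # Nothing to do if the pipeline is already building master.
    if [ "$BUILD_SOURCEBRANCH" = "refs/heads/master" ]; then
      echo "Building master branch so no merge is needed."
      exit 0
    fi
    sourceBranch=origin/${BUILD_SOURCEBRANCH#refs/heads/}
    git checkout master
    git merge "$sourceBranch" -m "Merge to master ***NO_CI***"
    git push origin
  displayName: Merge to master
  condition: succeeded()

The ***NO_CI*** marker in the merge commit message prevents the push from queuing another CI build, as described in the FAQ below.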

FAQ
Can I run Git commands if my remote repo is in GitHub or another Git service such as Bitbucket Cloud?
Yes
Which tasks can I use to run Git commands?
Batch Script
Command Line
PowerShell
Shell Script
How do I avoid triggering a CI build when the script pushes?
Add ***NO_CI*** to your commit message. Here are examples:
git commit -m "This is a commit message ***NO_CI***"
git merge origin/features/hello-world -m "Merge to master ***NO_CI***"

Add [skip ci] to your commit message or description. Here are examples:
git commit -m "This is a commit message [skip ci]"
git merge origin/features/hello-world -m "Merge to master [skip ci]"

You can also use any of the variations below. This is supported for commits to Azure Repos Git, Bitbucket Cloud,
GitHub, and GitHub Enterprise Server.
[skip ci] or [ci skip]
skip-checks: true or skip-checks:true
[skip azurepipelines] or [azurepipelines skip]
[skip azpipelines] or [azpipelines skip]
[skip azp] or [azp skip]
***NO_CI***

How does enabling scripts to run Git commands affect how the build pipeline gets build sources?
When you set system.prefergit to true , the build pipeline uses command-line Git instead of LibGit2Sharp to
clone or fetch the source files.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Azure Pipelines with Microsoft Teams
11/2/2020 • 7 minutes to read

Azure Pipelines
If Microsoft Teams is your choice for collaboration, you can use the Azure Pipelines app built for Microsoft Teams to
easily monitor the events for your pipelines. Set up and manage subscriptions for builds, releases, YAML pipelines,
pending approvals and more from the app and get notifications for these events in your Teams channels.

NOTE
This feature is only available on Azure DevOps Services. Typically, new features are introduced in the cloud service first, and
then made available on-premises in the next major version or update of Azure DevOps Server. To learn more, see Azure
DevOps Feature Timeline.

Add Azure Pipelines app to your team


Visit the App store in Microsoft Teams and search for the Azure Pipelines app. Upon installing, a welcome message
from the app displays as shown in the following example. Use the @azure pipelines handle to start interacting with
the app.
Connect the Azure Pipelines app to your pipelines
Once the app is installed in your team, you can connect the app to the pipelines you want to monitor. The app asks
you to sign in and authenticate to Azure Pipelines before running any commands. Use Sign in with different
email if your Microsoft Teams and Azure DevOps accounts are in different tenants.
To start monitoring all pipelines in a project, use the following command inside a channel:

@azure pipelines subscribe [project url]

The project URL can be to any page within your project (except URLs to pipelines).
For example:

@azure pipelines subscribe https://dev.azure.com/myorg/myproject/

You can also monitor a specific pipeline using the following command:

@azure pipelines subscribe [pipeline url]

The pipeline URL can be to any page within your pipeline that has a definitionId or buildId/releaseId present in
the URL.
For example:

@azure pipelines subscribe https://dev.azure.com/myorg/myproject/_build?definitionId=123

or:
@azure pipelines subscribe https://dev.azure.com/myorg/myproject/_release?
definitionId=123&view=mine&_a=releases

For Build pipelines, the channel is subscribed to the Build completed notification. For Release pipelines, the channel
is subscribed to the Release deployment started, Release deployment completed, and Release deployment approval
pending notifications. For YAML pipelines, subscriptions are created for the Run stage state changed and Run stage
waiting for approval notifications.

Add or remove subscriptions


To manage the subscriptions for a channel, use the following command:
@azure pipelines subscriptions

This command lists all of the current subscriptions for the channel and allows you to add/remove subscriptions.
NOTE
Team administrators aren't able to remove or modify subscriptions created by Project administrators.

Using filters effectively to customize subscriptions


When a user subscribes to any pipeline, a few subscriptions are created by default without any filters being applied.
Often, users have the need to customize these subscriptions. For example, users may want to get notified only
when builds fail or when deployments are pushed to a production environment. The Azure Pipelines app supports
filters to customize what you see in your channel.
1. Run the @Azure Pipelines subscriptions command
2. Select View all subscriptions . In the list of subscriptions, if there is a subscription that is unwanted or should
be modified (Example: creating noise in the channel), select Remove
3. Scroll down and select the Add subscription button
4. Select the required pipeline and the event
5. Select the appropriate filters and save
Example: Get notifications only for failed builds

Example: Get notifications only if the deployments are pushed to prod environment
Approve deployments from your channel
You can approve deployments from within your channel without navigating to the Azure Pipelines portal by
subscribing to the Release deployment approval pending notification for classic Releases or the Run stage waiting
for approval notification for YAML pipelines. Both of these subscriptions are created by default when you subscribe
to the pipeline.
Whenever the running of a stage is pending for approval, a notification card with options to approve or reject the
request is posted in the channel. Approvers can review the details of the request in the notification and take
appropriate action. In the following example, the deployment was approved and the approval status is displayed on
the card.

The app supports all of the checks and approval scenarios present in the Azure Pipelines portal, like single
approver, multiple approvers (any one user, any order, in sequence), and teams as approvers. You can approve
requests as an individual or on behalf of a team.

Search and share pipeline information using compose extension


To help users search and share information about pipelines, the Azure Pipelines app for Microsoft Teams supports
a compose extension. You can search for pipelines by pipeline ID or by pipeline name. For the compose extension to
work, users have to sign in to the Azure Pipelines project that they are interested in, either by running the
@azure pipelines signin command or by signing in to the compose extension directly.
Previews of pipeline URLs
When a user pastes a pipeline URL, a preview is shown similar to that in the following image. This helps to keep
pipeline related conversations relevant and accurate. Users can choose between compact and expanded cards.
For this feature to work, users have to be signed-in. Once they are signed in, this feature will work for all channels
in a team in Microsoft Teams.

Remove subscriptions and pipelines from a channel


If you want to clean up your channel, use the following commands to unsubscribe from all pipelines within a
project.

@azure pipelines unsubscribe all [project url]

For example:

@azure pipelines unsubscribe all https://dev.azure.com/myorg/myproject

This command deletes all the subscriptions related to any pipeline in the project and removes the pipelines from
the channel.

IMPORTANT
Only project administrators can run this command.

Threaded notifications
To logically link a set of related notifications and also to reduce the space occupied by notifications in a channel,
notifications are threaded. All notifications linked to a particular run of a pipeline will be linked together.
The following example shows the compact view of linked notifications.

When expanded, you can see all of the linked notifications, as shown in the following example.
Commands reference
Here are all the commands supported by the Azure Pipelines app:

@azure pipelines subscribe [pipeline url / project url]: Subscribe to a pipeline or all pipelines in a project to receive notifications
@azure pipelines subscriptions: Add or remove subscriptions for this channel
@azure pipelines feedback: Report a problem or suggest a feature
@azure pipelines help: Get help on the slash commands
@azure pipelines signin: Sign in to your Azure Pipelines account
@azure pipelines signout: Sign out from your Azure Pipelines account
@azure pipelines unsubscribe all [project url]: Remove all pipelines (belonging to a project) and their associated subscriptions from a channel

NOTE
You can use the Azure Pipelines app for Microsoft Teams only with a project hosted on Azure DevOps Services at this time.
The user must be an admin of the project containing the pipeline to set up the subscriptions.
Notifications are currently not supported inside chat/direct messages.
Deployment approvals that have the Revalidate identity of approver before completing the approval policy applied are not supported.
'Third party application access via OAuth' must be enabled to receive notifications for the organization in Azure DevOps
(Organization Settings -> Security -> Policies)

Multi-tenant support
In your organization if you are using a different email or tenant for Microsoft Teams and Azure DevOps, perform
the following steps to sign in and connect based on your use case.

Case 1: You use [email protected] (tenant 1) in both Microsoft Teams and Azure DevOps. Sign in using the Sign in button.
Case 2: You use [email protected] (tenant 1) in Microsoft Teams and [email protected] (tenant 2) in Azure DevOps. Sign in to the Azure DevOps account. In the same browser, start a new tab, navigate to https://teams.microsoft.com/, run the signin command, and choose the Sign in button.
Case 3: You use [email protected] (tenant 1) in Microsoft Teams and [email protected] (tenant 2) in Azure DevOps. Sign in using Sign in with different email address; in the email id picker, use email2 to sign in to Azure DevOps.
Case 4: You use [email protected] (tenant 1) in Microsoft Teams and [email protected] (non-default tenant 3) in Azure DevOps. This scenario is not supported today.
Troubleshooting
If you are experiencing the following errors when using the Azure Pipelines app for Microsoft Teams, follow the
procedures in this section.
Sorry, something went wrong. Please try again.
Configuration failed. Please make sure that the organization '{organization name}' exists and that you have
sufficient permissions.
Sorry, something went wrong. Please try again.
The Azure Pipelines app uses the OAuth authentication protocol, and requires Third-party application access via
OAuth for the organization to be enabled. To enable this setting, navigate to Organization Settings > Security >
Policies, and set the Third-party application access via OAuth for the organization setting to On.

Configuration failed. Please make sure that the organization '{organization name }' exists and that you have
sufficient permissions.
Sign out of Azure DevOps by navigating to https://aka.ms/VsSignout using your browser.
Open an In private or incognito browser window and navigate to https://aex.dev.azure.com/me and sign in. In
the dropdown under the profile icon to the left, select the directory that contains the organization containing the
pipeline for which you wish to subscribe.
In the same browser , start a new tab and sign in to https://teams.microsoft.com/ . Run the
@Azure Pipelines signout command and then run the @Azure Pipelines signin command in the channel where
the Azure Pipelines app for Microsoft Teams is installed.
Select the Sign in button and you'll be redirected to a consent page like the one in the following example. Ensure
that the directory shown beside the email is same as what was chosen in the previous step. Accept and complete
the sign-in process.
If these steps don't resolve your authentication issue, reach out to us at Developer Community.

Related articles
Azure Boards with Microsoft Teams
Azure Repos with Microsoft Teams
Azure Pipelines with Slack
11/2/2020 • 6 minutes to read

Azure Pipelines
If you use Slack, you can use the Azure Pipelines app for Slack to easily monitor the events for your pipelines. Set
up and manage subscriptions for builds, releases, YAML pipelines, pending approvals and more from the app and
get notifications for these events in your Slack channels.

NOTE
This feature is only available on Azure DevOps Services. Typically, new features are introduced in the cloud service first, and
then made available on-premises in the next major version or update of Azure DevOps Server. To learn more, see Azure
DevOps Feature Timeline.

Add the Azure Pipelines app to your Slack workspace


Navigate to Azure Pipelines Slack app to install the Azure Pipelines app to your Slack workspace. Once added, you
will see a welcome message from the app as below. Use the /azpipelines handle to start interacting with the app.

Connect the Azure Pipelines app to your pipelines


Once the app has been installed in your Slack workspace, you can connect the app to the pipelines you want to
monitor. The app will ask you to authenticate to Azure Pipelines before running any commands.

To start monitoring all pipelines in a project, use the following slash command inside a channel:

/azpipelines subscribe [project url]

The project URL can be to any page within your project (except URLs to pipelines).
For example:

/azpipelines subscribe https://dev.azure.com/myorg/myproject/

You can also monitor a specific pipeline using the following command:

/azpipelines subscribe [pipeline url]

The pipeline URL can be to any page within your pipeline that has definitionId or buildId/releaseId in the URL.
For example:

/azpipelines subscribe https://dev.azure.com/myorg/myproject/_build?definitionId=123

or:

/azpipelines subscribe https://dev.azure.com/myorg/myproject/_release?definitionId=123&view=mine&_a=releases

The subscribe command gets you started with a few subscriptions by default. For Build pipelines, the channel is
subscribed to Build completed notification. For Release pipelines, the channel will start receiving Release
deployment started, Release deployment completed and Release deployment approval pending notifications. For
YAML pipelines, subscriptions are created for the Run stage state changed and Run stage waiting for approval
notifications.

Manage subscriptions
To manage the subscriptions for a channel, use the following command:
/azpipelines subscriptions

This command will list all the current subscriptions for the channel and allow you to add new subscriptions.
NOTE
Team administrators aren't able to remove or modify subscriptions created by Project administrators.
Using filters effectively to customize subscriptions
When a user subscribes to any pipeline, a few subscriptions are created by default without any filters being applied.
Often, users have the need to customize these subscriptions. For example, users may want to hear only about failed
builds or get notified only when deployments are pushed to production. The Azure Pipelines app supports filters to
customize what you see in your channel.
1. Run the /azpipelines subscriptions command
2. In the list of subscriptions, if there is a subscription that is unwanted or must be modified (Example: creating
noise in the channel), select the Remove button
3. Select the Add subscription button
4. Select the required pipeline and the desired event
5. Select the appropriate filters to customize your subscription
Example: Get notifications only for failed builds

Example: Get notifications only if the deployments are pushed to production environment
Approve deployments from your channel
You can approve deployments from within your channel without navigating to the Azure Pipelines portal by
subscribing to the Release deployment approval pending notification for classic Releases or the Run stage waiting
for approval notification for YAML pipelines. Both of these subscriptions are created by default when you subscribe
to the pipeline.

Whenever the running of a stage is pending for approval, a notification card with options to approve or reject the
request is posted in the channel. Approvers can review the details of the request in the notification and take
appropriate action. In the following example, the deployment was approved and the approval status is displayed on
the card.

The app supports all the checks and approval scenarios present in Azure Pipelines portal, like single approver,
multiple approvers (any one user, any order, in sequence) and teams as approvers. You can approve requests as an
individual or on behalf of a team.

Previews of pipeline URLs


When a user pastes a pipeline URL, a preview is shown similar to that in the following image. This helps to keep
pipeline related conversations relevant and accurate.

For this feature to work, users have to be signed-in. Once they are signed in, this feature will work for all channels
in a workspace.

Remove subscriptions and pipelines from a channel


If you want to clean up your channel, use the following commands to unsubscribe from all pipelines within a
project.

/azpipelines unsubscribe all [project url]

For example:

/azpipelines unsubscribe all https://dev.azure.com/myorg/myproject

This command deletes all the subscriptions related to any pipeline in the project and removes the pipelines from
the channel.

IMPORTANT
Only project administrators can run this command.

Commands reference
Here are all the commands supported by the Azure Pipelines app:

/azpipelines subscribe [pipeline url / project url]: Subscribe to a pipeline or all pipelines in a project to receive notifications
/azpipelines subscriptions: Add or remove subscriptions for this channel
/azpipelines feedback: Report a problem or suggest a feature
/azpipelines help: Get help on the slash commands
/azpipelines signin: Sign in to your Azure Pipelines account
/azpipelines signout: Sign out from your Azure Pipelines account
/azpipelines unsubscribe all [project url]: Remove all pipelines (belonging to a project) and their associated subscriptions from a channel

Notifications in Private channels


The Azure Pipelines app can help you monitor pipeline activity in your private channels as well. You will need
to invite the bot to your private channel by using /invite @azpipelines. After that, you can set up and manage your
notifications the same way as you would for a public channel.

NOTE
You can use the Azure Pipelines app for Slack only with a project hosted on Azure DevOps Services at this time.
The user has to be an admin of the project containing the pipeline to set up the subscriptions.
Notifications are currently not supported inside direct messages.
Deployment approvals that have the 'Revalidate identity of approver before completing the approval' policy applied are not supported.
'Third party application access via OAuth' must be enabled to receive notifications for the organization in Azure DevOps
(Organization Settings -> Security -> Policies)

Troubleshooting
If you are experiencing the following errors when using the Azure Pipelines App for Slack, follow the procedures in
this section.
Sorry, something went wrong. Please try again.
Configuration failed. Please make sure that the organization '{organization name}' exists and that you have
sufficient permissions.
Sorry, something went wrong. Please try again.
The Azure Pipelines app uses the OAuth authentication protocol, and requires Third-party application access via
OAuth for the organization to be enabled. To enable this setting, navigate to Organization Settings > Security >
Policies, and set the Third-party application access via OAuth for the organization setting to On.
Configuration failed. Please make sure that the organization '{organization name }' exists and that you have
sufficient permissions.
Sign out of Azure DevOps by navigating to https://aka.ms/VsSignout using your browser.
Open an In private or incognito browser window and navigate to https://aex.dev.azure.com/me and sign in. In
the dropdown under the profile icon to the left, select the directory that contains the organization containing the
pipeline for which you wish to subscribe.

In the same browser , start a new tab, navigate to https://slack.com , and sign in to your work space (use web
client ). Run the /azpipelines signout command followed by the /azpipelines signin command.
Select the Sign in button and you'll be redirected to a consent page like the one in the following example. Ensure
that the directory shown beside the email is same as what was chosen in the previous step. Accept and complete
the sign-in process.
If these steps don't resolve your authentication issue, reach out to us at Developer Community.

Related articles
Azure Boards with Slack
Azure Repos with Slack
Create a service hook for Azure DevOps with Slack
Integrate with ServiceNow change management
11/2/2020 • 5 minutes to read

Azure Pipelines
Azure Pipelines and ServiceNow bring an integration of Azure Pipelines with ServiceNow Change Management to
enhance collaboration between development and IT teams. By including change management in CI/CD pipelines,
teams can reduce the risks associated with changes and follow service management methodologies such as ITIL,
while gaining all DevOps benefits from Azure Pipelines.
This topic covers:
Configuring ServiceNow for integrating with Azure Pipelines
Including ServiceNow change management process as a release gate
Monitoring change management process from releases
Keeping ServiceNow change requests updated with deployment result

Prerequisites
This tutorial extends the tutorial Use approvals and gates. You must have completed that tutorial first .
You'll also need a non-developer instance of ServiceNow to which applications can be installed from the store.

Configure the ServiceNow instance


1. Install the Azure Pipelines application on your ServiceNow instance. You'll need HI credentials to
complete the installation. Refer to the documentation for more details on getting and installing applications
from the ServiceNow store.
2. Create a new user in ServiceNow and grant it the x_mioms_azpipeline.pipelinesExecution role.
Set up the Azure DevOps organization
1. Install the ServiceNow Change Management extension on your Azure DevOps organization.
Follow the instructions to "Get it Free"
2. Create a new ServiceNow service connection in the Azure DevOps project used for managing your releases.
Enter the user name and password for the service account created in ServiceNow.

Configure a release pipeline


1. In your release pipeline, add a pre-deployment gate for ServiceNow Change Management.

2. Select the ServiceNow service connection you created earlier and enter the values for the properties of the
change request.
Inputs for the gate:
Short description: A summary of the change.
Description: A detailed description of the change.
Category: The category of the change. For example, Hardware , Network , Software .
Priority: The priority of the change.
Risk: The risk level for the change.
Impact: The effect that the change has on the business.
Configuration Item: The configuration item (CI) that the change applies to.
Assignment group: The group that the change is assigned to.
Schedule of change request: The schedule of the change. The date and time should be in the UTC format
yyyy-MM-ddTHH:mm:ssZ . For example, 2018-01-31T07:56:59Z
Additional change request parameters: Additional properties of the change request you want to set. The name
must be the field name (not the label) prefixed with u_, for example u_backout_plan. The value must
be a valid value for the field in ServiceNow; invalid entries are ignored.
Gate success criteria:
Desired state: The gate will succeed, and the pipeline continues when the change request status is the
same as the value you specify.
Gate output variables:
CHANGE_REQUEST_NUMBER : Number of the change request created in ServiceNow.
CHANGE_SYSTEM_ID : System ID of the change request created in ServiceNow.
The ServiceNow gate produces output variables. You must specify the reference name to be able to use
these output variables in the deployment workflow. Gate variables can be accessed by using
PREDEPLOYGATE as a prefix. For example, when the reference name is set to gate1 , the change number
can be obtained as $(PREDEPLOYGATE.gate1.CHANGE_REQUEST_NUMBER) .
3. At the end of your deployment process, add an agentless phase with a task to update the status of the
change after deployment.

Inputs for Update change request task :


Change request number: The number of the change request that you want to update.
Updated status of change request : The status of the change request that you want to update.
Close code and notes: Closure information for the change request.
Additional change request parameters: Additional properties of the change request you want to set.

Execute a release
1. Create a new release from the configured release pipeline in Azure DevOps
2. After completing the Dev stage, the pipeline creates a new change request in ServiceNow for the release
and waits for it to reach the desired state.
3. The values defined as gate parameters will be used. You can get the change number that was created from
the logs.

4. The ServiceNow change owner will see the release in the queue as a new change.

5. The release that caused the change to be requested can be tracked from the Azure DevOps Pipeline
metadata section of the change.

6. The change goes through its normal life cycle: Approval, Scheduled, and more until it is ready for
implementation.
7. When the change is ready for implementation (it is in the Implement state), the release in Azure DevOps
proceeds. The gates status will be as shown here:
8. After the deployment, the change request is closed automatically.

FAQs
Q: What versions of ServiceNow are supported?
A: The integration is compatible with ServiceNow versions Kingston and above.
Q: What types of change request can be managed with the integration?
A : Only normal change requests are currently supported with this integration.
Q: How do I set additional change properties?
A : You can specify additional change properties of the change request in the Additional change request
parameters field. The properties are specified as key-value pairs in JSON format, the name being the field name
(not the label) prefixed with u_ in ServiceNow and a valid value.
Q: Can I update custom fields in the change request with additional change request parameters?
A : If custom fields are defined in ServiceNow for the change requests, mapping of the custom fields in import set
transform map must be added. See ServiceNow Change Management Extension for details.
Q: I don't see drop-down values populated for Category, Status, and others. What should I do?
A : Change Management Core and Change Management - State Model plugins must be active on your ServiceNow
instance for the drop-downs to work. See Upgrade change management and Update change request states for
more details.

Related topics
Approvals and gates overview
Manual intervention
Use approvals and gates to control your deployment
Stages
Triggers

See also
Video: Deploy quicker and safer with gates in Azure Pipelines
Configure your release pipelines for safe deployments
Tutorial: Use approvals and gates to control your deployment
Twitter sentiment as a release gate
GitHub issues as a release gate
Author custom gates. Library with examples
Help and support
See our troubleshooting page
Get advice on Stack Overflow, and feel free to post your questions, search for answers, or suggest a feature on
our Azure DevOps Developer Community. See our Support page.
Continuously deploy from a Jenkins build
11/2/2020 • 4 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Azure Pipelines supports integration with Jenkins so that you can use Jenkins for Continuous Integration (CI) while
gaining several DevOps benefits from an Azure Pipelines release pipeline that deploys to Azure:
Reuse your existing investments in Jenkins build jobs
Track work items and related code changes
Get end-to-end traceability for your CI/CD workflow
Consistently deploy to a range of cloud services
Enforce quality of builds by gating deployments
Define workflows such as manual approval processes and CI triggers
Integrate Jenkins with JIRA and Azure Pipelines to show associated issues for each Jenkins job
Integrate with other service management tools such as ServiceNow
A typical approach is to use Jenkins to build an app from source code hosted in a Git repository such as GitHub and
then deploy it to Azure using Azure Pipelines.

Before you begin


You'll need the source code for your app hosted in a repository such as GitHub, Azure Repos, GitHub Enterprise
Server, Bitbucket Cloud, or any other source control provider that Jenkins can interact with.
You'll need a Jenkins server where you run your CI builds. You can quickly set up a Jenkins server on Azure.
You'll need a Jenkins project that builds your app. For example, you can build a Java app with Maven on Jenkins.

Link Jenkins with Azure Pipelines


Create a Jenkins service connection from the Service connections section of the project settings page.
In TFS, open the Services page from the "settings" icon in the top menu bar.
For more information, see Jenkins service connection. If you are not familiar with the general concepts in this
section, see Accessing your project settings and Creating and using a service connection.

Add a Jenkins artifact


Create a new release pipeline and add a Jenkins artifact to it. After you select the Jenkins service connection, you
can select an existing Jenkins job to deploy.
It's possible to store the output from a Jenkins build in Azure blob storage. If you have configured this in your
Jenkins project, choose Download artifacts from Azure storage and select the default version and source alias.
For more information, see Jenkins artifacts. If you are not familiar with the general concepts in this section, see
Creating a release pipeline and Release artifacts and artifact sources.

Define the deployment steps


Add the tasks you require to deploy your app to your chosen target in the Agent job section in the Tasks page of
your release pipeline. For example, add the Azure App Service Deploy task to deploy a web app.
YAML
Classic
Add the Azure App Service Deploy task YAML code to a job in the .yml file at the root of the repository.

...
jobs:
- job: DeployMyApp
  pool:
    name: Default
  steps:
  - task: AzureRmWebAppDeployment@4
    inputs:
      connectionType: 'AzureRM'
      azureSubscription: your-subscription-name
      appType: webAppLinux
      webAppName: 'MyApp'
      deployToSlotOrASE: false
      packageForLinux: '$(System.DefaultWorkingDirectory)/**/*.zip'
      takeAppOfflineFlag: true
...

YAML builds aren't yet available on TFS.


Whenever you trigger your Azure release pipeline, the artifacts published by the Jenkins CI job are downloaded and
made available for your deployment. You get full traceability of your workflow, including the commits associated
with each job.
See more details of the Azure App Service Deploy task. If you are not familiar with the general concepts in this
section, see Build and release jobs and Using tasks in builds and releases.

Enable continuous deployment


If your Jenkins server is hosted in Azure, or your Azure DevOps organization has direct visibility to your
Jenkins server, you can easily enable a continuous deployment (CD) trigger within your release pipeline that causes
a release to be created and a deployment started every time the source artifact is updated.
To enable continuous deployment for an Azure hosted or directly visible Jenkins server:
1. Open the continuous deployment trigger pane from the Pipelines page of your release pipeline.
2. Change the setting to Enabled .
3. Choose Add and select the branch you want to create the trigger for. Or select the default branch.
However, if you have an on-premises Jenkins server, or your Azure DevOps organization does not have direct
visibility to your Jenkins Server, you can trigger a release for an Azure pipeline from a Jenkins project using the
following steps:
1. Create a Personal Access Token (PAT) in your Azure DevOps or TFS organization. Jenkins requires this
information to access your organization. Ensure you keep a copy of the token information for upcoming
steps in this section.
2. Install the Team Foundation Server plugin on your Jenkins server.
3. Within your Jenkins project, you will find a new post build action named Trigger release in TFS/Team
Services. Add this action to your project.
4. Enter the collection URL for your Azure DevOps organization or TFS server as
https://<accountname>.visualstudio.com/DefaultCollection/

5. Leave username empty and enter your PAT as the password.


6. Select the Azure DevOps project and the release definition to trigger.
Now a new CD release will be triggered every time your Jenkins CI job is completed.

See also
Artifacts
Stages
Triggers
YAML schema reference
Use Terraform to manage infrastructure deployment
11/2/2020 • 8 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Terraform is a tool for building, changing and versioning infrastructure safely and efficiently. Terraform can manage
existing and popular cloud service providers as well as custom in-house solutions.
Configuration files describe to Terraform the components needed to run a single application or your entire
datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then
executes it to build the described infrastructure. As the configuration changes, Terraform is able to determine what
changed and create incremental execution plans which can be applied.
In this tutorial, you learn about:
The structure of a Terraform file
Building an application using an Azure CI pipeline
Deploying resources using Terraform in an Azure CD pipeline

Prerequisites
1. A Microsoft Azure account.
2. An Azure DevOps account.
3. Use the Azure DevOps Demo Generator to provision the tutorial project on your Azure DevOps organization.
This URL automatically selects the Terraform template in the demo generator.

Examine the Terraform file in your source code


This tutorial uses the PartsUnlimited project, which is a sample eCommerce website developed using .NET Core.
You will examine the Terraform file that defines the Azure resources required to deploy the PartsUnlimited website.
1. Navigate to the project created earlier using the Azure DevOps Demo Generator.
2. In the Repos tab of Azure Pipelines, select the terraform branch.
Make sure that you are now on the terraform branch and that the Terraform folder is present in the repo.
3. Select the webapp.tf file under the Terraform folder. Review the code.

terraform {
  required_version = ">= 0.11"
  backend "azurerm" {
    storage_account_name = "__terraformstorageaccount__"
    container_name       = "terraform"
    key                  = "terraform.tfstate"
    access_key           = "__storagekey__"
    features {}
  }
}

provider "azurerm" {
  version = "=2.0.0"
  features {}
}

resource "azurerm_resource_group" "dev" {
  name     = "PULTerraform"
  location = "West Europe"
}

resource "azurerm_app_service_plan" "dev" {
  name                = "__appserviceplan__"
  location            = "${azurerm_resource_group.dev.location}"
  resource_group_name = "${azurerm_resource_group.dev.name}"

  sku {
    tier = "Free"
    size = "F1"
  }
}

resource "azurerm_app_service" "dev" {
  name                = "__appservicename__"
  location            = "${azurerm_resource_group.dev.location}"
  resource_group_name = "${azurerm_resource_group.dev.name}"
  app_service_plan_id = "${azurerm_app_service_plan.dev.id}"
}

webapp.tf is a Terraform configuration file. Terraform uses its own file format, called HCL (HashiCorp
Configuration Language). The structure is similar to YAML. In this example, Terraform will deploy the Azure
resource group, app service plan, and app service required to deploy the website. However, since the names
of those resources are not yet known, they are marked with tokens that will be replaced with real values
during the release pipeline.
As an added benefit, this Infrastructure-as-Code (IaC) file can be managed as part of source control. You may
learn more about working with Terraform and Azure in this Terraform Basics lab.

Build the application using an Azure CI Pipeline


This DevOps project includes two separate pipelines for CI and CD. The CI pipeline produces the artifacts that will
be released via the CD pipeline at a later point.
1. Navigate to Pipelines and select the Terraform-CI pipeline.

2. Select Edit . This CI pipeline has tasks to compile the .NET Core project. These tasks restore dependencies,
build, test, and publish the output as a zip file which can be deployed to an app service.

3. In addition to the application build, the pipeline publishes Terraform files as build artifacts so that they will
be available to other pipelines, such as the CD pipeline to be used later. This is done via the Copy files task,
which copies the Terraform folder to the Artifacts directory.
4. Select Queue to queue a new build. Select Run to use the default options. When the build page appears,
select Agent job 1 . The build may take a few minutes to complete.

Release the application to Azure resources provisioned by Terraform


Now that the application has been built, it's time to release it. However, no deployment infrastructure has been
created yet. This is where Terraform comes in. By following the definition file reviewed earlier, Terraform will be able
to ensure the expected state of the Azure infrastructure meets the application's needs before it is published.
1. Navigate to Releases under Pipelines and select the Terraform-CD pipeline. Select Edit .

2. The CD pipeline has been configured to accept the artifacts published by the CI pipeline. There is only one
stage, which is the Dev stage that performs the deployment. Select it to review its tasks.
3. There are eight tasks defined in the release stage. Most of them require some configuration to work with the
target Azure account.

4. Select the Agent job and configure it to use the Azure Pipelines agent pool and vs2017-win2016
specification.
5. Select the Azure CLI task and configure it to use a service connection to the target Azure account. If the
target Azure account is under the same user logged in to Azure DevOps, then available subscriptions can be
selected and authorized from the dropdown. Otherwise, use the Manage link to manually create a service
connection. Once created, this connection can be reused for future tasks.

This task executes a series of Azure CLI commands to set up some basic infrastructure required to use
Terraform.

# this will create Azure resource group
call az group create --location westus --name $(terraformstoragerg)

call az storage account create --name $(terraformstorageaccount) --resource-group $(terraformstoragerg) --location westus --sku Standard_LRS

call az storage container create --name terraform --account-name $(terraformstorageaccount)

call az storage account keys list -g $(terraformstoragerg) -n $(terraformstorageaccount)

By default, Terraform stores state locally in a file named terraform.tfstate . When working with Terraform in
a team, use of a local file makes Terraform implementation complicated. With remote state, Terraform writes
the state data to a remote data store. Here the pipeline uses an Azure CLI task to create an Azure storage
account and storage container to store the Terraform state. For more information on Terraform remote state,
see Terraform's docs for working with Remote State.
6. Select the Azure PowerShell task and configure it to use the Azure Resource Manager connection type
and use the service connection created earlier.
This task uses PowerShell commands to retrieve the storage account key needed for the Terraform
provisioning.

# Using this script we will fetch storage key which is required in terraform file to authenticate backend storage account

$key=(Get-AzureRmStorageAccountKey -ResourceGroupName $(terraformstoragerg) -AccountName $(terraformstorageaccount)).Value[0]

Write-Host "##vso[task.setvariable variable=storagekey]$key"

7. Select the Replace tokens task. If you recall the webapp.tf file reviewed earlier, there were several
resources that were unknown at the time and marked with token placeholders, such as
terraformstorageaccount . This task replaces those tokens with variable values relevant to the
deployment, including those from the pipeline's Variables . You may review those under Variables if you
like, but return to Tasks afterwards.

8. Select the Install Terraform task. This installs and configures the specified version of Terraform on the
agent for the remaining tasks.
When running Terraform in automation, the focus is usually on the core plan/apply cycle. The next three
tasks follow these stages.

9. Select the Terraform init task. This task runs the terraform init command. This command looks through
all of the *.tf files in the current working directory and automatically downloads any of the providers
required for them. In this example, it will download the Azure provider because it is going to deploy Azure resources.
For more information, see Terraform's documentation for the init command.
Select the Azure subscription created earlier and enter terraform as the container. Note that the key is set
to terraform.tfstate .

10. Select the Terraform plan task. This task runs the terraform plan command. This command is used to
create an execution plan by determining what actions are necessary to achieve the desired state specified in
the configuration files. This is just a dry run and shows which actions will be performed. For more
information, see Terraform's documentation for the plan command.
Select the Azure subscription created earlier.
11. Select the Terraform apply task. This task runs the terraform validate and apply command. This
command deploys the resources. By default, it will also prompt for confirmation before applying. Since this
is an automated deployment, the auto-approve argument is included. For more information, see
Terraform's documentation for the apply command. A script-based sketch of the same init, plan, and apply cycle is shown after this list.
Select the Azure subscription created earlier.

12. Select the Azure App Service Deploy task. Select the Azure subscription created earlier. By the time this
task runs, Terraform has ensured that the deployment environment has been configured to meet the app's
requirements. It will use the created app service name set in the Variables section.

13. From the top of the page, select Save and confirm.
14. Select Create release . Specify the recent build and select Create . Your build number will most likely be
different than this example.
15. Select the new release to track the pipeline.

16. Click through to track task progress.

17. Once the release has completed, select the Azure App Service Deploy task.
18. Copy the name of the app service from the task title. Note that the name you see will vary slightly.

19. Open a new browser tab and navigate to the app service. The domain format is [app service
name].azurewebsites.net, so the final URL will be something like:

https://pulterraformweb99ac17bf.azurewebsites.net.
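The three Terraform tasks in steps 9 through 11 follow the standard init, plan, and apply cycle. As a rough sketch only, assuming a YAML pipeline, Terraform already installed on the agent, and the Terraform folder as the working directory, the same cycle could be scripted as:

steps:
- script: |
    # Initialize the working directory and download the required providers
    terraform init
    # Show the actions Terraform would take to reach the desired state
    terraform plan
    # Apply the configuration without an interactive confirmation prompt
    terraform apply -auto-approve
  workingDirectory: '$(System.DefaultWorkingDirectory)/Terraform'
  displayName: 'Terraform init, plan, and apply (sketch)'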

Summary
In this tutorial, you learned how to automate repeatable deployments with Terraform on Azure using Azure
Pipelines.

Clean up resources
This tutorial created an Azure DevOps project and some resources in Azure. If you're not going to continue to use
these resources, delete them with the following steps:
1. Delete the Azure DevOps project created by the Azure DevOps Demo Generator.
2. All Azure resources created during this tutorial were assigned to either the PULTerraform or terraformrg
resource groups. Deleting those two groups will delete the resources they contain. This can be done via the
CLI or portal. The following example shows you how to delete the resource groups using Azure CLI.

az group delete --name PULTerraform
az group delete --name terraformrg

Next steps
Terraform with Azure
Migrate from Jenkins to Azure Pipelines
2/26/2020 • 7 minutes to read

Jenkins has traditionally been installed by enterprises in their own data centers and managed in an on-premises
fashion, though a number of providers offer managed Jenkins hosting.
Azure Pipelines, on the other hand, is a cloud native continuous integration pipeline, providing the management of
build and release pipelines and build agent virtual machines hosted in the cloud.
However, Azure Pipelines offers a fully on-premises option as well with Azure DevOps Server, for those customers
who have compliance or security concerns that require them to keep their code and build within the enterprise data
center.
In addition, Azure Pipelines supports a hybrid cloud and on-premises model, where Azure Pipelines manages the
build and release orchestration while enabling build agents both in the cloud and installed on-premises. This
suits customers with custom needs and dependencies for some build agents but who are looking to move most
workloads to the cloud.
This document provides a guide to translate a Jenkins pipeline configuration to Azure Pipelines, information about
moving container-based builds and selecting build agents, mapping environment variables, and how to handle
success and failures of the build pipeline.

Configuration
You'll find a familiar transition from a Jenkins declarative pipeline into an Azure Pipelines YAML configuration. The
two are conceptually similar, supporting "configuration as code" and allowing you to check your configuration into
your version control system. Unlike Jenkins, however, Azure Pipelines uses the industry-standard YAML to
configure the build pipeline.
Despite the language difference, however, the concepts between Jenkins and Azure Pipelines and the way they're
configured are similar. A Jenkinsfile lists one or more stages of the build process, each of which contains one or
more steps that are performed in order. For example, a "build" stage may run a task to install build-time
dependencies, then perform a compilation step. While a "test" stage may invoke the test harness against the
binaries that were produced in the build stage.
For example:
Jenkinsfile
pipeline {
    agent none
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
                sh 'npm run build'
            }
        }
        stage('Test') {
            steps {
                sh 'npm test'
            }
        }
    }
}

This translates easily to an Azure Pipelines YAML configuration, with a job corresponding to each stage, and steps to
perform in each job:
azure-pipelines.yml

jobs:
- job: Build
  steps:
  - script: npm install
  - script: npm run build
- job: Test
  steps:
  - script: npm test

Visual Configuration
If you are not using a Jenkins declarative pipeline with a Jenkinsfile, and are instead using the graphical interface to
define your build configuration, then you may be more comfortable with the classic editor in Azure Pipelines.

Container-Based Builds
Using containers in your build pipeline allows you to build and test within a docker image that has the exact
dependencies that your pipeline needs, already configured. It saves you from having to include a build step that
installs additional software or configures the environment. Both Jenkins and Azure Pipelines support container-
based builds.
In addition, both Jenkins and Azure Pipelines allow you to share the build directory on the host agent to the
container volume using the -v flag to docker. This allows you to chain multiple build jobs together that can use the
same sources and write to the same output directory. This is especially useful when you use many different
technologies in your stack; you may want to build your backend using a .NET Core container and your frontend
with a TypeScript container.
For example, to run a build in an Ubuntu 14.04 ("Trusty") container, then run tests in an Ubuntu 16.04 ("Xenial")
container:
Jenkinsfile
pipeline {
    agent none
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'ubuntu:trusty'
                    args '-v $HOME:/build -w /build'
                }
            }
            steps {
                sh 'make'
            }
        }
        stage('Test') {
            agent {
                docker {
                    image 'ubuntu:xenial'
                    args '-v $HOME:/build -w /build'
                }
            }
            steps {
                sh 'make test'
            }
        }
    }
}

Azure Pipelines provides container jobs to enable you to run your build within a container:
azure-pipelines.yml

resources:
  containers:
  - container: trusty
    image: ubuntu:trusty
  - container: xenial
    image: ubuntu:xenial

jobs:
- job: build
  container: trusty
  steps:
  - script: make
- job: test
  dependsOn: build
  container: xenial
  steps:
  - script: make test

In addition, Azure Pipelines provides a docker task that allows you to run, build, or push an image.
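As a minimal sketch of that task, the following builds and pushes an image; the repository and service connection names are assumptions, not values from this article:

steps:
- task: Docker@2
  inputs:
    command: 'buildAndPush'
    containerRegistry: 'my-registry-connection'   # assumed Docker registry service connection
    repository: 'myorg/myapp'                     # assumed image repository name
    dockerfile: '**/Dockerfile'
    tags: '$(Build.BuildId)'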

Agent Selection
Jenkins offers build agent selection using the agent option to ensure that your build pipeline - or a particular stage
of the pipeline - runs on a particular build agent machine. Similarly, Azure Pipelines offers a number of options to
configure where your build environment runs.
Hosted Agent Selection
Azure Pipelines offers cloud hosted build agents for Linux, Windows, and macOS builds. To select the build
environment, you can use the vmImage keyword. For example, to select a macOS build:

pool:
  vmImage: macOS-10.14

Additionally, you can specify a container and specify a docker image for finer grained control over how your build
is run.
On-Premises Agent Selection
If you host your build agents on-premises, then you can define the build agent "capabilities" based on the
architecture of the machine or the software that you've installed on it. For example, if you've set up an on-premises
build agent with the java capabilities, then you can ensure that your job runs on it using the demands keyword:

pool:
  demands: java

Environment Variables
In Jenkins, you typically define environment variables for the entire pipeline. For example, to set two environment
variables, CONFIGURATION=debug and PLATFORM=x64:
Jenkinsfile

pipeline {
    environment {
        CONFIGURATION = 'debug'
        PLATFORM = 'x64'
    }
}

Similarly, in Azure Pipelines you can configure variables that are used both within the YAML configuration and are
set as environment variables during job execution:
azure-pipelines.yml

variables:
  configuration: debug
  platform: x64

Additionally, in Azure Pipelines you can define variables that are set only for the duration of a particular job:
azure-pipelines.yml

jobs:
- job: debug_build
  variables:
    configuration: debug
  steps:
  - script: ./build.sh $(configuration)
- job: release_build
  variables:
    configuration: release
  steps:
  - script: ./build.sh $(configuration)

Predefined Variables
Both Jenkins and Azure Pipelines set a number of environment variables to allow you to inspect and interact with
the execution environment of the continuous integration system.

Description | Jenkins | Azure Pipelines
A unique numeric identifier for the current build invocation. | BUILD_NUMBER | BUILD_BUILDNUMBER
A unique identifier (not necessarily numeric) for the current build invocation. | BUILD_ID | BUILD_BUILDID
The URL that displays the build logs. | BUILD_URL | This is not set as an environment variable in Azure Pipelines but can be derived from other variables.1
The name of the machine that the current build is running on. | NODE_NAME | AGENT_NAME
The name of this project or build definition. | JOB_NAME | RELEASE_DEFINITIONNAME
A string for identification of the build; the build number is a good unique identifier. | BUILD_TAG | BUILD_BUILDNUMBER
A URL for the host executing the build. | JENKINS_URL | SYSTEM_TEAMFOUNDATIONCOLLECTIONURI
A unique identifier for the build executor or build agent that is currently running. | EXECUTOR_NUMBER | AGENT_NAME
The location of the checked out sources. | WORKSPACE | BUILD_SOURCESDIRECTORY
The Git commit ID corresponding to the version of software being built. | GIT_COMMIT | BUILD_SOURCEVERSION
Path to the Git repository on GitHub, Azure Repos, or another repository provider. | GIT_URL | BUILD_REPOSITORY_URI
The Git branch being built. | GIT_BRANCH | BUILD_SOURCEBRANCH

1 To derive the URL that displays the build logs in Azure Pipelines, combine the following environment variables in
this format:

${SYSTEM_TEAMFOUNDATIONCOLLECTIONURI}/${SYSTEM_TEAMPROJECT}/_build/results?buildId=${BUILD_BUILDID}
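For example, a minimal step that prints the derived URL during a run, using the macro form of the same variables, might look like this sketch:

steps:
- script: echo "Build results: $(System.TeamFoundationCollectionUri)$(System.TeamProject)/_build/results?buildId=$(Build.BuildId)"
  displayName: 'Print a link to this build'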

Success and Failure Handling


Jenkins allows you to run commands when the build has finished, using the post section of the pipeline. You can
specify commands that run when the build succeeds (using the success section), when the build fails (using the
failure section) or always (using the always section). For example:

Jenkinsfile
post {
    always {
        echo "The build has finished"
    }
    success {
        echo "The build succeeded"
    }
    failure {
        echo "The build failed"
    }
}

Similarly, Azure Pipelines has a rich conditional execution framework that allows you to run a job, or steps of a job,
based on a number of conditions including pipeline success or failure.
To emulate the simple Jenkins post-build conditionals, you can define jobs that run based on the always(),
succeeded(), or failed() conditions:

azure-pipelines.yml

jobs:
- job: always
  steps:
  - script: echo "The build has finished"
    condition: always()

- job: success
  steps:
  - script: echo "The build succeeded"
    condition: succeeded()

- job: failed
  steps:
  - script: echo "The build failed"
    condition: failed()

In addition, you can combine other conditions, like the ability to run a task based on the success or failure of an
individual task, environment variables, or the execution environment, to build a rich execution pipeline.
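For instance, a minimal sketch of a step that runs only when an earlier step has failed on the master branch; the script name is illustrative:

steps:
- script: ./build.sh
- script: ./notify_team.sh
  condition: and(failed(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))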
Migrate from Travis to Azure Pipelines
11/2/2020 • 14 minutes to read

Azure Pipelines is more than just a Continuous Integration tool; it's a flexible build and release orchestration platform. It's designed for
the software development and deployment process, but because of this extensibility, there are a number of differences from simpler
build systems like Travis.
The purpose of this guide is to help you migrate from Travis to Azure Pipelines. This guide describes the philosophical differences
between Travis and Azure Pipelines, examines the practical effects on the configuration of each system, and shows how to translate
from a Travis configuration to an Azure Pipelines configuration.


Key differences
There are numerous differences between Travis and Azure Pipelines, including version control configuration, environment variables, and
virtual machine environments, but at a higher level:
Azure Pipelines configuration is more precise and relies less on shorthand configuration and implied steps. You'll see this in
places like language selection and in the way Azure Pipelines allows flow to be controlled.
Travis builds have stages, jobs and phases, while Azure Pipelines simply has steps that can be arranged and executed in an
arbitrary order or grouping that you choose. This gives you flexibility over the way that your steps are executed, including the
way they're executed in parallel.
Azure Pipelines allows job definitions and steps to be stored in separate YAML files in the same or a different repository, enabling
steps to be shared across multiple pipelines.
Azure Pipelines provides full support for building and testing on Microsoft-managed Linux, Windows, and macOS images. See
Microsoft-hosted agents for more details.

Before starting your migration


If you are new to Azure Pipelines, see the following to learn more about Azure Pipelines and how it works prior to starting your
migration:
Create your first pipeline
Key concepts for new Azure Pipelines users
Building GitHub repositories

Language
Travis uses the language keyword to identify the prerequisite build environment to provision for your build. For example, to select
Node.JS 8.x:
.travis.yml

language: node_js
node_js:
  - "8"

Microsoft-hosted agents contain the SDKs for many languages out-of-the-box and most languages need no configuration. But where a
language has multiple versions installed, you may need to execute a language selection task to set up the environment.
For example, to select Node.JS 8.x:
azure-pipelines.yml
steps:
- task: NodeTool@0
  inputs:
    versionSpec: '8.x'

Language mappings
The language keyword in Travis does not just imply that a particular version of language tools be used, but also that a number of build
steps be implicitly performed. Azure Pipelines, on the other hand, does not do any work without your input, so you'll need to specify the
commands that you want to execute.
Here is a translation guide from the language keyword to the commands that are executed automatically for the most commonly-used
languages:

c, cpp:
    ./configure
    make
    make install

csharp:
    nuget restore [solution.sln]
    msbuild /p:Configuration=Release [solution.sln]

clojure:
    lein deps
    lein test

go:
    go get -t -v ./...
    make or go test

java, groovy:
    Gradle:
        gradle assemble
        gradle check
    Maven:
        mvn install -DskipTests=true -Dmaven.javadoc.skip=true -B -V
        mvn test -B
    Ant:
        ant test

node_js:
    npm install or npm ci or yarn
    npm test

objective-c, swift:
    pod install or bundle exec pod install
    xcodebuild -scheme [scheme] build test | xcpretty

perl:
    cpanm --quiet --installdeps --notest .
    Build.PL:
        perl ./Build.pl
        ./Build test
    Makefile.PL:
        perl Makefile.PL
        make test
    Makefile:
        make test

php:
    phpunit

python:
    pip install -r requirements.txt

ruby:
    bundle install --jobs=3 --retry=3
    rake
In addition, less common languages can be enabled but require an additional dependency installation step or execution inside a docker
container:

crystal:
    docker run -v $(pwd):/src -w /src crystallang/crystal shards install
    docker run -v $(pwd):/src -w /src crystallang/crystal crystal spec

d:
    sudo wget http://master.dl.sourceforge.net/project/d-apt/files/d-apt.list -O /etc/apt/sources.list.d/d-apt.list
    sudo apt-get update
    sudo apt-get -y --allow-unauthenticated install --reinstall d-apt-keyring
    sudo apt-get update
    sudo apt-get install dmd-compiler dub
    dub test --compiler=dmd

dart:
    wget https://dl-ssl.google.com/linux/linux_signing_key.pub -O - | sudo apt-key add -
    wget https://storage.googleapis.com/download.dartlang.org/linux/debian/dart_stable.list -O /etc/apt/sources.list.d/dart_stable.list
    sudo apt-get update
    sudo apt-get install dart
    /usr/lib/dart/bin/pub get
    /usr/lib/dart/bin/pub run test

erlang:
    sudo apt-get install rebar
    rebar get-deps
    rebar compile
    rebar skip_deps=true eunit

elixir:
    sudo apt-get install elixir
    mix local.rebar --force
    mix local.hex --force
    mix deps.get
    mix test

haskell:
    sudo apt-get install cabal-install
    cabal install --only-dependencies --enable-tests
    cabal configure --enable-tests
    cabal build
    cabal test

haxe:
    sudo apt-get install haxe
    yes | haxelib install [hxml]
    haxe [hxml]

julia:
    sudo apt-get install julia
    julia -e "using Pkg; Pkg.build(); end"
    julia --check-bounds=yes -e "Pkg; Pkg.test(coverage=true); end"

nix:
    docker run -v $(pwd):/src -w /src nixos/nix nix-build

perl6:
    sudo apt-get install rakudo
    PERL6LIB=lib prove -v -r --exec=perl6 t/

rust:
    curl -sSf https://sh.rustup.rs | sh -s -- -y
    cargo build --verbose
    cargo test --verbose

scala:
    echo "deb https://dl.bintray.com/sbt/debian /" | sudo tee -a /etc/apt/sources.list.d/sbt.list
    sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 2EE0EA64E40A89B84B2DF73499E82A75642AC823
    sudo apt-get update
    sudo apt-get install sbt
    sbt ++2.11.6 test

smalltalk:
    docker run -v $(pwd):/src -w /src hpiswa/smalltalkci
    smalltalkci

Multiple language selection


You can also configure an environment that supports building different applications in multiple languages. For example, to ensure the
build environment targets both Node.JS 8.x and Ruby 2.5 or better:
azure-pipelines.yml

steps:
- task: NodeTool@0
  inputs:
    versionSpec: '8.x'
- task: UseRubyVersion@0
  inputs:
    versionSpec: '>= 2.5'

Phases
In Travis, steps are defined in a fixed set of named phases such as before_install or before_script . Azure Pipelines does not have
named phases and steps can be grouped, named, and organized in whatever way makes sense for the pipeline.
For example:
.travis.yml

before_install:
  - npm install -g bower
install:
  - npm install
  - bower install
script:
  - npm run build
  - npm test

azure-pipelines.yml

steps:
- script: npm install -g bower
- script: npm install
- script: bower install
- script: npm run build
- script: npm test

Alternatively, steps can be grouped together and optionally named:


azure-pipelines.yml

steps:
- script: |
    npm install -g bower
    npm install
    bower install
  displayName: 'Install dependencies'
- script: npm run build
- script: npm test
Parallel jobs
Travis provides parallelism by letting you define a stage, which is a group of jobs that are executed in parallel. A Travis build can have
multiple stages; once all jobs in a stage have completed, the execution of the next stage can begin.
Azure Pipelines gives you finer grained control of parallelism. You can make each step dependent on any other step you want. In this
way, you specify which steps run serially, and which can run in parallel. So you can fan out with multiple steps run in parallel after the
completion of one step, and then fan back in with a single step that runs afterward. This model gives you options to define complex
workflows if necessary. For now, here's a simple example:

For example, to run a build script, then upon its completion run both the unit tests and the integration tests in parallel, and once all tests
have finished, package the artifacts and then run the deploy to pre-production:
.travis.yml

jobs:
  include:
    - stage: build
      script: ./build.sh
    - stage: test
      script: ./test.sh unit_tests
    - script: ./test.sh integration_tests
    - stage: package
      script: ./package.sh
    - stage: deploy
      script: ./deploy.sh pre_prod

azure-pipelines.yml

jobs:
- job: build
  steps:
  - script: ./build.sh
- job: test1
  dependsOn: build
  steps:
  - script: ./test.sh unit_tests
- job: test2
  dependsOn: build
  steps:
  - script: ./test.sh integration_tests
- job: package
  dependsOn:
  - test1
  - test2
  steps:
  - script: ./package.sh
- job: deploy
  dependsOn: package
  steps:
  - script: ./deploy.sh pre_prod

Advanced parallel execution


In Azure Pipelines you have more options and control over how you orchestrate your pipeline. Unlike Travis, we don't require you to
think of blocks that must be executed together. Instead, you can focus on the resources that a job needs to start and the resources that it
produces when it's done.
For example, a team has a set of fast-running unit tests and another set of slower integration tests. The team wants to begin
creating the .ZIP file for a release as soon as the unit tests are completed because they provide high confidence that the build will provide a
good package. But before they deploy to pre-production, they want to wait until all tests have passed:

In Azure Pipelines they can do it this way:


azure-pipelines.yml

jobs:
- job: build
  steps:
  - script: ./build.sh
- job: test1
  dependsOn: build
  steps:
  - script: ./test.sh unit_tests
- job: test2
  dependsOn: build
  steps:
  - script: ./test.sh integration_tests
- job: package
  dependsOn: test1
  steps:
  - script: ./package.sh
- job: deploy
  dependsOn:
  - test1
  - test2
  - package
  steps:
  - script: ./deploy.sh pre_prod

Step reuse
Most teams like to reuse as much business logic as possible to save time and avoid replication errors, confusion, and staleness. Instead
of duplicating your change, you can make the change in a common area, and your leverage increases when you have similar processes
that build on multiple platforms.
In Travis you can use matrices to run multiple executions across a single configuration. In Azure Pipelines you can use matrices in the
same way, but you can also implement configuration reuse by using YAML templates.
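As a minimal sketch of template reuse (file names are illustrative), a shared steps file can be included from more than one job or pipeline:

common-build.yml (shared template; name assumed for illustration)

steps:
- script: npm install
- script: npm test

azure-pipelines.yml

jobs:
- job: linux
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
  - template: common-build.yml
- job: windows
  pool:
    vmImage: 'vs2017-win2016'
  steps:
  - template: common-build.yml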
Example: Environment variable in a matrix
One of the most common ways to run several builds with a slight variation is to change the execution using environment variables. For
example, your build script can look for the presence of an environment variable and change the way your software is built, or the way
it's tested.
You can use a matrix to run a build configuration several times, once for each value in the environment variable. For example, to
run a given script three times, each time with a different setting for an environment variable:
.travis.yml
os: osx
env:
  matrix:
    - MY_ENVIRONMENT_VARIABLE: 'one'
    - MY_ENVIRONMENT_VARIABLE: 'two'
    - MY_ENVIRONMENT_VARIABLE: 'three'
script: echo $MY_ENVIRONMENT_VARIABLE

azure-pipelines.yml

pool:
  vmImage: 'macOS-10.14'
strategy:
  matrix:
    set_env_to_one:
      MY_ENVIRONMENT_VARIABLE: 'one'
    set_env_to_two:
      MY_ENVIRONMENT_VARIABLE: 'two'
    set_env_to_three:
      MY_ENVIRONMENT_VARIABLE: 'three'
steps:
- script: echo $(MY_ENVIRONMENT_VARIABLE)

Example: Language versions in a matrix


Another common scenario is to run against several different language environments. Travis supports an implicit definition using the
language keyword, while Azure Pipelines expects an explicit task to define how to configure that language version.

You can easily use the environment variable matrix options in Azure Pipelines to enable a matrix for different language versions. For
example, you can set an environment variable in each matrix variable that corresponds to the language version that you want to use,
then in the first step, use that environment variable to run the language configuration task:
.travis.yml

os: linux
matrix:
  include:
    - rvm: 2.3.7
    - rvm: 2.4.4
    - rvm: 2.5.1
script: ruby --version

azure-pipelines.yml

pool:
  vmImage: 'Ubuntu 16.04'

strategy:
  matrix:
    ruby 2.3:
      ruby_version: '2.3.7'
    ruby 2.4:
      ruby_version: '2.4.4'
    ruby 2.5:
      ruby_version: '2.5.1'
steps:
- task: UseRubyVersion@0
  inputs:
    versionSpec: $(ruby_version)
- script: ruby --version

Example: Operating systems within a matrix


It's also common to run builds in multiple operating systems. Travis supports this definition using the os keyword, while Azure
Pipelines lets you configure the operating system by selecting the pool to run in using the vmImage keyword.
For example, you can set an environment variable in each matrix variable that corresponds to the operating system image that you
want to use. Then you can set the machine pool to the variable you've set:
.travis.yml
matrix:
  include:
    - os: linux
    - os: windows
    - os: osx
script: echo Hello, world!

azure-pipelines.yml

strategy:
  matrix:
    linux:
      imageName: 'ubuntu-16.04'
    mac:
      imageName: 'macos-10.14'
    windows:
      imageName: 'vs2017-win2016'

pool:
  vmImage: $(imageName)

steps:
- script: echo Hello, world!

Success and failure handling


Travis allows you to specify steps that will be run when the build succeeds, using the after_success phase, or when the build fails, using
the after_failure phase. Since Azure Pipelines doesn't limit you to a finite number of specially-named phases, you can define success
and failure conditions based on the result of any step, which enables more flexible and powerful pipelines.
.travis.yml

build: ./build.sh
after_success: echo Success
after_failure: echo Failed

azure-pipelines.yml

steps:
- script: ./build.sh
- script: echo Success
  condition: succeeded()
- script: echo Failed
  condition: failed()

Advanced success and failure handling


In Azure Pipelines you can program a flexible set of dependencies and conditions for flow control between jobs.
You can configure jobs to run based on the success or failure of previous jobs or based on environment variables. You can even
configure jobs to always run, regardless of the success of other jobs.
For example, if you want to run a script when the build fails, but only if it is running as a build on the master branch:
azure-pipelines.yml

jobs:
- job: build
  steps:
  - script: ./build.sh
- job: alert
  dependsOn: build
  condition: and(failed(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
  steps:
  - script: ./sound_the_alarms.sh

Predefined variables
Both Travis and Azure Pipelines set a number of environment variables to allow you to inspect and interact with the execution
environment of the CI system.
In most cases there's an Azure Pipelines variable to match the environment variable in Travis. Here's a list of commonly-used
environment variables in Travis and their analog in Azure Pipelines:

Travis | Azure Pipelines | Description
CI=true or TRAVIS=true | TF_BUILD=True | Indicates that your build is running in the CI system; useful for scripts that are also intended to be run locally during development.
TRAVIS_BRANCH | CI builds: BUILD_SOURCEBRANCH; Pull request builds: SYSTEM_PULLREQUEST_TARGETBRANCH | The name of the branch the build was queued for, or the name of the branch the pull request is targeting.
TRAVIS_BUILD_DIR | BUILD_SOURCESDIRECTORY | The location of your checked out source and the default working directory.
TRAVIS_BUILD_NUMBER | BUILD_BUILDID | A unique numeric identifier for the current build invocation.
TRAVIS_COMMIT | CI builds: BUILD_SOURCEVERSION | The commit ID currently being built.
TRAVIS_COMMIT | Pull request builds: git rev-parse HEAD^2 | For pull request validation builds, Azure Pipelines sets BUILD_SOURCEVERSION to the resulting merge commit of the pull request into master; this command will identify the pull request commit itself.
TRAVIS_COMMIT_MESSAGE | BUILD_SOURCEVERSIONMESSAGE | The log message of the commit being built.
TRAVIS_EVENT_TYPE | BUILD_REASON | The reason the build was queued; a map of values is in the "build reasons" table below.
TRAVIS_JOB_NAME | AGENT_JOBNAME | The name of the current job, if specified.
TRAVIS_OS_NAME | AGENT_OS | The operating system that the job is running on; a map of values is in the "operating systems" table below.
TRAVIS_PULL_REQUEST | Azure Repos: SYSTEM_PULLREQUEST_PULLREQUESTID; GitHub: SYSTEM_PULLREQUEST_PULLREQUESTNUMBER | The pull request number that triggered this build. (For GitHub builds, this is a unique identifier that is not the pull request number.)
TRAVIS_PULL_REQUEST_BRANCH | SYSTEM_PULLREQUEST_SOURCEBRANCH | The name of the branch where the pull request originated.
TRAVIS_PULL_REQUEST_SHA | Pull request builds: git rev-parse HEAD^2 | For pull request validation builds, Azure Pipelines sets BUILD_SOURCEVERSION to the resulting merge commit of the pull request into master; this command will identify the pull request commit itself.
TRAVIS_PULL_REQUEST_SLUG | (no analog) | The name of the forked repository, if the pull request originated in a fork. There's no analog to this in Azure Pipelines.
TRAVIS_REPO_SLUG | BUILD_REPOSITORY_NAME | The name of the repository that this build is configured for.
TRAVIS_TEST_RESULT | AGENT_JOBSTATUS | Travis sets this value to 0 if all previous steps have succeeded (returned 0). For Azure Pipelines, check that AGENT_JOBSTATUS=Succeeded.
TRAVIS_TAG | BUILD_SOURCEBRANCH | If this build was queued by the creation of a tag, then this is the name of that tag. For Azure Pipelines, BUILD_SOURCEBRANCH will be set to the full Git reference name, for example refs/tags/tag_name.
TRAVIS_BUILD_STAGE_NAME | (no analog) | The name of the stage in Travis. As we saw earlier, Azure Pipelines handles flow control using jobs. You can reference AGENT_JOBNAME.

Build Reasons:
The TRAVIS_EVENT_TYPE variable contains values that map to values provided by the Azure Pipelines BUILD_REASON variable:

Travis | Azure Pipelines | Description
push | IndividualCI | The build is a continuous integration build from a push.
pull_request | PullRequest | The build was queued to validate a pull request.
api | Manual | The build was queued by the REST API or a manual request on the web page.
cron | Schedule | The build was scheduled.

Operating Systems:
The TRAVIS_OS_NAME variable contains values that map to values provided by the Azure Pipelines AGENT_OS variable:

Travis | Azure Pipelines | Description
linux | Linux | The build is running on Linux.
osx | Darwin | The build is running on macOS.
windows | Windows_NT | The build is running on Windows.

To learn more, see Predefined environment variables.


If there isn't a variable for the data you need, then you can use a shell command to get it. For example, a good substitute of an
environment variable containing the commit ID of the pull request being built is to run a git command: git rev-parse HEAD^2 .
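As a sketch, such a command can be wrapped in a script step so the value is available to later commands; this assumes a pull request validation build:

steps:
- script: |
    # The ^2 parent of the merge commit is the pull request commit itself
    PR_COMMIT=$(git rev-parse HEAD^2)
    echo "Pull request commit: ${PR_COMMIT}"
  condition: eq(variables['Build.Reason'], 'PullRequest')
  displayName: 'Show the pull request commit ID'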

Building specific branches


By default, both Travis and Azure Pipelines perform CI builds on all branches. Similarly, both systems allow you to limit these builds to
specific branches. In Azure Pipelines, the branches to build should be listed in the include list and the branches not to build
should be listed in the exclude list. Wildcards are supported.
For example, to build only the master branch and those that begin with the word "releases":
.travis.yml

branches:
  only:
    - master
    - /^releases.*/
azure-pipelines.yml

trigger:
  branches:
    include:
    - master
    - releases*

Output caching
Travis supports caching dependencies and intermediate build output to improve build times. Azure Pipelines does not support caching
intermediate build output, but does offer integration with Azure Artifacts for dependency storage.

Git submodules
Travis and Azure Pipelines both clone git repos "recursively" by default. This means that submodules are cloned by the agent, which is
useful since submodules usually contain dependencies. However, the extra cloning takes time, so if you don't need the dependencies
then you can disable cloning submodules:
.travis.yml

git:
  submodules: false

azure-pipelines.yml

steps:
- checkout: self
  submodules: false
Migrate from XAML builds to new builds
11/2/2020 • 13 minutes to read

Azure Pipelines | TFS 2018 | TFS 2017 | XAML builds

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

We introduced XAML build automation capabilities based on the Windows Workflow Foundation in Team
Foundation Server (TFS) 2010. We released another version of XAML builds in TFS 2013.
After that we sought to expand beyond .NET and Windows and add support for other kinds of apps that are based
on operating systems such as macOS and Linux. It became clear that we needed to switch to a more open, flexible,
web-based foundation for our build automation engine. In early 2015 in Azure Pipelines, and then in TFS 2015, we
introduced a simpler task- and script-driven cross-platform build system.
Because the systems are so different, there's no automated or general way to migrate a XAML build pipeline into a
new build pipeline. The migration process is to manually create the new build pipelines that replicate what your
XAML builds do.
If you're building standard .NET applications, you probably used our default templates as provided out-of-the-box.
In this case the process should be reasonably easy.
If you have customized your XAML templates or added custom tasks, then you'll need to also take other steps
including writing scripts, installing extensions, or creating custom tasks.

Overview of the migration effort


Here are the steps to migrate from XAML builds to newer builds:
1. If you're using a private TFS server, set up agents to run your builds.
2. To get familiar with the new build system, create a "Hello world" build pipeline.
3. Create a new build pipeline intended to replace one of your XAML build pipelines.
a. Create a new build pipeline.
b. Port your XAML settings.
4. On the General tab, disable the XAML build pipeline.
5. Repeat the previous two steps for each of your XAML build pipelines.
6. Take advantage of new build features and learn more about the kinds of apps you can build.
7. Learn how to customize, and if necessary extend your system.
8. When you no longer need the history and artifacts from your XAML builds, delete the XAML builds, and then
the XAML build pipelines.
WARNING
After you delete the XAML builds and pipelines, you cannot get them back.

Create new build pipelines


If you're building a standard .NET app, you're probably using one of the out-of-the-box build templates such as
TfvcTemplate.12.xaml or GitTemplate.12.xaml. In this case, it will probably just take you a few clicks to create build
pipelines in the new build system.
1. Open your project in your web browser

(If you don't see your project listed on the home page, select Browse .)
On-premises TFS: http://{your_server}:8080/tfs/DefaultCollection/{your_project}
Azure Pipelines: https://dev.azure.com/{your_organization}/{your_project}
2. Create a build pipeline (Pipelines tab > Builds)

3. Select a template to add commonly used tasks to your build pipeline.


4. Make any necessary changes to your build pipeline to replicate your XAML build pipeline. The tasks added by
the template should simply work in many cases. But if you changed process parameters or other settings in your
XAML build pipelines, below are some pointers to get you started replicating those changes.

Port your XAML settings


In each of the following sections we show the XAML user interface, and then provide a pointer to the place where
you can port the setting into your new build pipeline.
General tab
XAML setting | TFS 2017 equivalent | Azure Pipelines and TFS 2018 and newer equivalent
Build pipeline name | You can change it whenever you save the pipeline. | When editing the pipeline: on the Tasks tab, click Pipeline in the left pane; the Name field appears in the right pane. In the Builds hub (Mine or All pipelines tab), open the action menu and choose Rename.
Description (optional) | Not supported. | Not supported.
Queue processing | Not yet supported. As a partial alternative, disable the triggers. | Not yet supported. As an alternative, disable the triggers.

Source Settings tab


TFVC

XAML setting | TFS 2017 and newer equivalent | Azure Pipelines equivalent
Source Settings tab | On the Repository tab, specify your mappings with Active paths as Map and Cloaked paths as Cloak. | On the Tasks tab, in the left pane click Get sources. Specify your workspace mappings with Active paths as Map and Cloaked paths as Cloak.

The new build pipeline offers you some new options. The specific extra options you'll see depend on the version
you're using of TFS or Azure Pipelines. If you're using Azure Pipelines, first make sure to display Advanced
settings . See Build TFVC repositories.
Git
XAML setting | TFS 2017 and newer equivalent | Azure Pipelines equivalent
Source Settings tab | On the Repository tab, specify the repository and default branch. | On the Tasks tab, in the left pane click Get sources. Specify the repository and default branch.

The new build pipeline offers you some new options. The specific extra options you'll see depend on the version
you're using of TFS or Azure Pipelines. If you're using Azure Pipelines, first make sure to display Advanced
settings . See Pipeline options for Git repositories.
Trigger tab

XAML setting | TFS 2017 and newer, Azure Pipelines equivalent
Trigger tab | On the Triggers tab, select the trigger you want to use: CI, scheduled, or gated.

The new build pipeline offers you some new options. For example:
You can potentially create fewer build pipelines to replace a larger number of XAML build pipelines. This is
because you can use a single new build pipeline with multiple triggers. And if you're using Azure Pipelines,
then you can add multiple scheduled times.
The Rolling builds option is replaced by the Batch changes option. You can't specify minimum time
between builds. But if you're using Azure Pipelines, you can specify the maximum number of parallel jobs
per branch.
If your code is in TFVC, you can add folder path filters to include or exclude certain sets of files from
triggering a CI build.
If your code is in TFVC and you're using the gated check-in trigger, you've got the option to also run CI builds
or not. You can also use the same workspace mappings as your repository settings, or specify different
mappings.
If your code is in Git, then you specify the branch filters directly on the Triggers tab. And you can add folder
path filters to include or exclude certain sets of files from triggering a CI build.
The specific extra options you'll see depend on the version you're using of TFS or Azure Pipelines. See Build pipeline
triggers
We don't yet support the Build even if nothing has changed since the previous build option.
Build Defaults tab

XAML process parameter | TFS 2017 and newer equivalent | Azure Pipelines equivalent
Build controller | On the General tab, select the default agent pool. | On the Options tab, select the default agent pool.
Staging location | On the Tasks tab, specify arguments to the Copy Files and Publish Build Artifacts tasks. See Build artifacts. | On the Tasks tab, specify arguments to the Copy Files and Publish Build Artifacts tasks. See Build artifacts.

The new build pipeline offers you some new options. For example:
You don't need a controller, and the new agents are easier to set up and maintain. See Build and release
agents.
You can exactly specify which sets of files you want to publish as build artifacts. See Build artifacts.
Process tab
TF Version Control
XAML process parameter | TFS 2017 and newer equivalent | Azure Pipelines equivalent
Clean workspace | On the Repository tab, open the Clean menu, and then select true. | On the Tasks tab, in the left pane click Get sources. Display Advanced settings, and then select Clean. (We plan to move this option out of advanced settings.)
Get version | You can't specify a changeset in the build pipeline, but you can specify one when you manually queue a build. | You can't specify a changeset in the build pipeline, but you can specify one when you manually queue a build.
Label Sources | On the Repository tab, select an option from the Label sources menu. | On the Tasks tab, in the left pane click Get sources. Select one of the Tag sources options. (We plan to change the name of this to Label sources.)

The new build pipeline offers you some new options. See Build TFVC repositories.
Git

XAML process parameter | TFS 2017 and newer equivalent | Azure Pipelines equivalent
Clean repository | On the Repository tab, open the Clean menu, and then select true. | On the Tasks tab, in the left pane click Get sources. Show Advanced settings, and then select Clean. (We plan to move this option out of advanced settings.)
Checkout override | You can't specify a commit in the build pipeline, but you can specify one when you manually queue a build. | You can't specify a commit in the build pipeline, but you can specify one when you manually queue a build.

The new build pipeline offers you some new options. See Pipeline options for Git repositories.
Build
On the Build tab (TFS 2017 and newer) or the Tasks tab (Azure Pipelines), after you select the Visual Studio Build
task, you'll see the arguments that are equivalent to the XAML build parameters.

XAML process parameter | TFS 2017 and newer, Azure Pipelines equivalent argument
Projects | Solution
Configurations | Platform, Configuration. See Visual Studio Build: How do I build multiple configurations for multiple platforms?
Clean build | Clean
Output location | The Visual Studio Build task builds and outputs files in the same way you do it on your dev machine, in the local workspace. We give you full control of publishing artifacts out of the local workspace on the agent. See Artifacts in Azure Pipelines.
Advanced, MSBuild arguments | MSBuild Arguments
Advanced, MSBuild platform | Advanced, MSBuild Architecture
Advanced, Perform code analysis | Use an MSBuild argument such as /p:RunCodeAnalysis=true
Advanced, post- and pre-build scripts | You can run one or more scripts at any point in your build pipeline by adding one or more instances of the PowerShell, Batch, and Command tasks. For example, see Use a PowerShell script to customize your build pipeline.

IMPORTANT
In the Visual Studio Build arguments, on the Visual Studio Version menu, make sure to select the version of Visual Studio that
you're using.

The new build pipeline offers you some new options. See Visual Studio Build.
Learn more: Visual Studio Build task (for building solutions), MSBuild task (for building individual projects).
Test
See continuous testing and Visual Studio Test task.
Publish Symbols

XAML PROCESS PARAMETER | TFS 2017 AND NEWER, AZURE PIPELINES EQUIVALENT

Path to publish symbols | Click the Publish Symbols task and then copy the path into the Path to publish symbols argument.

Advanced

XAML PROCESS PARAMETER | TFS 2017 AND NEWER EQUIVALENT | AZURE PIPELINES EQUIVALENT

Maximum agent execution time | None | On the Options tab you can specify Build job timeout in minutes.

Maximum agent reservation wait time | None | None

Name filter, Tag comparison operator, Tags filter | A build pipeline asserts demands that are matched with agent capabilities. See Agent capabilities. | A build pipeline asserts demands that are matched with agent capabilities. See Agent capabilities.

Build number format | On the General tab, copy your build number format into the Build number format field. | On the General tab, copy your build number format into the Build number format field.

Create work item on failure | On the Options tab, select this check box. | On the Options tab, enable this option.

Update work items with build number | None | On the Options tab you can enable Automatically link new work in this build.

The new build pipeline offers you some new options. See:
Agent capabilities
Build number format
Retention Policy tab

XAML PROCESS PARAMETER | TFS 2017 AND NEWER, AZURE PIPELINES EQUIVALENT

Retention Policy tab | On the Retention tab specify the policies you want to implement.

The new build pipeline offers you some new options. See Build and release retention policies.

Build and release different kinds of apps


In XAML builds you had to create your own custom templates to build different types of apps. In the new build
system you can pick from a set of pre-defined templates. The largest and most current set of templates is
available on Azure Pipelines and in our newest version of TFS.
Build
Here are a few examples of the kinds of apps you can build:
Build your ASP.NET 4 app.
Build your ASP.NET Core app
Build your Universal Windows Platform app
Build your Xamarin app
C++ apps for Windows
Release
The new build system is tightly integrated with Azure Pipelines. So it's easier than ever to automatically kick off a
deployment after a successful build. Learn more:
Create your first pipeline
Release pipelines
Triggers
A few examples include:
Continuous deployment of your app to an Azure web site
IIS using deployment groups
Other apps and tasks
For more examples of apps you can build and deploy, see Build and deploy your app.
For a complete list of our build, test, and deployment tasks, see Build and release tasks.

Customize your tasks


In XAML builds you created custom XAML tasks. In the new builds, you've got a range of options that begin with
easier and lighter-weight approaches.
Get tasks from the Marketplace
Visual Studio Marketplace offers hundreds of extensions that you can install to add tasks that extend your build and
deployment capabilities.
Write a script
A major feature of the new build system is its emphasis on using scripts to customize your build pipeline. You can
check your scripts into version control and customize your build using any of these methods:
PowerShell scripts (Windows)
Batch scripts (Windows)
Command prompt
Shell scripts (macOS and Linux)

TIP
If you're using TFS 2017 or newer, you can write a short PowerShell script directly inside your build pipeline.
TFS 2017 or newer inline PowerShell script
For all these tasks we offer a set of built-in variables, and if necessary, you can define your own variables. See Build
variables.
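For example, here is a minimal YAML sketch of calling such scripts from a pipeline; the script paths are hypothetical placeholders for files you have checked into your repository:

steps:
- powershell: .\scripts\custom-build-step.ps1   # PowerShell script (Windows)
- script: scripts\custom-build-step.cmd         # Batch or command script (Windows)
- bash: ./scripts/custom-build-step.sh          # Shell script (macOS and Linux)

These shortcuts map to the PowerShell, Command Line, and Bash tasks, so the same pattern works on Microsoft-hosted and self-hosted agents.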
Write a custom task
If necessary, you can write your own custom extensions and custom tasks for your builds and releases.

Reuse patterns
In XAML builds you created custom XAML templates. In the new builds, it's easier to create reusable patterns.
Create a template
If you don't see a template for the kind of app you're building, you can start from an empty pipeline and add the tasks you need.
After you've got a pattern that you like, you can clone it or save it as a template directly in your web browser. See
Create your first pipeline.
Task groups (TFS 2017 or newer)
In XAML builds, if you change the template, then you also change the behavior of all pipelines based on it. In the
new build system, templates don't work this way. Instead, a template behaves as a traditional template. After you
create the build pipeline, subsequent changes to the template have no effect on build pipelines.
If you want to create a reusable and automatically updated piece of logic, then create a task group. You can then
later modify the task group in one place and cause all the pipelines that use it to automatically be changed.

FAQ
I don't see XAML builds. What do I do?
XAML builds are deprecated. We strongly recommend that you migrate to the new builds as explained above.
If you're not yet ready to migrate, then to enable XAML builds you must connect a XAML build controller to your
organization. See Configure and manage your build system.
If you're not yet ready to migrate, then to enable XAML builds:
1. Install TFS 2018.2.
2. Connect your XAML build servers to your TFS instance. See Configure and manage your build system.
How do I add conditional logic to my build pipeline?
Although the new build pipelines are essentially linear, we do give you control of the conditions under which a task
runs.
On TFS 2015 and newer: You can select Enabled, Continue on error, or Always run.
On Azure Pipelines, you can specify one of four built-in choices to control when a task is run. If you need more
control, you can specify custom conditions. For example:
and(failed(), in(variables['Build.Reason'], 'IndividualCI', 'BatchedCI'), startsWith(variables['Build.SourceBranch'], 'refs/heads/features/'))

See Specify conditions for running a task.
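As a minimal YAML sketch, that custom condition can be attached to any step; the echo step below is just a placeholder:

steps:
- script: echo This step runs only when a CI build of a feature branch has failed
  condition: and(failed(), in(variables['Build.Reason'], 'IndividualCI', 'BatchedCI'), startsWith(variables['Build.SourceBranch'], 'refs/heads/features/'))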


Create a virtual network isolated environment for
build-deploy-test scenarios
6/2/2020 • 10 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Network Virtualization provides the ability to create multiple virtual networks on a shared physical network. Isolated
virtual networks can be created using SCVMM Network Virtualization concepts. VMM uses the concept of logical
networks and corresponding VM networks to create isolated networks of virtual machines.

You can create an isolated network of virtual machines that span across different hosts in a host-cluster or a
private cloud.
You can have VMs from different networks residing in the same host machine and still be isolated from each
other.
You can define IP addresses from any IP pool of your choice for a VM network.
See also: Hyper-V Network Virtualization Overview.

To create a virtual network isolated environment:


Ensure you meet the prerequisite conditions described in this section.
Set up Network Virtualization using SCVMM. This is a one-time setup task you do not need to repeat.
Follow these steps.
Decide on the network topology you want to use. You'll specify this when you create the virtual network.
The options and steps are described in this section.
Enable your build-deploy-test scenario as shown in these steps.
You can perform a range of operations to manage VMs using SCVMM. For examples, see SCVMM
deployment.

Prerequisites
SCVMM Server 2012 R2 or later.
Windows 2012 R2 host machines with Hyper-V set up with at least two physical NICs attached.
One NIC (perhaps external) with corporate network or Internet access.
One NIC configured in Trunk Mode with a VLAN ID (such as 991) and routable IP subnets (such as
10.10.30.1/24). Your network administrator can configure this.
All Hyper-V hosts in the host group have the same VLAN ID. This host group will be used for your isolated
networks.
Verify the setup is working correctly by following these steps:
1. Open an RDP session to each of the host machines and open an administrator PowerShell session.
2. Run the command Get-NetVirtualizationProviderAddress . This gets the provider address for the physical
NIC configured in trunk mode with a VLAN ID.

3. Go to another host and open an administrator PowerShell session. Ping other machines using the command
ping -p <Provider address> . This confirms all host machines are connected to a physical NIC in trunk mode
with IPs routable across the host machines. If this test fails, contact your network administrator.
Back to list of tasks

Create a Network Virtualization layer in SCVMM


Setting up a network virtualization layer in SCVMM includes creating logical networks, port profiles, logical
switches, and adding the switches to the Hyper-V hosts.
Create logical networks
1. Log into the SCVMM admin console.
2. Go to Fabric -> Networking -> Logical Networks -> Create new Logical Network .

3. In the popup, enter an appropriate name and select One Connected Network -> Allow new networks
created on this logical network to use network virtualization, then click Next.
4. Add a new Network Site and select the host group to which the network site will be scoped. Enter the
VLAN ID used to configure physical NIC in the Hyper-V host group and the corresponding routable IP
subnet(s). To assist tracking, change the network site name to one that is memorable.

5. Click Next and Save .


6. Create an IP pool for the new logical networks, enter a name, and click Next .
7. Select Use an existing network site and click Next. Enter the routable IP address range your network
administrator configured for your VLAN and click Next . If you have multiple routable IP subnets associated
with your VLAN, create an IP pool for each one.

8. Provide the gateway address. By default, you can use the first IP address in your subnet.
9. Click Next and leave the existing DNS and WINS settings. Complete the creation of the network site.
10. Now create another Logical Network for external Internet access, but this time select One Connected
network -> Create a VM network with the same name to allow virtual machines to access this
logical network directly and then click Next.

11. Add a network site and select the same host group, but this time add the VLAN as 0 . This means the
communication uses the default access mode NIC (Internet).
12. Click Next and Save .
13. The result should look like the following in your administrator console after creating the logical networks.

Create port profiles


1. Go to Fabric -> Networking -> Port profiles and Create Hyper-V port profile.

2. Select Uplink port profile and select Hyper-V Port as the load balancing algorithm, then click Next.
3. Select the Network Virtualization site created previously and choose the Enable Hyper-V Network
Virtualization checkbox, then save the profile.

4. Now create another Hyper-V port profile for external logical network. Select Uplink mode and Host
default as the load balancing algorithm, then click Next .
5. Select the other network site to be used for external communication, but this time don't enable network
virtualization. Then save the profile.

Create logical switches


1. Go to Fabric -> Networking -> Logical switches and Create Logical Switch .
2. In the getting started wizard, click Next and enter a name for the switch, then click Next .

3. Click Next to open the Uplink tab. Click Add uplink port profile and add the network virtualization port
profile you just created.
4. Click Next and save the logical switch.
5. Now create another logical switch for the external network for Internet communication. This time add the
other uplink port profile you created for the external network.

Add logical switches to Hyper-V hosts


1. Go to VM and Services -> [Your host group] -> [each of the host machines in turn].
2. Right click and open the Properties -> Virtual Switches tab.
3. Click New Virtual Switch -> New logical switch for network virtualization. Assign the physical
adapter you configured in trunk mode and select the network virtualization port profile.

4. Create another logical switch for external connectivity, assign the physical adapter used for external
communication, and select the external port profile.
5. Do the same for all the Hyper-V hosts in the host group.
This is a one-time configuration for a specific host group of machines. After completing this setup, you can
dynamically provision your isolated network of virtual machines using the SCVMM extension in TFS and Azure
Pipelines builds and releases.
Back to list of tasks

Create the required virtual network topology


Isolated virtual networks can be broadly classified into three topologies.
Topology 1: AD-backed Isolated VMs
A boundary VM with Internet/TFS connectivity.
An Azure Pipelines/TFS deployment group agent installed on the boundary VM.
An AD-DNS VM if you want to use a local Active Directory domain.
Isolated app VMs where you deploy and test your apps.
Topology 2: Non-AD backed isolated VMs
A boundary VM with Internet/TFS connectivity.
An Azure Pipelines/TFS deployment group agent installed on the boundary VM.
Isolated app VMs where you deploy and test your apps.

Topology 3: AD-backed non-isolated VMs


A boundary VM with Internet/TFS connectivity.
An Azure Pipelines/TFS deployment group agent installed on the boundary VM.
An AD-DNS VM if you want to use a local Active Directory domain.
App VMs that are also connected to the external network where you deploy and test your apps.

You can create any of the above topologies using the SCVMM extension, as shown in the following steps.
1. Open your TFS or Azure Pipelines instance and install the SCVMM extension if not already installed. For
more information, see SCVMM deployment.

The SCVMM task provides an efficient way to perform lab management operations
using build and release pipelines. You can manage SCVMM environments, provision isolated virtual
networks, and implement build-deploy-test scenarios.

2. In a build or release pipeline, add a new SCVMM task.


3. In the SCVMM task, select a service connection for the SCVMM server where you want to provision your
virtual network and then select New Virtual machines using Templates/Stored VMs and VHDs to
provision your VMs.
4. You can create VMs from templates, stored VMs, and VHD/VHDx. Choose the appropriate option and enter
the VM names and corresponding source information.

5. In case of topologies 1 and 2 , leave the VM Network name empty, which will clear all the old VM
networks present in the created VMs (if any). For topology 3 , you must provide information about the
external VM network here.
6. Enter the Cloud Name of the host where you want to provision your isolated network. In case of private
cloud, ensure the host machines added to the cloud are connected to the same logical and external switches
as explained above.

7. Select the Network Virtualization option to create the virtualization layer.
8. Based on the topology you would like to create, decide if the network requires an Active Directory VM. For
example, to create Topology 1 (an AD-backed isolated network), you require an Active Directory VM. Select the
Add Active Directory VM checkbox, enter the AD VM name and the stored VM source. Also enter the
static IP address configured in the AD VM source and the DNS suffix.

9. Enter the settings for the VM Network and subnet you want to create, and the backing logical network you
created in the previous section (Logical Networks). Ensure the VM network name is unique. If possible,
append the release name for easier tracking later.
10. In the Boundary Virtual Machine options section, set Create boundary VM for communication
with Azure Pipelines/TFS. This will be the entry point for external communication.
11. Enter the boundary VM name and the source template (the boundary VM source should always be a VM
template), and enter name of the existing external VM network you created for external communication.

12. Provide details for configuring the boundary VM agent to communicate with Azure Pipelines/TFS. You can
configure a deployment agent or an automation agent. This agent will be used for app deployments.
13. Ensure the agent name you provide is unique. This will be used as demand in succeeding job properties so
that the correct agent will be selected. If you selected the deployment group agent option, this parameter is
replaced by the value of the tag, which must also be unique.
14. Ensure the boundary VM template has the agent configuration files downloaded and saved in the VHD
before the template is created. Use this path as the agent installation path above.

Enable your build-deploy-test scenario


1. Create a new job in your pipeline, after your network virtualization job.
2. Based on the boundary VM agent (deployment group agent or automation agent) that is created as part of
your boundary VM provisioning, choose Deployment group job or Agent job .
3. In the job properties, select the appropriate deployment group or automation agent pool.
4. In the case of an automation pool, add a new Demand for Agent.Name value. Enter the unique name used
in the network virtualization job. In the case of deployment group job, you must set the tag in the properties
of the group machines.
5. Inside the job, add the tasks you require for deployment and testing.

6. After testing is completed, you can destroy the VMs by using the Delete VM task option.
Now you can create a release from this release pipeline. Each release will dynamically provision your isolated virtual
network and run your deploy and test tasks in the environment. You can find the test results in the release
summary. After your tests are completed, you can automatically decommission your environments. You can create
as many environments as you need with just a click from Azure Pipelines.
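As a minimal YAML sketch of the agent job variant described above, you can target the boundary VM agent by asserting its unique name as a demand; the pool and agent names here are hypothetical values you would have configured in the network virtualization job:

- job: DeployAndTest
  pool:
    name: LabPool                                   # agent pool that contains the boundary VM agent
    demands:
    - Agent.Name -equals IsolatedBoundaryAgent01    # the unique agent name set in the SCVMM task
  steps:
  - script: echo Add your deployment and test tasks here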
Back to list of tasks

See also
SCVMM deployment
Hyper-V Network Virtualization Overview

FAQ
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and feel free to post your questions, search for answers, or suggest a feature on
our Azure DevOps Developer Community. Support page.
Build and release tasks
11/2/2020 • 15 minutes to read • Edit Online

Azure Pipelines | TFS 2018 | TFS 2017 | TFS 2015 | Previous versions (XAML builds)

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called
definitions, runs are called builds, service connections are called service endpoints, stages are called environments,
and jobs are called phases.

This article provides an index of built-in tasks. To learn more about tasks, including creating custom tasks,
custom extensions, and finding tasks on the Visual Studio Marketplace, see Tasks concepts.

Build
TASK | VERSIONS

- Build, test, package, or publish a Azure Pipelines, TFS 2017 and newer
.NET Core CLI task
dotnet application, or run a custom dotnet command. For
package commands, supports NuGet.org and
authenticated feeds like Package Management and MyGet.

- Android Azure Pipelines, TFS 2015 RTM and newer


Android build task (deprecated; use Gradle)
build and release task

- Android Azure Pipelines, TFS 2015 RTM and newer


Android signing build and release task
signing build and release task

- Learn how to build with Azure Pipelines, TFS 2015 RTM and newer
Ant build and release task
Apache Ant

- Build, test, and deploy Azure Pipelines


Azure IoT Edge task
applications quickly and efficiently to Azure IoT Edge

- CMake build and Azure Pipelines, TFS 2015 RTM and newer
CMake build and release task
release task

- Build, push or run multi- Azure Pipelines, Azure DevOps Server 2019
Docker Compose task
container Docker applications. Task can be used with
Docker or Azure Container registry.

- Build and push Docker images to any Azure Pipelines, TFS 2018 and newer
Docker task
container registry using Docker registry service
connection

Go task - Get, build, test a go application, or run a Azure Pipelines


custom go command.

- Gradle build and Azure Pipelines, TFS 2015 RTM and newer
Gradle build and release task
release task

- Grunt build and Azure Pipelines, TFS 2015.3 and newer


Grunt build and release task
release task

- Gulp build and release Azure Pipelines, TFS 2015 RTM and newer
Gulp build and release task
task

- Index Sources & Azure Pipelines, TFS 2015 RTM and newer
Index Sources & Publish Symbols
Publish Symbols build and release task

- Queue a Azure Pipelines, TFS 2017 and newer


Jenkins Queue Job build and release task
job on a Jenkins server build and release task

- Maven build and Azure Pipelines, TFS 2015 RTM and newer
Maven build and release task
release task

- MSBuild build and Azure Pipelines, TFS 2015 RTM and newer
MSBuild build and release task
release task

- Azure Pipelines, TFS 2015.3 and newer


SonarQube - Prepare Analysis Configuration
Configure all the required settings before executing the
build

- Display Azure Pipelines, TFS 2015.3 and newer


SonarQube - Publish Quality Gate Result
the Quality Gate status in the build summary

- Run the analysis of Azure Pipelines, TFS 2015.3 and newer


SonarQube - Run Code Analysis
the source code

- Visual Azure Pipelines, TFS 2015 RTM and newer


Visual Studio Build build and release task
Studio Build build and release task

- Azure Pipelines, TFS 2015 RTM and newer


Xamarin.Android build and release task
Xamarin.Android build and release task

- Xamarin.iOS Azure Pipelines, TFS 2015 RTM and newer


Xamarin.iOS build and release task
build and release task

- Xcode build and Azure Pipelines


Xcode build and release task
release task

- Xcode Build build TFS 2015, TFS 2017, TFS 2018


Xcode Build build and release task
and release task

- Xcode Azure Pipelines, TFS 2015 RTM and newer


Xcode Package iOS build and release task
Package iOS build and release task

Utility
TASK | VERSIONS

- Use an archive file to then create Azure Pipelines, TFS 2017 and newer
Archive Files task
a source folder

- Connect or Azure Pipelines


Azure Network Load Balancer task
disconnect an Azure virtual machine's network interface to
a load balancer's address pool

- Run a Bash script on macOS, Linux, or Azure Pipelines


Bash task
Windows

- Execute .bat or .cmd scripts when Azure Pipelines, TFS 2015 RTM and newer
Batch Script task
building your code

Cache task - Improve build performance by caching files, Azure Pipelines, TFS 2017 and newer
like dependencies, between pipeline runs.

- Execute tools from a command Azure Pipelines, TFS 2015 RTM and newer
Command Line task
prompt when building code

- Copy build TFS 2015 RTM. Deprecated on Azure Pipelines and newer
Copy and Publish Build Artifacts task versions of TFS.
artifacts to a staging folder and publish them

- Copy files between folders with Azure Pipelines, TFS 2015.3 and newer
Copy Files task
match patterns when building code

- Use cURL to upload files Azure Pipelines, TFS 2015 RTM and newer
cURL Upload Files task
with supported protocols

- A thin utility task for Azure Pipelines


Decrypt File (OpenSSL) task
file decryption using OpenSSL

- Pause execution of a build or release Azure Pipelines, Azure DevOps Server 2019
Delay task
pipeline for a fixed delay time

- Delete files from the agent working Azure Pipelines, TFS 2015.3 and newer
Delete Files task
directory when building code

- Download Build Azure Pipelines


Download Build Artifacts task
Artifacts task for use in a build or release pipeline

- Download Azure Pipelines


Download Fileshare Artifacts task
Fileshare Artifacts task for Azure Pipelines and TFS

- Download assets Azure Pipelines


Download GitHub Release task
from your GitHub release as part of your pipeline

- Download a package from Azure Pipelines


Download Package task
a Package Management feed in Azure Artifacts or TFS.

- Download Pipeline Azure Pipelines


Download Pipeline Artifacts task
Artifacts task to download pipeline artifacts from earlier
stages in this pipeline, or from another pipeline

- Download a secure file to Azure Pipelines


Download Secure File task
a temporary location on the build or release agent in

- Extract files from archives to a Azure Pipelines, TFS 2017 and newer
Extract Files task
target folder using minimatch patterns on (TFS)

- Apply configuration file Azure Pipelines, Azure DevOps Server 2019


File Transform task
transformations and variable substitution to a target
package or folder

- Upload files to a remote machine Azure Pipelines, TFS 2017 and newer
FTP Upload task
using the File Transfer Protocol (FTP), or securely with FTPS
on (TFS)

- Create, edit, or discard a Azure Pipelines


GitHub Release task
GitHub release.

- Install an Apple Azure Pipelines, TFS 2018 and newer


Install Apple Certificate task
certificate required to build on a macOS agent on (TFS)

- Install an Azure Pipelines, TFS 2018 and newer


Install Apple Provisioning Profile task
Apple provisioning profile required to build on a macOS
agent

- Install an SSH key prior to a Azure Pipelines


Install SSH Key task
build or release

- Invoke a HTTP triggered Azure Pipelines, TFS 2017 and newer


Invoke Azure Function task
function in an Azure function app and parse the response

- Build and release task to Azure Pipelines, TFS 2018 and newer
Invoke HTTP REST API task
invoke an HTTP API and parse the response with a build or
release pipeline

- Download artifacts Azure Pipelines, TFS 2017 and newer


Jenkins Download Artifacts task
produced by a Jenkins job

- Pause an active Azure Pipelines, Azure DevOps Server 2019


Manual Intervention task
deployment within a stage in a release pipeline

- Execute PowerShell scripts Azure Pipelines, TFS 2015 RTM and newer
PowerShell task

- Publish build artifacts to Azure Pipelines, TFS 2015 RTM and newer
Publish Build Artifacts task
Azure Pipelines, Team Foundation Server (TFS), or to a file
share

- Publish artifacts to Azure Pipelines


Publish Pipeline Artifacts task
Azure Pipelines.

- Send a message Azure Pipelines, Azure DevOps Server 2019


Publish To Azure Service Bus task
to an Azure Service Bus with a build or release pipeline

- Run a Python script in a build or Azure Pipelines


Python Script task
release pipeline

- Observe the Azure Pipelines, TFS 2017 and newer


Query Azure Monitor Alerts task
configured Azure monitor rules for active alerts in a build
or release pipeline

Query Work Items task - Ensure the number of Azure Pipelines, TFS 2017 and newer
matching items returned by a work item query is within
the configured threshold

- Service Fabric Azure Pipelines, Azure DevOps Server 2019


Service Fabric PowerShell Utility task
PowerShell task for use in build or release pipelines in

- Execute a bash script when building Azure Pipelines, TFS 2015 RTM and newer
Shell Script task
code

- Update the Azure Pipelines, TFS 2017 and newer


Update Service Fabric Manifests task
Service Fabric App versions

- Activate or deactivate a Azure Pipelines, TFS 2015 RTM and newer


Xamarin License task
Xamarin license when building code

Test
TASK | VERSIONS

- Test app packages with Visual Azure Pipelines, TFS 2017 and newer
App Center Test task
Studio App Center.

Azure Pipelines
Cloud-based Apache JMeter Load Test task
(Deprecated) - Runs the Apache JMeter load test in cloud

- Runs the Azure Pipelines, TFS 2015 RTM and newer


Cloud-based Load Test task (Deprecated)
load test in cloud with a build or release pipeline with
Azure Pipelines to integrate cloud-based load tests into
your build and release pipelines

Azure Pipelines, TFS 2015 RTM and newer


Cloud-based Web Performance Test task
(Deprecated) - Runs the Quick Web Performance Test with
a build or release pipeline to easily verify your web
application exists and is responsive

- Test container Azure Pipelines


Container Structure Test Task
structure by container task and integrate test reporting
into your build and release pipelines

- Publish Azure Pipelines, TFS 2015 RTM and newer


Publish Code Coverage Results task
Cobertura or JaCoCo code coverage results from an Azure
Pipelines or TFS build

- Publish Test Results to Azure Pipelines, TFS 2015 RTM and newer
Publish Test Results task
integrate test reporting into your build and release
pipelines

- Run Coded Azure Pipelines, TFS 2015 RTM and newer


Run Functional Tests task
UI/Selenium/Functional tests on a set of machines using
the Test Agent to integrate cloud-based load tests into
your build and release pipelines

- Deploy Azure Pipelines, TFS 2015 RTM and newer


Visual Studio Test Agent Deployment task
and configure the Test Agent to run tests on a set of
machines to integrate cloud-based load tests into your
build and release pipelines

- Run unit and functional tests Azure Pipelines


Visual Studio Test task
(Selenium, Appium, Coded UI test, etc.) using the Visual
Studio Test runner. Test frameworks that have a Visual
Studio test adapter such as xUnit, NUnit, Chutzpah, etc.
can also be run.

- This task is deprecated. Azure Pipelines, TFS 2015 RTM and newer
Xamarin Test Cloud task
Use the App Center Test task instead.

Package
TASK | VERSIONS

- Learn all about how you can use Azure Pipelines, TFS 2015 RTM and newer
CocoaPods task
CocoaPods packages when you are building code in Azure
Pipelines or Team Foundation Server (TFS).

- How to create and Azure Pipelines


Conda Environment task
activate a Conda environment when building code

- Azure Pipelines
Maven Authenticate task (for task runners)
Provides credentials for Azure Artifacts feeds and external
Maven repositories.

- Don't use Azure Pipelines


npm Authenticate task (for task runners)
this task if you're also using the npm task. Provides npm
credentials to an .npmrc file in your repository for the
scope of the build. This enables npm task runners like gulp
and Grunt to authenticate with private registries.

- How to use npm packages when building Azure Pipelines, TFS 2015 RTM and newer
npm task
code in Azure Pipelines

- Configure NuGet tools to Azure Pipelines


NuGet Authenticate
authenticate with Azure Artifacts and other NuGet
repositories

- Learn all Azure Pipelines, TFS 2018 and newer


NuGet restore, pack, and publish task
about how you can make use of NuGet packages when
you are building code

- How to upload a Azure Pipelines


PyPI Publisher task (Deprecated)
package to PyPI when building code

- Sets up authentication Azure Pipelines


Python Pip Authenticate
with pip so you can perform pip commands in your
pipeline.

- Sets up Azure Pipelines


Python Twine Upload Authenticate
authentication with twine to Python feeds so you can
publish Python packages in your pipeline.

- Learn Azure Pipelines, TFS 2018 and newer


Universal Package, download and publish task
all about how you can make use of Universal packages
when you are building code

Deploy
TASK | VERSIONS

- Distribute app builds to Azure Pipelines, TFS 2017 and newer


App Center Distribute task
testers and users through App Center

- The Azure App Azure Pipelines, Azure DevOps Server 2019


Azure App Service Deploy task
Service Deploy task is used to update Azure App Services
to deploy Web Apps, Functions, and WebJobs.

- Start, Stop, Restart, Azure Pipelines


Azure App Service Manage task
Slot swap, Swap with Preview, Install site extensions, or
Enable Continuous Monitoring for an Azure App Service

- Azure App Service Azure Pipelines


Azure App Service Settings task
Settings Task supports configuring App settings,
connection strings and other general settings in bulk using
JSON syntax on your web app or any of its deployment
slots.

Azure CLI task - build task to run a shell or batch Azure Pipelines, Azure DevOps Server 2019
script containing Microsoft Azure CLI commands

- Deploy an Azure Pipelines


Azure Cloud Service Deployment task
Azure Cloud Service

- Run Azure Pipelines


Azure Database for Mysql Deployment task
your scripts and make changes to your Azure DB for
Mysql.

- build task to copy files to Azure Pipelines, TFS 2015.3 and newer
Azure File Copy task
Microsoft Azure storage blobs or virtual machines (VMs)

- Deploy Azure Pipelines


Azure Function App for Container task
Azure Functions on Linux using custom images

- The Azure App Service Azure Pipelines


Azure Function App task
Deploy task is used to update Azure App Services to
deploy Web Apps, Functions, and WebJobs.

- Azure Key Vault task for use in Azure Pipelines, Azure DevOps Server 2019
Azure Key Vault task
the jobs of all of your build and release pipelines

- Configure alerts on Azure Pipelines


Azure Monitor Alerts task
available metrics for an Azure resource

- Security and compliance Azure Pipelines, Azure DevOps Server 2019


Azure Policy task
assessment with Azure policies

Azure PowerShell task - Run a PowerShell script Azure Pipelines


within an Azure environment

- Deploy, Azure Pipelines


Azure Resource Group Deployment task
start, stop, or delete Azure Resource Groups

- Deploy Azure Pipelines


Azure SQL Database Deployment task
Azure SQL DB using DACPAC or run scripts using
SQLCMD

- Azure Pipelines
Azure virtual machine scale set deployment task
Deploy virtual machine scale set image

- Deploy Web Azure Pipelines


Azure Web App for Container task
Apps, Functions, and WebJobs to Azure App Services

- The Azure App Service Deploy Azure Pipelines


Azure Web App task
task is used to update Azure App Services to deploy Web
Apps, Functions, and WebJobs.

- Build a machine image Azure Pipelines


Build Machine Image task
using Packer to use for Azure Virtual machine scale set
deployment

- Run scripts with Knife commands on Azure Pipelines


Chef Knife task
your Chef workstation

- Deploy to Chef environments by editing Azure Pipelines


Chef task
environment attributes

- Copy Files Over SSH task Azure Pipelines, TFS 2017 and newer
Copy Files Over SSH task
for use in the jobs of all of your build and release pipelines

- Deploy a website or web Azure Pipelines


IIS Web App Deploy task
app using WebDeploy

- Create or update a Azure Pipelines


IIS Web App Manage task
Website, Web App, Virtual Directory, or Application Pool

- Deploy, configure, or update a Azure Pipelines


Kubectl task
Kubernetes cluster in Azure Container Service by running
kubectl commands.

- Bake and deploy Azure Pipelines


Kubernetes manifest task
manifests to Kubernetes clusters

Azure Pipelines
MySQL Database Deployment On Machine Group
task - The task is used to deploy for MySQL Database.

- Deploy, Azure Pipelines, Azure DevOps Server 2019


Package and Deploy Helm Charts task
configure, update your Kubernetes cluster in Azure
Container Service by running helm commands.

- PowerShell on Azure Pipelines, TFS 2015 RTM and newer


PowerShell on Target Machines task
Target Machines build task

- Service Azure Pipelines, TFS 2017 and newer


Service Fabric Application Deployment task
Fabric Application Deployment task

- Service Fabric Azure Pipelines, Azure DevOps Server 2019


Service Fabric Compose Deploy task
Compose Deploy Deployment task

- SSH task for use in the jobs Azure Pipelines, TFS 2017 and newer
SSH Deployment task
of all of your build and release pipelines

- Copy application Azure Pipelines, TFS 2015 RTM and newer


Windows Machine File Copy task
files and other artifacts to remote Windows machines

- Deploy to Azure Pipelines


WinRM SQL Server DB Deployment task
SQL Server Database using DACPAC or SQL scripts

Tool
TASK | VERSIONS

- Install the Docker CLI on an Azure Pipelines, Azure DevOps Server 2019
Docker Installer task
agent machine

Go Tool Installer task - Finds or downloads a specific Azure Pipelines


version of the Go tool into the tools cache and adds it to
the PATH

- Install helm on an agent Azure Pipelines


Helm installer task
machine

- Change the version of Java Azure Pipelines


Java Tool Installer task

- Install kubectl on an agent Azure Pipelines


Kubectl installer task
machine

- Find, download, and Azure Pipelines


Node.js Tool Installer task
cache a specified version of Node.js and add it to the PATH

- Find, download, and cache Azure Pipelines


NuGet Tool Installer task
a specified version of NuGet and add it to the PATH

- Acquires a specific version of Azure Pipelines


Use .NET Core task
.NET Core from the internet or the tools cache and adds it
to the PATH

- Select a version of Python Azure Pipelines


Use Python Version task
to run on an agent and optionally add it to PATH

- Select a version of Ruby to Azure Pipelines


Use Ruby Version task
run on an agent and optionally add it to PATH

- Acquires Azure Pipelines


Visual Studio Test Platform Installer task
the test platform from nuget.org or the tools cache and
can allow you to run tests and collect diagnostic data
To learn more about tool installer tasks, see Tool installers.

Open source
These tasks are open source on GitHub. Feedback and contributions are welcome.

FAQ
Where can I learn step-by-step how to build my app?
Build your app
Can I add my own build tasks?
Yes: Add a build task
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some
features are available on-premises if you have upgraded to the latest version of TFS.
.NET Core CLI task
11/2/2020 • 10 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Azure Pipelines
Use this task to build, test, package, or publish a dotnet application, or to run a custom dotnet command. For
package commands, this task supports NuGet.org and authenticated feeds like Package Management and
MyGet.
If your .NET Core or .NET Standard build depends on NuGet packages, make sure to add two copies of this step:
one with the restore command and one with the build command.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

YAML snippet
# .NET Core
# Build, test, package, or publish a dotnet application, or run a custom dotnet command
- task: DotNetCoreCLI@2
  inputs:
    #command: 'build' # Options: build, push, pack, publish, restore, run, test, custom
    #publishWebProjects: true # Required when command == Publish
    #projects: # Optional
    #custom: # Required when command == Custom
    #arguments: # Optional
    #publishTestResults: true # Optional
    #testRunTitle: # Optional
    #zipAfterPublish: true # Optional
    #modifyOutputPath: true # Optional
    #feedsToUse: 'select' # Options: select, config
    #vstsFeed: # Required when feedsToUse == Select
    #includeNuGetOrg: true # Required when feedsToUse == Select
    #nugetConfigPath: # Required when feedsToUse == Config
    #externalFeedCredentials: # Optional
    #noCache: false
    restoreDirectory:
    #verbosityRestore: 'Detailed' # Options: -, quiet, minimal, normal, detailed, diagnostic
    #packagesToPush: '$(Build.ArtifactStagingDirectory)/*.nupkg' # Required when command == Push
    #nuGetFeedType: 'internal' # Required when command == Push # Options: internal, external
    #publishVstsFeed: # Required when command == Push && NuGetFeedType == Internal
    #publishPackageMetadata: true # Optional
    #publishFeedCredentials: # Required when command == Push && NuGetFeedType == External
    #packagesToPack: '**/*.csproj' # Required when command == Pack
    #packDirectory: '$(Build.ArtifactStagingDirectory)' # Optional
    #nobuild: false # Optional
    #includesymbols: false # Optional
    #includesource: false # Optional
    #versioningScheme: 'off' # Options: off, byPrereleaseNumber, byEnvVar, byBuildNumber
    #versionEnvVar: # Required when versioningScheme == ByEnvVar
    #majorVersion: '1' # Required when versioningScheme == ByPrereleaseNumber
    #minorVersion: '0' # Required when versioningScheme == ByPrereleaseNumber
    #patchVersion: '0' # Required when versioningScheme == ByPrereleaseNumber
    #buildProperties: # Optional
    #verbosityPack: 'Detailed' # Options: -, quiet, minimal, normal, detailed, diagnostic
    workingDirectory:

Arguments
ARGUMENT | DESCRIPTION

command The dotnet command to run. Select custom to add


Command arguments or use a command not listed here.
Options: build , push , pack , publish , restore , run
, test , custom

selectOrConfig You can either choose to select a feed from Azure Artifacts
Feeds to use and/or NuGet.org here, or commit a NuGet.config file to
your source code repository and set its path using the
nugetConfigPath argument.
Options: select , config
Argument aliases: feedsToUse

versioningScheme Cannot be used with include referenced projects. If you


Automatic package versioning choose 'Use the date and time', this will generate a SemVer-
compliant version formatted as X.Y.Z-ci-datetime where
you choose X, Y, and Z.
If you choose 'Use an environment variable', you must
select an environment variable and ensure it contains
the version number you want to use.
If you choose 'Use the build number', this will use the
build number to version your package. Note: Under
Options set the build number format to be
'$(BuildDefinitionName)_$(Year:yyyy).$(Month).$(DayOf
Month)$(Rev:.r)'
Options: off , byPrereleaseNumber , byEnvVar ,
byBuildNumber ,

arguments Arguments to the selected command. For example, build


Arguments configuration, output folder, runtime. The arguments depend
on the command selected
Note: This input only currently accepts arguments for
build , publish , run , test , custom . If you would like
to add arguments for a command not listed, use custom .

projects The path to the csproj file(s) to use. You can use wildcards
Path to project(s) (e.g. **/*.csproj for all .csproj files in all subfolders).

noCache Prevents NuGet from using packages from local machine


Disable local cache caches.

packagesDirectory Specifies the folder in which packages are installed. If no


Destination directory folder is specified, packages are restored into the default
NuGet package cache
Argument aliases: restoreDirectory

buildProperties Specifies a list of token = value pairs, separated by


Additional build properties semicolons, where each occurrence of $token$ in the .nuspec
file will be replaced with the given value. Values can be
strings in quotation marks

verbosityPack Specifies the amount of detail displayed in the output for the
Verbosity pack command.

verbosityRestore Specifies the amount of detail displayed in the output for the
Verbosity restore command.

workingDirectory Current working directory where the script is run. Empty is


Working Directory the root of the repo (build) or artifacts (release), which is
$(System.DefaultWorkingDirectory)

searchPatternPush The pattern to match or path to nupkg files to be uploaded.


Path to NuGet package(s) to publish Multiple patterns can be separated by a semicolon, and you
can make a pattern negative by prefixing it with ! .
Example: **/*.nupkg;!**/*.Tests.nupkg.
Argument aliases: packagesToPush

nuGetFeedType Specifies whether the target feed is internal or external.


Target feed location Options: internal , external

feedPublish Select a feed hosted in your organization. You must have


Target feed Package Management installed and licensed to select a feed
here
Argument aliases: publishVstsFeed

publishPackageMetadata Associate this build/release pipeline’s metadata (run ID,


Publish pipeline metadata source code information) with the package

externalEndpoint The NuGet service connection that contains the external


NuGet server NuGet server’s credentials.
Argument aliases: publishFeedCredentials

searchPatternPack Pattern to search for csproj or nuspec files to pack. You can
Path to csproj or nuspec file(s) to pack separate multiple patterns with a semicolon, and you can
make a pattern negative by prefixing it with ! . Example:
**/*.csproj;!**/*.Tests.csproj
Argument aliases: packagesToPack

configurationToPack When using a csproj file this specifies the configuration to


Configuration to Package package.
Argument aliases: configuration

outputDir Folder where packages will be created. If empty, packages will


Package Folder be created alongside the csproj file.
Argument aliases: packDirectory

nobuild Don't build the project before packing. Corresponds to the


Do not build --no-build parameter of the `build` command.

includesymbols Additionally creates symbol NuGet packages. Corresponds to


Include Symbols the --include-symbols command line parameter.

includesource Includes source code in the package. Corresponds to the


Include Source --include-source command line parameter.

publishWebProjects If true, the task will try to find the web projects in the
Publish Web Projects repository and run the publish command on them. Web
projects are identified by presence of either a web.config file
or wwwroot folder in the directory. Note that this argument
defaults to true if not specified.

zipAfterPublish If true, folder created by the publish command will be zipped.


Zip Published Projects

modifyOutputPath If true, folders created by the publish command will have


Add project name to publish path project file name prefixed to their folder names when output
path is specified explicitly in arguments. This is useful if you
want to publish multiple projects to the same folder.

publishTestResults Enabling this option will generate a test results TRX file in
Publish test results $(Agent.TempDirectory) and results will be published to
the server.
This option appends
--logger trx --results-directory
$(Agent.TempDirectory)
to the command line arguments.
Code coverage can be collected by adding
--collect "Code coverage" to the command line
arguments. This is currently only available on the Windows
platform.

testRunTitle Provides a name for the test run


Test run title

custom The command to pass to dotnet.exe for execution.


Custom command For a full list of available commands, see the dotnet CLI
documentation

feedRestore Include the selected feed in the generated NuGet.config. You


Use packages from this Azure Artifacts/TFS feed must have Package Management installed and licensed to
select a feed here. Note that this is not supported for the
test command.
Argument aliases: vstsFeed

includeNuGetOrg Include NuGet.org in the generated NuGet.config.


Use packages from NuGet.org

nugetConfigPath The NuGet.config in your repository that specifies the feeds


Path to NuGet.config from which to restore packages.

externalEndpoints Credentials to use for external registries located in the


Credentials for feeds outside this organization/collection selected NuGet.config. For feeds in this
organization/collection, leave this blank; the build’s
credentials are used automatically
Argument aliases: externalFeedCredentials

versionEnvVar Enter the variable name without $, $env, or %


Environment variable

requestedMajorVersion The 'X' in version X.Y.Z.


Major Argument aliases: majorVersion

requestedMinorVersion The 'Y' in version X.Y.Z.


Minor Argument aliases: minorVersion

requestedPatchVersion The 'Z' in version X.Y.Z.


Patch Argument aliases: patchVersion

CONTROL OPTIONS

Examples
Build
Build a project

# Build project
- task: DotNetCoreCLI@2
  inputs:
    command: 'build'

Build Multiple Projects

# Build multiple projects
- task: DotNetCoreCLI@2
  inputs:
    command: 'build'
    projects: |
      src/proj1/proj1.csproj
      src/proj2/proj2.csproj
      src/other/other.sln # Pass a solution instead of a csproj.

Push
Push NuGet packages to internal feed

# Push non test NuGet packages from a build to internal organization Feed
- task: DotNetCoreCLI@2
  inputs:
    command: 'push'
    searchPatternPush: '$(Build.ArtifactStagingDirectory)/*.nupkg;!$(Build.ArtifactStagingDirectory)/*.Tests.nupkg'
    feedPublish: 'FabrikamFeed'

Push NuGet packages to external feed

# Push all NuGet packages from a build to external Feed
- task: DotNetCoreCLI@2
  inputs:
    command: 'push'
    nugetFeedType: 'external'
    externalEndPoint: 'MyNuGetServiceConnection'

Pack
Pack a NuGetPackage to a specific output directory

# Pack a NuGet package to a test directory
- task: DotNetCoreCLI@2
  inputs:
    command: 'pack'
    outputDir: '$(Build.ArtifactStagingDirectory)/TestDir'

Pack a Symbol Package

# Pack a symbol package along with NuGet package
- task: DotNetCoreCLI@2
  inputs:
    command: 'pack'
    includesymbols: true
Publish
Publish projects to specified folder

# Publish projects to specified folder.
- task: DotNetCoreCLI@2
  displayName: 'dotnet publish'
  inputs:
    command: publish
    publishWebProjects: false
    projects: '**/*.csproj'
    arguments: '-o $(Build.ArtifactStagingDirectory)/Output'
    zipAfterPublish: true
    modifyOutputPath: true

Test
Run tests in your repository

# Run tests and auto publish test results.
- task: DotNetCoreCLI@2
  inputs:
    command: 'test'

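Building on the argument descriptions above, here is a hedged variation that also sets a test run title and collects code coverage (Windows agents only); the project pattern and run title are placeholders:

# Run tests with a custom run title and code coverage collection
- task: DotNetCoreCLI@2
  inputs:
    command: 'test'
    projects: '**/*Tests/*.csproj'
    arguments: '--configuration Release --collect "Code coverage"'
    testRunTitle: 'Unit tests'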
FAQ
Why is my build, publish, or test step failing to restore packages?
Most dotnet commands, including build , publish , and test include an implicit restore step. This will fail
against authenticated feeds, even if you ran a successful dotnet restore in an earlier step, because the earlier
step will have cleaned up the credentials it used.
To fix this issue, add the --no-restore flag to the Arguments textbox.
In addition, the test command does not recognize the feedRestore or vstsFeed arguments and feeds
specified in this manner will not be included in the generated NuGet.config file when the implicit restore step
runs. It is recommended that an explicit dotnet restore step be used to restore packages. The restore
command respects the feedRestore and vstsFeed arguments.
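A minimal sketch of that recommendation, assuming a hypothetical organization feed named MyOrgFeed:

# Restore explicitly against the authenticated feed, then build without an implicit restore
- task: DotNetCoreCLI@2
  inputs:
    command: 'restore'
    feedsToUse: 'select'
    vstsFeed: 'MyOrgFeed'
- task: DotNetCoreCLI@2
  inputs:
    command: 'build'
    arguments: '--no-restore'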
Why should I check in a NuGet.config?
Checking a NuGet.config into source control ensures that a key piece of information needed to build your
project, the location of its packages, is available to every developer that checks out your code.
However, for situations where a team of developers works on a large range of projects, it's also possible to add
an Azure Artifacts feed to the global NuGet.config on each developer's machine. In these situations, using the
"Feeds I select here" option in the NuGet task replicates this configuration.

Troubleshooting
File structure for output files is different from previous builds
Azure DevOps hosted agents are configured with .NET Core 3.0, 2.1, and 2.2. The CLI for .NET Core 3.0 behaves differently
when publishing projects with the output folder argument (-o): the output folder is created in the root directory rather than
in the project file's directory. As a result, when publishing more than one project, all the files are published to the same
directory, which causes an issue.
To resolve this issue, use the Add project name to publish path parameter (modifyOutputPath in YAML) in the
.NET Core CLI task. This creates a subfolder named after the project file inside the output folder, so each of your
projects is published to its own subfolder under the main output folder.

steps:
- task: DotNetCoreCLI@2
  displayName: 'dotnet publish'
  inputs:
    command: publish
    publishWebProjects: false
    projects: '**/*.csproj'
    arguments: '-o testpath'
    zipAfterPublish: false
    modifyOutputPath: true

Project using Entity Framework has stopped working on Hosted Agents


The latest .NET Core (3.0) does not have Entity Framework (EF) built in. You will have to either install EF before
beginning execution, or add a global.json to the project with the required .NET Core SDK version. This ensures that the
correct SDK is used to build the EF project. If the required version is not present on the machine, add the UseDotNetV2
task to your pipeline to install the required version. Learn more about EF with .NET Core 3.0
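As a hedged sketch, the installer step could look like the following; the SDK version shown is an assumption, so match it to the version pinned in your global.json:

# Install the .NET Core SDK version required by the project before building
- task: UseDotNet@2
  inputs:
    packageType: 'sdk'
    version: '3.0.x'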

Open Source
This task is open source on GitHub. Feedback and contributions are welcome.
Android build task (deprecated; use Gradle)
4/10/2020 • 2 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to build an Android app using Gradle and optionally start the emulator for unit tests.

Deprecated
The Android Build task has been deprecated. Use the Gradle task instead.
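As a hedged sketch of the replacement, the Gradle task can build the app instead; the wrapper path and Gradle tasks below are placeholders for your project's values:

# Build the Android app with the Gradle wrapper checked into the repository
- task: Gradle@2
  inputs:
    gradleWrapperFile: 'gradlew'
    tasks: 'assembleDebug'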

Demands
The build agent must have the following capabilities:
Android SDK (with the version number you will build against)
Android Support Repository (if referenced by Gradle file)

Arguments
ARGUMENT | DESCRIPTION

Location of Gradle Wrapper The location in the repository of the gradlew wrapper
used for the build. For agents on Windows (including
Microsoft-hosted agents), you must use the
gradlew.bat wrapper. Agents on Linux or macOS can
use the gradlew shell script.
See The Gradle Wrapper.

Project Directory Relative path from the repo root to the root directory of the
application (likely where your build.gradle file is).

Gradle Arguments Provide any options to pass to the Gradle command line.
The default value is build
See Gradle command line.

ANDROID VIRTUAL DEVICE (AVD) OPTIONS

Name Name of the AVD to be started or created.


Note: You must deploy your own agent to use this
option. You cannot use a Microsoft-hosted pool if you
want to create an AVD.

Create AVD Select this check box if you would like the AVD to be
created if it does not exist.

AVD Target SDK Android SDK version the AVD should target. The default value
is android-19

AVD Device (Optional) Device pipeline to use. Can be a device index or id.
The default value is Nexus 5

AVD ABI The Application Binary Interface to use for the AVD. The
default value is default/armeabi-v7a
See ABI Management.

Overwrite Existing AVD Select this check box if an existing AVD with the same name
should be overwritten.

Create AVD Optional Arguments Provide any options to pass to the android create avd
command.
See Android Command Line.

EMULATOR OPTIONS

Start and Stop Android Emulator Check if you want the emulator to be started and stopped
when Android Build task finishes.
Note: You must deploy your own agent to use this
option. You cannot use a Microsoft-hosted pool if you
want to use an emulator.

Timeout in Seconds How long should the build wait for the emulator to start. The
default value is 300 seconds.

Headless Display Check if you want to start the emulator with no GUI (headless
mode).

Emulator Optional Arguments (Optional) Provide any options to pass to the emulator
command. The default value is
-no-snapshot-load -no-snapshot-save

Delete AVD Check if you want the AVD to be deleted upon completion.

CONTROL OPTIONS

Related tasks
Android Signing
Android signing task
2/26/2020 • 2 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task in a pipeline to sign and align Android APK files.

Demands
The build agent must have the following capabilities:
Java JDK

YAML snippet
# Android signing
# Sign and align Android APK files
- task: AndroidSigning@3
  inputs:
    #apkFiles: '**/*.apk'
    #apksign: true # Optional
    #apksignerKeystoreFile: # Required when apksign == True
    #apksignerKeystorePassword: # Optional
    #apksignerKeystoreAlias: # Optional
    #apksignerKeyPassword: # Optional
    #apksignerArguments: '--verbose' # Optional
    #apksignerFile: # Optional
    #zipalign: true # Optional
    #zipalignFile: # Optional

Arguments
ARGUMENT | DESCRIPTION

files (Required) Relative path from the repo root to the APK(s)
APK files you want to sign. You can use wildcards to specify
multiple files. For example:
outputs\apk*.apk to sign all .APK files in the
outputs\apk\ subfolder
*\bin\.apk to sign all .APK files in all bin subfolders

Default value: **/*.apk


Argument aliases: apkFiles

SIGN IN G O P T IO N S

apksign (Optional) Select this option to sign the APK with a provided
Sign the APK Android Keystore file. Unsigned APKs can only run in an
emulator. APKs must be signed to run on a device.
Default value: true
A RGUM EN T DESC RIP T IO N

keystoreFile (Required) Select or enter the file name of the Android


Keystore file Keystore file that should be used to sign the APK. This file
must be uploaded to the secure files library where it is
securely stored with encryption. The Android Keystore file will
be used to sign the APK, but will be removed from the agent
machine when the pipeline completes.
Argument aliases: apksignerKeystoreFile

keystorePass (Optional) Enter the password for the provided Android


Keystore password Keystore file.
Impor tant: Use a new variable with its lock enabled on
the Variables pane to encrypt this value. See secret
variables.

Argument aliases: apksignerKeystorePassword

keystoreAlias (Optional) Enter the alias that identifies the public/private key
Alias pair to be used in the keystore file.
Argument aliases: apksignerKeystoreAlias

keyPass (Optional) Enter the key password for the alias and Android
Key password Keystore file.
Impor tant: Use a new variable with its lock enabled on
the Variables pane to encrypt this value. See secret
variables.

Argument aliases: apksignerKeyPassword

apksignerArguments (Optional) Provide any options to pass to the apksigner


apksigner arguments command line. Default is: --verbose .
See the apksigner documentation.

Default value: --verbose

apksignerLocation (Optional) Optionally specify the location of the apksigner


apksigner location executable used during signing. This defaults to the
apksigner found in the Android SDK version folder that
your application builds against.

Argument aliases: apksignerFile

Z IPA L IGN O P T IO N S

zipalign (Optional) Select if you want to zipalign your package.


Zipalign This reduces the amount of RAM consumed by an app.

Default value: true


A RGUM EN T DESC RIP T IO N

zipalignLocation (Optional) Optionally specify the location of the zipalign


Zipalign location executable used during signing. This defaults to the
zipalign found in the Android SDK version folder that your
application builds against.

Argument aliases: zipalignFile

C O N T RO L O P T IO N S
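The following YAML sketch shows how these inputs fit together. The keystore file name and the variable names are placeholders for illustration; replace them with the keystore you uploaded to the secure files library and your own secret variables.

- task: AndroidSigning@3
  displayName: Sign and align APK
  inputs:
    apkFiles: '**/*.apk'
    apksign: true
    apksignerKeystoreFile: 'contoso-release.keystore'   # placeholder: name of a keystore uploaded to the secure files library
    apksignerKeystorePassword: '$(keystorePassword)'    # placeholder: secret pipeline variable
    apksignerKeystoreAlias: '$(keystoreAlias)'          # placeholder
    apksignerKeyPassword: '$(keyPassword)'              # placeholder: secret pipeline variable
    zipalign: true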

Related tasks
Android Build
Ant task
6/2/2020

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to build with Apache Ant.

Demands
The build agent must have the following capability:
Apache Ant

YAML snippet
# Ant
# Build with Apache Ant
- task: Ant@1
inputs:
#buildFile: 'build.xml'
#options: # Optional
#targets: # Optional
#publishJUnitResults: true
#testResultsFiles: '**/TEST-*.xml' # Required when publishJUnitResults == True
#testRunTitle: # Optional
#codeCoverageToolOptions: 'None' # Optional. Options: none, cobertura, jaCoCo
#codeCoverageClassFilesDirectories: '.' # Required when codeCoverageToolOptions != None
#codeCoverageClassFilter: # Optional. Comma-separated list of filters to include or exclude classes from
collecting code coverage. For example: +:com.*,+:org.*,-:my.app*.*
#codeCoverageSourceDirectories: # Optional
#codeCoverageFailIfEmpty: false # Optional
#antHomeDirectory: # Optional
#javaHomeOption: 'JDKVersion' # Options: jDKVersion, path
#jdkVersionOption: 'default' # Optional. Options: default, 1.11, 1.10, 1.9, 1.8, 1.7, 1.6
#jdkUserInputDirectory: # Required when javaHomeOption == Path
#jdkArchitectureOption: 'x64' # Optional. Options: x86, x64

Arguments
ARGUMENT | DESCRIPTION

antBuildFile (Required) Relative path from the repository root to the Ant
Ant build file build file.
For more information about build files, see Using Apache Ant
Default value: build.xml
Argument aliases: buildFile

options (Optional) Options that you want to pass to the Ant


Options command line. You can provide your own properties (for
example, -DmyProperty=myPropertyValue ) and also use
built-in variables (for example,
-DcollectionId=$(system.collectionId) ). Alternatively,
the built-in variables are already set as environment variables
during the build and can be passed directly (for example,
-DcollectionIdAsEnvVar=%SYSTEM_COLLECTIONID% ).
See Running Apache Ant.

targets (Optional) Target(s) for Ant to execute for this build.


Target(s) See Using Apache Ant Targets.

JUNIT TEST RESULTS

publishJUnitResults (Required) Select this option to publish JUnit test results


Publish to Azure Pipelines produced by the Ant build to Azure Pipelines or your on-
premises Team Foundation Server. Each test result file that
matches Test Results Files is published as a test run.
Default value: true

testResultsFiles (Test Results Files)
(Required) Test results files path. Wildcards can be used. For example, **/TEST-*.xml for all XML files whose name starts with TEST-.
Default value: **/TEST-*.xml

testRunTitle (Optional) Assign a title for the JUnit test case results for this
Test Run Title build.

CODE COVERAGE

codeCoverageTool (Optional) Select the code coverage tool you want to use.
Code Coverage Tool
If you are using the Microsoft-hosted agents, then the
tools are set up for you. If you are using on-premises
Windows agent, then if you select:
JaCoCo, make sure jacocoant.jar is available in lib
folder of Ant installation. See JaCoCo.
Cobertura, set up an environment variable
COBERTURA_HOME pointing to the Cobertura .jar
files location. See Cobertura.
After you select one of these tools, the following
arguments appear.

Default value: None


Argument aliases: codeCoverageToolOptions

classFilesDirectories (Required) Specify a comma-separated list of relative


Class Files Directories paths from the Ant build file to the directories that
contain your .class files, archive files (such as .jar and
.war). Code coverage is reported for class files present in
the directories. Directories and archives are searched
recursively for class files.
For example: target/classes,target/testClasses.

Default value: .
Argument aliases: codeCoverageClassFilesDirectories

classFilter (Class Inclusion/Exclusion Filters)
(Optional) Specify a comma-separated list of filters to include or exclude classes from collecting code coverage. For example: +:com.*,+:org.*,-:my.app*.*

Argument aliases: codeCoverageClassFilter

srcDirectories (Optional) Specify a comma-separated list of relative


Source Files Directories paths from the Ant build file to your source directories.
Code coverage reports will use these paths to highlight
source code. For example: src/java,src/Test.

Argument aliases: codeCoverageSourceDirectories

failIfCoverageEmpty (Optional) Fail the build if code coverage did not produce
Fail when code coverage results are missing any results to publish

Default value: false


Argument aliases: codeCoverageFailIfEmpty

ADVANCED

antHomeUserInputPath (Optional) If set, overrides any existing ANT_HOME


Set ANT_HOME Path environment variable with the given path.
Argument aliases: antHomeDirectory

javaHomeSelection (Required) Sets JAVA_HOME either by selecting a JDK version


Set JAVA_HOME by that will be discovered during builds or by manually entering
a JDK path.
Default value: JDKVersion
Argument aliases: javaHomeOption

jdkVersion (Optional) Will attempt to discover the path to the selected


JDK version JDK version and set JAVA_HOME accordingly.
Default value: default
Argument aliases: jdkVersionOption

jdkUserInputPath (Required) Sets JAVA_HOME to the given path


JDK Path Argument aliases: jdkUserInputDirectory

jdkArchitecture (Optional) Optionally supply the architecture (x86, x64) of the


JDK Architecture JDK.
Default value: x64
Argument aliases: jdkArchitectureOption

CONTROL OPTIONS
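For reference, the following YAML sketch shows a typical Ant step that publishes JUnit results and collects JaCoCo coverage. The target names and class file directories are illustrative assumptions for a typical project layout.

- task: Ant@1
  displayName: Ant build and test
  inputs:
    buildFile: 'build.xml'
    targets: 'clean test'                                                    # assumed targets defined in build.xml
    publishJUnitResults: true
    testResultsFiles: '**/TEST-*.xml'
    codeCoverageToolOptions: 'JaCoCo'
    codeCoverageClassFilesDirectories: 'target/classes,target/testClasses'   # assumed output directories
    javaHomeOption: 'JDKVersion'
    jdkVersionOption: 'default'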

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
Azure IoT Edge task
11/2/2020

Use this task to build, test, and deploy applications quickly and efficiently to Azure IoT Edge.

Container registry types


Azure Container Registry
PARAMETERS | DESCRIPTION

containerregistrytype (Required) Select Azure Container Registry for ACR or Generic


Container registry type Container Registry for generic registries including Docker hub.

azureSubscriptionEndpoint (Required, if containerregistrytype = Azure Container Registry)


Azure subscription Select an Azure subscription.

azureContainerRegistry (Required) Select an Azure Container Registry.


Azure Container Registry

Other container registries


PARAMETERS | DESCRIPTION

containerregistrytype (Required) Select Azure Container Registry for ACR or Generic


Container registry type Container Registry for generic registries including Docker hub.

dockerRegistryEndpoint (Required) Select a generic Docker registry connection.


Docker Registry Connection Required for Build and Push
Argument aliases: dockerRegistryConnection
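
As a sketch, a push step against a generic registry might look like the following. The service connection name is a placeholder; create a Docker registry service connection for your registry first.

- task: AzureIoTEdge@2
  displayName: AzureIoTEdge - Push module images
  inputs:
    action: Push module images
    containerregistrytype: Generic Container Registry
    dockerRegistryEndpoint: ContosoDockerRegistry   # placeholder: name of a Docker registry service connection
    templateFilePath: deployment.template.json
    defaultPlatform: amd64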

Build module images


PARAMETERS | DESCRIPTION

action (Required) Select an Azure IoT Edge action.


Action Default value: Build module images.

templateFilePath (Required) The path of your Azure IoT Edge solution


.template.json file .template.json file. This file defines the modules and routes in
an Azure IoT Edge solution. The filename must end with
.template.json.
Default value: deployment.template.json.

defaultPlatform (Required) In your .template.json file you can leave the


Default platform modules platform unspecified, in which case the default
platform will be used.
Default value: amd64.

The following YAML example builds module images:


- task: AzureIoTEdge@2
displayName: AzureIoTEdge - Build module images
inputs:
action: Build module images
templateFilePath: deployment.template.json
defaultPlatform: amd64

Push module images


PARAMETERS | DESCRIPTION

action (Required) Select an Azure IoT Edge action.


Action Default value: Build module images.

templateFilePath (Required) The path of your Azure IoT Edge solution


.template.json file .template.json file. This file defines the modules and routes in
an Azure IoT Edge solution. The filename must end with
.template.json.
Default value: deployment.template.json.

defaultPlatform (Required) In your .template.json file you can leave the


Default platform modules platform unspecified, in which case the default
platform will be used.
Default value: amd64.

bypassModules (Optional) Specify the module(s) that you do not need to build
Bypass module(s) or push from the list of module names separated by commas
in the .template.json file. For example, if you have two
modules, "SampleModule1,SampleModule2" in your file and
you want to build or push just SampleModule1, specify
SampleModule2 as the bypass module(s). Leave empty to
build or push all the modules in .template.json.

The following YAML example pushes module images:

variables:
azureSubscriptionEndpoint: Contoso
azureContainerRegistry: contoso.azurecr.io

steps:
- task: AzureIoTEdge@2
displayName: AzureIoTEdge - Push module images
inputs:
action: Push module images
containerregistrytype: Azure Container Registry
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
templateFilePath: deployment.template.json
defaultPlatform: amd64

Deploy to IoT Edge devices


PARAMETERS | DESCRIPTION

action (Required) Select an Azure IoT Edge action.


Action Default value: Build module images.

deploymentFilePath (Required) Select the deployment JSON file. If this task is in a


Deployment file release pipeline, you must specify the location of the
deployment file within the artifacts (the default value works
for most conditions). If this task is in a build pipeline, you must
specify the Path of output deployment file.
Default value: $(System.DefaultWorkingDirectory)/**/*.json

connectedServiceNameARM (Required) Select an Azure subscription that contains an IoT


Azure subscription contains IoT Hub Hub
Argument aliases: azureSubscription

iothubname (Required) Select the IoT Hub


IoT Hub name

deviceOption (Required) Choose to deploy to a single device, or to multiple


Choose single/multiple device devices specified by using tags.

deploymentid (Required) Enter the IoT Edge Deployment ID. If an ID already


IoT Edge deployment ID exists, it will be overridden. Up to 128 lowercase letters,
numbers, and the characters
- : + % _ # * ? ! ( ) , = @ ; . More details.
Default value: $(System.TeamProject)-devops-deployment.

priority (Required) A positive integer used to resolve deployment


IoT Edge deployment priority conflicts. When a device is targeted by multiple deployments it
will use the one with highest priority or, in the case of two
deployments with the same priority, the one with the latest
creation time.
Default value: 0.

targetcondition (Required) Specify the target condition of the devices to which


IoT Edge device target condition you want to deploy. For example, tags.building=9 and
tags.environment='test'. Do not include double quotes. More
details.

deviceId (Required) Specify the IoT Edge Device ID.


IoT Edge device ID

The following YAML example deploys to IoT Edge devices:

steps:
- task: AzureIoTEdge@2
displayName: 'Azure IoT Edge - Deploy to IoT Edge devices'
inputs:
action: 'Deploy to IoT Edge devices'
deploymentFilePath: deployment.template.json
azureSubscription: $(azureSubscriptionEndpoint)
iothubname: iothubname
deviceOption: 'Single Device'
deviceId: deviceId
CMake task
6/2/2020

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to build with the CMake cross-platform build system.

Demands
cmake

IMPORTANT
The Microsoft-hosted agents have CMake installed by default so you don't need to include a demand for CMake in your
azure-pipelines.yml file. If you do include a demand for CMake you may receive an error. To resolve, remove the demand.

YAML snippet
# CMake
# Build with the CMake cross-platform build system
- task: CMake@1
inputs:
#workingDirectory: 'build' # Optional
#cmakeArgs: # Optional

Arguments
ARGUMENT | DESCRIPTION

cwd (Optional) Working directory when CMake is run. The


Working Directory default value is build .
If you specify a relative path, then it is relative to your
repo. For example, if you specify build , the result is the
same as if you specified
$(Build.SourcesDirectory)\build .

You can also specify a full path outside the repo, and you
can use variables. For example:
$(Build.ArtifactStagingDirectory)\build

If the path you specify does not exist, CMake creates it.

Default value: build


Argument aliases: workingDirectory

cmakeArgs (Optional) Arguments that you want to pass to CMake.


Arguments

CONTROL OPTIONS
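
As an illustration, the following sketch configures and then builds a project out of a build directory. The Release build type and the cmake --build invocation are assumptions for a typical project, not defaults of the task.

- task: CMake@1
  displayName: Configure
  inputs:
    workingDirectory: 'build'
    cmakeArgs: '-DCMAKE_BUILD_TYPE=Release ..'   # configure the project that lives one level up from the build directory

- task: CMake@1
  displayName: Build
  inputs:
    workingDirectory: 'build'
    cmakeArgs: '--build . --config Release'      # drive the underlying build tool through CMake
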
Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
How do I enable CMake for Microsoft-hosted agents?
The Microsoft-hosted agents have CMake installed already so you don't need to do anything. You do not need to
add a demand for CMake in your azure-pipelines.yml file.
How do I enable CMake for my on-premises agent?
1. Deploy an agent.
2. Install CMake and make sure to add it to the path of the user that the agent is running as on your agent
machine.
3. In your web browser, navigate to Agent pools . Depending on your version of Azure DevOps or TFS, do one of the following:
   Choose Azure DevOps , Organization settings (or Collection settings ), and then choose Agent pools .
   Navigate to your project and choose Settings (gear icon) > Agent Queues , and then choose Manage pools .
   Navigate to your project and choose Manage project (gear icon), choose Control panel , and then select Agent pools .

4. Navigate to the capabilities tab. Depending on your version, do one of the following:
   From the Agent pools tab, select the desired agent pool, select Agents , choose the desired agent, and then choose the Capabilities tab.
   Select the desired agent, and choose the Capabilities tab.

NOTE
Microsoft-hosted agents don't display system capabilities. For a list of software installed on Microsoft-hosted
agents, see Use a Microsoft-hosted agent.

5. Click Add capability and set the fields to cmake and yes .
6. Click Save changes .
How does CMake work? What arguments can I use?
About CMake
CMake Documentation
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Docker task
11/7/2020

Use this task to build and push Docker images to any container registry using Docker registry service connection.

Overview
The following are the key benefits of using the Docker task compared to directly using the docker client binary in a script:
Integration with Docker registr y ser vice connection - The task makes it easy to use a Docker registry
service connection for connecting to any container registry. Once logged in, the user can author follow up
tasks to execute any tasks/scripts by leveraging the login already done by the Docker task. For example, you
can use the Docker task to sign in to any Azure Container Registry and then use a subsequent task/script to
build and push an image to this registry.
Metadata added as labels - The task adds traceability-related metadata to the image in the form of the
following labels -
com.azure.dev.image.build.buildnumber
com.azure.dev.image.build.builduri
com.azure.dev.image.build.definitionname
com.azure.dev.image.build.repository.name
com.azure.dev.image.build.repository.uri
com.azure.dev.image.build.sourcebranchname
com.azure.dev.image.build.sourceversion
com.azure.dev.image.release.definitionname
com.azure.dev.image.release.releaseid
com.azure.dev.image.release.releaseweburl
com.azure.dev.image.system.teamfoundationcollectionuri
com.azure.dev.image.system.teamproject

Task Inputs
PARAMETERS | DESCRIPTION

command (Required) Possible values: buildAndPush , build , push ,


Command login , logout
Added in version 2.173.0: start , stop
Default value: buildAndPush

containerRegistry (Optional) Name of the Docker registry service connection


Container registry

repository (Optional) Name of repository within the container registry


Repository corresponding to the Docker registry service connection
specified as input for containerRegistry

container (Required for commands start and stop ) The container


Container resource to start or stop

tags (Optional) Multiline input where each line contains a tag to be


Tags used in build , push or buildAndPush commands
Default value: $(Build.BuildId)

Dockerfile (Optional) Path to the Dockerfile. The task will use the first
Dockerfile dockerfile it finds to build the image.
Default value: **/Dockerfile

buildContext (Optional) Path to the build context


Build context Default value: **

arguments (Optional) Additional arguments to be passed onto the docker


Arguments client
Be aware that if you use value buildAndPush for the
command parameter, then the arguments property will be
ignored.

addPipelineData (Optional) Adds the above mentioned metadata as labels to


Add Pipeline Data the image
Possible values: true , false
Default value: true

Login
The following YAML snippet shows a container registry login using a Docker registry service connection:

- task: Docker@2
displayName: Login to ACR
inputs:
command: login
containerRegistry: dockerRegistryServiceConnection1

Build and Push


A convenience command called buildAndPush allows you to build and push images to a container registry in a single
command. The following YAML snippet is an example of building and pushing multiple tags of an image to multiple
registries:
steps:
- task: Docker@2
displayName: Login to ACR
inputs:
command: login
containerRegistry: dockerRegistryServiceConnection1
- task: Docker@2
displayName: Login to Docker Hub
inputs:
command: login
containerRegistry: dockerRegistryServiceConnection2
- task: Docker@2
displayName: Build and Push
inputs:
command: buildAndPush
repository: contosoRepository
tags: |
tag1
tag2

In the above snippet, the images contosoRepository:tag1 and contosoRepository:tag2 are built and pushed to the
container registries corresponding to dockerRegistryServiceConnection1 and dockerRegistryServiceConnection2 .
If one wants to build and push to a specific authenticated container registry instead of building and pushing to all
authenticated container registries at once, the containerRegistry input can be explicitly specified along with
command: buildAndPush as shown below -

steps:
- task: Docker@2
displayName: Build and Push
inputs:
command: buildAndPush
containerRegistry: dockerRegistryServiceConnection1
repository: contosoRepository
tags: |
tag1
tag2

Logout
The following YAML snippet shows a container registry logout using a Docker registry service connection:

- task: Docker@2
displayName: Logout of ACR
inputs:
command: logout
containerRegistry: dockerRegistryServiceConnection1

Start/stop
This task can also be used to control job and service containers. This usage is uncommon, but occasionally used for
unique circumstances.
resources:
containers:
- container: builder
image: ubuntu:18.04
steps:
- script: echo "I can run inside the container (it starts by default)"
target:
container: builder
- task: Docker@2
inputs:
command: stop
container: builder
# any task beyond this point would not be able to target the builder container
# because it's been stopped

Other commands and arguments


The command and argument inputs can be used to pass additional arguments for build or push commands using
docker client binary as shown below -

steps:
- task: Docker@2
displayName: Login to ACR
inputs:
command: login
containerRegistry: dockerRegistryServiceConnection1
- task: Docker@2
displayName: Build
inputs:
command: build
repository: contosoRepository
tags: tag1
arguments: --secret id=mysecret,src=mysecret.txt

NOTE
The arguments input is evaluated for all commands except buildAndPush . Because buildAndPush is a convenience command
( build followed by push ), the arguments input is ignored for this command.

Troubleshooting
Why does Docker task ignore arguments passed to buildAndPush command?
Docker task configured with buildAndPush command ignores the arguments passed since they become ambiguous
to the build and push commands that are run internally. You can split your command into separate build and push
steps and pass the suitable arguments. See this stackoverflow post for example.
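A minimal sketch of the split might look like the following; the secret mount is only an illustration of an extra build argument, reused from the earlier example.

steps:
- task: Docker@2
  displayName: Build with extra arguments
  inputs:
    command: build
    containerRegistry: dockerRegistryServiceConnection1
    repository: contosoRepository
    tags: tag1
    arguments: --secret id=mysecret,src=mysecret.txt
- task: Docker@2
  displayName: Push
  inputs:
    command: push
    containerRegistry: dockerRegistryServiceConnection1
    repository: contosoRepository
    tags: tag1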
DockerV2 only supports a Docker registry service connection and does not support an ARM service connection. How can
I use an existing Azure service principal (SPN) for authentication in the Docker task?
You can create a Docker registry service connection using your Azure SPN credentials. Choose the Others from
Registry type and provide the details as follows:

Docker Registry: Your container registry URL (for example, https://myacr.azurecr.io)


Docker ID: Service principal client ID
Password: Service principal key
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Docker Compose task
11/7/2020

Azure Pipelines
Use this task to build, push or run multi-container Docker applications. This task can be used with a Docker registry
or an Azure Container Registry.

Container registry types


Azure Container Registry
PARAMETERS | DESCRIPTION

containerregistrytype (Optional) Azure Container Registry if using ACR or Container


(Container registry type) Registry if using any other container registry.
Default value: Azure Container Registry

azureSubscriptionEndpoint (Required) Name of the Azure Service Connection. See Azure


(Azure subscription) Resource Manager service connection to manually set up the
connection.
Argument aliases: azureSubscription

azureContainerRegistry (Required) Name of the Azure Container Registry.


(Azure container registry) Example: Contoso.azurecr.io

This YAML example specifies the inputs for Azure Container Registry:

variables:
azureContainerRegistry: Contoso.azurecr.io
azureSubscriptionEndpoint: Contoso
steps:
- task: DockerCompose@0
displayName: Container registry login
inputs:
containerregistrytype: Azure Container Registry
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)

Other container registries


The containerregistr ytype value is required when using any container registry other than ACR. Use
containerregistrytype: Container Registry in this case.

PARAMETERS | DESCRIPTION

containerregistrytype (Required) Azure Container Registry if using ACR or Container


(Container registry type) Registry if using any other container registry.
Default value: Azure Container Registry

dockerRegistryEndpoint (Required) Docker registry service connection.


(Docker registry service connection)

This YAML example specifies a container registry other than ACR where Contoso is the name of the Docker
registry service connection for the container registry:

- task: DockerCompose@0
displayName: Container registry login
inputs:
containerregistrytype: Container Registry
dockerRegistryEndpoint: Contoso

Build service images


PARAMETERS | DESCRIPTION

containerregistrytype (Required) Azure Container Registry if using ACR or Container


(Container Registry Type) Registry if using any other container registry.
Default value: Azure Container Registry

azureSubscriptionEndpoint (Required) Name of the Azure Service Connection.


(Azure subscription)

azureContainerRegistry (Required) Name of the Azure Container Registry.


(Azure Container Registry)

dockerComposeFile (Required) Path to the primary Docker Compose file to use.


(Docker Compose File) Default value: **/docker-compose.yml

additionalDockerComposeFiles (Optional) Additional Docker Compose files to be combined


(Additional Docker Compose Files) with the primary Docker Compose file. Relative paths are
resolved relative to the directory containing the primary
Docker Compose file. If a specified file is not found, it is
ignored. Specify each file path on a new line.

dockerComposeFileArgs (Optional) Environment variables to be set up during the


(Environment Variables) command. Specify each name = value pair on a new line. You
need to use the | operator in YAML to indicate that newlines
should be preserved.
Example: dockerComposeFileArgs: -f
--verbose

projectName (Optional) Project name used for default naming of images


(Project Name) and containers.
Default value: $(Build.Repository.Name)

qualifyImageNames (Optional) Qualify image names for built services with the
(Qualify Image Names) Docker registry service connection's hostname if not otherwise
specified.
Default value: true

action (Required) Select a Docker Compose action.


(Action) Default value: Run a Docker Compose command

additionalImageTags (Optional) Additional tags for the Docker images being built or
(Additional Image Tags) pushed.

includeSourceTags (Optional) Include Git tags when building or pushing Docker


(Include Source Tags) images.
Default value: false

includeLatestTag (Optional) Include the latest tag when building or pushing


(Include Latest Tag) Docker images.
Default value: false

This YAML example builds the image where the image name is qualified on the basis of the inputs related to Azure
Container Registry:

- task: DockerCompose@0
displayName: Build services
inputs:
action: Build services
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
additionalImageTags: $(Build.BuildId)

Push service images


PARAMETERS | DESCRIPTION

containerregistrytype (Required) Azure Container Registry if using ACR or Container


(Container Registry Type) Registry if using any other container registry.
Default value: Azure Container Registry

azureSubscriptionEndpoint (Required) Name of the Azure Service Connection.


(Azure subscription)

azureContainerRegistry (Required) Name of the Azure Container Registry.


(Azure Container Registry)

dockerComposeFile (Required) Path to the primary Docker Compose file to use.


(Docker Compose File) Default value: **/docker-compose.yml

additionalDockerComposeFiles (Optional) Additional Docker Compose files to be combined


(Additional Docker Compose Files) with the primary Docker Compose file. Relative paths are
resolved relative to the directory containing the primary
Docker Compose file. If a specified file is not found, it is
ignored. Specify each file path on a new line.

dockerComposeFileArgs (Optional) Environment variables to be set up during the


(Environment Variables) command. Specify each name=value pair on a new line.

projectName (Optional) Project name used for default naming of images


(Project Name) and containers.
Default value: $(Build.Repository.Name)

qualifyImageNames (Optional) Qualify image names for built services with the
(Qualify Image Names) Docker registry service connection's hostname if not otherwise
specified.
Default value: true

action (Required) Select a Docker Compose action.


(Action) Default value: Run a Docker Compose command

additionalImageTags (Optional) Additional tags for the Docker images being built or
(Additional Image Tags) pushed.

includeSourceTags (Optional) Include Git tags when building or pushing Docker


(Include Source Tags) images.
Default value: false

includeLatestTag (Optional) Include the latest tag when building or pushing


(Include Latest Tag) Docker images.
Default value: false

This YAML example pushes an image to a container registry:

- task: DockerCompose@0
displayName: Push services
inputs:
action: Push services
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
additionalImageTags: $(Build.BuildId)

Run service images


PARAMETERS | DESCRIPTION

dockerComposeFile (Required) Path to the primary Docker Compose file to use.


(Docker Compose File) Default value: **/docker-compose.yml

additionalDockerComposeFiles (Optional) Additional Docker Compose files to be combined


(Additional Docker Compose Files) with the primary Docker Compose file. Relative paths are
resolved relative to the directory containing the primary
Docker Compose file. If a specified file is not found, it is
ignored. Specify each file path on a new line.

dockerComposeFileArgs (Optional) Environment variables to be set up during the


(Environment Variables) command. Specify each name=value pair on a new line.

projectName (Optional) Project name used for default naming of images


(Project Name) and containers.
Default value: $(Build.Repository.Name)

qualifyImageNames (Optional) Qualify image names for built services with the
(Qualify Image Names) Docker registry service connection's hostname if not otherwise
specified.
Default value: true

action (Required) Select a Docker Compose action.


(Action) Default value: Run a Docker Compose command

buildImages (Optional) Build images before starting service containers.


(Build Images) Default value: true

detached (Optional) Run the service containers in the background.


(Run in Background) Default value: true

This YAML example runs services:

- task: DockerCompose@0
displayName: Run services
inputs:
action: Run services
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.ci.build.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
buildImages: true
abortOnContainerExit: true
detached: false

Run a specific service image


PARAMETERS | DESCRIPTION

dockerComposeFile (Required) Path to the primary Docker Compose file to use.


(Docker Compose File) Default value: **/docker-compose.yml

additionalDockerComposeFiles (Optional) Additional Docker Compose files to be combined


(Additional Docker Compose Files) with the primary Docker Compose file. Relative paths are
resolved relative to the directory containing the primary
Docker Compose file. If a specified file is not found, it is
ignored. Specify each file path on a new line.

dockerComposeFileArgs (Optional) Environment variables to be set up during the


(Environment Variables) command. Specify each name=value pair on a new line.

projectName (Optional) Project name used for default naming of images


(Project Name) and containers.
Default value: $(Build.Repository.Name)

qualifyImageNames (Optional) Qualify image names for built services with the
(Qualify Image Names) Docker registry service connection's hostname if not otherwise
specified.
Default value: true

action (Required) Select a Docker Compose action.


(Action) Default value: Run a Docker Compose command

serviceName (Required) Name of the specific service to run.


(Service Name)

containerName (Optional) Name of the specific service container to run.


(Container Name)

ports (Optional) Ports in the specific service container to publish to


(Ports) the host. Specify each host-port:container-port binding on a
new line.

workDir (Optional) The working directory for the specific service


(Working Directory) container.
Argument aliases: workingDirectory

entrypoint (Optional) Override the default entry point for the specific
(Entry Point Override) service container.

containerCommand (Optional) Command to run in the specific service container.


(Command) For example, if the image contains a simple Python Flask web
application you can specify python app.py to launch the web
application.

detached (Optional) Run the service containers in the background.


(Run in Background) Default value: true

This YAML example runs a specific service:

- task: DockerCompose@0
displayName: Run a specific service
inputs:
action: Run a specific service
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
serviceName: myhealth.web
ports: 80
detached: true

Lock service images


PARAMETERS | DESCRIPTION

dockerComposeFile (Required) Path to the primary Docker Compose file to use.


(Docker Compose File) Default value: **/docker-compose.yml

additionalDockerComposeFiles (Optional) Additional Docker Compose files to be combined


(Additional Docker Compose Files) with the primary Docker Compose file. Relative paths are
resolved relative to the directory containing the primary
Docker Compose file. If a specified file is not found, it is
ignored. Specify each file path on a new line.

dockerComposeFileArgs (Optional) Environment variables to be set up during the


(Environment Variables) command. Specify each name=value pair on a new line.

projectName (Optional) Project name used for default naming of images


(Project Name) and containers.
Default value: $(Build.Repository.Name)

qualifyImageNames (Optional) Qualify image names for built services with the
(Qualify Image Names) Docker registry service connection's hostname if not otherwise
specified.
Default value: true

action (Required) Select a Docker Compose action.


(Action) Default value: Run a Docker Compose command

removeBuildOptions (Optional) Remove the build options from the output Docker
(Remove Build Options) Compose file.
Default value: false

baseResolveDirectory (Optional) The base directory from which relative paths in the
(Base Resolve Directory) output Docker Compose file should be resolved.

outputDockerComposeFile (Required) Path to an output Docker Compose file.


(Output Docker Compose File) Default value: $(Build.StagingDirectory)/docker-compose.yml

This YAML example locks services:

- task: DockerCompose@0
displayName: Lock services
inputs:
action: Lock services
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
outputDockerComposeFile: $(Build.StagingDirectory)/docker-compose.yml

Write service image digests


PARAMETERS | DESCRIPTION

dockerComposeFile (Required) Path to the primary Docker Compose file to use.


(Docker Compose File) Default value: **/docker-compose.yml

additionalDockerComposeFiles (Optional) Additional Docker Compose files to be combined


(Additional Docker Compose Files) with the primary Docker Compose file. Relative paths are
resolved relative to the directory containing the primary
Docker Compose file. If a specified file is not found, it is
ignored. Specify each file path on a new line.

dockerComposeFileArgs (Optional) Environment variables to be set up during the


(Environment Variables) command. Specify each name=value pair on a new line.

projectName (Optional) Project name used for default naming of images


(Project Name) and containers.
Default value: $(Build.Repository.Name)

qualifyImageNames (Optional) Qualify image names for built services with the
(Qualify Image Names) Docker registry service connection's hostname if not otherwise
specified.
Default value: true

action (Required) Select a Docker Compose action.


(Action) Default value: Run a Docker Compose command

imageDigestComposeFile (Required) Path to a Docker Compose file that is created and


(Image Digest Compose File) populated with the full image repository digests of each
service's Docker image.
Default value: $(Build.StagingDirectory)/docker-
compose.images.yml

This YAML example writes service image digests:

- task: DockerCompose@0
displayName: Write service image digests
inputs:
action: Write service image digests
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
imageDigestComposeFile: $(Build.StagingDirectory)/docker-compose.images.yml

Combine configuration
PARAMETERS | DESCRIPTION

dockerComposeFile (Required) Path to the primary Docker Compose file to use.


(Docker Compose File) Default value: **/docker-compose.yml

additionalDockerComposeFiles (Optional) Additional Docker Compose files to be combined


(Additional Docker Compose Files) with the primary Docker Compose file. Relative paths are
resolved relative to the directory containing the primary
Docker Compose file. If a specified file is not found, it is
ignored. Specify each file path on a new line.

dockerComposeFileArgs (Optional) Environment variables to be set up during the


(Environment Variables) command. Specify each name=value pair on a new line

projectName (Optional) Project name used for default naming of images


(Project Name) and containers.
Default value: $(Build.Repository.Name)

qualifyImageNames (Optional) Qualify image names for built services with the
(Qualify Image Names) Docker registry service connection's hostname if not otherwise
specified.
Default value: true

action (Required) Select a Docker Compose action.


(Action) Default value: Run a Docker Compose command

removeBuildOptions (Optional) Remove the build options from the output Docker
(Remove Build Options) Compose file.
Default value: false

baseResolveDirectory (Optional) The base directory from which relative paths in the
(Base Resolve Directory) output Docker Compose file should be resolved.

outputDockerComposeFile (Required) Path to an output Docker Compose file.


(Output Docker Compose File) Default value: $(Build.StagingDirectory)/docker-compose.yml

This YAML example combines configurations:

- task: DockerCompose@0
displayName: Combine configuration
inputs:
action: Combine configuration
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
additionalDockerComposeFiles: docker-compose.override.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
outputDockerComposeFile: $(Build.StagingDirectory)/docker-compose.yml

Run a Docker Compose command


PARAMETERS | DESCRIPTION

dockerComposeFile (Docker Compose File) (Required) Path to the primary Docker Compose file to use.
Default value: **/docker-compose.yml

additionalDockerComposeFiles (Optional) Additional Docker Compose files to be combined


(Additional Docker Compose Files) with the primary Docker Compose file. Relative paths are
resolved relative to the directory containing the primary
Docker Compose file. If a specified file is not found, it is
ignored. Specify each file path on a new line.

dockerComposeFileArgs (Optional) Environment variables to be set up during the


(Environment Variables) command. Specify each name=value pair on a new line.

projectName (Optional) Project name used for default naming of images


(Project Name) and containers.
Default value: $(Build.Repository.Name)

qualifyImageNames (Optional) Qualify image names for built services with the
(Qualify Image Names) Docker registry service connection's hostname if not otherwise
specified.
Default value: true

action (Required) Select a Docker Compose action.


(Action) Default value: Run a Docker Compose command

dockerComposeCommand (Required) Docker Compose command to execute with the


(Command) help of arguments. For example, rm to remove all stopped
service containers.

This YAML example runs a Docker Compose command:

- task: DockerCompose@0
displayName: Run a Docker Compose command
inputs:
action: Run a Docker Compose command
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
dockerComposeCommand: rm

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Go task
6/2/2020

Azure Pipelines
Use this task to get, build, or test a Go application, or run a custom Go command.

YAML snippet
# Go
# Get, build, or test a Go application, or run a custom Go command
- task: Go@0
inputs:
#command: 'get' # Options: get, build, test, custom
#customCommand: # Required when command == Custom
#arguments: # Optional
workingDirectory:

Arguments
ARGUMENT | DESCRIPTION

command (Required) Select a Go command to run. Select 'Custom' to use


Command a command not listed here.
Default value: get

customCommand (Custom command to execute)
(Required) Custom Go command for execution. For example: to execute go version, enter version.

arguments (Optional) Arguments to the selected command. For example,


Arguments build time arguments for go build command.

workingDirectory (Required) Current working directory where the script is run.


Working Directory Empty is the root of the repo (build) or artifacts (release),
which is $(System.DefaultWorkingDirectory)

Example
variables:
GOBIN: '$(GOPATH)/bin' # Go binaries path
GOROOT: '/usr/local/go1.11' # Go installation path
GOPATH: '$(system.defaultWorkingDirectory)/gopath' # Go workspace path
modulePath: '$(GOPATH)/src/github.com/$(build.repository.name)' # Path to the module's code

steps:
- task: GoTool@0
displayName: 'Use Go 1.10'

- task: Go@0
displayName: 'go get'
inputs:
arguments: '-d'

- task: Go@0
displayName: 'go build'
inputs:
command: build
arguments: '-o "$(System.TeamProject).exe"'

- task: ArchiveFiles@2
displayName: 'Archive files'
inputs:
rootFolderOrFile: '$(Build.Repository.LocalPath)'
includeRootFolder: False

- task: PublishBuildArtifacts@1
displayName: 'Publish artifact'
condition: succeededOrFailed()
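
If the command you need is not get, build, or test, the custom option can be used. A minimal sketch:

- task: Go@0
  displayName: 'go version'
  inputs:
    command: custom
    customCommand: version                                 # runs "go version" on the agent
    workingDirectory: '$(System.DefaultWorkingDirectory)'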

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Gradle task
11/2/2020

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to build using a Gradle wrapper script.

YAML snippet
# Gradle
# Build using a Gradle wrapper script
- task: Gradle@2
inputs:
#gradleWrapperFile: 'gradlew'
#cwd: # Optional
#options: # Optional
#tasks: 'build' # A list of tasks separated by spaces, such as 'build test'
#publishJUnitResults: true
#testResultsFiles: '**/TEST-*.xml' # Required when publishJUnitResults == True
#testRunTitle: # Optional
#codeCoverageToolOption: 'None' # Optional. Options: none, cobertura, jaCoCo
#codeCoverageClassFilesDirectories: 'build/classes/main/' # Required when codeCoverageToolOption == False
#codeCoverageClassFilter: # Optional. Comma-separated list of filters to include or exclude classes from
collecting code coverage. For example: +:com.*,+:org.*,-:my.app*.*
#codeCoverageFailIfEmpty: false # Optional
#javaHomeOption: 'JDKVersion' # Options: jDKVersion, path
#jdkVersionOption: 'default' # Optional. Options: default, 1.11, 1.10, 1.9, 1.8, 1.7, 1.6
#jdkDirectory: # Required when javaHomeOption == Path
#jdkArchitectureOption: 'x64' # Optional. Options: x86, x64
#gradleOptions: '-Xmx1024m' # Optional
#sonarQubeRunAnalysis: false
#sqGradlePluginVersionChoice: 'specify' # Required when sonarQubeRunAnalysis == True# Options: specify,
build
#sonarQubeGradlePluginVersion: '2.6.1' # Required when sonarQubeRunAnalysis == True &&
SqGradlePluginVersionChoice == Specify
#checkStyleRunAnalysis: false # Optional
#findBugsRunAnalysis: false # Optional
#pmdRunAnalysis: false # Optional

Arguments

ARGUMENT | DESCRIPTION

wrapperScript (Required) The location in the repository of the gradlew


Gradle Wrapper wrapper used for the build. For agents on Windows
(including Microsoft-hosted agents), you must use the
gradlew.bat wrapper. Agents on Linux or macOS can
use the gradlew shell script.
See The Gradle Wrapper.

Default value: gradlew


Argument aliases: gradleWrapperFile

options (Optional) Specify any command line options you want to


Options pass to the Gradle wrapper.
See Gradle Command Line.

tasks (Required) The task(s) for Gradle to execute. A list of task


Tasks names should be separated by spaces and can be taken
from gradlew tasks issued from a command prompt.
See Gradle Build Script Basics.

Default value: build

JUNIT TEST RESULTS

publishJUnitResults (Publish to Azure Pipelines)
(Required) Select this option to publish JUnit test results produced by the Gradle build to Azure Pipelines/TFS.
Default value: true

testResultsFiles (Test results files)
(Required) Test results files path. Wildcards can be used. For example, **/TEST-*.xml for all XML files whose name starts with TEST-.
Default value: **/TEST-*.xml

testRunTitle (Optional) Assign a title for the JUnit test case results for this
Test run title build.

CODE COVERAGE

codeCoverageTool (Optional) Choose a code coverage tool to determine the


Code coverage tool code that is covered by the test cases for the build.
Default value: None
Argument aliases: codeCoverageToolOption

classFilesDirectories (Required) Comma-separated list of directories containing


Class files directories class files and archive files (JAR, WAR, etc.). Code coverage is
reported for class files in these directories. Normally, classes
under `build/classes/main` are searched, which is the default
class directory for Gradle builds
Default value: build/classes/main/
Argument aliases: codeCoverageClassFilesDirectories

classFilter (Optional) Comma-separated list of filters to include or


Class inclusion/exclusion filters exclude classes from collecting code coverage. For example:
+:com.*,+:org.*,-:my.app*.*."
Argument aliases: codeCoverageClassFilter

failIfCoverageEmpty (Optional) Fail the build if code coverage did not produce any
Fail when code coverage results are missing results to publish
Default value: false
Argument aliases: codeCoverageFailIfEmpty

ADVANCED

cwd (Optional) Working directory in which to run the Gradle build.


Working directory If not specified, the repository root directory is used

javaHomeSelection (Required) Sets JAVA_HOME either by selecting a JDK version


Set JAVA_HOME by that will be discovered during builds or by manually entering
a JDK path
Default value: JDKVersion
Argument aliases: javaHomeOption

jdkVersion (JDK version)
(Optional) Will attempt to discover the path to the selected JDK version and set JAVA_HOME accordingly.
Default value: default
Argument aliases: jdkVersionOption

jdkUserInputPath (JDK path)
(Required) Sets JAVA_HOME to the given path.
Argument aliases: jdkDirectory

jdkArchitecture (Optional) Optionally supply the architecture (x86, x64) of


JDK Architecture JDK.
Default value: x64
Argument aliases: jdkArchitectureOption

gradleOpts (Optional) Sets the GRADLE_OPTS environment variable,


Set GRADLE_OPTS which is used to send command-line arguments to start the
JVM. The xmx flag specifies the maximum memory available
to the JVM.
Default value: -Xmx1024m
Argument aliases: gradleOptions

CODE ANALYSIS

sqAnalysisEnabled (Required) This option has changed from version 1 of the


Run SonarQube or SonarCloud Analysis Gradle task to use the SonarQube and SonarCloud
marketplace extensions. Enable this option to run SonarQube
or SonarCloud analysis after executing tasks in the Tasks field.
You must also add a Prepare Analysis Configuration task
from one of the extensions to the build pipeline before this
Gradle task
Default value: false
Argument aliases: sonarQubeRunAnalysis

sqGradlePluginVersionChoice (Required) The SonarQube Gradle plugin version to use. You


SonarQube scanner for Gradle version can declare it in your Gradle configuration file, or specify a
version here
Default value: specify

sqGradlePluginVersion (SonarQube scanner for Gradle plugin version)
(Required) Refer to the plugin documentation for all available versions.
Default value: 2.6.1
Argument aliases: sonarQubeGradlePluginVersion

checkstyleAnalysisEnabled (Optional) Run the Checkstyle tool with the default Sun
Run Checkstyle checks. Results are uploaded as build artifacts.
Default value: false
Argument aliases: checkStyleRunAnalysis

findbugsAnalysisEnabled (Optional) Use the FindBugs static analysis tool to look for
Run FindBugs bugs in the code. Results are uploaded as build artifacts
Default value: false
Argument aliases: findBugsRunAnalysis

pmdAnalysisEnabled (Optional) Use the PMD Java static analysis tool to look for
Run PMD bugs in the code. Results are uploaded as build artifacts
Default value: false
Argument aliases: pmdRunAnalysis

CONTROL OPTIONS

Example
Build your Java app with Gradle
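
As a sketch for a typical project (the coverage directory and JDK settings are assumptions), a Gradle step that runs tests, publishes JUnit results, and collects JaCoCo coverage might look like this:

- task: Gradle@2
  displayName: Gradle build and test
  inputs:
    gradleWrapperFile: 'gradlew'
    tasks: 'build'
    publishJUnitResults: true
    testResultsFiles: '**/TEST-*.xml'
    codeCoverageToolOption: 'JaCoCo'
    codeCoverageClassFilesDirectories: 'build/classes/main/'   # assumed default class directory for Gradle builds
    javaHomeOption: 'JDKVersion'
    jdkVersionOption: 'default'
    gradleOptions: '-Xmx1024m'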

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
How do I generate a wrapper from my Gradle project?
The Gradle wrapper allows the build agent to download and configure the exact Gradle environment that is
checked into the repository without having any software configuration on the build agent itself other than the
JVM.
1. Create the Gradle wrapper by issuing the following command from the root project directory where your
build.gradle resides:
jamal@fabrikam> gradle wrapper

2. Upload your Gradle wrapper to your remote repository.


There is a binary artifact that is generated by the gradle wrapper ( located at
gradle/wrapper/gradle-wrapper.jar ). This binary file is small and doesn't require updating. If you need to
change the Gradle configuration run on the build agent, you update the gradle-wrapper.properties .
The repository should look something like this:

|-- gradle/
`-- wrapper/
`-- gradle-wrapper.jar
`-- gradle-wrapper.properties
|-- src/
|-- .gitignore
|-- build.gradle
|-- gradlew
|-- gradlew.bat

How do I fix timeouts when downloading dependencies?


To fix errors such as Read timed out when downloading dependencies, users of Gradle 4.3+ can change the
timeout by adding to Options -Dhttp.socketTimeout=60000 -Dhttp.connectionTimeout=60000 . This increases the
timeout from 10 seconds to 1 minute.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Grunt task
6/2/2020

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015.3
Use this task to run Grunt tasks using the JavaScript Task Runner.

Demands
The build agent must have the following capability:
Grunt

YAML snippet
# Grunt
# Run the Grunt JavaScript task runner
- task: Grunt@0
inputs:
#gruntFile: 'gruntfile.js'
#targets: # Optional
#arguments: # Optional
#workingDirectory: # Optional
#gruntCli: 'node_modules/grunt-cli/bin/grunt'
#publishJUnitResults: false # Optional
#testResultsFiles: '**/TEST-*.xml' # Required when publishJUnitResults == True
#testRunTitle: # Optional
#enableCodeCoverage: false # Optional
#testFramework: 'Mocha' # Optional. Options: mocha, jasmine
#srcFiles: # Optional
#testFiles: 'test/*.js' # Required when enableCodeCoverage == True

Arguments
ARGUMENT | DESCRIPTION

gruntFile (Required) Relative path from the repo root to the Grunt
Grunt File Path script that you want to run.
Default value: gruntfile.js

targets (Optional) Space delimited list of tasks to run. If you leave it


Grunt task(s) blank, the default task will run.

arguments Additional arguments passed to Grunt. See Using the CLI.


Arguments Tip: --gruntfile is not needed. This argument is handled by the
Grunt file path argument shown above.

cwd (Optional) Current working directory when the script is run. If


Working Directory you leave it blank, the working directory is the folder where
the script is located.
Argument aliases: workingDirectory

gruntCli (grunt-cli location)
(Required) grunt-cli to run when the agent can't find the globally installed grunt-cli. Defaults to the grunt-cli under the node_modules folder of the working directory.
Default value: node_modules/grunt-cli/bin/grunt

publishJUnitResults Select this option to publish JUnit test results produced by


Publish to Azure Pipelines the Grunt build to Azure Pipelines
Default value: false

testResultsFiles (Test Results Files)
(Required) Test results files path. Wildcards can be used. For example, **/TEST-*.xml for all XML files whose name starts with TEST-.
Default value: **/TEST-*.xml

testRunTitle (Optional) Provide a name for the test run


Test Run Title

enableCodeCoverage (Optional) Select this option to enable Code Coverage using


Enable Code Coverage Istanbul
Default value: false

testFramework (Optional) Select your test framework


Test Framework Default value: Mocha

srcFiles (Optional) Provide the path to your source files which you
Source Files want to hookRequire ()

testFiles (Required) Provide the path to your test script files


Test Script Files Default value: test/*.js

Example
See Sample Gruntfile.
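For reference, a filled-in version of the YAML snippet above might look like the following sketch. The target name is an illustrative value from a hypothetical Gruntfile, not a task default:

# Grunt (sketch): run a named target and publish JUnit results
- task: Grunt@0
  inputs:
    gruntFile: 'gruntfile.js'
    targets: 'test'                      # hypothetical target defined in the Gruntfile
    publishJUnitResults: true
    testResultsFiles: '**/TEST-*.xml'
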

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Gulp task
6/2/2020 • 2 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to run gulp tasks using the Node.js streaming task-based build system.

Demands
gulp

YAML snippet
# gulp
# Run the gulp Node.js streaming task-based build system
- task: gulp@1
inputs:
#gulpFile: 'gulpfile.js'
#targets: # Optional
#arguments: # Optional
#workingDirectory: # Optional
#gulpjs: # Optional
#publishJUnitResults: false # Optional
#testResultsFiles: '**/TEST-*.xml' # Required when publishJUnitResults == True
#testRunTitle: # Optional
#enableCodeCoverage: false
#testFramework: 'Mocha' # Optional. Options: mocha, jasmine
#srcFiles: # Optional
#testFiles: 'test/*.js' # Required when enableCodeCoverage == True

Arguments
ARGUMENT | DESCRIPTION

gulpFile (gulp File Path)
(Required) Relative path from the repo root to the gulp script that you want to run.
Default value: gulpfile.js

targets (gulp Task(s))
(Optional) Space-delimited list of tasks to run. If not specified, the default task will run.

arguments (Arguments)
Additional arguments passed to gulp.
Tip: --gulpfile is not needed because it is already added via the gulpFile input above.

cwd (Working Directory)
(Optional) Current working directory when the script is run. Defaults to the folder where the script is located.
Argument aliases: workingDirectory

gulpjs (gulp.js location)
(Optional) Path to an alternative gulp.js, relative to the working directory.

publishJUnitResults (Publish to Azure Pipelines)
Select this option to publish JUnit test results produced by the gulp build to Azure Pipelines.
Default value: false

testResultsFiles (Test Results Files)
(Required) Test results files path. Wildcards can be used. For example, **/TEST-*.xml for all XML files whose name starts with TEST-.
Default value: **/TEST-*.xml

testRunTitle (Test Run Title)
(Optional) Provide a name for the test run.

enableCodeCoverage (Enable Code Coverage)
(Optional) Select this option to enable Code Coverage using Istanbul.
Default value: false

testFramework (Test Framework)
(Optional) Select your test framework.
Default value: Mocha

srcFiles (Source Files)
(Optional) Provide the path to your source files that you want to hookRequire().

testFiles (Test Script Files)
(Required) Provide the path to your test script files.
Default value: test/*.js

Example
Run gulp.js
On the Build tab:

Install npm.
Package: npm
Command: install

Run your script.
Build: gulp
gulp file path: gulpfile.js
Advanced, gulp.js location: node_modules/gulp/bin/gulp.js

Build a Node.js app
Build your Node.js app with gulp
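
The same two steps can be sketched in YAML. This assumes the npm task (Npm@1) referenced above for the install step; the gulp inputs mirror the classic settings:

# npm install followed by the gulp task (sketch)
- task: Npm@1
  inputs:
    command: 'install'
- task: gulp@1
  inputs:
    gulpFile: 'gulpfile.js'
    gulpjs: 'node_modules/gulp/bin/gulp.js'   # alternative gulp.js, as in the classic example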

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
Index Sources & Publish Symbols task
11/2/2020 • 5 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
A symbol server is available with Package Management in Azure Artifacts and works best with Visual Studio 2017.4 and newer. Team Foundation Server users and users without the Package Management extension can publish symbols to a file share using this task.

Use this task to index your source code and optionally publish symbols to the Package Management symbol
server or a file share.
Indexing source code enables you to use your .pdb symbol files to debug an app on a machine other than the one
you used to build the app. For example, you can debug an app built by a build agent from a dev machine that does
not have the source code.
Symbol servers enable your debugger to automatically retrieve the correct symbol files without knowing product names, build numbers, or package names. To learn more about symbols, read the concept page; to publish symbols, use this task and see the walkthrough.

NOTE
This build task works only for code in Git or TFVC stored in Team Foundation Server (TFS) or Azure Repos. It does not work for any other type of repository.

Demands
None

YAML snippet
# Index sources and publish symbols
# Index your source code and publish symbols to a file share or Azure Artifacts symbol server
- task: PublishSymbols@2
inputs:
#symbolsFolder: '$(Build.SourcesDirectory)' # Optional
#searchPattern: '**/bin/**/*.pdb'
#indexSources: true # Optional
#publishSymbols: true # Optional
#symbolServerType: ' ' # Required when publishSymbols == True# Options: , teamServices, fileShare
#symbolsPath: # Optional
#compressSymbols: false # Required when symbolServerType == FileShare
#detailedLog: true # Optional
#treatNotIndexedAsWarning: false # Optional
#symbolsMaximumWaitTime: # Optional
#symbolsProduct: # Optional
#symbolsVersion: # Optional
#symbolsArtifactName: 'Symbols_$(BuildConfiguration)' # Optional
Arguments
ARGUMENT | DESCRIPTION

SymbolsFolder (Optional) The path to the folder that is searched for


Path to symbols folder symbol files. The default is $(Build.SourcesDirectory),
Otherwise specify a rooted path.
For example: $(Build.BinariesDirectory)/MyProject

SearchPattern (Required) File matching pattern(s) The pattern used to


Search pattern discover the pdb files to publish

Default value: **/bin/**/*.pdb

IndexSources (Optional) Indicates whether to inject source server


Index sources information into the PDB files

Default value: true

PublishSymbols (Optional) Indicates whether to publish the symbol files


Publish symbols
Default value: true

SymbolServerType (Required) Choose where to publish symbols. Symbols


Symbol server type published to the Azure Artifacts symbol server are accessible
by any user with access to the organization/collection. Azure
DevOps Server only supports the "File share" option. Follow
these instructions to use Symbol Server in Azure Artifacts.
TeamSer vices:
Symbol Server in this organization/collection (requires
Azure Artifacts)
File share:
Select this option to use the file share supplied in the
next input.

SymbolsPath (Optional) The file share that hosts your symbols. This
Path to publish symbols value will be used in the call to symstore.exe add as the
/s parameter.
To prepare your SymStore symbol store:
1. Set up a folder on a file-sharing server to store the
symbols. For example, set up \fabrikam-share\symbols.
2. Grant full control permission to the build agent service
account.
If you leave this argument blank, your symbols will be
source indexed but not published. (You can also store
your symbols with your drops. See Publish Build Artifacts).

CompressSymbols (Required) Only available when File share is selected as


Compress symbols the Symbol ser ver type . Compresses your pdbs to
save space.
Default value: false

ADVANCED

DetailedLog (Optional) Enables additional log details.


Verbose logging Default value: true

TreatNotIndexedAsWarning (Optional) Indicates whether to warn if sources are not


Warn if not indexed indexed for a PDB file. Otherwise the messages are
logged as normal output.
A common cause of sources to not be indexed are when
your solution depends on binaries that it doesn't build.
Even if you don't select this option, the messages are
written in log.

Default value: false

SymbolsMaximumWaitTime (Optional) The number of minutes to wait before failing this


Max wait time (min) task. If you leave it blank, limit is 2 hours.

SymbolsProduct (Optional) Specify the product parameter to symstore.exe. The


Product default is $(Build.DefinitionName)

SymbolsVersion (Optional) Specify the version parameter to symstore.exe. The


Version default is $(Build.BuildNumber).

SymbolsArtifactName (Optional) Specify the artifact name to use for the Symbols
Artifact name artifact. The default is Symbols_$(BuildConfiguration).
Default value: Symbols_$(BuildConfiguration)

For more information about the different types of tasks and their uses, see Task control options.

IMPORTANT
If you want to delete symbols that were published using the Index Sources & Publish Symbols task, you must first
remove the build that generated those symbols. This can be accomplished by using retention policies to clean up your build
or by manually deleting the run. For information about debugging your app, see Use indexed symbols to debug your app,
Debug with symbols in Visual Studio, Debug with symbols in WinDbg.
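
For reference, a typical YAML configuration that indexes sources and publishes to the Azure Artifacts symbol server might look like this sketch; the values are the task defaults documented above:

# Index sources and publish symbols to the Azure Artifacts symbol server (sketch)
- task: PublishSymbols@2
  inputs:
    symbolsFolder: '$(Build.SourcesDirectory)'
    searchPattern: '**/bin/**/*.pdb'
    indexSources: true
    publishSymbols: true
    symbolServerType: 'TeamServices'   # publishes to this organization/collection; requires Azure Artifacts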

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
How does indexing work?
By choosing to index the sources, an extra section will be injected into the PDB files. PDB files normally contain
references to the local source file paths only. For example, C:\BuildAgent\_work\1\src\MyApp\Program.cs . The extra
section injected into the PDB file contains mapping instructions for debuggers. The mapping information indicates
how to retrieve the server item corresponding to each local path.
The Visual Studio debugger will use the mapping information to retrieve the source file from the server. An actual
command to retrieve the source file is included in the mapping information. You may be prompted by Visual
Studio whether to run the command. For example
tf.exe git view /collection:http://SERVER:8080/tfs/DefaultCollection /teamproject:"93fc2e4d-0f0f-4e40-9825-01326191395d" /repository:"647ed0e6-43d2-4e3d-b8bf-2885476e9c44" /commitId:3a9910862e22f442cd56ff280b43dd544d1ee8c9 /path:"/MyApp/Program.cs" /output:"C:\Users\username\AppData\Local\SOURCE~1\TFS_COMMIT\3a991086\MyApp\Program.cs" /applyfilters

Can I use source indexing on a portable PDB created from a .NET Core assembly?
No, source indexing is currently not enabled for Portable PDBs as SourceLink doesn't support authenticated
source repositories. The workaround at the moment is to configure the build to generate full PDBs. Note that if
you are generating a .NET Standard 2.0 assembly and are generating full PDBs and consuming them in a .NET
Framework (full CLR) application then you will be able to fetch sources from Azure Repos (provided you have
embedded SourceLink information and enabled it in your IDE).
Where can I learn more about symbol stores and debugging?
Symbol Server and Symbol Stores
SymStore
Use the Microsoft Symbol Server to obtain debug symbol files
The Srcsrv.ini File
Source Server
Source Indexing and Symbol Servers: A Guide to Easier Debugging
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
How long are Symbols retained?
When symbols are published to Azure Pipelines they are associated with a build. When the build is deleted either
manually or due to retention policy then the symbols are also deleted. If you want to retain the symbols
indefinitely then you should mark the build as Retain Indefinitely.
Jenkins Queue Job task
6/2/2020 • 4 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use this task to queue a job on a Jenkins server.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Demands
None

YAML snippet
# Jenkins queue job
# Queue a job on a Jenkins server
- task: JenkinsQueueJob@2
inputs:
serverEndpoint:
jobName:
#isMultibranchJob: # Optional
#multibranchPipelineBranch: # Required when isMultibranchJob == True
#captureConsole: true
#capturePipeline: true # Required when captureConsole == True
isParameterizedJob:
#jobParameters: # Optional

Arguments
ARGUMENT | DESCRIPTION

serverEndpoint (Required) Select the service connection for your Jenkins


Jenkins service connection instance. To create one, click Manage and create a new
Jenkins service connection.

jobName (Required) The name of the Jenkins job to queue. This job
Job name name must exactly match the job name on the Jenkins
server.

isMultibranchJob (Optional) This job is of multibranch pipeline type. If


Job is of multibranch pipeline type selected, enter the appropriate branch name. Requires
Team Foundation Server Plugin for Jenkins v5.3.4 or later

Default value: false



multibranchPipelineBranch (Required) Queue this multibranch pipeline job on the


Multibranch pipeline branch specified branch. Requires Team Foundation Server Plugin
for Jenkins v5.3.4 or later

captureConsole (Required) If selected, this task will capture the Jenkins


Capture console output and wait for completion build console output, wait for the Jenkins build to
complete, and succeed/fail based on the Jenkins build
result. Otherwise, once the Jenkins job is successfully
queued, this task will successfully complete without
waiting for the Jenkins build to run.

Default value: true

capturePipeline (Required) This option is similar to capture console output


Capture pipeline output and wait for pipeline completion except it will capture the output for the entire Jenkins
pipeline, wait for completion for the entire pipeline, and
succeed/fail based on the pipeline result.

Default value: true

isParameterizedJob (Required) Select if the Jenkins job accepts parameters.


Parameterized job This job should be selected even if all default parameter
values are used and no parameters are specified.

Default value: false

jobParameters This option is available for parameterized jobs. Specify job


Job parameters parameters, one per line, in the form
parameterName=parameterValue preceded by | on
the first line. Example:
jobParameters: |
parameter1=value1
parameter2=value2
To set a parameter to an empty value (useful for
overriding a default value), omit the parameter value. For
example, specify parameterName=
Variables are supported. For example, to define the
commitId parameter to be the git commit ID for the
build, use: commitId=$(Build.SourceVersion) .
Supported Jenkins parameter types are:
Boolean
String
Choice
Password

Team Foundation Server Plug-in


You can use Team Foundation Server Plug-in (version 5.2.0 or newer) to automatically collect files from the Jenkins
workspace and download them into the build.
To set it up:
1. Install the Team Foundation Server Plug-in on the Jenkins server.
2. On the Jenkins server, for each job you would like to collect results from, add the Collect results for
Azure Pipelines/TFS post-build action and then configure it with one or more pairs of result type and
include file pattern.
3. On the Jenkins Queue Job build task, enable Capture console output and wait for completion to collect results from the root-level job, or Capture pipeline output and wait for pipeline completion to collect results from all pipeline jobs.
Results will be downloaded to $(Build.StagingDirectory)/jenkinsResults/Job Name/team-results.zip and extracted to this location. Each set of result types collected by the plug-in will be under the team-results directory, $(Build.StagingDirectory)/jenkinsResults/Job Name/team-results/ResultType/. This is the directory where build results can be published by downstream tasks (for example, Publish Test Results and Publish Code Coverage Results).
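
Putting these inputs together, a parameterized queue-and-wait configuration might look like the following sketch. The service connection name, job name, and parameter names are placeholders:

# Jenkins queue job (sketch): queue a parameterized job and wait for its result
- task: JenkinsQueueJob@2
  inputs:
    serverEndpoint: 'MyJenkinsConnection'   # hypothetical Jenkins service connection
    jobName: 'MyJob'                        # hypothetical Jenkins job name
    captureConsole: true
    capturePipeline: true
    isParameterizedJob: true
    jobParameters: |
      commitId=$(Build.SourceVersion)
      environment=staging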

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Q: I'm having problems. How can I troubleshoot them?
A: Try this:
1. On the variables tab, add system.debug and set it to true . Select to allow at queue time.
2. In the explorer tab, view your completed build and click the build step to view its output.
The control options arguments described above can also be useful when you're trying to isolate a problem.
Q: How do variables work? What variables are available for me to use in the arguments?
A: $(Build.SourcesDirectory) and $(Agent.BuildDirectory) are just a few of the variables you can use. Variables
are available in expressions as well as scripts; see variables to learn more about how to use them. There are some
predefined build and release variables you can also rely on.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Maven task
11/2/2020 • 6 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to build your Java code.

Demands
The build agent must have the following capability:
Maven

YAML snippet
# Maven
# Build, test, and deploy with Apache Maven
- task: Maven@3
inputs:
#mavenPomFile: 'pom.xml'
#goals: 'package' # Optional
#options: # Optional
#publishJUnitResults: true
#testResultsFiles: '**/surefire-reports/TEST-*.xml' # Required when publishJUnitResults == True
#testRunTitle: # Optional
#codeCoverageToolOption: 'None' # Optional. Options: none, cobertura, jaCoCo. Enabling code coverage
inserts the `clean` goal into the Maven goals list when Maven runs.
#codeCoverageClassFilter: # Optional. Comma-separated list of filters to include or exclude classes from
collecting code coverage. For example: +:com.*,+:org.*,-:my.app*.*
#codeCoverageClassFilesDirectories: # Optional
#codeCoverageSourceDirectories: # Optional
#codeCoverageFailIfEmpty: false # Optional
#javaHomeOption: 'JDKVersion' # Options: jDKVersion, path
#jdkVersionOption: 'default' # Optional. Options: default, 1.11, 1.10, 1.9, 1.8, 1.7, 1.6
#jdkDirectory: # Required when javaHomeOption == Path
#jdkArchitectureOption: 'x64' # Optional. Options: x86, x64
#mavenVersionOption: 'Default' # Options: default, path
#mavenDirectory: # Required when mavenVersionOption == Path
#mavenSetM2Home: false # Required when mavenVersionOption == Path
#mavenOptions: '-Xmx1024m' # Optional
#mavenAuthenticateFeed: false
#effectivePomSkip: false
#sonarQubeRunAnalysis: false
#sqMavenPluginVersionChoice: 'latest' # Required when sonarQubeRunAnalysis == True# Options: latest, pom
#checkStyleRunAnalysis: false # Optional
#pmdRunAnalysis: false # Optional
#findBugsRunAnalysis: false # Optional

Arguments
ARGUMENT | DESCRIPTION

mavenPOMFile (Required) Relative path from the repository root to the


Maven POM file Maven POM file. See Introduction to the POM.
Default value: pom.xml
Argument aliases: mavenPomFile

goals (Optional) In most cases, set this to package to compile


Goal(s) your code and package it into a .war file. If you leave this
argument blank, the build will fail. See Introduction to the
Maven build lifecycle.
Default value: package

options (Optional) Specify any Maven command-line options you


Options want to use.

publishJUnitResults (Required) Select this option to publish JUnit test results


Publish to Azure Pipelines produced by the Maven build to Azure Pipelines. Each test
results file matching Test Results Files will be published
as a test run in Azure Pipelines.
Default value: true

testResultsFiles (Test results files)
(Required) Specify the path and pattern of test results files to publish. Wildcards can be used (more information). For example, **/TEST-*.xml for all XML files whose name starts with TEST-. If no root path is specified, files are matched beneath the default working directory, the value of which is available in the variable $(System.DefaultWorkingDirectory). For example, a value of '**/TEST-*.xml' will actually result in matching files from '$(System.DefaultWorkingDirectory)/**/TEST-*.xml'.
Default value: **/surefire-reports/TEST-*.xml

testRunTitle (Optional) Provide a name for the test run.


Test run title

codeCoverageTool (Optional) Select the code coverage tool. Enabling code


Code coverage tool coverage inserts the clean goal into the Maven goals list
when Maven runs.
Default value: None
Argument aliases: codeCoverageToolOption

classFilter (Class inclusion/exclusion filters)
(Optional) Comma-separated list of filters to include or exclude classes from collecting code coverage. For example: +:com.*,+:org.*,-:my.app*.*
Argument aliases: codeCoverageClassFilter

classFilesDirectories (Optional) This field is required for a multi-module project.


Class files directories Specify a comma-separated list of relative paths from the
Maven POM file to directories containing class files and
archive files (JAR, WAR, etc.). Code coverage is reported for
class files in these directories.
For example: target/classes,target/testClasses.
Argument aliases: codeCoverageClassFilesDirectories

srcDirectories (Optional) This field is required for a multi-module project.


Source files directories Specify a comma-separated list of relative paths from the
Maven POM file to source code directories. Code coverage
reports will use these to highlight source code.
For example: src/java,src/Test.
Argument aliases: codeCoverageSourceDirectories

failIfCoverageEmpty (Optional) Fail the build if code coverage did not produce any
Fail when code coverage results are missing results to publish.
Default value: false
Argument aliases: codeCoverageFailIfEmpty

javaHomeSelection (Required) Sets JAVA_HOME either by selecting a JDK version


Set JAVA_HOME by that will be discovered during builds or by manually entering
a JDK path.
Default value: JDKVersion
Argument aliases: javaHomeOption

jdkVersion (Optional) Will attempt to discover the path to the selected


JDK version JDK version and set JAVA_HOME accordingly.
Default value: default
Argument aliases: jdkVersionOption

jdkUserInputPath (Required) Sets JAVA_HOME to the given path.


JDK path Argument aliases: jdkDirectory

jdkArchitecture (Optional) Optionally supply the architecture (x86, x64) of the


JDK architecture JDK.
Default value: x64
Argument aliases: jdkArchitectureOption

mavenVersionSelection (Required) Uses either the default Maven version or the


Maven version version in the specified custom path.
Default value: Default
Argument aliases: mavenVersionOption

mavenPath (Required) Supply the custom path to the Maven installation


Maven path (e.g., /usr/share/maven).
Argument aliases: mavenDirectory

mavenSetM2Home (Required) Sets the M2_HOME variable to a custom Maven


Set M2_HOME variable installation path.
Default value: false

mavenOpts (Optional) Sets the MAVEN_OPTS environment variable,


Set MAVEN_OPTS to which is used to send command-line arguments to start the
JVM. The -Xmx flag specifies the maximum memory available
to the JVM.
Default value: -Xmx1024m
Argument aliases: mavenOptions

mavenFeedAuthenticate (Required) Automatically authenticate Maven feeds from


Authenticate built-in Maven feeds Azure Artifacts. If built-in Maven feeds are not in use,
deselect this option for faster builds.
Default value: false
Argument aliases: mavenAuthenticateFeed

skipEffectivePom (Required) Authenticate built-in Maven feeds using the POM


Skip generating effective POM while authenticating built-in only, allowing parent POMs in Azure Artifacts/Azure DevOps
feeds Server [Package Management] feeds.
Default value: false
Argument aliases: effectivePomSkip

sqAnalysisEnabled (Required) This option has changed from version 1 of the


Run SonarQube or SonarCloud analysis Maven task to use the SonarQube and SonarCloud
marketplace extensions. Enable this option to run SonarQube
or SonarCloud analysis after executing goals in the Goals
field. The install or package goal should run first. You must
also add a Prepare Analysis Configuration task from one
of the extensions to the build pipeline before this Maven
task.
Default value: false
Argument aliases: sonarQubeRunAnalysis

sqMavenPluginVersionChoice (SonarQube scanner for Maven version)
(Required) The SonarQube Maven plugin version to use. You can use the latest version, or rely on the version in your pom.xml.
Default value: latest

checkstyleAnalysisEnabled (Optional) Run the Checkstyle tool with the default Sun
Run Checkstyle checks. Results are uploaded as build artifacts.
Default value: false
Argument aliases: checkStyleRunAnalysis

pmdAnalysisEnabled (Optional) Use the PMD static analysis tool to look for bugs
Run PMD in the code. Results are uploaded as build artifacts.
Default value: false
Argument aliases: pmdRunAnalysis

findbugsAnalysisEnabled (Optional) Use the FindBugs static analysis tool to look for
Run FindBugs bugs in the code. Results are uploaded as build artifacts.
Default value: false
Argument aliases: findBugsRunAnalysis

CONTROL OPTIONS

IMPORTANT
When the -q option is used in your MAVEN_OPTS, an effective POM won't be generated correctly, and Azure Artifacts feeds may fail to authenticate.

Example
Build and Deploy your Java application to an Azure Web App
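
As a concrete starting point, a Maven build that packages the code, publishes Surefire test results, and collects JaCoCo coverage might look like this sketch; the values are the defaults and options documented above:

# Maven (sketch): package, publish test results, and collect JaCoCo coverage
- task: Maven@3
  inputs:
    mavenPomFile: 'pom.xml'
    goals: 'package'
    publishJUnitResults: true
    testResultsFiles: '**/surefire-reports/TEST-*.xml'
    codeCoverageToolOption: 'jaCoCo'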

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
MSBuild task
11/2/2020 • 7 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

Use this task to build with MSBuild.

Demands
msbuild

Azure Pipelines: If your team uses Visual Studio 2017 and you want to use the Microsoft-hosted agents, make sure you select Hosted VS2017 as your default pool. See Microsoft-hosted agents.

YAML snippet
# MSBuild
# Build with MSBuild
- task: MSBuild@1
inputs:
#solution: '**/*.sln'
#msbuildLocationMethod: 'version' # Optional. Options: version, location
#msbuildVersion: 'latest' # Optional. Options: latest, 16.0, 15.0, 14.0, 12.0, 4.0
#msbuildArchitecture: 'x86' # Optional. Options: x86, x64
#msbuildLocation: # Optional
#platform: # Optional
#configuration: # Optional
#msbuildArguments: # Optional
#clean: false # Optional
#maximumCpuCount: false # Optional
#restoreNugetPackages: false # Optional
#logProjectEvents: false # Optional
#createLogFile: false # Optional
#logFileVerbosity: 'normal' # Optional. Options: quiet, minimal, normal, detailed, diagnostic

Arguments
ARGUMENT | DESCRIPTION

solution (Required) If you want to build a single project, click the ...
Project button and select the project.
If you want to build multiple projects, specify search
criteria. You can use a single-folder wildcard ( * ) and recursive wildcards ( ** ). For example, **/*.*proj searches for all MSBuild project (.*proj) files in all subdirectories.
Make sure the projects you specify are downloaded by
this build pipeline. On the Repository tab:
If you use TFVC, make sure that the project is a child
of one of the mappings on the Repository tab.
If you use Git, make sure that the project is
in your Git repo, in a branch that you're building.
Tip: If you are building a solution, we recommend you use
the Visual Studio build task instead of the MSBuild task.

Default value: **/*.sln

msbuildLocationMethod (Optional)
MSBuild Default value: version

msbuildVersion (Optional) If the preferred version cannot be found, the latest


MSBuild Version version found will be used instead. On a macOS agent,
xbuild (Mono) will be used if version is lower than 15.0
Default value: latest

msbuildArchitecture (Optional) Optionally supply the architecture (x86, x64) of


MSBuild Architecture MSBuild to run
Default value: x86

msbuildLocation (Optional) Optionally supply the path to MSBuild


Path to MSBuild

platform (Optional) Specify the platform you want to build such as


Platform Win32 , x86 , x64 or any cpu .

Tips:
If you are targeting an MSBuild project (.*proj) file
instead of a solution, specify AnyCPU (no whitespace).
Declare a build variable such as BuildPlatform on
the Variables tab (selecting Allow at Queue Time) and
reference it here as $(BuildPlatform) . This way you
can modify the platform when you queue the build
and enable building multiple configurations.

configuration (Optional) Specify the configuration you want to build


Configuration such as debug or release .
Tip: Declare a build variable such as
BuildConfiguration on the Variables tab (selecting
Allow at Queue Time) and reference it here as
$(BuildConfiguration) . This way you can modify the
platform when you queue the build and enable building
multiple configurations.

msbuildArguments (Optional) Additional arguments passed to MSBuild (on


MSBuild Arguments Windows) and xbuild (on macOS)

clean Set to False if you want to make this an incremental build.


Clean This setting might reduce your build time, especially if
your codebase is large. This option has no practical effect
unless you also set Clean repository to False.
Set to True if you want to rebuild all the code in the code
projects. This is equivalent to the MSBuild
/target:clean argument.

See repo options.

Default value: false

maximumCpuCount (Optional) If your MSBuild target configuration is compatible


Build in Parallel with building in parallel, you can optionally check this input to
pass the /m switch to MSBuild (Windows only). If your target
configuration is not compatible with building in parallel,
checking this option may cause your build to result in file-in-
use errors, or intermittent or inconsistent build failures.
Default value: false

restoreNugetPackages (Impor tant) This option is deprecated. Make sure to clear


Restore NuGet Packages this checkbox and instead use the NuGet Installer build task.
Default value: false

ADVANCED

logProjectEvents Optionally record timeline details for each project (Windows


Record Project Details only)
Default value: false

createLogFile Optionally create a log file (Windows only)


Create Log File
Default value: false

logFileVerbosity Optional log file verbosity


Log File Verbosity
Default value: normal

CONTROL OPTIONS

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Should I use the Visual Studio Build task or the MSBuild task?
If you are building a solution, in most cases you should use the Visual Studio Build task. This task automatically:
Sets the /p:VisualStudioVersion property for you. This forces MSBuild to use a particular set of targets that
increase the likelihood of a successful build.
Specifies the MSBuild version argument.
In some cases, you might need to use the MSBuild task. For example, you should use it if you are building code
projects apart from a solution.
Where can I learn more about MSBuild?
MSBuild reference
MSBuild command-line reference
How do I build multiple configurations for multiple platforms?
1. On the Variables tab, make sure you've got variables defined for your configurations and platforms. To
specify multiple values, separate them with commas.
For example, for a .NET app you could specify:

Name Value

BuildConfiguration debug, release

BuildPlatform any cpu

For example, for a C++ app you could specify:

Name Value

BuildConfiguration debug, release

BuildPlatform x86, x64

2. On the Options tab, select MultiConfiguration and specify the Multipliers, separated by commas. For
example: BuildConfiguration, BuildPlatform
Select Parallel if you want to distribute the jobs (one for each combination of values) to multiple agents in
parallel if they are available.
3. On the Build tab, select this step and specify the Platform and Configuration arguments. For example:
Platform: $(BuildPlatform)
Configuration: $(BuildConfiguration)
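
In a YAML pipeline, the same effect is usually achieved with a job matrix instead of the classic Multipliers option. The following is a sketch under that assumption, not part of the MSBuild task itself:

# Sketch: build two configuration/platform combinations with a job matrix
strategy:
  matrix:
    debug_x86:
      BuildConfiguration: 'debug'
      BuildPlatform: 'x86'
    release_x64:
      BuildConfiguration: 'release'
      BuildPlatform: 'x64'
steps:
- task: MSBuild@1
  inputs:
    solution: '**/*.sln'
    platform: '$(BuildPlatform)'
    configuration: '$(BuildConfiguration)'
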
Can I build TFSBuild.proj files?
You cannot build TFSBuild.proj files. These kinds of files are generated by TFS 2005 and 2008. These files contain tasks and targets that are supported only when using XAML builds.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.

Troubleshooting
This section provides troubleshooting tips for common issues that a user might encounter when using the
MSBuild task.
Build failed with the following error: An internal failure occurred while running MSBuild
Possible causes
Troubleshooting suggestions
Possible causes
Change in the MSBuild version.
Issues with a third-party extension.
New updates to Visual Studio that can cause missing assemblies on the build agent.
Moved or deleted some of the necessary NuGet packages.
Troubleshooting suggestions
Run the pipeline with diagnostics to retrieve detailed logs
Try to reproduce the error locally
What else can I do?
Run the pipeline with diagnostics to retrieve detailed logs

One of the available options for diagnosing the issue is to review the generated logs. You can view your pipeline logs by selecting the appropriate task and job in your pipeline run summary.
To get the logs of your pipeline execution, see Get logs to diagnose problems.
You can also set up and download a customized verbose log to assist with your troubleshooting:
Configure verbose logs
View and download logs
In addition to the pipeline diagnostic logs, you can also check these other types of logs that contain more
information to help you debug and solve the problem:
Worker diagnostic logs
Agent diagnostic logs
Other logs (Environment and capabilities)
Try to reproduce the error locally

If you are using a hosted build agent, you might want to try to reproduce the error locally. This will help you narrow down whether the failure is the result of the build agent or the build task.
Run the same MSBuild command on your local machine using the same arguments. See the MSBuild command-line reference.
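
For example, if the pipeline step builds a solution for a specific platform and configuration, the equivalent local command might look like the following; the solution name and property values are placeholders:

msbuild MyApp.sln /p:Configuration=Release /p:Platform="Any CPU" /verbosity:detailed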

TIP
If you can reproduce the problem on your local machine, then your next step is to investigate the MSBuild issue.

For more information, see Microsoft-hosted agents.

To set up your own self-hosted agent to run the build jobs, see:
Self-hosted Windows agents
Self-hosted Linux agents
Self-hosted MacOS agents
What else can I do?

At the bottom of this page, check out the GitHub issues in the Open and Closed tabs to see if there is a similar
issue that has been resolved previously by our team.
Some of the MSBuild errors are caused by a change in Visual Studio so you can search on Visual Studio Developer
Community to see if this issue has been reported. We also welcome your questions, suggestions, and feedback.
Visual Studio Build task
11/2/2020 • 6 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

Use this task to build with MSBuild and set the Visual Studio version property.

Demands
msbuild, visualstudio

Azure Pipelines: If your team wants to use Visual Studio 2017 with the Microsoft-hosted agents, select
Hosted VS2017 as your default build pool. See Microsoft-hosted agents.

YAML snippet
# Visual Studio build
# Build with MSBuild and set the Visual Studio version property
- task: VSBuild@1
inputs:
#solution: '**\*.sln'
#vsVersion: 'latest' # Optional. Options: latest, 16.0, 15.0, 14.0, 12.0, 11.0
#msbuildArgs: # Optional
#platform: # Optional
#configuration: # Optional
#clean: false # Optional
#maximumCpuCount: false # Optional
#restoreNugetPackages: false # Optional
#msbuildArchitecture: 'x86' # Optional. Options: x86, x64
#logProjectEvents: true # Optional
#createLogFile: false # Optional
#logFileVerbosity: 'normal' # Optional. Options: quiet, minimal, normal, detailed, diagnostic

Arguments
ARGUMENT | DESCRIPTION

solution (Required) If you want to build a single solution, click the


Solution ... button and select the solution.
If you want to build multiple solutions, specify search
criteria. You can use a single-folder wildcard (`*`) and
recursive wildcards (`**`). For example, `**\*.sln` searches
for all .sln files in all subdirectories.
Make sure the solutions you specify are downloaded by
this build pipeline. On the Repository tab:
If you use TFVC, make sure that the solution is a child
of one of the mappings on the Repository tab.
If you use Git, make sure that the project or solution
is in your Git repo, and in a branch that you're
building.
Tips:
You can also build MSBuild project (.*proj) files.
If you are building a customized MSBuild project file,
we recommend you use the MSBuild task instead of
the Visual Studio Build task.

Default value: **\*.sln

vsVersion To avoid problems overall, you must make sure this value
Visual Studio Version matches the version of Visual Studio used to create your
solution.
The value you select here adds the
/p:VisualStudioVersion=
{numeric_visual_studio_version}
argument to the MSBuild command run by the build. For
example, if you select Visual Studio 2015 ,
/p:VisualStudioVersion=14.0 is added to the
MSBuild command.
Azure Pipelines: If your team wants to use Visual
Studio 2017 with the Microsoft-hosted agents, select
Hosted VS2017 as your default build pool. See
Microsoft-hosted agents.

Default value: latest

msbuildArgs (Optional) You can pass additional arguments to MSBuild.


MSBuild Arguments For syntax, see MSBuild Command-Line Reference.

platform (Optional) Specify the platform you want to build such as


Platform Win32 , x86 , x64 or any cpu .

Tips:
If you are targeting an MSBuild project (.*proj) file
instead of a solution, specify AnyCPU (no
whitespace).
Declare a build variable such as BuildPlatform on
the Variables tab (selecting Allow at Queue Time) and
reference it here as $(BuildPlatform) . This way
you can modify the platform when you queue the
build and enable building multiple configurations.

configuration (Optional) Specify the configuration you want to build


Configuration such as debug or release .
Tip: Declare a build variable such as
BuildConfiguration on the Variables tab (selecting
Allow at Queue Time) and reference it here as
$(BuildConfiguration) . This way you can modify the
platform when you queue the build and enable building
multiple configurations.

clean (Optional) Set to False if you want to make this an


Clean incremental build. This setting might reduce your build
time, especially if your codebase is large. This option has
no practical effect unless you also set Clean repository to
False.
Set to True if you want to rebuild all the code in the code
projects. This is equivalent to the MSBuild
/target:clean argument.

ADVANCED

maximumCpuCount (Optional) If your MSBuild target configuration is compatible


Build in Parallel with building in parallel, you can optionally check this input
to pass the /m switch to MSBuild (Windows only). If your
target configuration is not compatible with building in
parallel, checking this option may cause your build to result
in file-in-use errors, or intermittent or inconsistent build
failures.
Default value: false

restoreNugetPackages (Impor tant) This option is deprecated. Make sure to clear


Restore NuGet Packages this checkbox and instead use the NuGet Installer build task.
Default value: false

msbuildArchitecture Optionally supply the architecture (x86, x64) of MSBuild


MSBuild Architecture to run
Tip: Because Visual Studio runs as a 32-bit application,
you could experience problems when your build is
processed by a build agent that is running the 64-bit
version of Team Foundation Build Service. By selecting
MSBuild x86, you might resolve these kinds of problems.

Default value: x86

logProjectEvents Optionally record timeline details for each project


Record Project Details Default value: true

createLogFile Optionally create a log file (Windows only)


Create Log File Default value: false

logFileVerbosity Optional log file verbosity


Log File Verbosity Default value: normal

CONTROL OPTIONS

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Should I use the Visual Studio Build task or the MSBuild task?
If you are building a solution, in most cases you should use the Visual Studio Build task. This task automatically:
Sets the /p:VisualStudioVersion property for you. This forces MSBuild to use a particular set of targets
that increase the likelihood of a successful build.
Specifies the MSBuild version argument.
In some cases you might need to use the MSBuild task. For example, you should use it if you are building code
projects apart from a solution.
Where can I learn more about MSBuild?
MSBuild task
MSBuild reference
MSBuild command-line reference
How do I build multiple configurations for multiple platforms?
1. On the Variables tab, make sure you've got variables defined for your configurations and platforms. To
specify multiple values, separate them with commas.
For example, for a .NET app you could specify:
Name Value

BuildConfiguration debug, release

BuildPlatform any cpu

For example, for a C++ app you could specify:

Name Value

BuildConfiguration debug, release

BuildPlatform x86, x64

2. On the Options tab select Parallel if you want to distribute the jobs (one for each combination of values)
to multiple agents in parallel if they are available.
3. On the Build tab, select this step and specify the Platform and Configuration arguments. For example:
Platform: $(BuildPlatform)
Configuration: $(BuildConfiguration)
4. Under the agent job of the assigned task, on the Parallelism tab, select Multi-configuration and specify
the Multipliers separated by commas. For example: BuildConfiguration, BuildPlatform
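
In a YAML pipeline, the step itself simply references the same variables; a matrix like the one sketched in the MSBuild task article supplies the values. A minimal sketch of the step:

# Visual Studio build step consuming the configuration and platform variables (sketch)
- task: VSBuild@1
  inputs:
    solution: '**\*.sln'
    platform: '$(BuildPlatform)'
    configuration: '$(BuildConfiguration)'
    maximumCpuCount: true
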
Can I build TFSBuild.proj files?
You cannot build TFSBuild.proj files. These kinds of files are generated by TFS 2005 and 2008. These files contain tasks and targets that are supported only when using XAML builds.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
Xamarin.Android task
11/2/2020 • 3 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to build an Android app with Xamarin.

Demands
AndroidSDK, MSBuild, Xamarin.Android

YAML snippet
# Xamarin.Android
# Build an Android app with Xamarin
- task: XamarinAndroid@1
inputs:
#projectFile: '**/*.csproj'
#target: # Optional
#outputDirectory: # Optional
#configuration: # Optional
#createAppPackage: true # Optional
#clean: false # Optional
#msbuildLocationOption: 'version' # Optional. Options: version, location
#msbuildVersionOption: '15.0' # Optional. Options: latest, 15.0, 14.0, 12.0, 4.0
#msbuildFile: # Required when msbuildLocationOption == Location
#msbuildArchitectureOption: 'x86' # Optional. Options: x86, x64
#msbuildArguments: # Optional
#jdkOption: 'JDKVersion' # Options: jDKVersion, path
#jdkVersionOption: 'default' # Optional. Options: default, 1.11, 1.10, 1.9, 1.8, 1.7, 1.6
#jdkDirectory: # Required when jdkOption == Path
#jdkArchitectureOption: 'x64' # Optional. Options: x86, x64

Arguments
ARGUMENT | DESCRIPTION

project (Required) Relative path from repo root of Xamarin.Android


Project project(s) to build. Wildcards can be used (more information).
For example, **/*.csproj for all csproj files in all subfolders.
The project must have a PackageForAndroid target if
Create App Package is selected.
Default value: **/*.csproj
Argument aliases: projectFile

target (Optional) Build these targets in this project. Use a semicolon


Target to separate multiple targets.

outputDir Optionally provide the output directory for the build.


Output Directory Example: $(build.binariesDirectory)/bin/Release
Argument aliases: outputDirectory

configuration (Optional) Specify the configuration you want to build


Configuration such as debug or release .
Tip: Declare a build variable such as
BuildConfiguration on the Variables tab (selecting
Allow at Queue Time) and reference it here as
$(BuildConfiguration) . This way you can modify the
platform when you queue the build and enable building
multiple configurations.

createAppPackage (Optional) Passes the target (/t:PackageForAndroid) during


Create app package build to generate an APK.
Default value: true

clean (Optional) Passes the clean target (/t:clean) during build


Clean Default value: false

MSBUILD OPTIONS

msbuildLocationMethod (Optional) Path to MSBuild (on Windows) or xbuild (on


MSBuild macOS). Default behavior is to search for the latest version.
Default value: version
Argument aliases: msbuildLocationOption

msbuildVersion (Optional) If the preferred version cannot be found, the latest


MSBuild version version found will be used instead. On macOS, xbuild (Mono)
or MSBuild (Visual Studio for Mac) will be used.
Default value: 15.0
Argument aliases: msbuildVersionOption

msbuildLocation (Required) Optionally supply the path to MSBuild (on


MSBuild location Windows) or xbuild (on macOS)
Default value: version
Argument aliases: msbuildFile

msbuildArchitecture Optionally supply the architecture (x86, x64) of MSBuild to


MSBuild architecture run
Default value: x86
Argument aliases: msbuildArchitectureOption

msbuildArguments (Optional) Additional arguments passed to MSBuild (on


Additional Arguments Windows) or xbuild (on macOS).

JDK OPTIONS

jdkSelection (Required) Pick the JDK to be used during the build by


Select JDK to use for the build selecting a JDK version that will be discovered during builds or
by manually entering a JDK path.
JDK Version: Select the JDK version you want to use.
JDK Path: Specify the path to the JDK you want to use.

Default value: JDKVersion


Argument aliases: jdkOption

jdkVersion (Optional) Use the selected JDK version during build.


JDK version Default value: default
Argument aliases: jdkVersionOption

jdkUserInputPath (Required) Sets JAVA_HOME to the given path.


JDK path
Argument aliases: jdkDirectory

jdkArchitecture Optionally supply the architecture (x86, x64) of JDK


JDK Architecture Default value: x64
Argument aliases: jdkArchitectureOption

CONTROL OPTIONS

Example
Build your Xamarin app
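
A minimal sketch of the task with an explicit output directory and a configuration variable, following the tips above; the project file pattern is a hypothetical example for an Android head project:

# Xamarin.Android (sketch): build the Android project and create an APK
- task: XamarinAndroid@1
  inputs:
    projectFile: '**/*Droid*.csproj'       # hypothetical pattern for the Android project
    outputDirectory: '$(build.binariesDirectory)/$(BuildConfiguration)'
    configuration: '$(BuildConfiguration)'
    createAppPackage: true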

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Xamarin.iOS task
11/2/2020 • 2 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task in a pipeline to build an iOS app with Xamarin on macOS. For more information, see the Xamarin
guidance and Sign your app during CI.

Demands
Xamarin.iOS

YAML snippet
# Xamarin.iOS
# Build an iOS app with Xamarin on macOS
- task: XamariniOS@2
inputs:
#solutionFile: '**/*.sln'
#configuration: 'Release'
#clean: false # Optional
#packageApp: true
#buildForSimulator: false # Optional
#runNugetRestore: false
#args: # Optional
#workingDirectory: # Optional
#mdtoolFile: # Optional
#signingIdentity: # Optional
#signingProvisioningProfileID: # Optional

Arguments
ARGUMENT | DESCRIPTION

solution (Required) Relative path from the repository root of the


Solution Xamarin.iOS solution or csproj project to build. May contain
wildcards.
Default value: **/*.sln
Argument aliases: solutionFile

configuration (Required) Standard configurations are Ad-Hoc, AppStore,


Configuration Debug, Release.
Default value: Release

clean (Optional) Run a clean build (/t:clean) prior to the build.


Clean Default value: false

packageApp (Required) Indicates whether an IPA should be generated as a


Create app package part of the build.
Default value: true

forSimulator (Optional) Optionally build for the iOS Simulator instead of


Build for iOS Simulator physical iOS devices.
Default value: false
Argument aliases: buildForSimulator

runNugetRestore (Required) Optionally run nuget restore on the Xamarin


Run NuGet restore iOS solution to install all referenced packages before build. The
'nuget' tool in the PATH of the build agent machine will be
used. To use a different version of NuGet or set additional
arguments, use the NuGet Tool Installer task.
Default value: false

args (Optional) Additional command line arguments that should be


Arguments used to build.

cwd (Optional) Working directory in which builds will run. When


Working directory empty, the root of the repository is used.
Argument aliases: workingDirectory

buildToolLocation (Optional) Optionally supply the full path to MSBuild (the


Build tool path Visual Studio for Mac build tool). When empty, the default
MSBuild path is used.
Argument aliases: mdtoolFile , mdtoolLocation

iosSigningIdentity (Optional) Optionally override the signing identity that will be


Signing identity used to sign the build. If nothing is entered, the setting in the
project will be used.
Argument aliases: signingIdentity

provProfileUuid (Optional) Optional UUID of an installed provisioning profile


Provisioning profile UUID to be used for this build.
Argument aliases: signingProvisioningProfileID

CONTROL OPTIONS

Example
Build your Xamarin app
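
A minimal sketch of the task as it might appear on a macOS agent, using the defaults documented above:

# Xamarin.iOS (sketch): build the solution and create an IPA
- task: XamariniOS@2
  inputs:
    solutionFile: '**/*.sln'
    configuration: 'Release'
    packageApp: true
    runNugetRestore: false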

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Xcode task
11/2/2020 • 7 minutes to read

Azure Pipelines | TFS 2018 | TFS 2017 | TFS 2015


Use this task to build, test, or archive an Xcode workspace on macOS, and optionally package an app.

Demands
xcode

YAML snippet
# Xcode
# Build, test, or archive an Xcode workspace on macOS. Optionally package an app.
- task: Xcode@5
inputs:
#actions: 'build'
#configuration: '$(Configuration)' # Optional
#sdk: '$(SDK)' # Optional
#xcWorkspacePath: '**/*.xcodeproj/project.xcworkspace' # Optional
#scheme: # Optional
#xcodeVersion: 'default' # Optional. Options: 8, 9, 10, default, specifyPath
#xcodeDeveloperDir: # Optional
packageApp:
#archivePath: # Optional
#exportPath: 'output/$(SDK)/$(Configuration)' # Optional
#exportOptions: 'auto' # Optional. Options: auto, plist, specify
#exportMethod: 'development' # Required when exportOptions == Specify
#exportTeamId: # Optional
#exportOptionsPlist: # Required when exportOptions == Plist
#exportArgs: # Optional
#signingOption: 'nosign' # Optional. Options: nosign, default, manual, auto
#signingIdentity: # Optional
#provisioningProfileUuid: # Optional
#provisioningProfileName: # Optional
#teamId: # Optional
#destinationPlatformOption: 'default' # Optional. Options: default, iOS, tvOS, macOS, custom
#destinationPlatform: # Optional
#destinationTypeOption: 'simulators' # Optional. Options: simulators, devices
#destinationSimulators: 'iPhone 7' # Optional
#destinationDevices: # Optional
#args: # Optional
#workingDirectory: # Optional
#useXcpretty: true # Optional
#publishJUnitResults: # Optional

Arguments
ARGUMENT | DESCRIPTION

actions (Required) Enter a space-delimited list of actions. Valid options are


Actions build , clean , test , analyze , and archive . For example,
clean build will run a clean build. See Apple: Building from the
command line with Xcode FAQ.
Default value: build

configuration (Optional) Enter the Xcode project or workspace configuration to be


Configuration built. The default value of this field is the variable
$(Configuration) . When using a variable, make sure to specify a
value (for example, Release ) on the Variables tab.
Default value: $(Configuration)

sdk (Optional) Specify an SDK to use when building the Xcode project or
SDK workspace. From the macOS Terminal application, run
xcodebuild -showsdks to display the valid list of SDKs. The default
value of this field is the variable $(SDK) . When using a variable,
make sure to specify a value (for example, iphonesimulator ) on
the Variables tab.
Default value: $(SDK)

xcWorkspacePath (Optional) Enter a relative path from the root of the repository to
Workspace or project path the Xcode workspace or project. For example,
MyApp/MyApp.xcworkspace or MyApp/MyApp.xcodeproj .
Default value: **/*.xcodeproj/project.xcworkspace

scheme (Optional) Enter a scheme name defined in Xcode. It must be a


Scheme shared scheme, with its Shared checkbox enabled under Managed
Schemes in Xcode. If you specify a Workspace or project path
above without specifying a scheme, and the workspace has a single
shared scheme, it will be automatically used.

xcodeVersion (Optional) Specify the target version of Xcode. Select Default to


Xcode version use the default version of Xcode on the agent machine. Selecting a
version number (e.g. Xcode 10 ) relies on environment variables
being set on the agent machine for the version's location (e.g.
XCODE_10_DEVELOPER_DIR=/Applications/Xcode_10.0.0.app/Contents/Developer
). Select Specify path to provide a specific path to the Xcode
developer directory.
Default value: default

xcodeDeveloperDir (Optional) Enter a path to a specific Xcode developer directory (e.g.


Xcode developer path /Applications/Xcode_10.0.0.app/Contents/Developer ). This is
useful when multiple versions of Xcode are installed on the agent
machine.

(Optional) Signing & provisioning

signingOption (Optional) Choose the method of signing the build. Select


Signing style Do not code sign to disable signing. Select Project defaults
to use only the project's signing configuration. Select
Manual signing to force manual signing and optionally specify a
signing identity and provisioning profile. Select Automatic signing
to force automatic signing and optionally specify a development
team ID. If your project requires signing, use the "Install Apple..."
tasks to install certificates and provisioning profiles prior to the
Xcode build.
Default value: nosign

signingIdentity (Optional) Enter a signing identity override with which to sign the
Signing identity build. This may require unlocking the default keychain on the agent
machine. If no value is entered, the Xcode project's setting will be
used.

provisioningProfileUuid (Optional) Enter the UUID of an installed provisioning profile to be


Provisioning profile UUID used for this build. Use separate build tasks with different schemes
or targets to specify separate provisioning profiles by target in a
single workspace (iOS, tvOS, watchOS).

provisioningProfileName (Optional) Enter the name of an installed provisioning profile to be


Provisioning profile name used for this build. If specified, this takes precedence over the
provisioning profile UUID. Use separate build tasks with different
schemes or targets to specify separate provisioning profiles by
target in a single workspace (iOS, tvOS, watchOS).

teamId (Optional, unless you are a member of multiple development teams.)


Team ID Specify the 10-character development team ID.

Package options

packageApp Indicate whether an IPA app package file should be generated as a


Create app package part of the build.
Default value: false

archivePath (Optional) Specify a directory where created archives should be


Archive path placed.

exportPath (Optional) Specify the destination for the product exported from the
Export path archive.
Default value: output/$(SDK)/$(Configuration)

exportOptions (Optional) Select a way of providing options for exporting the


Export options archive. When the default value of Automatic is selected, the
export method is automatically detected from the archive. Select
plist to specify a plist file containing export options. Select
Specify to provide a specific Expor t method and Team ID .
Default value: auto

exportMethod (Required) Enter the method that Xcode should use to export the
Export method archive. For example: app-store , package , ad-hoc ,
enterprise , or development .
Default value: development

exportTeamId (Optional) Enter the 10-character team ID from the Apple Developer
Team ID Portal to use during export.

exportOptionsPlist (Required) Enter the path to the plist file that contains options to
Export options plist use during export.

exportArgs (Optional) Enter additional command line arguments to be used


Export arguments during export.

Devices & simulators

destinationPlatformOption (Optional) Select the destination device's platform to be used for UI


Destination platform testing when the generic build device isn't valid. Choose Custom to
specify a platform not included in this list. When Default is
selected, no simulators nor devices will be targeted.
Default value: default

destinationPlatform (Optional) Select the destination device's platform to be used for UI


Custom destination platform testing when the generic build device isn't valid. Choose Custom to
specify a platform not included in this list. When Default is
selected, no simulators nor devices will be targeted.
Default value: default

destinationTypeOption (Optional) Choose the destination type to be used for UI testing.


Destination type Devices must be connected to the Mac performing the build via a
cable or network connection. See Devices and Simulators in
Xcode.
Default value: simulators

destinationSimulators (Optional) Enter an Xcode simulator name to be used for UI testing.


Simulators For example, enter iPhone X (iOS and watchOS) or Apple TV 4K
(tvOS). A target OS version is optional and can be specified in the
format 'OS=versionNumber', such as iPhone X,OS=11.1 . A list of
simulators installed on the Hosted macOS agent can be found
here.
Default value: iPhone 7

destinationDevices (Optional) Enter the name of the device to be used for UI testing,
Devices such as Raisa's iPad . Only one device is currently supported.
Note that Apple does not allow apostrophes ( ' ) in device names.
Instead, right single quotation marks ( ’ ) can be used.

Advanced

args (Optional) Enter additional command line arguments with which to


Arguments build. This is useful for specifying -target or -project
arguments instead of specifying a workspace/project and scheme.
See Apple: Building from the command line with Xcode FAQ.

cwd (Optional) Enter the working directory in which to run the build. If
Working directory no value is entered, the root of the repository will be used.
Argument aliases: workingDirectory

useXcpretty (Optional) Specify whether to use xcpretty to format xcodebuild


Use xcpretty output and generate JUnit test results. Enabling this requires
xcpretty to be installed on the agent machine. It is preinstalled on
Microsoft-hosted build agents. See xcpretty on GitHub.
Default value: true

xcprettyArgs (Optional) If xcpretty is enabled above, specify arguments for


Arguments for xcpretty xcpretty. See xcpretty on GitHub for a list of xcpretty arguments.

publishJUnitResults (Optional) If xcpretty is enabled above, specify whether to publish


Publish test results to Azure Pipelines/TFS JUnit test results to Azure Pipelines/TFS.
Default value: false

testRunTitle (Optional) If xcpretty and publishJUnitResults are enabled above,


Test run title you can specify test run title.

Control options

Example
Build your Xcode app
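
The snippet below is a minimal sketch of using this task on a Microsoft-hosted macOS agent. The scheme name (MyApp) and the workspace path are illustrative placeholders; adjust them, along with the SDK and signing inputs, to match your project.

# Sketch: build an Xcode workspace for the iOS simulator (no signing or packaging).
pool:
  vmImage: 'macOS-latest'

steps:
- task: Xcode@5
  inputs:
    actions: 'build'
    configuration: 'Release'
    sdk: 'iphonesimulator'
    xcWorkspacePath: '**/*.xcodeproj/project.xcworkspace'
    scheme: 'MyApp'          # placeholder; must be a shared scheme
    xcodeVersion: 'default'
    packageApp: false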

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

Using multiple provisioning profiles


Currently, the Xcode task doesn't support multiple provisioning profiles (for example, for an iOS App Extension).
FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
Xcode Package iOS task

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to generate an .ipa file from Xcode build output.

Deprecated
The Xcode Package iOS task has been deprecated. It is relevant only if you are using Xcode 6.4.
Otherwise, use the latest version of the Xcode task.

Demands
xcode

YAML snippet
# Xcode Package iOS
# Generate an .ipa file from Xcode build output using xcrun (Xcode 7 or below)
- task: XcodePackageiOS@0
inputs:
#appName: 'name.app'
#ipaName: 'name.ipa'
provisioningProfile:
#sdk: 'iphoneos'
#appPath: '$(SDK)/$(Configuration)/build.sym/$(Configuration)-$(SDK)'
#ipaPath: '$(SDK)/$(Configuration)/build.sym/$(Configuration)-$(SDK)/output'

Arguments
Argument | Description

Name of .app Name of the .app file, which is sometimes different from the
.ipa file.

Name of .ipa Name of the .ipa file, which is sometimes different from the
.app file.

Provisioning Profile Name Name of the provisioning profile to use when signing.

SDK The SDK you want to use. Run xcodebuild -showsdks to


see a list of valid SDK values.

Advanced

Path to .app Relative path to the built .app file. The default value is
$(SDK)/$(Configuration)/build.sym/$(Configuration)-$(SDK) . Make sure to specify the variable values on the variables tab.

Path to place .ipa Relative path where the .ipa will be placed. The directory will
be created if it doesn't exist. The default value is
$(SDK)/$(Configuration)/build.sym/$(Configuration)-$(SDK)/output . Make sure to specify the variable values on the variables tab.

Control options

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Archive Files task

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use this task to create an archive file from a source folder. A range of standard archive formats are supported
including .zip, .jar, .war, .ear, .tar, .7z, and more.

Demands
None

YAML snippet
# Archive files
# Compress files into .7z, .tar.gz, or .zip
- task: ArchiveFiles@2
inputs:
#rootFolderOrFile: '$(Build.BinariesDirectory)'
#includeRootFolder: true
#archiveType: 'zip' # Options: zip, 7z, tar, wim
#tarCompression: 'gz' # Optional. Options: gz, bz2, xz, none
#archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
#replaceExistingArchive: true
#verbose: # Optional
#quiet: # Optional

Arguments
Argument | Description

rootFolderOrFile (Required) Enter the root folder or file path to add to the
Root folder or file to archive archive. If a folder, everything under the folder will be added
to the resulting archive
Default value: $(Build.BinariesDirectory)

includeRootFolder (Required) If selected, the root folder name will be prefixed to


Prepend root folder name to archive paths file paths within the archive. Otherwise, all file paths will start
one level lower.
For example , suppose the selected root folder is:
/home/user/output/classes/ , and contains:
com/acme/Main.class .
If selected, the resulting archive would contain:
classes/com/acme/Main.class
Otherwise, the resulting archive would contain:
com/acme/Main.class

archiveType (Required) Specify the compression scheme used. To create


Archive type foo.jar , for example, choose zip for the compression, and
specify foo.jar as the archive file to create. For all tar files
(including compressed ones), choose tar .
zip - default, zip format, choose this for all zip
compatible types, (.zip, .jar, .war, .ear)
7z - 7-Zip format, (.7z)
tar - tar format, choose this for compressed tars,
(.tar.gz, .tar.bz2, .tar.xz)
wim - wim format, (.wim)

sevenZipCompression Optionally choose a compression level, or choose None to


7z compression create an uncompressed 7z file
Default value: 5

tarCompression Optionally choose a compression scheme, or choose None


Tar compression to create an uncompressed tar file.
gz - default, gzip compression (.tar.gz, .tar.tgz, .taz)
bz2 - bzip2 compression (.tar.bz2, .tz2, .tbz2)
xz - xz compression (.tar.xz, .txz)
None - no compression, choose this to create an
uncompressed tar file (.tar)

Default value: gz

archiveFile (Required) Specify the name of the archive file to create.


Archive file to create For example , to create foo.tgz , select the tar archive
type and gz for tar compression.
Default value:
$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip

replaceExistingArchive (Required) If an existing archive exists, specify whether to


Replace existing archive overwrite it. Otherwise, files will be added to it as long as it is
not a compressed tar.
If adding to an existing archive, these types are supported:
zip
7z
tar - uncompressed only
wim

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Q: I'm having problems. How can I troubleshoot them?
A: Try this:
1. On the variables tab, add system.debug and set it to true . Select to allow at queue time.
2. In the explorer tab, view your completed build and click the build step to view its output.
The control options arguments described above can also be useful when you're trying to isolate a problem.
Q: How do variables work? What variables are available for me to use in the arguments?
A: $(Build.SourcesDirectory) and $(Agent.BuildDirectory) are just a few of the variables you can use. Variables
are available in expressions as well as scripts; see variables to learn more about how to use them. There are some
predefined build and release variables you can also rely on.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Azure Network Load Balancer task

Azure Pipelines
Use this task to connect or disconnect an Azure virtual machine's network interface to a load balancer's address
pool.

YAML snippet
# Azure Network Load Balancer
# Connect or disconnect an Azure virtual machine's network interface to a Load Balancer's back end address pool
- task: AzureNLBManagement@1
inputs:
azureSubscription:
resourceGroupName:
loadBalancer:
action: # Options: disconnect, connect

Arguments
Argument | Description

ConnectedServiceName (Required) Select the Azure Resource Manager subscription for


Azure Subscription the deployment
Argument aliases: azureSubscription

ResourceGroupName (Required) Select the resource group name


Resource Group

LoadBalancer (Required) Select or enter the load balancer


Load Balancer Name

Action (Required) The action to perform.
Action Disconnect : Removes the virtual machine's primary network
interface from the load balancer's backend pool, so that it
stops receiving network traffic.
Connect : Adds the virtual machine's primary network
interface to the load balancer's backend pool, so that it starts
receiving network traffic based on the load balancing rules for
the load balancer resource.
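
Example
The following sketch shows a typical rolling-update pattern: disconnect the machine from the load balancer, run the deployment steps, then reconnect it. The service connection, resource group, and load balancer names are placeholders, and the task is assumed to run on an agent installed on the target virtual machine (for example, in a deployment group job).

# Sketch: take the VM out of the backend pool, deploy, then put it back.
steps:
- task: AzureNLBManagement@1
  inputs:
    azureSubscription: 'my-azure-service-connection'   # placeholder
    resourceGroupName: 'my-resource-group'             # placeholder
    loadBalancer: 'my-load-balancer'                   # placeholder
    action: 'disconnect'
# ... deployment steps for this machine go here ...
- task: AzureNLBManagement@1
  inputs:
    azureSubscription: 'my-azure-service-connection'
    resourceGroupName: 'my-resource-group'
    loadBalancer: 'my-load-balancer'
    action: 'connect'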

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Bash task

Azure Pipelines
Use this task to run a Bash script on macOS, Linux, or Windows.

YAML snippet
# Bash
# Run a Bash script on macOS, Linux, or Windows
- task: Bash@3
inputs:
#targetType: 'filePath' # Optional. Options: filePath, inline
#filePath: # Required when targetType == FilePath
#arguments: # Optional
#script: '# echo Hello world' # Required when targetType == inline
#workingDirectory: # Optional
#failOnStderr: false # Optional
#noProfile: true # Optional
#noRc: true # Optional

The Bash task also has a shortcut syntax in YAML:

- bash: # script path or inline


workingDirectory: #
displayName: #
failOnStderr: #
env: # mapping of environment variables to add

Arguments
Argument | Description

targetType (Optional) Target script type: File Path or Inline


Type Default value: filePath

filePath (Required) Path of the script to execute. Must be a fully


Script Path qualified path or relative to
$(System.DefaultWorkingDirectory).

arguments (Optional) Arguments passed to the Bash script.


Arguments

script (Required, if Type is inline) Contents of the script


Script Default value:
"# Write your commands here\n\necho 'Hello world'\n"

workingDirectory (Optional) Specify the working directory in which you want to


Working Directory run the command. If you leave it empty, the working
directory is $(Build.SourcesDirectory)

failOnStderr (Optional) If this is true, this task will fail if any errors are
Fail on standard error written to stderr.
Default value: false

noProfile (Optional) Don't load the system-wide startup file


Don't load the system-wide startup/initialization files /etc/profile or any of the personal initialization files

noRc (Optional) If this is true, the task will not process .bashrc
Don't read the ~/.bashrc file from the user's home directory.
Default value: true

env (Optional) A list of additional items to map into the process's


Environment variables environment.
For example, secret variables are not automatically mapped. If
you have a secret variable called Foo , you can map it in like
this:

steps:
- task: Bash@3
inputs:
targetType: 'inline'
script: echo $MYSECRET
env:
MYSECRET: $(Foo)

This is equivalent to:

steps:
- script: echo $MYSECRET
env:
MYSECRET: $(Foo)

The Bash task will find the first Bash implementation on your system. Running which bash on Linux/macOS or
where bash on Windows will give you an idea of which one it'll select.

Bash scripts checked into the repo should be set executable ( chmod +x ). Otherwise, the task will show a warning
and source the file instead.
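
Example
The sketch below runs a script that is checked into the repository. The path scripts/build.sh and the argument are placeholders for illustration; the script is assumed to have been committed with the executable bit set.

# Sketch: run a repository script with an argument from the sources directory.
steps:
- task: Bash@3
  inputs:
    targetType: 'filePath'
    filePath: 'scripts/build.sh'                  # placeholder path in the repo
    arguments: '--verbose'                        # placeholder argument
    workingDirectory: '$(Build.SourcesDirectory)'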

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Batch Script task

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to run a Windows .bat or .cmd script. Optionally, allow it to permanently modify environment
variables.

NOTE
This task is not compatible with Windows containers. If you need to run a batch script on a Windows container, use the
command line task instead.
For information on supporting multiple platforms, see cross platform scripting.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

YAML snippet
# Batch script
# Run a Windows command or batch script and optionally allow it to change the environment
- task: BatchScript@1
inputs:
filename:
#arguments: # Optional
#modifyEnvironment: False # Optional
#workingFolder: # Optional
#failOnStandardError: false # Optional

Arguments
Argument | Description

filename (Required) Path of the cmd or bat script to execute. Should


Path be fully qualified path or relative to the default working
directory

arguments (Optional) Specify arguments to pass to the script.


Arguments

modifyEnvironment (Optional) Determines whether environment variable


Modify environment modifications will affect subsequent tasks
Default value: False

workingFolder (Optional) Current working directory when script is run.


Working folder Defaults to the agent's default working directory

failOnStandardError (Optional) If this is true, this task will fail if any errors are
Fail on Standard Error written to the StandardError stream.
Default value: false

Example
Create test.bat at the root of your repo:

@echo off
echo Hello World from %AGENT_NAME%.
echo My ID is %AGENT_ID%.
echo AGENT_WORKFOLDER contents:
@dir %AGENT_WORKFOLDER%
echo AGENT_BUILDDIRECTORY contents:
@dir %AGENT_BUILDDIRECTORY%
echo BUILD_SOURCESDIRECTORY contents:
@dir %BUILD_SOURCESDIRECTORY%
echo Over and out.

On the Build tab of a build pipeline, add this task:

Run test.bat.
Path: test.bat

Utility: Batch Script
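
If you prefer YAML, the classic-editor example above maps roughly to the following sketch (it assumes test.bat sits at the root of the repository and the job runs on a Windows agent).

# Sketch: run test.bat from the repository root.
steps:
- task: BatchScript@1
  displayName: 'Run test.bat'
  inputs:
    filename: 'test.bat'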

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Where can I learn Windows commands?
An A-Z Index of the Windows CMD command line
How do I set a variable so that it can be read by subsequent scripts and tasks?
Define and modify your build variables in a script
Define and modify your release variables in a script
Q: I'm having problems. How can I troubleshoot them?
A: Try this:
1. On the variables tab, add system.debug and set it to true . Select to allow at queue time.
2. In the explorer tab, view your completed build and click the build step to view its output.
The control options arguments described above can also be useful when you're trying to isolate a problem.
Q: How do variables work? What variables are available for me to use in the arguments?
A: $(Build.SourcesDirectory) and $(Agent.BuildDirectory) are just a few of the variables you can use. Variables
are available in expressions as well as scripts; see variables to learn more about how to use them. There are
some predefined build and release variables you can also rely on.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
Command Line task

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to run a program from the command prompt.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are
called phases.

Demands
None

YAML snippet
# Command line
# Run a command line script using Bash on Linux and macOS and cmd.exe on Windows
- task: CmdLine@2
inputs:
script: 'echo Write your commands here.'
#workingDirectory: # Optional
#failOnStderr: false # Optional

The CmdLine task also has a shortcut syntax in YAML:

- script: # script path or inline


workingDirectory: #
displayName: #
failOnStderr: #
env: { string: string } # mapping of environment variables to add

Running batch and .CMD files


Azure Pipelines puts your inline script contents into a temporary batch file (.cmd) in order to run it. When you
want to run a batch file from another batch file in Windows CMD, you must use the call command, otherwise
the first batch file is terminated. This will result in Azure Pipelines running your intended script up until the first
batch file, then running the batch file, then ending the step. Additional lines in the first script wouldn't be run.
You should always prepend call before executing a batch file in an Azure Pipelines script step.

IMPORTANT
You may not realize you're running a batch file. For example, npm on Windows, along with any tools that you install
using npm install -g , are actually batch files. Always use call npm <command> to run NPM commands in a
Command Line task on Windows.
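
For example, the following sketch uses call so that the lines after each npm invocation still run; the npm commands are illustrative and assume a Windows agent with Node.js installed.

# Sketch: 'call' returns control to this script after npm (a batch file) finishes.
steps:
- script: |
    call npm install
    call npm run build
    echo npm steps finished, this line still runs
  displayName: 'Build with npm on Windows'
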
Arguments
Argument | Description

script (Required) Contents of the script you want to run


Script Default value:
"echo Write your commands here\n\necho Hello world\n"

workingDirectory (Optional) Specify the working directory in which you want


Working directory to run the command. If you leave it empty, the working
directory is $(Build.SourcesDirectory).

failOnStderr If this is true, this task will fail if any errors are written to
Fail on Standard Error stderr

env (Optional) A list of additional items to map into the process's


Environment variables environment.
For example, secret variables are not automatically mapped.
If you have a secret variable called Foo , you can map it in
as shown in the following example.

- script: echo %MYSECRET%


env:
MySecret: $(Foo)

Example
YAML
Classic

steps:
- script: date /t
displayName: Get the date
- script: dir
workingDirectory: $(Agent.BuildDirectory)
displayName: List contents of a folder
- script: |
set MYVAR=foo
set
displayName: Set a variable and then display all
env:
aVarFromYaml: someValue

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Where can I learn Windows commands?
An A-Z Index of the Windows CMD command line
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
Copy and Publish Build Artifacts task

TFS 2015
Use this task to copy build artifacts to a staging folder and then publish them to the server or a file share. Files are
copied to the $(Build.ArtifactStagingDirectory) staging folder and then published.

IMPORTANT
If you're using Azure Pipelines, or Team Foundation Server (TFS) 2017 or newer, we recommend that you do NOT use this
deprecated task. Instead, use the Copy Files and Publish Build Artifacts tasks. See Artifacts in Azure Pipelines.

IMPORTANT
Are you using Team Foundation Server (TFS) 2015.4? If so, we recommend that you do NOT use this deprecated task.
Instead, use the Copy Files and Publish Build Artifacts tasks. See Artifacts in Azure Pipelines.
You should use this task only if you're using Team Foundation Server (TFS) 2015 RTM. In that version of TFS, this task is listed
under the Build category and is named Publish Build Artifacts.

Demands
None

Arguments
Argument | Description

Copy Root Folder that contains the files you want to copy. If you
leave it empty, the copying is done from the root folder of
the repo (same as if you had specified
$(Build.SourcesDirectory) ).

If your build produces artifacts outside of the sources


directory, specify $(Agent.BuildDirectory) to copy files
from the build agent working directory.

Contents Specify pattern filters (one on each line) that you want to
apply to the list of files to be copied. For example:
** copies all files in the root folder.
**\* copies all files in the root folder and all files in
all sub-folders.
**\bin copies files in any sub-folder named bin.

Artifact Name Specify the name of the artifact. For example: drop

Artifact Type Choose server to store the artifact on your Team
Foundation Server. This is the best and simplest option in
most cases. See Artifacts in Azure Pipelines.

Control options

FAQ
Q: This step didn't produce the outcome I was expecting. How can I fix it?
This task has a couple of known issues:
Some minimatch patterns don't work.
It eliminates the most common root path for all paths matched.
You can avoid these issues by instead using the Copy Files task and the Publish Build Artifacts task.
Q: I'm having problems. How can I troubleshoot them?
A: Try this:
1. On the variables tab, add system.debug and set it to true . Select to allow at queue time.
2. In the explorer tab, view your completed build and click the build step to view its output.
The control options arguments described above can also be useful when you're trying to isolate a problem.
Q: How do variables work? What variables are available for me to use in the arguments?
A: $(Build.SourcesDirectory) and $(Agent.BuildDirectory) are just a few of the variables you can use. Variables
are available in expressions as well as scripts; see variables to learn more about how to use them. There are some
predefined build and release variables you can also rely on.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Copy Files task

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015.3
Use this task to copy files from a source folder to a target folder using match patterns.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

Demands
None

YAML snippet
# Copy files
# Copy files from a source folder to a target folder using patterns matching file paths (not folder paths)
- task: CopyFiles@2
inputs:
#sourceFolder: # Optional
#contents: '**'
targetFolder:
#cleanTargetFolder: false # Optional
#overWrite: false # Optional
#flattenFolders: false # Optional
#preserveTimestamp: false # Optional

Arguments
Argument | Description

SourceFolder (Optional) Folder that contains the files you want to copy. If
Source Folder you leave it empty, the copying is done from the root folder
of the repo (same as if you had specified
$(Build.SourcesDirectory) ).
If your build produces artifacts outside of the sources
directory, specify $(Agent.BuildDirectory) to copy files
from the directory created for the pipeline.

Contents (Required) File paths to include as part of the copy. Supports


Contents multiple lines of match patterns.
For example:
* copies all files in the specified source folder
** copies all files in the specified source folder and
all files in all sub-folders
**\bin\** copies all files recursively from any bin
folder

The pattern is used to match only file paths, not


folder paths. So you should specify patterns such as
**\bin\** instead of **\bin.
You must use the path separator that matches your
build agent type. For example, / must be used for Linux
agents. More examples are shown below.
Default value: **

TargetFolder (Required) Target folder or UNC path files will copy to. You
Target Folder can use variables.
Example: $(build.ar tifactstagingdirector y)

CleanTargetFolder (Optional) Delete all existing files in target folder before copy
Clean Target Folder Default value: false

OverWrite (Optional) Replace existing files in target folder


Overwrite Default value: false

flattenFolders (Optional) Flatten the folder structure and copy all files into
Flatten Folders the specified target folder
Default value: false

preserveTimestamp (Optional) Using the original source file, preserve the target
Preserve Target Timestamp file timestamp.
Default value: false

Notes
If no files are matched, the task will still report success. If a matched file already exists in the target, the task will
report failure unless Overwrite is set to true.

Usage
A typical pattern for using this task is:
Build something
Copy build outputs to a staging directory
Publish staged artifacts
For example:
steps:
- script: ./buildSomething.sh
- task: CopyFiles@2
inputs:
contents: '_buildOutput/**'
targetFolder: $(Build.ArtifactStagingDirectory)
- task: PublishBuildArtifacts@1
inputs:
pathToPublish: $(Build.ArtifactStagingDirectory)
artifactName: MyBuildOutputs

Examples
Copy executables and a readme file
Goal
You want to copy just the readme and the files needed to run this C# console app:

`-- ConsoleApplication1
|-- ConsoleApplication1.sln
|-- readme.txt
`-- ClassLibrary1
|-- ClassLibrary1.csproj
`-- ClassLibrary2
|-- ClassLibrary2.csproj
`-- ConsoleApplication1
|-- ConsoleApplication1.csproj

NOTE
ConsoleApplication1.sln contains a bin folder with .dll and .exe files, see the Results below to see what gets moved!

On the Variables tab, $(BuildConfiguration) is set to release .


YAML
Classic
Example with multiple match patterns:

steps:
- task: CopyFiles@2
displayName: 'Copy Files to: $(Build.ArtifactStagingDirectory)'
inputs:
Contents: |
ConsoleApplication1\ConsoleApplication1\bin\**\*.exe
ConsoleApplication1\ConsoleApplication1\bin\**\*.dll
ConsoleApplication1\readme.txt
TargetFolder: '$(Build.ArtifactStagingDirectory)'

Example with OR condition:


steps:
- task: CopyFiles@2
displayName: 'Copy Files to: $(Build.ArtifactStagingDirectory)'
inputs:
Contents: |
ConsoleApplication1\ConsoleApplication1\bin\**\?(*.exe|*.dll)
ConsoleApplication1\readme.txt
TargetFolder: '$(Build.ArtifactStagingDirectory)'

Example with NOT condition:

steps:
- task: CopyFiles@2
displayName: 'Copy Files to: $(Build.ArtifactStagingDirectory)'
inputs:
Contents: |
ConsoleApplication1\**\bin\**\!(*.pdb|*.config)
!ConsoleApplication1\**\ClassLibrary*\**
ConsoleApplication1\readme.txt
TargetFolder: '$(Build.ArtifactStagingDirectory)'

YAML builds are not yet available on TFS.


Results
These files are copied to the staging directory:

`-- ConsoleApplication1
|-- readme.txt
`-- ConsoleApplication1
`-- bin
`-- Release
| -- ClassLibrary1.dll
| -- ClassLibrary2.dll
| -- ConsoleApplication1.exe

Copy everything from the source directory except the .git folder
YAML
Classic
Example with multiple match patterns:

steps:
- task: CopyFiles@2
displayName: 'Copy Files to: $(Build.ArtifactStagingDirectory)'
inputs:
SourceFolder: '$(Build.SourcesDirectory)'
Contents: |
**/*
!.git/**/*
TargetFolder: '$(Build.ArtifactStagingDirectory)'

YAML builds are not yet available on TFS.

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Where can I learn more about file matching patterns?
File matching patterns reference
How do I use this task to publish artifacts?
See Artifacts in Azure Pipelines.
Q: I'm having problems. How can I troubleshoot them?
A: Try this:
1. On the variables tab, add system.debug and set it to true . Select to allow at queue time.
2. In the explorer tab, view your completed build and click the build step to view its output.
The control options arguments described above can also be useful when you're trying to isolate a problem.
Q: How do variables work? What variables are available for me to use in the arguments?
A: $(Build.SourcesDirectory) and $(Agent.BuildDirectory) are just a few of the variables you can use. Variables
are available in expressions as well as scripts; see variables to learn more about how to use them. There are
some predefined build and release variables you can also rely on.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
cURL Upload Files task

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to upload files with cURL over supported protocols such as FTP, FTPS, SFTP, HTTP, and more.

Demands
curl

YAML snippet
# cURL upload files
# Use cURL's supported protocols to upload files
- task: cURLUploader@2
inputs:
files:
#authType: 'ServiceEndpoint' # Optional. Options: serviceEndpoint, userAndPass
#serviceEndpoint: # Required when authType == ServiceEndpoint
#username: # Optional
#password: # Optional
#url: # Required when authType == UserAndPass
#remotePath: 'upload/$(Build.BuildId)/' # Optional
#options: # Optional
#redirectStderr: true # Optional

Arguments
Argument | Description

files (Required) File(s) to be uploaded. Wildcards can be used.


Files For example, **/*.zip for all ZIP files in all subfolders

authType Default value: ServiceEndpoint


Authentication Method

serviceEndpoint (Required) The service connection with the credentials for the
Service Connection server authentication.
Use the Generic service connection type for the service
connection

username (Optional) Specify the username for server authentication.


Username

password (Optional) Specify the password for server authentication.


Password Impor tant : Use a secret variable to avoid exposing this value

url (Required) URL to the location where you want to upload the
URL files.
If you are uploading to a folder, make sure to end the
argument with a trailing slash.

Acceptable URL protocols include


DICT://, FILE://, FTP://, FTPS://, GOPHER://,
HTTP://, HTTPS://, IMAP://, IMAPS://, LDAP://,
LDAPS://, POP3://, POP3S://, RTMP://, RTSP://,
SCP://, SFTP://, SMTP://, SMTPS://, TELNET://,
and TFTP://

remotePath (Optional) If supplied, this is the sub-folder on the remote


Remote Directory server for the URL supplied in the credentials
Default value: upload/$(Build.BuildId)/

options (Optional) Arguments to pass to cURL.


Optional Arguments

redirectStderr Adds --stderr - as an argument to cURL. By default, cURL


Redirect Standard Error to Standard Out writes its progress bar to stderr, which is interpreted by the
build as error output. Enabling this checkbox suppresses that
behavior
Default value: true
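
Example
The sketch below uploads the ZIP files produced by a build to an FTP location using username/password authentication. The URL is a placeholder, and the credentials are assumed to come from pipeline variables (keep the password in a secret variable).

# Sketch: upload build ZIPs over FTP; end folder URLs with a trailing slash.
steps:
- task: cURLUploader@2
  inputs:
    files: '$(Build.ArtifactStagingDirectory)/**/*.zip'
    authType: 'userAndPass'
    username: '$(ftpUser)'               # placeholder variable
    password: '$(ftpPassword)'           # placeholder secret variable
    url: 'ftp://ftp.example.com/drops/'  # placeholder URL
    remotePath: 'upload/$(Build.BuildId)/'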

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Where can I learn more about file matching patterns?
File matching patterns reference
Where can I learn FTP commands?
List of raw FTP commands
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Decrypt File (OpenSSL) task

Azure Pipelines
Use this task to decrypt files using OpenSSL.

YAML snippet
# Decrypt file (OpenSSL)
# Decrypt a file using OpenSSL
- task: DecryptFile@1
inputs:
#cipher: 'des3'
inFile:
passphrase:
#outFile: # Optional
#workingDirectory: # Optional

Arguments
Argument | Description

cipher (Required) Encryption cypher to use. See cypher suite names


Cypher for a complete list of possible values
Default value: des3

inFile (Required) Relative path of file to decrypt.


Encrypted file

passphrase (Required) Passphrase to use for decryption. Use a Variable to


Passphrase encrypt the passphrase.

outFile (Optional) Optional filename for decrypted file. Defaults to the


Decrypted file path Encrypted File with a ".out" extension

cwd (Optional) Working directory for decryption. Defaults to the


Working directory root of the repository
Argument aliases: workingDirectory
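
Example
The following sketch decrypts a file that was committed to the repository in encrypted form. The file path and the secret variable name (decryptPassphrase) are placeholders for illustration.

# Sketch: decrypt an encrypted file from the repo using a secret passphrase.
steps:
- task: DecryptFile@1
  inputs:
    cipher: 'des3'
    inFile: 'config/settings.json.enc'   # placeholder encrypted file
    passphrase: '$(decryptPassphrase)'   # secret variable holding the passphrase
    outFile: 'config/settings.json'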

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Delay task

Azure Pipelines
Use this task in an agentless job of a release pipeline to pause execution of the pipeline for a fixed delay time.

Demands
Can be used in only an agentless job of a release pipeline.

YAML snippet
# Delay
# Delay further execution of a workflow by a fixed time
jobs:
- job: RunsOnServer
pool: Server
steps:
- task: Delay@1
inputs:
#delayForMinutes: '0'

Arguments
Argument | Description

delayForMinutes (Required) Delay the execution of the workflow by specified


Delay Time (minutes) time in minutes.
A value of 0 means that workflow execution will start without delay.
Default value: 0

Also see this task on GitHub.

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
Delete Files task

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015.3
Use this task to delete files or folders from the agent working directory.

Demands
None

YAML snippet
# Delete files
# Delete folders, or files matching a pattern
- task: DeleteFiles@1
inputs:
#SourceFolder: # Optional
#Contents: 'myFileShare'
#RemoveSourceFolder: # Optional

Arguments
Argument | Description

SourceFolder (Optional) Folder that contains the files you want to delete. If
Source Folder you leave it empty, the deletions are done from the root
folder of the repo (same as if you had specified
$(Build.SourcesDirectory) ).
If your build produces artifacts outside of the sources
directory, specify $(Agent.BuildDirectory) to delete files
from the build agent working directory.

Contents (Required) File/folder paths to delete. Supports multiple lines


Contents of minimatch patterns; each one is processed before moving
onto the next line. More Information.
For example:
**/* deletes all files and folders in the root folder.
temp deletes the temp folder in the root folder.
temp* deletes any file or folder in the root folder with
a name that begins with temp.
**/temp/* deletes all files and folders in any sub-
folder named temp.
**/temp* deletes any file or folder with a name that
begins with temp.
!(*.vsix) deletes all files in the root folder that do
not have a .vsix extension.

RemoveSourceFolder (Optional) Attempt to remove the source folder after


Remove SourceFolder attempting to remove Contents .
Default value: false .
If you want to remove the whole folder, set this to true and
set Contents to * .

Examples
Delete several patterns
This example will delete some/file , all files beginning with test , and all files in all subdirectories called bin .

steps:
- task: DeleteFiles@1
displayName: 'Remove unneeded files'
inputs:
contents: |
some/file
test*
**/bin/*

Delete all but one subdirectory


This example will delete some/one , some/three and some/four but will leave some/two .

steps:
- task: DeleteFiles@1
displayName: 'Remove unneeded files'
inputs:
contents: |
some/!(two)

Delete using brace expansion


This example will delete some/one and some/four but will leave some/two and some/three .

steps:
- task: DeleteFiles@1
displayName: 'Remove unneeded files'
inputs:
contents: |
some/{one,four}

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Q: What's a minimatch pattern? How does it work?
A: See:
https://github.com/isaacs/minimatch
https://realguess.net/tags/minimatch/
http://man7.org/linux/man-pages/man3/fnmatch.3.html
Q: I'm having problems. How can I troubleshoot them?
A: Try this:
1. On the variables tab, add system.debug and set it to true . Select to allow at queue time.
2. In the explorer tab, view your completed build and click the build step to view its output.
The control options arguments described above can also be useful when you're trying to isolate a problem.
Q: How do variables work? What variables are available for me to use in the arguments?
A: $(Build.SourcesDirectory) and $(Agent.BuildDirectory) are just a few of the variables you can use. Variables
are available in expressions as well as scripts; see variables to learn more about how to use them. There are some
predefined build and release variables you can also rely on.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Download Build Artifacts task

Azure Pipelines
Use this task to download build artifacts.

YAML snippet
# Download build artifacts
# Download files that were saved as artifacts of a completed build
- task: DownloadBuildArtifacts@0
inputs:
#buildType: 'current' # Options: current, specific
#project: # Required when buildType == Specific
#pipeline: # Required when buildType == Specific
#specificBuildWithTriggering: false # Optional
#buildVersionToDownload: 'latest' # Required when buildType == Specific. Options: latest, latestFromBranch, specific
#allowPartiallySucceededBuilds: false # Optional
#branchName: 'refs/heads/master' # Required when buildType == Specific && BuildVersionToDownload == LatestFromBranch
#buildId: # Required when buildType == Specific && BuildVersionToDownload == Specific
#tags: # Optional
#downloadType: 'single' # Choose whether to download a single artifact or all artifacts of a specific build. Options: single, specific
#artifactName: # Required when downloadType == Single
#itemPattern: '**' # Optional
#downloadPath: '$(System.ArtifactsDirectory)'
#parallelizationLimit: '8' # Optional

Arguments
Argument | Description

buildType (Required) Download artifacts produced by the current build,


Download artifacts produced by or from a specific build
Default value: current

project (Required) The project from which to download the build


Project artifacts

definition (Required) Select the build pipeline name


Build pipeline Argument aliases: pipeline

specificBuildWithTriggering (Optional) If true, this build task will try to download artifacts
When appropriate, download artifacts from the triggering from the triggering build. If there is no triggering build from
build the specified pipeline, it will download artifacts from the build
specified in the options below.
Default value: false

buildVersionToDownload (Required) Specify which version of the build to download.


Build version to download Choose latest to download the latest available build
version
Choose latestFromBranch to download the latest
available build version of the branch specified by
branchName and tags specified by tags
Choose specific to download the build version
specified by buildId

allowPartiallySucceededBuilds (Optional) If checked, this build task will try to download


Download artifacts even from partially succeeded builds artifacts whether the build is succeeded or partially succeeded
Default value: false

branchName (Required) Specify to filter on branch/ref name.


Branch name Default value: refs/heads/develop

buildId (Required) The build from which to download the artifacts


Build

tags (Optional) A comma-delimited list of tags. Only builds with


Build tags these tags will be returned.

downloadType (Required) Choose whether to download a single artifact or all


Download type artifacts of a specific build.
Default value: single

artifactName (Required) The name of the artifact to download


Artifact name

itemPattern (Optional) Specify files to be downloaded as multi-line


Matching pattern minimatch pattern. More Information.
The default pattern will download all files across all artifacts in
the build if the Specific files option is selected. To
download all files within an artifact drop use drop/
Default value: **

downloadPath (Required) Path on the agent machine where the artifacts will
Destination directory be downloaded
Default value: $(System.ArtifactsDirectory)

parallelizationLimit (Optional) Number of files to download simultaneously


Parallelization limit Default value: 8
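
Example
The sketch below downloads the latest 'drop' artifact published by another pipeline in the same organization. The project name and pipeline (definition) ID are placeholders.

# Sketch: download the 'drop' artifact from the latest run of pipeline 12.
steps:
- task: DownloadBuildArtifacts@0
  inputs:
    buildType: 'specific'
    project: 'FabrikamFiber'             # placeholder project
    pipeline: '12'                       # placeholder definition ID
    buildVersionToDownload: 'latest'
    downloadType: 'single'
    artifactName: 'drop'
    downloadPath: '$(System.ArtifactsDirectory)'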

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Download Fileshare Artifacts task

Azure Pipelines
Use this task to download fileshare artifacts.

YAML snippet
# Download artifacts from file share
# Download artifacts from a file share, like \\share\drop
- task: DownloadFileshareArtifacts@1
inputs:
filesharePath:
artifactName:
#itemPattern: '**' # Optional
#downloadPath: '$(System.ArtifactsDirectory)'
#parallelizationLimit: '8' # Optional

Arguments
Argument | Description

Fileshare path (Required) Example: \\server\folder

Artifact name (Required) The name of the artifact to download.

Matching pattern (Optional) Specify files to be downloaded as multiline


minimatch patterns. More Information.
The default pattern ( ** ) will download all files within the
artifact.

Download path (Required) Path on the agent machine where the artifacts will
be downloaded.

Parallelization limit (Optional) Number of files to download simultaneously.

Control options
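
Example
The following sketch downloads an artifact named drop from a UNC file share; the share path is a placeholder.

# Sketch: download the 'drop' artifact from a file share.
steps:
- task: DownloadFileshareArtifacts@1
  inputs:
    filesharePath: '\\fileserver\builds'   # placeholder UNC path
    artifactName: 'drop'
    itemPattern: '**'
    downloadPath: '$(System.ArtifactsDirectory)'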

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Download GitHub Release task

Azure Pipelines
Use this task in your pipeline to download assets from your GitHub release as part of your CI/CD pipeline.

Prerequisites
GitHub service connection
This task requires a GitHub service connection with Read permission to the GitHub repository. You can create a
GitHub service connection in your Azure Pipelines project. Once created, use the name of the service connection in
this task's settings.

YAML snippet
# Download GitHub Release
# Downloads a GitHub Release from a repository
- task: DownloadGitHubRelease@0
inputs:
connection:
userRepository:
#defaultVersionType: 'latest' # Options: latest, specificVersion, specificTag
#version: # Required when defaultVersionType != Latest
#itemPattern: '**' # Optional
#downloadPath: '$(System.ArtifactsDirectory)'

Arguments
Argument | Description

connection (Required) Enter the service connection name for your GitHub
GitHub Connection connection. Learn more about service connections here.

userRepository (Required) Select the name of the GitHub repository from which
Repository the release assets will be downloaded.

defaultVersionType (Required) The version of the GitHub Release from which the
Default version assets are downloaded. The version type can be 'Latest
Release', 'Specific Version' or 'Specific Tag'
Default value: latest

version (Required) This option shows up if 'Specific Version' or 'Specific


Release Tag' is selected as Default Release version type. Based on the
version type selected, Release name or the Tag needs to be
provided.

itemPattern (Optional) Minimatch pattern to filter files to be downloaded


Item pattern from the available release assets. To download all files within
release use **.

downloadPath (Required) Path on the agent machine where the release assets
Destination directory will be downloaded.
Default value: $(System.ArtifactsDirectory)
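
Example
The sketch below downloads the assets of a specific tagged release. The service connection name, repository, and tag are placeholders.

# Sketch: download the assets of release tag v1.2.3.
steps:
- task: DownloadGitHubRelease@0
  inputs:
    connection: 'my-github-service-connection'   # placeholder
    userRepository: 'contoso/widgets'            # placeholder owner/repo
    defaultVersionType: 'specificTag'
    version: 'v1.2.3'                            # placeholder tag
    downloadPath: '$(System.ArtifactsDirectory)'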

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Download Package task

Azure Pipelines
Use this task to download a package from a package management feed in Azure Artifacts or TFS. Requires the
Package Management extension.

YAML snippet
# Download package
# Download a package from a package management feed in Azure Artifacts
- task: DownloadPackage@1
inputs:
packageType: # 'nuget' Options: maven, npm, nuget, pypi, upack
feed: # <feedId> for organization-scoped feeds, <projectId>/<feedId> for project-scoped feeds.
#view: ' ' # Optional
definition: # '$(packageName)'
version: # '1.0.0'
#files: '**' # Optional
#extract: true # Optional
downloadPath: # '$(System.ArtifactsDirectory)'

Arguments
Argument | Description

PackageType (Required) Select the type of package to download.

Feed (Required) ID of the feed that contains the package. For


project-scoped feeds, the format is projectID/feedID. See our
FAQ below for information on how to get a feed or project ID,
or information on using project and feed name instead.

View (Optional) Select a view to see package versions only


promoted to that view.

Definition (Required) Select the package to download. This can be the


artifact ID or the package name.

Version (Required) Version of the package.

Files (Optional) Specify files to be downloaded as multiline


minimatch patterns. More Information. The default pattern
(**) will download all files within the artifact.

Extract (Optional) Specify whether to extract the package contents at


the destination directory.

DownloadPath (Required) Path on the agent machine where the package will
be downloaded.
Examples
Download a NuGet package from an organization-scoped feed and extract to destination directory

# Download an artifact with id 'cfe01b64-ded4-47b7-a569-2ac17cbcedbd' to $(System.ArtifactsDirectory)


- task: DownloadPackage@1
inputs:
packageType: 'nuget'
feed: '6a60ef3b-e29f-41b6-9885-7874278baac7'
definition: 'cfe01b64-ded4-47b7-a569-2ac17cbcedbd' # Can also be package name
version: '1.0.0'
extract: true
downloadPath: '$(System.ArtifactsDirectory)'

Download a maven package from a project-scoped feed and download only pom files.

# Download an artifact with name 'com.test:testpackage' to $(System.ArtifactsDirectory)


- task: DownloadPackage@1
inputs:
packageType: 'maven'
feed: '132f5c2c-2aa0-475a-8b47-02c79617954b/c85e5de9-7b12-4cfd-9293-1b33cdff540e' # <projectId>/<feedId>
definition: 'com.test:testpackage'
version: '1.0.0-snapshot' # Should be normalized version
files: '*.pom'
downloadPath: '$(System.ArtifactsDirectory)'

FAQ
How do I find the ID of the feed (or project) I want to download my artifact from?
The Get Feed API can be used to retrieve the feed and project ID for your feed. The API is documented here.
Can I use the project or feed name instead of IDs?
Yes, you can use the project or feed name in your definition. However, if your project or feed is renamed in the
future, your task will also have to be updated or it might fail.

Open-source
This task is open source on GitHub. Feedback and contributions are welcome.
Download Pipeline Artifacts task

Use this task to download pipeline artifacts from earlier stages in this pipeline, or from another pipeline.

NOTE
For more information, including Azure CLI commands, see downloading artifacts.

YAML snippet
# Download pipeline artifacts
# Download build and pipeline artifacts
- task: DownloadPipelineArtifact@2
inputs:
#source: 'current' # Options: current, specific
#project: # Required when source == Specific
#pipeline: # Required when source == Specific
#preferTriggeringPipeline: false # Optional
#runVersion: 'latest' # Required when source == Specific. Options: latest, latestFromBranch, specific
#runBranch: 'refs/heads/master' # Required when source == Specific && RunVersion == LatestFromBranch
#runId: # Required when source == Specific && RunVersion == Specific
#tags: # Optional
#artifact: # Optional
#patterns: '**' # Optional
#path: '$(Pipeline.Workspace)'

Arguments
Argument | Description

source (Required) Download artifacts produced by the current


Download artifacts produced by pipeline run, or from a specific pipeline run.
Options: current , specific
Default value: current
Argument aliases: buildType

project (Required) The project GUID from which to download the


Project pipeline artifacts.

pipeline (Required) The definition ID of the build pipeline.


Build Pipeline Argument aliases: definition

preferTriggeringPipeline (Optional) A boolean specifying whether to download artifacts


When appropriate, download artifacts from the triggering from a triggering build.
build Default value: false
Argument aliases: specificBuildWithTriggering

runVersion (Required) Specifies which build version to download. Options:


Build version to download latest , latestFromBranch , specific
Default value: latest
Argument aliases: buildVersionToDownload

runBranch Specify to filter on branch/ref name. For example:


Branch Name refs/heads/develop .
Default value: refs/heads/master
Argument aliases: branchName

runId (Required) The build from which to download the artifacts. For
Build example: 1764
Argument aliases: pipelineId , buildId

tags (Optional) A comma-delimited list of tags. Only builds with


Build Tags these tags will be returned.

allowPartiallySucceededBuilds (Optional) If checked, this build task will try to download


Download artifacts from partially succeeded builds artifacts whether the build is succeeded or partially succeeded
Default value: false

allowFailedBuilds (Optional) If checked, this build task will try to download


Download artifacts from failed builds artifacts whether the build is succeeded or failed
Default value: false

artifact (Optional) The name of the artifact to download. If left empty,


Artifact Name all artifacts associated to the pipeline run will be downloaded.
Argument aliases: artifactName

patterns (Optional) One or more file matching patterns (new line


Matching Patterns delimited) that limit which files get downloaded. More
Information on file matching patterns
Default value: **
Argument aliases: itemPattern

path (Required) Directory to download the artifact files. Can be


Destination Directory relative to the pipeline workspace directory or absolute. If
multi-download option is applied (by leaving an empty
artifact name), a sub-directory will be created for each. See
Artifacts in Azure Pipelines.
Default value: $(Pipeline.Workspace)
Argument aliases: targetPath , downloadPath

NOTE
If you want to consume artifacts as part of CI/CD flow, refer to the download shortcut here.
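
As a brief, hedged illustration of that shortcut, the following sketch downloads an artifact from the current run using the download keyword; the artifact name 'WebApp' and the pattern are only examples:

steps:
- download: current
  artifact: WebApp
  patterns: '**/*.zip'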

Examples
Download a specific artifact
# Download an artifact named 'WebApp' to 'bin' in $(Build.SourcesDirectory)
- task: DownloadPipelineArtifact@2
  inputs:
    artifact: 'WebApp'
    path: $(Build.SourcesDirectory)/bin

Download artifacts from a specific project/pipeline

# Download artifacts from a specific pipeline.
- task: DownloadPipelineArtifact@2
  inputs:
    source: 'specific'
    project: 'FabrikamFiber'
    pipeline: 12
    runVersion: 'latest'

Download artifacts from a specific branch

# Download artifacts from a specific branch with a tag
- task: DownloadPipelineArtifact@2
  inputs:
    source: 'specific'
    project: 'FabrikamFiber'
    pipeline: 12
    runVersion: 'latestFromBranch'
    runBranch: 'refs/heads/master'
    tags: 'testTag'

Download an artifact from a specific build run

# Download an artifact named 'WebApp' from a specific build run to 'bin' in $(Build.SourcesDirectory)
- task: DownloadPipelineArtifact@2
  inputs:
    source: 'specific'
    artifact: 'WebApp'
    path: $(Build.SourcesDirectory)/bin
    project: 'FabrikamFiber'
    pipeline: 12
    runVersion: 'specific'
    runId: 40

FAQ
How can I find the ID of the pipeline I want to download an artifact from?
You can find the ID of the pipeline in its pipeline variables: the pipeline ID is the value of the system.definitionId variable.
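
For example, a minimal step like this sketch prints the definition ID of the pipeline it runs in, which you can then use as the pipeline input of this task:

steps:
- script: echo "This pipeline's definition ID is $(System.DefinitionId)"
  displayName: 'Print pipeline (definition) ID'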

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Download Secure File task

Azure Pipelines
Use this task in a pipeline to download a secure file to the agent machine. When specifying the name of the file
(using the secureFile input), use the name you specified when uploading it rather than the actual filename.
Once downloaded, use the name value that is set on the task (or "Reference name" in the classic editor) to
reference the path to the secure file on the agent machine. For example, if the task is given the name mySecureFile,
its path can be referenced in the pipeline as $(mySecureFile.secureFilePath). Alternatively, downloaded secure
files can be found in the directory given by $(Agent.TempDirectory). See a full example below.
When the pipeline job completes, no matter whether it succeeds, fails, or is canceled, the secure file is deleted from
its download location.
It is unnecessary to use this task with the Install Apple Certificate or Install Apple Provisioning Profile tasks
because they automatically download, install, and delete (at the end of the pipeline job) the secure file.

YAML snippet
# Download secure file
# Download a secure file to the agent machine
- task: DownloadSecureFile@1
  name: mySecureFile # The name used to reference the secure file's path on the agent, like $(mySecureFile.secureFilePath)
  inputs:
    secureFile: # The file name or GUID of the secure file
    #retryCount: 5 # Optional

Arguments
secureFile (Secure file)
(Required) The file name or unique identifier (GUID) of the secure file to download to the agent machine. The file will be deleted when the pipeline job completes.

retryCount (Retry count)
(Optional) Number of times to retry downloading the secure file if the download fails.
Default value: 5

Example
This example downloads a secure certificate file and installs it to a trusted certificate authority (CA) directory on
Linux:
- task: DownloadSecureFile@1
  name: caCertificate
  displayName: 'Download CA certificate'
  inputs:
    secureFile: 'myCACertificate.pem'

- script: |
    echo Installing $(caCertificate.secureFilePath) to the trusted CA directory...
    sudo chown root:root $(caCertificate.secureFilePath)
    sudo chmod a+r $(caCertificate.secureFilePath)
    sudo ln -s -t /etc/ssl/certs/ $(caCertificate.secureFilePath)

Extract Files task

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use this task to extract files from archives to a target folder using match patterns. A range of standard archive
formats is supported, including .zip, .jar, .war, .ear, .tar, .7z, and more.

Demands
None

YAML snippet
# Extract files
# Extract a variety of archive and compression files such as .7z, .rar, .tar.gz, and .zip
- task: ExtractFiles@1
  inputs:
    #archiveFilePatterns: '**/*.zip'
    destinationFolder:
    #cleanDestinationFolder: true

Arguments
Archive file patterns
Patterns to match the archives you want to extract. By default, patterns start in the root folder of the repo (same as if you had specified $(Build.SourcesDirectory)).
Specify pattern filters, one per line, that match the archives to extract. For example:
test.zip extracts the test.zip file in the root folder.
test/*.zip extracts all .zip files in the test folder.
**/*.tar extracts all .tar files in the root folder and sub-folders.
**/bin/*.7z extracts all .7z files in any sub-folder named bin.
The pattern is used to match only archive file paths, not folder paths, and not archive contents to be extracted. So you should specify patterns such as **/bin/** instead of **/bin.

Destination folder
Folder where the archives will be extracted. The default file path is relative to the root folder of the repo (same as if you had specified $(Build.SourcesDirectory)).

Clean destination folder before extracting
Select this check box to delete all existing files in the destination folder before beginning to extract archives.

Control options

Examples
Extract all .zip files recursively
This example will extract all .zip files recursively, including both root files and files from sub-folders.

steps:
- task: ExtractFiles@1
  inputs:
    archiveFilePatterns: '**/*.zip'
    cleanDestinationFolder: true

Extract all .zip files from a subfolder

This example will extract test/one.zip and test/two.zip, but will leave test/nested/three.zip.

steps:
- task: ExtractFiles@1
  inputs:
    archiveFilePatterns: 'test/*.zip'
    cleanDestinationFolder: true

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Where can I learn more about file matching patterns?
File matching patterns reference
Q: I'm having problems. How can I troubleshoot them?
A: Try this:
1. On the variables tab, add system.debug and set it to true . Select to allow at queue time.
2. In the explorer tab, view your completed build and click the build step to view its output.
The control options arguments described above can also be useful when you're trying to isolate a problem.
Q: How do variables work? What variables are available for me to use in the arguments?
A: $(Build.SourcesDirectory) and $(Agent.BuildDirectory) are just a few of the variables you can use. Variables
are available in expressions as well as scripts; see variables to learn more about how to use them. There are some
predefined build and release variables you can also rely on.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
File Transform task

Use this task to apply file transformations and variable substitutions on configuration and parameters files. For
details of how transformations are processed, see File transforms and variable substitution reference.
File transformations
At present file transformations are supported for only XML files.
To apply XML transformation to configuration files (*.config) you must specify a newline-separated list of
transformation file rules using the syntax:
-transform <path to the transform file> -xml <path to the source file> -result <path to the result file>

File transformations are useful in many scenarios, particularly when you are deploying to an App service
and want to add, remove or modify configurations for different environments (such as Dev, Test, or Prod) by
following the standard Web.config Transformation Syntax.
You can also use this functionality to transform other files, including Console or Windows service
application configuration files (for example, FabrikamService.exe.config).
Config file transformations are run before variable substitutions.
Variable substitution
At present only XML and JSON file formats are supported for variable substitution.
Tokens defined in the target configuration files are updated and then replaced with variable values.
Variable substitutions are run after config file transformations.
Variable substitution is applied for only the JSON keys predefined in the object hierarchy. It does not create
new keys.
Examples
If you need XML transformation to run on all the configuration files named with pattern .Production.config , the
transformation rule should be specified as:
-transform **\*.Production.config -xml **\*.config

If you have a configuration file named based on the stage name in your pipeline, you can use:
-transform **\*.$(Release.EnvironmentName).config -xml **\*.config

To substitute JSON variables that are nested or hierarchical, specify them using JSONPath expressions. For
example, to replace the value of ConnectionString in the sample below, you must define a variable as
Data.DefaultConnection.ConnectionString in the build or release pipeline (or in a stage within the release pipeline).
{
"Data": {
"DefaultConnection": {
"ConnectionString": "Server=(localdb)\SQLEXPRESS;Database=MyDB;Trusted_Connection=True"
}
}
}
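
For instance, a minimal sketch of the corresponding pipeline variable definition follows; the connection string value shown is only a placeholder:

variables:
- name: Data.DefaultConnection.ConnectionString
  value: 'Server=prodserver;Database=MyDB;Trusted_Connection=True'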

NOTE
Only custom variables defined in build and release pipelines are used in substitution. Default and system pipeline variables
are excluded.
Here's a list of currently excluded prefixes:
'agent.'
'azure_http_user_agent'
'build.'
'common.'
'release.'
'system.'
'tf_'
If the same variables are defined in both the release pipeline and in a stage, the stage-defined variables supersede the
pipeline-defined variables.

See also: File transforms and variable substitution reference.

Demands
None

YAML snippet
# File transform
# Replace tokens with variable values in XML or JSON configuration files
- task: FileTransform@1
  inputs:
    #folderPath: '$(System.DefaultWorkingDirectory)/**/*.zip'
    #enableXmlTransform: # Optional
    #xmlTransformationRules: '-transform **\*.Release.config -xml **\*.config -transform **\*.$(Release.EnvironmentName).config -xml **\*.config' # Optional
    #fileType: # Optional. Options: xml, json
    #targetFiles: # Optional

Arguments
folderPath (Package or folder)
File path to the package or a folder. Variables (Build | Release) and wildcards are supported. For example, $(System.DefaultWorkingDirectory)/**/*.zip. For zipped folders, the contents are extracted to the TEMP location, transformations are executed, and the results are zipped in the original artifact location.

enableXmlTransform (XML transformation)
Enable this option to apply XML transformations based on the rules specified below. Config transforms run prior to any variable substitution. XML transformations are supported only for the Windows platform.

xmlTransformationRules (Transformation rules)
Provide a newline-separated list of transformation file rules using the syntax:
-transform <path to the transform file> -xml <path to the source configuration file> -result <path to the result file>
The result file path is optional and, if not specified, the source configuration file will be replaced with the transformed result file.

fileType (File format)
Specify the file format on which substitution is to be performed. Variable substitution runs after any configuration transforms. For XML, variables defined in the build or release pipelines will be matched against the token ('key' or 'name') entries in the appSettings, applicationSettings, and connectionStrings sections of any config file and parameters.xml file.

targetFiles (Target files)
Provide a newline-separated list of files for variable substitution. File names must be specified relative to the root folder.

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FTP Upload task

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use this task to upload files to a remote machine using the File Transfer Protocol (FTP), or securely with FTPS.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Demands
None

YAML snippet
# FTP upload
# Upload files using FTP
- task: FtpUpload@2
  inputs:
    #credentialsOption: 'serviceEndpoint' # Options: serviceEndpoint, inputs
    #serverEndpoint: # Required when credentialsOption == serviceEndpoint
    #serverUrl: # Required when credentialsOption == inputs
    #username: # Required when credentialsOption == inputs
    #password: # Required when credentialsOption == inputs
    rootDirectory:
    #filePatterns: '**'
    #remoteDirectory: '/upload/$(Build.BuildId)/'
    #clean: false
    #cleanContents: false # Required when clean == false
    #preservePaths: false
    #trustSSL: false

Arguments
credsType (Authentication method)
(Required) Use an FTP service connection or enter connection credentials.
Default value: serviceEndpoint
Argument aliases: credentialsOption

serverEndpoint (FTP service connection)
(Required) Select the service connection for your FTP server. To create one, click the Manage link and create a new Generic service connection; enter the FTP server URL for the server URL, for example ftp://server.example.com, and the required credentials.
Secure connections will always be made regardless of the specified protocol (ftp:// or ftps://) if the target server supports FTPS. To allow only secure connections, use the ftps:// protocol, for example ftps://server.example.com. Connections to servers not supporting FTPS will fail if ftps:// is specified.

serverUrl (Server URL)
(Required)

username (Username)
(Required)

password (Password)
(Required)

rootFolder (Root folder)
(Required) The source folder to upload files from.
Argument aliases: rootDirectory

filePatterns (File patterns)
(Required) File paths or patterns of the files to upload. Supports multiple lines of minimatch patterns. More information.
Default value: **

remotePath (Remote directory)
(Required) Upload files to this directory on the remote FTP server.
Default value: /upload/$(Build.BuildId)/
Argument aliases: remoteDirectory

enableUtf8 (Enable UTF8 support)
(Optional) Enables UTF-8 support for the FTP connection ('OPTS UTF8 ON').
Default value: false

clean (Delete remote directory)
(Required) Delete the remote directory, including its contents, before uploading.
Default value: false

cleanContents (Clear remote directory contents)
(Required) Recursively delete all contents of the remote directory before uploading. The existing directory will not be deleted. For better performance, consider using Delete remote directory instead.
Default value: false

preservePaths (Preserve file paths)
(Required) If selected, the relative local directory structure is recreated under the remote directory where files are uploaded. Otherwise, files are uploaded directly to the remote directory without creating additional subdirectories.
For example, suppose your source folder is /home/user/source/ and contains the file foo/bar/foobar.txt, and your remote directory is /uploads/. If selected, the file is uploaded to /uploads/foo/bar/foobar.txt; otherwise, to /uploads/foobar.txt.
Default value: false

trustSSL (Trust server certificate)
(Required) Selecting this option results in the FTP server's SSL certificate being trusted with ftps://, even if it is self-signed or cannot be validated by a Certificate Authority (CA).
Default value: false

customCmds (FTP commands)
(Optional) Optional FTP commands that will be sent to the remote FTP server upon connection.
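
A minimal sketch of the task using inline credentials follows; the server URL and the credential variable names are assumptions for illustration, and the password should come from a secret pipeline variable:

- task: FtpUpload@2
  displayName: 'Upload build output over FTPS'
  inputs:
    credentialsOption: 'inputs'
    serverUrl: 'ftps://ftp.example.com'
    username: '$(ftpUser)'
    password: '$(ftpPassword)' # define as a secret pipeline variable
    rootDirectory: '$(Build.ArtifactStagingDirectory)'
    filePatterns: '**'
    remoteDirectory: '/upload/$(Build.BuildId)/'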

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Where can I learn more about file matching patterns?
File matching patterns reference
Q: I'm having problems. How can I troubleshoot them?
A: Try this:
1. On the variables tab, add system.debug and set it to true . Select to allow at queue time.
2. In the explorer tab, view your completed build and click the build step to view its output.
The control options arguments described above can also be useful when you're trying to isolate a problem.
Q: How do variables work? What variables are available for me to use in the arguments?
A: $(Build.SourcesDirectory) and $(Agent.BuildDirectory) are just a few of the variables you can use. Variables
are available in expressions as well as scripts; see variables to learn more about how to use them. There are some
predefined build and release variables you can also rely on.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
GitHub Release task

Azure Pipelines
Use this task in your pipeline to create, edit, or discard a GitHub release.

Prerequisites
GitHub service connection
This task requires a GitHub service connection with Write permission to the GitHub repository. You can create a
GitHub service connection in your Azure Pipelines project. Once created, use the name of the service connection in
this task's settings.

YAML snippet
# GitHub Release
# Create, edit, or delete a GitHub release
- task: GitHubRelease@0
  inputs:
    gitHubConnection:
    #repositoryName: '$(Build.Repository.Name)'
    #action: 'create' # Options: create, edit, delete
    #target: '$(Build.SourceVersion)' # Required when action == create || action == edit
    #tagSource: 'auto' # Required when action == create. Options: auto, manual
    #tagPattern: # Optional
    #tag: # Required when action == edit || action == delete || tagSource == manual
    #title: # Optional
    #releaseNotesSource: 'file' # Optional. Options: file, input
    #releaseNotesFile: # Optional
    #releaseNotes: # Optional
    #assets: '$(Build.ArtifactStagingDirectory)/*' # Optional
    #assetUploadMode: 'delete' # Optional. Options: delete, replace
    #isDraft: false # Optional
    #isPreRelease: false # Optional
    #addChangeLog: true # Optional
    #compareWith: 'lastFullRelease' # Required when addChangeLog == true. Options: lastFullRelease, lastRelease, lastReleaseByTag
    #releaseTag: # Required when compareWith == lastReleaseByTag

Arguments
GitHub connection
(Required) Enter the service connection name for your GitHub connection. Learn more about service connections here.

Repository
(Required) Select the name of the GitHub repository in which GitHub releases will be created.

Action
(Required) Select the type of release operation you want to perform. This task can create, edit, or discard a GitHub release.

Target
(Required) This is the commit SHA for which the GitHub release will be created, e.g. 48b11d8d6e92a22e3e9563a3f643699c16fd6e27. You can also use variables here.

Tag source
(Required) Configure the tag to be used for release creation. The 'Git tag' option automatically takes the tag which is associated with this commit. Use the 'User specified tag' option in case you want to manually provide a tag.

Tag
(Required) Specify the tag for which you want to create, edit, or discard a release. You can also use variables here, e.g. $(tagName).

Release title
(Optional) Specify the title of the GitHub release. If left empty, the tag will be used as the release title.

Release notes source
(Optional) Specify the description of the GitHub release. Use the 'Release notes file' option to use the contents of a file as release notes. Use the 'Inline release notes' option to manually enter the release notes.

Release notes file path
(Optional) Select the file which contains the release notes.

Release notes
(Optional) Type your release notes here. Markdown is supported.

Assets
(Optional) Specify the files to be uploaded as assets for the release. You can use wildcard characters to specify a set of files, e.g. $(Build.ArtifactStagingDirectory)/*.zip. You can also specify multiple patterns, one per line. By default, all files in the $(Build.ArtifactStagingDirectory) directory will be uploaded.

Asset upload mode
(Optional) Use the 'Delete existing assets' option to first delete any existing assets in the release and then upload all assets. Use the 'Replace existing assets' option to replace any assets that have the same name.

Draft release
(Optional) Indicate whether the release should be saved as a draft (unpublished). If false, the release will be published.

Pre-release
(Optional) Indicate whether the release should be marked as a pre-release.

Add changelog
(Optional) If set to true, a list of changes (commits and issues) between this and the last published release will be generated and appended to the release notes.

Control options

Examples
Create a GitHub release
The following YAML creates a GitHub release every time the task runs. The build number is used as the tag version
for the release. All .exe files and README.txt files in the $(Build.ArtifactStagingDirectory) folder are uploaded as
assets. By default, the task also generates a change log (a list of commits and issues that are part of this release)
and publishes it as release notes.

- task: GithubRelease@0
  displayName: 'Create GitHub Release'
  inputs:
    gitHubConnection: zenithworks
    repositoryName: zenithworks/javaAppWithMaven
    tagSource: manual
    tag: $(Build.BuildNumber)
    assets: |
      $(Build.ArtifactStagingDirectory)/*.exe
      $(Build.ArtifactStagingDirectory)/README.txt

You can also control the creation of the release based on repository tags. The following YAML creates a GitHub
release only when the commit that triggers the pipeline has a Git tag associated with it. The GitHub release is
created with the same tag version as the associated Git tag.

- task: GithubRelease@0
  displayName: 'Create GitHub Release'
  inputs:
    gitHubConnection: zenithworks
    repositoryName: zenithworks/javaAppWithMaven
    assets: $(Build.ArtifactStagingDirectory)/*.exe

You may also want to use the task in conjunction with task conditions to get even finer control over when the task
runs, thereby restricting the creation of releases. For example, in the following YAML the task runs only when the
pipeline is triggered by a Git tag matching the pattern 'refs/tags/release-v*'.

- task: GithubRelease@0
  displayName: 'Create GitHub Release'
  condition: startsWith(variables['Build.SourceBranch'], 'refs/tags/release-v')
  inputs:
    gitHubConnection: zenithworks
    repositoryName: zenithworks/javaAppWithMaven
    assets: $(Build.ArtifactStagingDirectory)/*.exe

Edit a GitHub release


The following YAML updates the status of a GitHub release from 'draft' to 'published'. The release to be edited is
determined by the specified tag.

- task: GithubRelease@0
  displayName: 'Edit GitHub Release'
  inputs:
    gitHubConnection: zenithworks
    repositoryName: zenithworks/javaAppWithMaven
    action: edit
    tag: $(myDraftReleaseVersion)
    isDraft: false

Delete a GitHub release


The following YAML deletes a GitHub release. The release to be deleted is determined by the specified tag.
- task: GithubRelease@0
  displayName: 'Delete GitHub Release'
  inputs:
    gitHubConnection: zenithworks
    repositoryName: zenithworks/javaAppWithMaven
    action: delete
    tag: $(myDraftReleaseVersion)

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Install Apple Certificate task

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
Use this task to install an Apple certificate that is required to build on a macOS agent. You can use this task to
install an Apple certificate that is stored as a secure file on the server.

Demands
xcode

YAML snippet
# Install Apple certificate
# Install an Apple certificate required to build on a macOS agent machine
- task: InstallAppleCertificate@2
  inputs:
    certSecureFile:
    #certPwd: # Optional
    #keychain: 'temp' # Options: default, temp, custom
    #keychainPassword: # Required when keychain == custom || keychain == default
    #customKeychainPath: # Required when keychain == custom
    #deleteCert: # Optional
    #deleteCustomKeychain: # Optional
    #signingIdentity: # Optional

Arguments
Certificate (P12)
Select the certificate (.p12) that was uploaded to Secure Files to install on the macOS agent.

Certificate (P12) password
Password to the Apple certificate (.p12). Use a new build variable with its lock enabled on the Variables tab to encrypt this value.

Advanced - Keychain
Select the keychain in which to install the Apple certificate. You can choose to install the certificate in a temporary keychain (default), the default keychain, or a custom keychain. A temporary keychain will always be deleted after the build or release is complete.

Advanced - Keychain password
Password to unlock the keychain. Use a new build variable with its lock enabled on the Variables tab to encrypt this value. A password is generated for the temporary keychain if not specified.

Advanced - Delete certificate from keychain
Select to delete the certificate from the keychain after the build or release is complete. This option is visible when the custom keychain or default keychain is selected.

Advanced - Custom keychain path
Full path to a custom keychain file. The keychain will be created if it does not exist. This option is visible when a custom keychain is selected.

Advanced - Delete custom keychain
Select to delete the custom keychain from the agent after the build or release is complete. This option is visible when a custom keychain is selected.
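
A minimal sketch of a typical configuration follows; the secure file name and the password variable are assumptions, and the password should be stored as a secret variable:

- task: InstallAppleCertificate@2
  displayName: 'Install signing certificate'
  inputs:
    certSecureFile: 'AppleDistribution.p12' # name of the .p12 uploaded to Secure Files
    certPwd: '$(P12Password)'               # secret variable holding the certificate password
    keychain: 'temp'
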
Install Apple Provisioning Profile task

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
Use this task to install an Apple provisioning profile that is required to build on a macOS agent. You can use this
task to install provisioning profiles needed to build iOS Apps, Apple WatchKit Apps and App Extensions.
You can install an Apple provisioning profile that is:
Stored as a secure file on the server.
(Azure Pipelines) Committed to the source repository or copied to a local path on the macOS agent. We
recommend encrypting the provisioning profiles if you are committing them to the source repository. The
Decrypt File task can be used to decrypt them during a build or release.

Demands
xcode

YAML snippet
# Install Apple provisioning profile
# Install an Apple provisioning profile required to build on a macOS agent machine
- task: InstallAppleProvisioningProfile@1
  inputs:
    #provisioningProfileLocation: 'secureFiles' # Options: secureFiles, sourceRepository
    #provProfileSecureFile: # Required when provisioningProfileLocation == secureFiles
    #provProfileSourceRepository: # Required when provisioningProfileLocation == sourceRepository
    #removeProfile: true # Optional

Arguments
Provisioning profile location (Azure Pipelines)
Select the location of the provisioning profile to install. The provisioning profile can be uploaded to Secure Files or stored in your source repository or a local path on the agent.

Provisioning profile
Select the provisioning profile that was uploaded to Secure Files to install on the macOS agent, or select the provisioning profile from the source repository, or specify the local path to a provisioning profile on the macOS agent.

Remove profile after build
Select to specify that the provisioning profile should be removed from the agent after the build or release is complete.
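
For illustration, a minimal sketch installing a profile from Secure Files might look like the following; the profile file name is an assumption:

- task: InstallAppleProvisioningProfile@1
  displayName: 'Install provisioning profile'
  inputs:
    provisioningProfileLocation: 'secureFiles'
    provProfileSecureFile: 'MyApp_AdHoc.mobileprovision' # name of the profile uploaded to Secure Files
    removeProfile: true
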
Install SSH Key task

Azure Pipelines
Use this task in a pipeline to install an SSH key prior to a build or release step.

YAML snippet
# Install SSH key
# Install an SSH key prior to a build or deployment
- task: InstallSSHKey@0
  inputs:
    knownHostsEntry:
    sshPublicKey:
    #sshPassphrase: # Optional
    sshKeySecureFile:

Arguments
Known hosts entry
(Required) The entry for this SSH key for the known_hosts file.

SSH public key
(Optional) The contents of the public SSH key.

SSH passphrase
(Optional) The passphrase for the SSH key, if any.

SSH key (secure file)
(Required) Select the SSH key that was uploaded to Secure Files to install on the agent.

Control options

Prerequisites
GitBash for Windows

Example setup using GitHub


This section describes how to use a private GitHub repository with YAML from within Azure Pipelines.
If you have a repository that you don't want to expose to the open-source community, a common practice is to
make the repository private. However, a CI/CD tool like Azure DevOps needs access to the repository if you want to
use the tool to manage the repository. To give Azure DevOps access, you might need an SSH key to authenticate
access to GitHub.
Here are the steps to complete to use an SSH key to authenticate access to GitHub:
1. Generate a key pair to use to authenticate access from GitHub to Azure DevOps:
a. In GitBash, run the following command:
ssh-keygen -t rsa

b. Enter a name for the SSH key pair. In our example, we use myKey .

c. (Optional) You can enter a passphrase to encrypt your private key. This step is optional. Using a
passphrase is more secure than not using one.

The SSH key pairs are created and the following success message appears:

d. In Windows File Explorer, check your newly created key pair:

2. Add the public key to the GitHub repository. (The public key ends in ".pub".) To do this, go to the following URL
in your browser: https://github.com/(organization-name)/(repository-name)/settings/keys .
a. Select Add deploy key .
b. In the Add new dialog box, enter a title, and then copy and paste the SSH key:
c. Select Add key .
3. Upload your private key to Azure DevOps:
a. In Azure DevOps, in the left menu, select Pipelines > Librar y .

b. Select Secure files > + Secure file :

c. Select Browse , and then select your private key:

4. Recover your "Known Hosts Entry". In GitBash, enter the following command:
ssh-keyscan github.com

Your "Known Hosts Entry" is the displayed value that doesn't begin with # in the GitBash results:

5. Create a YAML pipeline.


To create a YAML pipeline, in the YAML definition, add the following task:

- task: InstallSSHKey@0
  inputs:
    knownHostsEntry: #{Enter your Known Hosts Entry Here}
    sshPublicKey: #{Enter your Public Key Here}
    sshKeySecureFile: #{Enter the name of your key in "Secure Files" Here}

Now, the SSH keys are installed and you can proceed with the script to connect by using SSH, and not the default
HTTPS.

Usage and best practices


If you install an SSH key in the hosted pools, in later steps in your pipeline, you can connect to a remote system in
which the matching public key is already in place. For example, you can connect to a Git repository or to a VM in
Azure.
We recommend that you don't pass in your public key as plain text to the task configuration. Instead, set a secret
variable in your pipeline for the contents of your mykey.pub file. Then, call the variable in your pipeline definition
as $(myPubKey) . For the secret part of your key, use the Secure File library in Azure Pipelines.
To create your task, use the following example of a well-configured Install SSH Key task:

steps:
- task: InstallSSHKey@0
  displayName: 'Install an SSH key'
  inputs:
    knownHostsEntry: 'SHA256:1Hyr55tsxGifESBMc0s+2NtutnR/4+LOkVwrOGrIp8U johndoe@contoso'
    sshPublicKey: '$(myPubKey)'
    sshKeySecureFile: 'id_rsa'

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Invoke Azure Function task

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use this task in an agentless job of a release pipeline to invoke an HTTP triggered function in an Azure function
app and parse the response.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

Demands
Can be used in only an agentless job of a release pipeline.

YAML snippet
# Invoke Azure Function
# Invoke an Azure Function
- task: AzureFunction@1
  inputs:
    function:
    key:
    #method: 'POST' # Options: OPTIONS, GET, HEAD, POST, PUT, DELETE, TRACE, PATCH
    #headers: '{Content-Type:application/json, PlanUrl: $(system.CollectionUri), ProjectId: $(system.TeamProjectId), HubName: $(system.HostType), PlanId: $(system.PlanId), JobId: $(system.JobId), TimelineId: $(system.TimelineId), TaskInstanceId: $(system.TaskInstanceId), AuthToken: $(system.AccessToken)}'
    #queryParameters: # Optional
    #body: # Required when method != GET && method != HEAD
    #waitForCompletion: 'false' # Options: true, false
    #successCriteria: # Optional

Arguments
Azure function URL
Required. The URL of the Azure function to be invoked.

Function key
Required. The value of the available function key or the host key for the function to be invoked. Should be secured by using a hidden (secret) variable.

Method
Required. The HTTP method with which the function will be invoked.

Headers
Optional. The header in JSON format to be attached to the request sent to the function.

Query parameters
Optional. Query parameters to append to the function URL. Must not start with "?" or "&".

Body
Optional. The request body for the Azure function call in JSON format.

Completion event
Required. How the task reports completion. Can be API response (the default), where completion is when the function returns success and the success criteria evaluates to true, or Callback, where the Azure function makes a callback to update the timeline record.

Success criteria
Optional. How to parse the response body for success.

Control options
See Control options.

Succeeds if the function returns success and the response body parsing is successful, or when the function
updates the timeline record with success.
For more information about using this task, see Approvals and gates overview.
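
The following sketch shows how the task might appear in an agentless (server) job; the function URL, the secret variable holding the function key, and the success criteria expression are placeholders for illustration:

jobs:
- job: invokeFunction
  pool: server # agentless job
  steps:
  - task: AzureFunction@1
    inputs:
      function: 'https://myfunctionapp.azurewebsites.net/api/HttpTriggered'
      key: '$(functionKey)' # secret variable holding the function or host key
      method: 'POST'
      body: '{"releaseName": "$(Build.BuildNumber)"}'
      waitForCompletion: 'false' # use the API response as the completion event
      successCriteria: "eq(root['status'], 'success')"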

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Where should a task signal completion when Callback is chosen as the completion event?
To signal completion, the Azure function should POST completion data to the following pipelines REST endpoint.

{planUri}/{projectId}/_apis/distributedtask/hubs/{hubName}/plans/{planId}/events?api-version=2.0-preview.1

Request body:
{ "name": "TaskCompleted", "taskId": "taskInstanceId", "jobId": "jobId", "result": "succeeded" }

See this simple cmdline application for specifics. In addition, a C# helper library is available to enable live logging
and managing task status for agentless tasks. Learn more
Why does the task fail within 1 minute when the timeout is longer?
If the Azure function executes for more than 1 minute, you'll need to use the Callback completion
event. The API response completion option is supported only for requests that complete within 60 seconds.
Invoke REST API task

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
Use this task to invoke an HTTP API and parse the response.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

This task is available in both builds and releases in TFS 2018.2. In TFS 2018 RTM, this task is available only in
releases.

Demands
This task can be used in only an agentless job.

YAML snippet
# Invoke REST API
# Invoke a REST API as a part of your pipeline.
- task: InvokeRESTAPI@1
  inputs:
    #connectionType: 'connectedServiceName' # Options: connectedServiceName, connectedServiceNameARM
    #serviceConnection: # Required when connectionType == connectedServiceName
    #azureServiceConnection: # Required when connectionType == connectedServiceNameARM
    #method: 'POST' # Options: OPTIONS, GET, HEAD, POST, PUT, DELETE, TRACE, PATCH
    #headers: '{Content-Type:application/json, PlanUrl: $(system.CollectionUri), ProjectId: $(system.TeamProjectId), HubName: $(system.HostType), PlanId: $(system.PlanId), JobId: $(system.JobId), TimelineId: $(system.TimelineId), TaskInstanceId: $(system.TaskInstanceId), AuthToken: $(system.AccessToken)}'
    #body: # Required when method != GET && method != HEAD
    #urlSuffix: # Optional
    #waitForCompletion: 'false' # Options: true, false
    #successCriteria: # Optional

Arguments
Connection type
Required. Select Azure Resource Manager to invoke an Azure management API, or Generic for all other APIs.

Generic service connection
Required. Generic service connection that provides the baseUrl for the call and the authorization to use.

Azure subscription
Required. Azure Resource Manager subscription to configure and use for invoking Azure management APIs.

Method
Required. The HTTP method with which the API will be invoked; for example, GET, PUT, or UPDATE.

Headers
Optional. The header in JSON format to be attached to the request sent to the API.

Body
Optional. The request body for the function call in JSON format.

URL suffix and parameters
The string to append to the baseUrl from the Generic service connection while making the HTTP call.

Wait for completion
Required. How the task reports completion. Can be API response (the default), where completion is when the call returns success within 20 seconds and the success criteria evaluates to true, or Callback, where the external service makes a callback to update the timeline record.

Success criteria
Optional. How to parse the response body for success. By default, the task passes when 200 OK is returned from the call. Additionally, the success criteria, if specified, is evaluated.

Control options
See Control options.

Succeeds if the API returns success and the response body parsing is successful, or when the API updates the
timeline record with success.
The Invoke REST API task does not perform deployment actions directly. Instead, it allows you to invoke any
generic HTTP REST API as part of the automated pipeline and, optionally, wait for it to be completed.
For more information about using this task, see Approvals and gates overview.
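
The sketch below shows the task in an agentless (server) job calling a generic service connection; the connection name and URL suffix are assumptions for illustration only:

jobs:
- job: callExternalApi
  pool: server # agentless job
  steps:
  - task: InvokeRESTAPI@1
    inputs:
      connectionType: 'connectedServiceName'
      serviceConnection: 'MyGenericServiceConnection' # provides the base URL and credentials
      method: 'POST'
      urlSuffix: 'api/deployments'
      body: '{"buildNumber": "$(Build.BuildNumber)"}'
      waitForCompletion: 'false'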

Open source
Also see this task on GitHub.

FAQ
What base URLs are used when invoking Azure Management APIs?
Azure management APIs are invoked using ResourceManagerEndpoint of the selected environment. For example
https://management.Azure.com is used when the subscription is in AzureCloud environment.

Where should a task signal completion when Callback is chosen as the completion event?
To signal completion, the external service should POST completion data to the following pipelines REST endpoint.
{planUri}/{projectId}/_apis/distributedtask/hubs/{hubName}/plans/{planId}/events?api-version=2.0-preview.1

Request body:
{ "name": "TaskCompleted", "taskId": "taskInstanceId", "jobId": "jobId", "result": "succeeded" }

See this simple cmdline application for specifics.


In addition, a C# helper library is available to enable live logging and managing task status for agentless tasks.
Learn more
Jenkins Download Artifacts task

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use this task to download artifacts produced by a Jenkins job.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

YAML snippet
# Jenkins download artifacts
# Download artifacts produced by a Jenkins job
- task: JenkinsDownloadArtifacts@1
  inputs:
    jenkinsServerConnection:
    jobName:
    #jenkinsJobType: # Optional
    #saveTo: 'jenkinsArtifacts'
    #jenkinsBuild: 'LastSuccessfulBuild' # Options: lastSuccessfulBuild, buildNumber
    #jenkinsBuildNumber: '1' # Required when jenkinsBuild == buildNumber
    #itemPattern: '**' # Optional
    #downloadCommitsAndWorkItems: # Optional
    #startJenkinsBuildNumber: # Optional
    #artifactDetailsFileNameSuffix: # Optional
    #propagatedArtifacts: false # Optional
    #artifactProvider: 'azureStorage' # Required when propagatedArtifacts == NotValid # Options: azureStorage
    #connectedServiceNameARM: # Required when propagatedArtifacts == true
    #storageAccountName: # Required when propagatedArtifacts == true
    #containerName: # Required when propagatedArtifacts == true
    #commonVirtualPath: # Optional

Arguments
Jenkins service connection
(Required) Select the service connection for your Jenkins instance. To create one, click the Manage link and create a new Jenkins service connection.

Job name
(Required) The name of the Jenkins job to download artifacts from. This must exactly match the job name on the Jenkins server.

Jenkins job type
(Optional) Jenkins job type, detected automatically.

Save to
(Required) Jenkins artifacts will be downloaded and saved to this directory. This directory will be created if it does not exist.

Download artifacts produced by
(Required) Download artifacts produced by the last successful build, or from a specific build instance.

Jenkins build number
(Required) Download artifacts produced by this build.

Item pattern
(Optional) Specify the files to be downloaded as a multi-line minimatch pattern. More information.
The default pattern (**) will download all files across all artifacts produced by the Jenkins job. To download all files within the artifact drop, use drop/**.

Download commits and work items
(Optional) Enables downloading the commit and work item details associated with the Jenkins job.

Download commits and work items from
(Optional) Optional start build number for downloading commits and work items. If provided, all commits and work items between the start build number and the build number given as input to download artifacts will be downloaded.

Commit and work item file name
(Optional) Optional file name suffix for the commit and work item attachments. Attachments will be created as commits_{suffix}.json and workitems_{suffix}.json. If this input is not provided, attachments will be created with the names commits.json and workitems.json.

Artifacts are propagated to Azure
(Optional) Check this if Jenkins artifacts were propagated to Azure. To upload Jenkins artifacts to Azure, refer to this Jenkins plugin.

Artifact provider
(Required) Choose the external storage provider used in the Jenkins job to upload the artifacts.

Azure subscription
(Required) Choose the Azure Resource Manager subscription for the artifacts.

Storage account name
(Required) Azure Classic and Resource Manager storage accounts are listed. Select the storage account name to which the artifacts are propagated.

Container name
(Required) Name of the container in the storage account to which artifacts are uploaded.

Common virtual path
(Optional) Path to the artifacts inside the Azure storage container.

Control options
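
As a minimal sketch (the service connection and job names are placeholders), downloading the artifacts of the last successful build could look like this:

- task: JenkinsDownloadArtifacts@1
  displayName: 'Download Jenkins artifacts'
  inputs:
    jenkinsServerConnection: 'MyJenkinsConnection'
    jobName: 'MyJenkinsJob'
    saveTo: 'jenkinsArtifacts'
    jenkinsBuild: 'LastSuccessfulBuild'
    itemPattern: '**'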

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Manual Intervention task

Azure Pipelines
Use this task in a release pipeline to pause an active deployment within a stage, typically to perform some manual
steps or actions, and then continue the automated deployment tasks.

Demands
Can be used in only an agentless job of a release pipeline. This task is supported only in classic release pipelines.

Arguments
Display name
Required. The name to display for this task.

Instructions
Optional. The instruction text to display to the user when the task is activated.

Notify users
Optional. The list of users that will be notified that the task has been activated.

On timeout
Required. The action to take (reject or resume) if the task times out with no manual intervention. The default is to reject the deployment.

Control options
See Control options.

The Manual Intervention task does not perform deployment actions directly. Instead, it allows you to pause an
active deployment within a stage, typically to perform some manual steps or actions, and then continue the
automated deployment tasks. For example, the user may need to edit the details of the current release before
continuing, perhaps by entering the values for custom variables used by the tasks in the release.
The Manual Intervention task configuration includes an Instructions parameter that can be used to provide
related information, or to specify the manual steps the user should execute during the agentless job. You can
configure the task to send email notifications to users and user groups when it is awaiting intervention, and
specify the automatic response (reject or resume the deployment) after a configurable timeout occurs.

You can use built-in and custom variables to generate portions of your instructions.

When the Manual Intervention task is activated during a deployment, it sets the deployment state to IN
PROGRESS and displays a message bar containing a link that opens the Manual Intervention dialog containing
the instructions. After carrying out the manual steps, the administrator or user can choose to resume the
deployment, or reject it. Users with Manage deployment permission on the stage can resume or reject the
manual intervention.
For more information about using this task, see Approvals and gates overview.
PowerShell task

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to run a PowerShell script.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

Demands
DotNetFramework

YAML snippet
# PowerShell
# Run a PowerShell script on Linux, macOS, or Windows
- task: PowerShell@2
  inputs:
    #targetType: 'filePath' # Optional. Options: filePath, inline
    #filePath: # Required when targetType == filePath
    #arguments: # Optional
    #script: '# Write your PowerShell commands here.Write-Host Hello World' # Required when targetType == inline
    #errorActionPreference: 'stop' # Optional. Options: stop, continue, silentlyContinue
    #failOnStderr: false # Optional
    #ignoreLASTEXITCODE: false # Optional
    #pwsh: false # Optional
    #workingDirectory: # Optional

The PowerShell task also has two shortcuts in YAML:

- powershell: # inline script
  workingDirectory: #
  displayName: #
  failOnStderr: #
  errorActionPreference: #
  ignoreLASTEXITCODE: #
  env: # mapping of environment variables to add

- pwsh: # inline script
  workingDirectory: #
  displayName: #
  failOnStderr: #
  errorActionPreference: #
  ignoreLASTEXITCODE: #
  env: # mapping of environment variables to add

Both of these resolve to the PowerShell@2 task. powershell runs Windows PowerShell and will only work on a
Windows agent. pwsh runs PowerShell Core, which must be installed on the agent or container.

NOTE
Each PowerShell session lasts only for the duration of the job in which it runs. Tasks that depend on what has been
bootstrapped must be in the same job as the bootstrap.

Arguments
targetType (Type)
(Optional) Sets whether this is an inline script or a path to a .ps1 file.
Default value: filePath

filePath (Script path)
(Required) Path of the script to execute. Must be a fully qualified path or relative to $(System.DefaultWorkingDirectory). Required if targetType is filePath.

arguments (Arguments)
(Optional) Arguments passed to the PowerShell script. For example, -Name someName -Path -Value "Some long string value"
Note: unused when targetType is inline.

script (Script)
(Required) Contents of the script. Required if targetType is inline.
Default value: # Write your PowerShell commands here. Write-Host "Hello World"

errorActionPreference (ErrorActionPreference)
(Optional) Prepends the line $ErrorActionPreference = 'VALUE' at the top of your script.
Default value: stop

failOnStderr (Fail on standard error)
(Optional) If this is true, this task will fail if any errors are written to the error pipeline, or if any data is written to the Standard Error stream. Otherwise the task will rely on the exit code to determine failure.
Default value: false

ignoreLASTEXITCODE (Ignore $LASTEXITCODE)
(Optional) If this is false, the line if ((Test-Path -LiteralPath variable:\LASTEXITCODE)) { exit $LASTEXITCODE } is appended to the end of your script. This will cause the last exit code from an external command to be propagated as the exit code of PowerShell. Otherwise the line is not appended to the end of your script.
Default value: false

pwsh (Use PowerShell Core)
(Optional) If this is true, then on Windows the task will use pwsh.exe from your PATH instead of powershell.exe.
Default value: false

workingDirectory (Working directory)
(Optional) Specify the working directory in which you want to run the command. If you leave it empty, the working directory is $(Build.SourcesDirectory).

Environment variables
A list of additional items to map into the process's environment. For example, secret variables are not automatically mapped. If you have a secret variable called Foo, you can map it in like this:

- powershell: echo $env:MYSECRET
  env:
    MySecret: $(Foo)

Examples
Hello World
Create test.ps1 at the root of your repo:

Write-Host "Hello World from $Env:AGENT_NAME."


Write-Host "My ID is $Env:AGENT_ID."
Write-Host "AGENT_WORKFOLDER contents:"
gci $Env:AGENT_WORKFOLDER
Write-Host "AGENT_BUILDDIRECTORY contents:"
gci $Env:AGENT_BUILDDIRECTORY
Write-Host "BUILD_SOURCESDIRECTORY contents:"
gci $Env:BUILD_SOURCESDIRECTORY
Write-Host "Over and out."

On the Build tab of a build pipeline, add this task:

Task: Utility: PowerShell
Arguments: Script filename: test.ps1 (runs test.ps1)

Write a warning
Add the PowerShell task, set the Type to inline , and paste in this script:

# Writes a warning to build summary and to log in yellow text
Write-Host "##vso[task.LogIssue type=warning;]This is the warning"

Write an error
Add the PowerShell task, set the Type to inline , and paste in this script:

# Writes an error to build summary and to log in red text
Write-Host "##vso[task.LogIssue type=error;]This is the error"
TIP
If you want this error to fail the build, then add this line:

exit 1

ApplyVersionToAssemblies.ps1
Use a script to customize your build pipeline
Call PowerShell script with multiple arguments
Create PowerShell script test2.ps1 :

param ($input1, $input2)
Write-Host "$input1 $input2"

In your YAML pipeline, call:

- task: PowerShell@2
  inputs:
    targetType: 'filePath'
    filePath: $(System.DefaultWorkingDirectory)\test2.ps1
    arguments: > # Use this to avoid newline characters in a multiline string
      -input1 "Hello"
      -input2 "World"
  displayName: 'Print Hello World'

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Where can I learn about PowerShell scripts?
Scripting with Windows PowerShell
Microsoft Script Center (the Scripting Guys)
Windows PowerShell Tutorial
PowerShell.org
How do I set a variable so that it can be read by subsequent scripts and tasks?
Define and modify your build variables in a script
Define and modify your release variables in a script
Q: I'm having problems. How can I troubleshoot them?
A: Try this:
1. On the variables tab, add system.debug and set it to true . Select to allow at queue time.
2. In the explorer tab, view your completed build and click the build step to view its output.
The control options arguments described above can also be useful when you're trying to isolate a problem.
Q: How do variables work? What variables are available for me to use in the arguments?
A: $(Build.SourcesDirectory) and $(Agent.BuildDirectory) are just a few of the variables you can use. Variables
are available in expressions as well as scripts; see variables to learn more about how to use them. There are
some predefined build and release variables you can also rely on.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
Publish Build Artifacts task

Azure Pipelines | TFS 2018 | TFS 2017 | TFS 2015.3


Use this task in a build pipeline to publish build artifacts to Azure Pipelines, TFS, or a file share.

TIP
Looking to get started working with build artifacts? See Artifacts in Azure Pipelines.

Demands
None

YAML snippet
# Publish build artifacts
# Publish build artifacts to Azure Pipelines or a Windows file share
- task: PublishBuildArtifacts@1
  inputs:
    #pathToPublish: '$(Build.ArtifactStagingDirectory)'
    #artifactName: 'drop'
    #publishLocation: 'Container' # Options: container, filePath
    #targetPath: # Required when publishLocation == filePath
    #parallel: false # Optional
    #parallelCount: # Optional
    #fileCopyOptions: # Optional

Arguments
pathToPublish (Path to publish)
The folder or file path to publish. This can be a fully qualified path or a path relative to the root of the repository. Wildcards are not supported. See Artifacts in Azure Pipelines.

ArtifactName (Artifact name)
Specify the name of the artifact that you want to create. It can be whatever you want. For example: drop

publishLocation (Artifact publish location)
Choose whether to store the artifact in Azure Pipelines (Container), or to copy it to a file share (FilePath) that must be accessible from the build agent. To learn more, see Artifacts in Azure Pipelines.

TargetPath (File share path)
Specify the path to the file share where you want to copy the files. The path must be a fully qualified path or a valid path relative to the root directory of your repository. Publishing artifacts from a Linux or macOS agent to a file share is not supported.

Parallel (Parallel copy; Azure Pipelines, TFS 2018, or newer)
Select whether to copy files in parallel using multiple threads for greater potential throughput. If this setting is not enabled, a single thread will be used.

ParallelCount (Parallel count; Azure Pipelines, TFS 2018, or newer)
Enter the degree of parallelism (the number of threads) used to perform the copy. The value must be at least 1 and not greater than 128. Choose a value based on the CPU capabilities of the build agent. Typically, 8 is a good starting value.

FileCopyOptions (File copy options)
Pass additional options to the Robocopy command.

Control options

NOTE
You cannot use Bin, App_Data, or other folder names reserved by IIS as an artifact name, because this content is not served in response to web requests. See ASP.NET Web Project Folder Structure for more details.

Usage
A typical pattern for using this task is:
Build something
Copy build outputs to a staging directory
Publish staged artifacts
For example:

steps:
- script: ./buildSomething.sh
- task: CopyFiles@2
  inputs:
    contents: '_buildOutput/**'
    targetFolder: $(Build.ArtifactStagingDirectory)
- task: PublishBuildArtifacts@1
  inputs:
    pathToPublish: $(Build.ArtifactStagingDirectory)
    artifactName: MyBuildOutputs
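To copy the same staged outputs to a file share instead of Azure Pipelines, a sketch like the following can be used; the UNC path is a placeholder you would replace with a share reachable from a Windows build agent:

- task: PublishBuildArtifacts@1
  inputs:
    pathToPublish: $(Build.ArtifactStagingDirectory)
    artifactName: MyBuildOutputs
    publishLocation: 'FilePath'
    targetPath: '\\myserver\builds\$(Build.DefinitionName)\$(Build.BuildNumber)'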

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Q: I'm having problems. How can I troubleshoot them?
A: Try this:
1. On the variables tab, add system.debug and set it to true . Select to allow at queue time.
2. In the explorer tab, view your completed build and click the build step to view its output.
The control options arguments described above can also be useful when you're trying to isolate a problem.
Q: How do variables work? What variables are available for me to use in the arguments?
A: $(Build.SourcesDirectory) and $(Agent.BuildDirectory) are just a few of the variables you can use.
Variables are available in expressions as well as scripts; see variables to learn more about how to use them.
There are some predefined build and release variables you can also rely on.
Publish Pipeline Artifacts task
6/2/2020 • 2 minutes to read

Azure Pipelines
Use this task in a pipeline to publish artifacts for Azure Pipelines. Note that publishing is not supported in classic release pipelines; it is supported in build pipelines, multi-stage pipelines, and YAML pipelines.

TIP
Looking to get started working with build artifacts? See Artifacts in Azure Pipelines.

Demand
None

YAML snippet
# Publish pipeline artifacts
# Publish (upload) a file or directory as a named artifact for the current run
- task: PublishPipelineArtifact@1
  inputs:
    #targetPath: '$(Pipeline.Workspace)'
    #artifactName: # 'drop'

Arguments
targetPath: Path to the folder or file you want to publish. The path must be a fully-qualified path or a valid path relative to the root directory of your repository. See Artifacts in Azure Pipelines.
artifactName: Specify the name of the artifact that you want to create. It can be whatever you want. For example: drop
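For example, a sketch that stages build outputs and then publishes them as a pipeline artifact (the folder name _buildOutput and the artifact name are illustrative):

steps:
- task: CopyFiles@2
  inputs:
    contents: '_buildOutput/**'
    targetFolder: $(Build.ArtifactStagingDirectory)
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: $(Build.ArtifactStagingDirectory)
    artifactName: 'drop'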

Control options

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Q: I'm having problems. How can I troubleshoot them?
A: Try this:
1. On the variables tab, add system.debug and set it to true . Select to allow at queue time.
2. In the explorer tab, view your completed build and click the build step to view its output.
The control options arguments described above can also be useful when you're trying to isolate a problem.
Q: How do variables work? What variables are available for me to use in the arguments?
A: $(Build.SourcesDirectory) and $(Agent.BuildDirectory) are just a few of the variables you can use. Variables
are available in expressions as well as scripts; see variables to learn more about how to use them. There are some
predefined build and release variables you can also rely on.
Publish To Azure Service Bus task
11/2/2020 • 2 minutes to read

Azure Pipelines
Use this task in an agentless job of a release pipeline to send a message to an Azure Service Bus using a service
connection and without using an agent.

Demands
Can be used in only an agentless job of a release pipeline.

YAML snippet
# Publish To Azure Service Bus
# Sends a message to Azure Service Bus using a service connection (no agent is required)
- task: PublishToAzureServiceBus@1
  inputs:
    azureSubscription:
    #messageBody: # Optional
    #sessionId: # Optional
    #signPayload: false
    #certificateString: # Required when signPayload == True
    #signatureKey: 'signature' # Optional
    #waitForCompletion: false

Arguments
Display name: Required. The name to display for this task.
Azure Service Bus Connection: Required. An existing service connection to an Azure Service Bus.
Message body: Required. The text of the message body to send to the Service Bus.
Wait for Task Completion: Optional. Set this option to force the task to halt until a response is received.
Control options: See Control options
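A minimal sketch of the task in a YAML pipeline, assuming a service connection named mySbConnection already exists; because the task is agentless, it runs in a server job (pool: server):

jobs:
- job: NotifyServiceBus
  pool: server
  steps:
  - task: PublishToAzureServiceBus@1
    inputs:
      azureSubscription: 'mySbConnection'
      messageBody: '{"buildNumber": "$(Build.BuildNumber)"}'
      waitForCompletion: false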

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Do I need an agent?
You do not need an agent to run this task. This task can be used in only an agentless job of a release pipeline.
Where should a task signal completion?
To signal completion, the external service should POST completion data to the following pipelines REST endpoint.

{planUri}/{projectId}/_apis/distributedtask/hubs/{hubName}/plans/{planId}/events?api-version=2.0-preview.1

Request Body:
{ "name": "TaskCompleted", "taskId": "taskInstanceId", "jobId": "jobId", "result": "succeeded" }

See this simple cmdline application for specifics.


In addition, a C# helper library is available to enable live logging and managing task status for agentless tasks.
Learn more
Python Script task
11/2/2020 • 2 minutes to read

Azure Pipelines
Use this task to run a Python script.

YAML snippet
# Python script
# Run a Python file or inline script
- task: PythonScript@0
  inputs:
    #scriptSource: 'filePath' # Options: filePath, inline
    #scriptPath: # Required when scriptSource == filePath
    #script: # Required when scriptSource == inline
    #arguments: # Optional
    #pythonInterpreter: # Optional
    #workingDirectory: # Optional
    #failOnStderr: false # Optional

Arguments
scriptSource (Type): (Required) Target script type: File path or Inline.
scriptPath (Script Path): (Required when scriptSource == filePath) Path of the script to execute. Must be a fully qualified path or relative to $(System.DefaultWorkingDirectory).
script (Script): (Required when scriptSource == inline) The Python script to run.
arguments (Arguments): (Optional) A string containing arguments passed to the script. They'll be available through sys.argv as if you passed them on the command line.
pythonInterpreter (Python interpreter): (Optional) Absolute path to the Python interpreter to use. If not specified, the task assumes a Python interpreter is available on the PATH and simply attempts to run the python command.
workingDirectory (Working directory): (Optional)
failOnStderr (Fail on standard error): (Optional) If true, this task will fail if any text is written to stderr.

Control options
Remarks
By default, this task will invoke python from the system path. Run Use Python Version to put the version you want
in the system path.
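For example, a sketch that pins a Python version with the Use Python Version task and then runs an inline script (the version and arguments are illustrative):

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.8'
- task: PythonScript@0
  inputs:
    scriptSource: 'inline'
    script: |
      import sys
      print("Running under", sys.version)
      print("Arguments:", sys.argv[1:])
    arguments: '--verbose'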

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Query Azure Monitor Alerts task
4/10/2020 • 2 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use this task in an agentless job of a release pipeline to observe the configured Azure monitor rules for active
alerts.
Can be used in only an agentless job of a release pipeline.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

Demands
None

YAML snippet
# Query Azure Monitor alerts
# Observe the configured Azure Monitor rules for active alerts
- task: AzureMonitor@1
  inputs:
    connectedServiceNameARM:
    resourceGroupName:
    #filterType: 'none' # Options: resource, alertrule, none
    #resource: # Required when filterType == Resource
    #alertRule: # Required when filterType == Alertrule
    #severity: 'Sev0,Sev1,Sev2,Sev3,Sev4' # Optional. Options: sev0, sev1, sev2, sev3, sev4
    #timeRange: '1h' # Optional. Options: 1h, 1d, 7d, 30d
    #alertState: 'Acknowledged,New' # Optional. Options: new, acknowledged, closed
    #monitorCondition: 'Fired' # Optional. Options: fired, resolved

Arguments
Azure subscription: Required. Select an Azure Resource Manager service connection.
Resource group: Required. The resource group being monitored in the subscription.
Resource type: Required. Select the resource type in the selected group.
Resource name: Required. Select the resources of the chosen types in the selected group.
Alert rules: Required. Select from the currently configured alert rules to query for status.
Control options: See Control options

Succeeds if none of the alert rules are activated at the time of sampling.
For more information about using this task, see Approvals and gates overview.
Also see this task on GitHub.
Query Work Items task
4/10/2020 • 2 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use this task in an agentless job of a release pipeline to ensure the number of matching items returned by a work
item query is within the configured thresholds.
Can be used in only an agentless job of a release pipeline.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

Demands
None

YAML snippet
# Query work items
# Execute a work item query and check the number of items returned
- task: queryWorkItems@0
  inputs:
    queryId:
    #maxThreshold: '0'
    #minThreshold: '0'

Arguments
Query: Required. Select a work item query within the current project. Can be a built-in or custom query.
Upper threshold: Required. Maximum number of matching work items for the query. Default value = 0
Lower threshold: Required. Minimum number of matching work items for the query. Default value = 0
Control options: See Control options

Succeeds if: lower threshold <= number of matching work items <= upper threshold


For more information about using this task, see Approvals and gates overview.
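For example, a sketch of the task configured to pass only when the query returns no work items (both thresholds set to 0); the queryId GUID is a placeholder for a shared query in your project:

- task: queryWorkItems@0
  inputs:
    queryId: '00000000-0000-0000-0000-000000000000'
    maxThreshold: '0'
    minThreshold: '0'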

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Service Fabric PowerShell task
6/2/2020 • 2 minutes to read

Azure Pipelines
Use this task to run a PowerShell script within the context of an Azure Service Fabric cluster connection. Runs any
PowerShell command or script in a PowerShell session that has a Service Fabric cluster connection initialized.

Prerequisites
Service Fabric
This task uses a Service Fabric installation to connect and deploy to a Service Fabric cluster.
Azure Service Fabric Core SDK on the build agent.

YAML snippet
# Service Fabric PowerShell
# Run a PowerShell script in the context of an Azure Service Fabric cluster connection
- task: ServiceFabricPowerShell@1
  inputs:
    clusterConnection:
    #scriptType: 'FilePath' # Options: filePath, inlineScript
    #scriptPath: # Optional
    #inline: '# You can write your PowerShell scripts inline here. # You can also pass predefined and custom variables to this script using arguments' # Optional
    #scriptArguments: # Optional

Arguments
Cluster Connection: The Azure Service Fabric service connection to use to connect and authenticate to the cluster.
Script Type: Specify whether the script is provided as a file or inline in the task.
Script Path: Path to the PowerShell script to run. Can include wildcards and variables. Example: $(system.defaultworkingdirectory)/**/drop/projectartifacts/**/docker-compose.yml. Note: combining compose files is not supported as part of this task.
Script Arguments: Additional parameters to pass to the PowerShell script. Can be either ordinal or named parameters.
Inline Script: The PowerShell commands to run on the build agent. More information
Control options: See Control options

Also see: Service Fabric Compose Deploy task

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
Shell Script task
11/2/2020 • 2 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to run a shell script using bash.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

Demands
sh

YAML snippet
- task: ShellScript@2
  inputs:
    scriptPath:
    #args: '' # Optional
    #disableAutoCwd: false # Optional
    #cwd: '' # Optional
    #failOnStandardError: false

Arguments
Script Path: Relative path from the repo root to the shell script file that you want to run.
Arguments: Arguments that you want to pass to the script.
Advanced
Working Directory: Working directory in which you want to run the script. If you leave it empty, it is the folder where the script is located.
Fail on Standard Error: Select if you want this task to fail if any errors are written to the StandardError stream.
Control options

Example
Create test.sh at the root of your repo. We recommend creating this file from a Linux environment (such as a
real Linux machine or Windows Subsystem for Linux) so that line endings are correct. Also, don't forget to
chmod +x test.sh before you commit it.

#!/bin/bash
echo "Hello World"
echo "AGENT_WORKFOLDER is $AGENT_WORKFOLDER"
echo "AGENT_WORKFOLDER contents:"
ls -1 $AGENT_WORKFOLDER
echo "AGENT_BUILDDIRECTORY is $AGENT_BUILDDIRECTORY"
echo "AGENT_BUILDDIRECTORY contents:"
ls -1 $AGENT_BUILDDIRECTORY
echo "SYSTEM_HOSTTYPE is $SYSTEM_HOSTTYPE"
echo "Over and out."

On the Build tab of a build pipeline, add this task:

Utility: Shell Script. Run test.sh.
Script Path: test.sh

This example also works with release pipelines.
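In a YAML pipeline, the equivalent step is a sketch like this:

steps:
- task: ShellScript@2
  displayName: 'Run test.sh'
  inputs:
    scriptPath: 'test.sh'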

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Where can I learn about Bash scripts?
Beginners/BashScripting to get started.
Awesome Bash to go deeper.
How do I set a variable so that it can be read by subsequent scripts and tasks?
Define and modify your build variables in a script
Define and modify your release variables in a script
Q: I'm having problems. How can I troubleshoot them?
A: Try this:
1. On the variables tab, add system.debug and set it to true . Select to allow at queue time.
2. In the explorer tab, view your completed build and click the build step to view its output.
The control options arguments described above can also be useful when you're trying to isolate a problem.
Q: How do variables work? What variables are available for me to use in the arguments?
A: $(Build.SourcesDirectory) and $(Agent.BuildDirectory) are just a few of the variables you can use. Variables
are available in expressions as well as scripts; see variables to learn more about how to use them. There are some
predefined build and release variables you can also rely on.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
Update Service Fabric Manifests task
11/2/2020 • 5 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017

NOTE
In TFS 2017 this task is named Update Service Fabric App Versions task.

Use this task in a build pipeline to automatically update the versions of a packaged Service Fabric app. This task
appends a version suffix to all service and app versions, specified in the manifest files, in an Azure Service Fabric
app package.

NOTE
This task is not yet available in release pipelines.

Demands
None

YAML snippet
# Update Service Fabric manifests
# Automatically update portions of application and service manifests in a packaged Azure Service Fabric application
- task: ServiceFabricUpdateManifests@2
  inputs:
    #updateType: 'Manifest versions' # Options: manifest versions, docker image settings
    applicationPackagePath:
    #versionSuffix: '.$(Build.BuildNumber)' # Required when updateType == Manifest versions
    #versionBehavior: 'Append' # Optional. Options: append, replace
    #updateOnlyChanged: false # Required when updateType == Manifest versions
    #pkgArtifactName: # Required when updateType == Manifest versions && updateOnlyChanged == true
    #logAllChanges: true # Optional
    #compareType: 'LastSuccessful' # Optional. Options: lastSuccessful, specific
    #buildNumber: # Optional
    #overwriteExistingPkgArtifact: true # Optional
    #imageNamesPath: # Optional
    #imageDigestsPath: # Required when updateType == Docker image settings

Arguments
Application Package: The location of the Service Fabric application package to be deployed to the cluster. Example: $(system.defaultworkingdirectory)/**/drop/applicationpackage. Can include wildcards and variables.
Version Value: The value appended to the versions in the manifest files. Default is .$(Build.BuildNumber). Tip: You can modify the build number format directly or use a logging command to dynamically set a variable in any format. For example, you can use $(VersionSuffix) defined in a PowerShell task:
$versionSuffix = ".$([DateTimeOffset]::UtcNow.ToString('yyyyMMdd.HHmmss'))"
Write-Host "##vso[task.setvariable variable=VersionSuffix;]$versionSuffix"
Version Behavior: Specify whether to append the version value to existing values in the manifest files, or replace them.
Update only if changed: Select this check box if you want to append the new version suffix to only the packages that have changed from a previous build. If no changes are found, the version suffix from the previous build will be appended. Note: By default, the compiler will create different outputs even if you made no changes. Use the deterministic compiler flag to ensure builds with the same inputs produce the same outputs.
Package Artifact Name: The name of the artifact containing the application package from the previous build.
Log all changes: Select this check box to compare all files in every package and log if the file was added, removed, or if its content changed. Otherwise, compare files in a package only until the first change is found, for potentially faster performance.
Compare against: Specify whether to compare against the last completed, successful build or against a specific build.
Build Number: If comparing against a specific build, the build number to use.
Control options

Also see: Service Fabric Application Deployment task


This task can only be used in a build pipeline to automatically update the versions of a packaged Service Fabric app.
This task supports two types of update:
1. Manifest versions: Updates the service and application versions specified in the manifest files of a Service Fabric application package. If specified, it compares the current files against a previous build and updates the version only for those services that have changed.
2. Docker image settings: Updates the Docker container image settings specified in the manifest files of a Service Fabric application package. The image settings to be applied are picked from two files:
   a. Image names file: generated by the build task.
   b. Image digests file: generated by the Docker task when it pushes images to a registry.
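For example, a sketch that applies both update types adds one instance of the task per type; the package and digest file paths are placeholders, and the updateType values assume the defaults documented under Task Inputs below:

- task: ServiceFabricUpdateManifests@2
  inputs:
    updateType: 'Manifest versions'
    applicationPackagePath: '$(System.DefaultWorkingDirectory)/drop/applicationpackage'
    versionSuffix: '.$(Build.BuildNumber)'
    versionBehavior: 'Append'
- task: ServiceFabricUpdateManifests@2
  inputs:
    updateType: 'Docker image settings'
    applicationPackagePath: '$(System.DefaultWorkingDirectory)/drop/applicationpackage'
    imageDigestsPath: '$(System.DefaultWorkingDirectory)/drop/imagedigests.txt'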

Task Inputs
updateType (Update Type): (Required) Specify the type of update that should be made to the manifest files. In order to use both update types, add an instance of this task to the build pipeline for each type of update to be executed. Default value: Manifest versions
applicationPackagePath (Application Package): (Required) Path to the application package. Variables and wildcards can be used in the path.
versionSuffix (Version Value): (Required) The value used to specify the version in the manifest files. Default value: .$(Build.BuildNumber)
versionBehavior (Version Behavior): Specify whether to append the version value to existing values in the manifest files or replace them. Default value: Append
updateOnlyChanged (Update only if changed): (Required) Incrementally update only the packages that have changed. Use the deterministic compiler flag to ensure builds with the same inputs produce the same outputs. Default value: false
pkgArtifactName (Package Artifact Name): The name of the artifact containing the application package for comparison.
logAllChanges (Log all changes): Compare all files in every package and log if the file was added, removed, or if its content changed. Otherwise, compare files in a package only until the first change is found, for faster performance. Default value: true
compareType (Compare against): The build for comparison. Default value: LastSuccessful
buildNumber (Build Number): The build number for comparison.
overwriteExistingPkgArtifact (Overwrite Existing Package Artifact): Always download a new copy of the artifact. Otherwise use an existing copy, if present. Default value: true
imageNamesPath (Image Names Path): Path to a text file that contains the names of the Docker images associated with the Service Fabric application that should be updated with digests. Each image name must be on its own line and must be in the same order as the digests in the Image Digests file. If the images are created by the Service Fabric project, this file is generated as part of the Package target and its output location is controlled by the property BuiltDockerImagesFilePath.
imageDigestsPath (Image Digests Path): (Required) Path to a text file that contains the digest values of the Docker images associated with the Service Fabric application. This file can be output by the Docker task when using the push action. The file should contain lines of text in the format 'registry/image_name@digest_value'.

Example
Also see: Service Fabric Application Deployment task

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
App Center Test task
11/2/2020 • 5 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
This task lets you run test suites against an application binary ( .apk or .ipa file) using App Center Test.
Sign up with App Center first.
For details about using this task, see the App Center documentation article Using Azure DevOps for UI Testing.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

YAML snippet
# App Center test
# Test app packages with Visual Studio App Center
- task: AppCenterTest@1
  inputs:
    appFile:
    #artifactsDirectory: '$(Build.ArtifactStagingDirectory)/AppCenterTest'
    #prepareTests: true # Optional
    #frameworkOption: 'appium' # Required when prepareTests == True. Options: appium, espresso, calabash, uitest, xcuitest
    #appiumBuildDirectory: # Required when prepareTests == True && Framework == Appium
    #espressoBuildDirectory: # Optional
    #espressoTestApkFile: # Optional
    #calabashProjectDirectory: # Required when prepareTests == True && Framework == Calabash
    #calabashConfigFile: # Optional
    #calabashProfile: # Optional
    #calabashSkipConfigCheck: # Optional
    #uiTestBuildDirectory: # Required when prepareTests == True && Framework == Uitest
    #uitestStorePath: # Optional
    #uiTestStorePassword: # Optional
    #uitestKeyAlias: # Optional
    #uiTestKeyPassword: # Optional
    #uiTestToolsDirectory: # Optional
    #signInfo: # Optional
    #xcUITestBuildDirectory: # Optional
    #xcUITestIpaFile: # Optional
    #prepareOptions: # Optional
    #runTests: true # Optional
    #credentialsOption: 'serviceEndpoint' # Required when runTests == True. Options: serviceEndpoint, inputs
    #serverEndpoint: # Required when runTests == True && CredsType == ServiceEndpoint
    #username: # Required when runTests == True && CredsType == Inputs
    #password: # Required when runTests == True && CredsType == Inputs
    #appSlug: # Required when runTests == True
    #devices: # Required when runTests == True
    #series: 'master' # Optional
    #dsymDirectory: # Optional
    #localeOption: 'en_US' # Required when runTests == True. Options: da_DK, nl_NL, en_GB, en_US, fr_FR, de_DE, ja_JP, ru_RU, es_MX, es_ES, user
    #userDefinedLocale: # Optional
    #loginOptions: # Optional
    #runOptions: # Optional
    #skipWaitingForResults: # Optional
    #cliFile: # Optional
    #showDebugOutput: # Optional

Arguments
app (Binary application file path): (Required) Relative path from the repo root to the APK or IPA file that you want to test. Argument alias: appFile
artifactsDir (Artifacts directory): (Required) Where to place the artifacts produced by the prepare step and used by the run step. This directory will be created if it does not exist. Default value: $(Build.ArtifactStagingDirectory)/AppCenterTest. Argument alias: artifactsDirectory
enablePrepare (Prepare tests): (Optional) Specify whether to prepare tests. Default value: true. Argument alias: prepareTests
framework (Test framework): (Required) Options: appium, calabash, espresso, uitest (Xamarin UI Test), xcuitest. Default value: appium. Argument alias: frameworkOption
appiumBuildDir (Build directory (Appium)): (Required) Path to directory with Appium tests. Argument alias: appiumBuildDirectory
espressoBuildDir (Build directory (Espresso)): (Optional) Path to Espresso output directory. Argument alias: espressoBuildDirectory
espressoTestApkPath (Test APK path (Espresso)): (Optional) Path to APK file with Espresso tests. If not set, build-dir is used to discover it. Wildcard is allowed. Argument alias: espressoTestApkFile
calabashProjectDir (Project directory (Calabash)): (Required) Path to Calabash workspace directory. Argument alias: calabashProjectDirectory
calabashConfigFile (Cucumber config file (Calabash)): (Optional) Path to Cucumber configuration file, usually cucumber.yml.
calabashProfile (Profile to run (Calabash)): (Optional) Profile to run. This value must exist in the Cucumber configuration file.
calabashSkipConfigCheck (Skip Configuration Check (Calabash)): (Optional) Force running without Cucumber profile. Default value: false
uitestBuildDir (Build directory (Xamarin UI Test)): (Required) Path to directory with built test assemblies. Argument alias: uiTestBuildDirectory
uitestStorePath (Store file (Xamarin UI Test)): (Optional) Path to the store file used to sign the app.
uitestStorePass (Store password (Xamarin UI Test)): (Optional) Password of the store file used to sign the app. Use a new variable with its lock enabled on the Variables tab to encrypt this value. Argument alias: uiTestStorePassword
uitestKeyAlias (Key alias (Xamarin UI Test)): (Optional) Enter the alias that identifies the public/private key pair used in the store file.
uitestKeyPass (Key password (Xamarin UI Test)): (Optional) Enter the key password for the alias and store file. Use a new variable with its lock enabled on the Variables tab to encrypt this value. Argument alias: uiTestKeyPassword
uitestToolsDir (Test tools directory (Xamarin UI Test)): (Optional) Path to directory with Xamarin UI test tools that contains test-cloud.exe. Argument alias: uiTestToolsDirectory
signInfo (Signing information (Calabash/Xamarin UI Test)): (Optional) Use Signing Information for signing the test server.
xcuitestBuildDir (Build directory (XCUITest)): (Optional) Path to the build output directory, usually $(ProjectDir)/Build/Products/Debug-iphoneos. Argument alias: xcUITestBuildDirectory
xcuitestTestIpaPath (Test IPA path (XCUITest)): (Optional) Path to the *.ipa file with the XCUITest tests. Argument alias: xcUITestIpaFile
prepareOpts (Additional options for preparing tests): (Optional) Additional arguments passed to the App Center test prepare step. Argument alias: prepareOptions
enableRun (Run tests): (Optional) Specify whether to run the tests. Default value: true. Argument alias: runTests
credsType (Authentication method): (Required) Use App Center service connection or enter credentials to connect to App Center. Default value: serviceEndpoint. Argument alias: credentialsOption
serverEndpoint (App Center service connection): (Required) Select the service connection for App Center. Create a new App Center service connection in Azure DevOps project settings.
username (App Center username, when not using a service connection): (Required) Visit https://appcenter.ms/settings/profile to get your username.
password (App Center password, when not using a service connection): (Required) Visit https://appcenter.ms/settings/profile to set your password. It can accept a variable defined in build or release pipelines as $(passwordVariable). You may mark the variable type as secret to secure it.
appSlug (App slug): (Required) The app slug is in the format {username}/{app_identifier}. To locate {username} and {app_identifier} for an app, click on its name from https://appcenter.ms/apps, and the resulting URL is in the format https://appcenter.ms/users/{username}/apps/{app_identifier}.
devices (Devices): (Required) String to identify what devices this test will run against. Copy and paste this string when you define a new test run from the App Center Test beacon.
series (Test series): (Optional) The series name for organizing test runs (e.g. master, production, beta). Default value: master
dsymDir (dSYM directory): (Optional) Path to iOS symbol files. Argument alias: dsymDirectory
locale (System language): (Required) Options: da_DK, de_DE, en_GB, en_US, es_ES, es_MX, fr_FR, ja_JP, nl_NL, ru_RU, user. If your language isn't an option, use user/Other and enter its locale below, such as en_US. Default value: en_US. Argument alias: localeOption
userDefinedLocale (Other locale): (Optional) Enter any two-letter ISO-639 language code along with any two-letter ISO 3166 country code in the format [language]_[country], such as en_US.
loginOpts (Additional options for login): (Optional) Additional arguments passed to the App Center login step. Argument alias: loginOptions
runOpts (Additional options for run): (Optional) Additional arguments passed to the App Center test run. Argument alias: runOptions
async (Do not wait for test result): (Optional) Specify whether to execute tests asynchronously, exiting just after tests are uploaded, without waiting for test results. Default value: false. Argument alias: skipWaitingForResults
cliLocationOverride (App Center CLI location): (Optional) Path to the App Center CLI on the build or release agent. Argument alias: cliFile
debug (Enable debug output): (Optional) Add --debug to the App Center CLI for verbose output. Argument alias: showDebugOutput

Example
This example runs Espresso tests on an Android app using the App Center Test task.

steps:
- task: AppCenterTest@1
  displayName: 'Espresso Test - Synchronous'
  inputs:
    appFile: 'Espresso/espresso-app.apk'
    artifactsDirectory: '$(Build.ArtifactStagingDirectory)/AppCenterTest'
    frameworkOption: espresso
    espressoBuildDirectory: Espresso
    serverEndpoint: 'myAppCenterServiceConnection'
    appSlug: 'xplatbg1/EspressoTests'
    devices: a84c93af

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Cloud-based Apache JMeter Load Test task
11/2/2020 • 2 minutes to read

Azure Pipelines
Caution

The cloud-based load testing service is deprecated. More information about the deprecation, the service
availability, and alternative services can be found here.
Use this task to run Apache JMeter load tests in the cloud.

Demands
The agent must have the following capability:
Azure PowerShell

YAML snippet
# Cloud-based Apache JMeter load test
# Run an Apache JMeter load test in the cloud
- task: ApacheJMeterLoadTest@1
  inputs:
    #connectedServiceName: # Optional
    testDrop:
    #loadTest: 'jmeter.jmx'
    #agentCount: '1' # Options: 1, 2, 3, 4, 5
    #runDuration: '60' # Options: 60, 120, 180, 240, 300
    #geoLocation: 'Default' # Optional. Options: default, australia East, australia Southeast, brazil South, central India, central US, east Asia, east US 2, east US, japan East, japan West, north Central US, north Europe, south Central US, south India, southeast Asia, west Europe, west US
    #machineType: '0' # Optional. Options: 0, 2

Arguments
Azure Pipelines Connection: (Optional) Select a previously registered service connection to talk to the cloud-based load test service. Choose 'Manage' to register a new connection.
Apache JMeter test files folder: (Required) Relative path from repo root where the load test files are available.
Apache JMeter file: (Required) The Apache JMeter test filename to be used under the load test folder specified above.
Agent Count: (Required) Number of test agents (dual-core) used in the run.
Run Duration (sec): (Required) Load test run duration in seconds.
Run load test using: (Optional)
Control options

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Cloud-based Load Test task
11/2/2020 • 3 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Caution

The cloud-based load testing service is deprecated. More information about the deprecation, the service
availability, and alternative services can be found here.
Use this task to run a load test in the cloud, to understand, test, and validate your app's performance. The task uses
the Cloud-based Load Test Service based in Microsoft Azure and can be used to test your app's performance by
generating load on it.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Demands
The agent must have the following capability:
Azure PowerShell

YAML snippet
# Cloud-based load test
# Run a load test in the cloud with Azure Pipelines
- task: CloudLoadTest@1
  inputs:
    #connectedServiceName: # Optional
    #testDrop: '$(System.DefaultWorkingDirectory)'
    loadTest:
    #activeRunSettings: 'useFile' # Optional. Options: useFile, changeActive
    #runSettingName: # Required when activeRunSettings == ChangeActive
    #testContextParameters: # Optional
    #testSettings: # Optional
    #thresholdLimit: # Optional
    #machineType: '0' # Options: 0, 2
    #resourceGroupName: 'default' # Optional
    #numOfSelfProvisionedAgents: # Optional

Arguments
Azure Pipelines connection: The name of a Generic service connection that references the Azure DevOps organization you will be running the load test from and publishing the results to.
- Required for builds and releases on TFS; must specify a connection to the Azure DevOps organization where the load test will run.
- Optional for builds and releases on Azure Pipelines. In this case, if not provided, the current Azure Pipelines connection is used.
- See Generic service connection.
Test settings file: Required. The path relative to the repository root of the test settings file that specifies the files and data required for the load test, such as the test settings, any deployment items, and setup/clean-up scripts. The task will search this path and any subfolders.
Load test files folder: Required. The path of the load test project. The task looks here for the files required for the load test, such as the load test file, any deployment items, and setup/clean-up scripts. The task will search this path and any subfolders.
Load test file: Required. The name of the load test file (such as myfile.loadtest) to be executed as part of this task. This allows you to have more than one load test file and choose the one to execute based on the deployment environment or other factors.
Number of permissible threshold violations: Optional. The number of critical violations that must occur for the load test to be deemed unsuccessful, aborted, and marked as failed.
Control options: See Control options

Examples
Scheduling Load Test Execution

More Information
Cloud-based Load Testing
Source code for this task
Build your Visual Studio solution
Cloud-based Load Testing Knowledge Base

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
How do I use a Test Settings file?
The Test settings file references any setup and cleanup scripts required to execute the load test. For more details
see: Using Setup and Cleanup Script in Cloud Load Test
When should I specify the number of permissible threshold violations?
Use the Number of permissible threshold violations setting if your load test is not already configured with
information about how many violations will cause a failure to be reported. For more details, see: How to: Analyze
Threshold Violations Using the Counters Panel in Load Test Analyzer.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and get support via our Support page
Cloud-based Web Performance Test task
11/2/2020 • 3 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Caution

The cloud-based load testing service is deprecated. More information about the deprecation, the service
availability, and alternative services can be found here.
Use this task to run the Quick Web Performance Test to easily verify your web application exists and is responsive.
The task generates load against an application URL using the Azure Pipelines Cloud-based Load Test Service based
in Microsoft Azure.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Demands
The agent must have the following capability:
Azure PowerShell

YAML snippet
# Cloud-based web performance test
# Run a quick web performance test in the cloud with Azure Pipelines
- task: QuickPerfTest@1
  inputs:
    #connectedServiceName: # Optional
    websiteUrl:
    testName:
    #vuLoad: '25' # Options: 25, 50, 100, 250
    #runDuration: '60' # Options: 60, 120, 180, 240, 300
    #geoLocation: 'Default' # Optional. Options: default, australia East, australia Southeast, brazil South, central India, central US, east Asia, east US 2, east US, japan East, japan West, north Central US, north Europe, south Central US, south India, southeast Asia, west Europe, west US
    #machineType: '0' # Options: 0, 2
    #resourceGroupName: 'default' # Optional
    #numOfSelfProvisionedAgents: # Optional
    #avgResponseTimeThreshold: '0' # Optional

Arguments
Azure Pipelines connection: The name of a Generic service connection that references the Azure DevOps organization you will be running the load test from and publishing the results to.
- Required for builds and releases on TFS; must specify a connection to the Azure DevOps organization where the load test will run.
- Optional for builds and releases on Azure Pipelines. In this case, if not provided, the current Azure Pipelines connection is used.
- See Generic service connection.
Website URL: Required. The URL of the app to test.
Test Name: Required. A name for this load test, used to identify it for reporting and for comparison with other test runs.
User Load: Required. The number of concurrent users to simulate in this test. Select a value from the drop-down list.
Run Duration (sec): Required. The duration of this test in seconds. Select a value from the drop-down list.
Load Location: The location from which the load will be generated. Select a global Azure location, or Default to generate the load from the location associated with your Azure DevOps organization.
Run load test using: Select Automatically provisioned agents if you want the cloud-based load testing service to automatically provision agents for running the load tests. The application URL must be accessible from the Internet. Select Self-provisioned agents if you want to test apps behind the firewall. You must provision agents and register them against your Azure DevOps organization when using this option. See Testing private/intranet applications using Cloud-based load testing.
Fail test if Avg. Response Time (ms) exceeds: Specify a threshold for the average response time in milliseconds. If the observed response time during the load test exceeds this threshold, the task will fail.
Control options: See Control options

More Information
Cloud-based Load Testing

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Help and support
See our troubleshooting page
Get advice on Stack Overflow, and get support via our Support page
Container Structure Tests
4/20/2020 • 2 minutes to read

The Container Structure Tests provide a powerful framework to validate the structure of a container image. These
tests can be used to check the output of commands in an image, as well as verify metadata and contents of the
filesystem. Tests can be run either through a standalone binary, or through a Docker image.
Tests within this framework are specified through a YAML or JSON config file. Multiple config files may be specified
in a single test run. The config file will be loaded in by the test runner, which will execute the tests in order. Within
this config file, four types of tests can be written:
Command Tests (testing output/error of a specific command issued)
File Existence Tests (making sure a file is, or isn't, present in the file system of the image)
File Content Tests (making sure files in the file system of the image contain, or do not contain, specific contents)
Metadata Test, singular (making sure certain container metadata is correct)
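For example, a minimal config file (a sketch; the command and file path are illustrative) covering the first two test types listed above:

schemaVersion: '2.0.0'
commandTests:
  - name: 'python is installed'
    command: 'python'
    args: ['--version']
    exitCode: 0
fileExistenceTests:
  - name: 'app entrypoint exists'
    path: '/app/main.py'
    shouldExist: true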

Container Structure Test Task


This task helps you run container structure tests and publish test results to Azure Pipelines and provides a
comprehensive test reporting and analytics experience.

NOTE
This is an early preview feature. More capabilities will be rolled out in upcoming sprints.

Arguments
dockerRegistryServiceConnection (Docker registry service connection): (Required) Select a Docker registry service connection. Required for commands that need to authenticate with a registry.
repository (Container repository): (Required) Name of the repository.
tag (Tag): The tag used to pull the image from the Docker registry service connection. Default value: $(Build.BuildId)
configFile (Config file path): (Required) Path of the config file that contains the container structure tests. Either .yaml or .json files.
testRunTitle (Test run title): (Optional) Provide a name for the test run.
failTaskOnFailedTests (Fail task if there are test failures): (Optional) Fail the task if there are any test failures. Check this option to fail the task if test failures are detected.
Build, Test and Publish Test
The Container Structure Test task can be added to classic pipelines as well as unified (multi-stage) and YAML-based pipelines.
YAML
Classic
In the new YAML-based unified pipeline, you can search for the task in the task window.
Once the task is added, set the config file path, Docker registry service connection, container repository, and tag, if required. The corresponding task input is then created in the YAML-based pipeline, as shown in the sample below.

YAML file
Sample YAML

steps:
- task: ContainerStructureTest@0
  displayName: 'Container Structure Test'
  inputs:
    dockerRegistryServiceConnection: 'Container_dockerHub'
    repository: adma/hellodocker
    tag: v1
    configFile: /home/user/cstfiles/fileexisttest.yaml

View test report


Once the task is executed, you can directly go to test tab to view the full report. The published test results are
displayed in the Tests tab in the pipeline summary and help you to measure pipeline quality, review traceability,
troubleshoot failures, and drive failure ownership.
Publish Code Coverage Results task
11/2/2020 • 2 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task in a build pipeline to publish code coverage results produced when running tests to Azure Pipelines or TFS in
order to obtain coverage reporting. The task supports popular coverage result formats such as Cobertura and JaCoCo.
This task can only be used in Build pipelines and is not supported in Release pipelines.
Tasks such as Visual Studio Test, .NET Core, Ant, Maven, Gulp, Grunt also provide the option to publish code coverage data
to the pipeline. If you are using these tasks, you do not need a separate Publish Code Coverage Results task in the pipeline.

Demands
To generate the HTML code coverage report you need dotnet 2.0.0 or later on the agent. The dotnet folder needs to be in
the environment path. If there are multiple folders containing dotnet, the one with version 2.0.0 must be before any others
in the path list.

YAML snippet
# Publish code coverage results
# Publish Cobertura or JaCoCo code coverage results from a build
- task: PublishCodeCoverageResults@1
  inputs:
    #codeCoverageTool: 'JaCoCo' # Options: cobertura, jaCoCo
    summaryFileLocation:
    #pathToSources: # Optional
    #reportDirectory: # Optional
    #additionalCodeCoverageFiles: # Optional
    #failIfCoverageEmpty: false # Optional

The codeCoverageTool and summaryFileLocation parameters are mandatory.


To publish code coverage results for Javascript with istanbul using YAML, see JavaScript in the Ecosystems section of these
topics, which also includes examples for other languages.

Arguments
summaryFileLocation (Path to summary files): (Required) Path of the summary file containing code coverage statistics, such as line, method, and class coverage. The value may contain minimatch patterns. For example: $(System.DefaultWorkingDirectory)/MyApp/**/site/cobertura/coverage.xml
pathToSources (Path to Source files): (Optional) Path to source files is required when coverage XML reports do not contain an absolute path to source files. For example, JaCoCo reports do not use absolute paths, and when publishing JaCoCo coverage for Java apps the pattern would be similar to $(System.DefaultWorkingDirectory)/MyApp/src/main/java/. This input is also needed if tests are run in a Docker container; in that case it should point to an absolute path to source files on the host. For example, $(System.DefaultWorkingDirectory)/MyApp/
failIfCoverageEmpty (Fail if code coverage results are missing): (Optional) Fail the task if code coverage did not produce any results to publish.
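For example, a sketch that publishes a Cobertura summary produced by an earlier test step (the search pattern is illustrative):

- task: PublishCodeCoverageResults@1
  inputs:
    codeCoverageTool: 'Cobertura'
    summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/coverage.cobertura.xml'
    failIfCoverageEmpty: false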

Docker
For apps using Docker, the build and tests may run inside the container, generating code coverage results within the container. In order to publish the results to the pipeline, the resulting artifacts should be made available to the Publish Code Coverage Results task. For reference, you can see a similar example for publishing test results under the Build, test, and publish results with a Docker file section for Docker.

View results
In order to view the code coverage results in the pipeline, see Review code coverage results

Related tasks
Publish Test Results

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Is code coverage data merged when multiple files are provided as input to the task or multiple tasks are used in the
pipeline?
At present, the code coverage reporting functionality provided by this task is limited and it does not merge coverage data.
If you provide multiple files as input to the task, only the first match is considered. If you use multiple publish code
coverage tasks in the pipeline, the summary and report is shown for the last task. Any previously uploaded data is ignored.

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and get support via our Support page
Publish Test Results task
11/2/2020 • 14 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called
definitions, runs are called builds, service connections are called service endpoints, stages are called environments, and
jobs are called phases.

This task publishes test results to Azure Pipelines or TFS when tests are executed to provide a comprehensive
test reporting and analytics experience. You can use the test runner of your choice that supports the results
format you require. Supported results formats include CTest, JUnit (including PHPUnit), NUnit 2, NUnit 3,
Visual Studio Test (TRX), and xUnit 2.
Other built-in tasks such as Visual Studio Test task and Dot NetCore CLI task automatically publish test results
to the pipeline, while tasks such as Ant, Maven, Gulp, Grunt, .NET Core and Xcode provide publishing results
as an option within the task, or build libraries such as Cobertura and JaCoCo. If you are using any of these
tasks, you do not need a separate Publish Test Results task in the pipeline.
The published test results are displayed in the Tests tab in the pipeline summary and help you to measure
pipeline quality, review traceability, troubleshoot failures, and drive failure ownership.
The following example shows the task configured to publish test results.
You can also use this task in a build pipeline to publish code coverage results produced when running
tests to Azure Pipelines or TFS in order to obtain coverage reporting.

Check prerequisites
If you're using a Windows self-hosted agent, be sure that your machine has this prerequisite installed:
.NET Framework 4.6.2 or a later version

Demands
[none]

YAML snippet
# Publish Test Results
# Publish test results to Azure Pipelines
- task: PublishTestResults@2
  inputs:
    #testResultsFormat: 'JUnit' # Options: JUnit, NUnit, VSTest, xUnit, cTest
    #testResultsFiles: '**/TEST-*.xml'
    #searchFolder: '$(System.DefaultWorkingDirectory)' # Optional
    #mergeTestResults: false # Optional
    #failTaskOnFailedTests: false # Optional
    #testRunTitle: # Optional
    #buildPlatform: # Optional
    #buildConfiguration: # Optional
    #publishRunAttachments: true # Optional

The default option uses JUnit format to publish test results. When using VSTest as the testRunner , the
testResultsFiles option should be changed to **/TEST-*.trx .
testResultsFormat is an alias for the testRunner input name. The results files can be produced by multiple
runners, not just a specific runner. For example, jUnit results format is supported by many runners and not
just jUnit.
To publish test results for Python using YAML, see Python in the Ecosystems section of these topics, which
also includes examples for other languages.
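For example, a sketch that publishes .trx files produced by a Visual Studio Test run (the search folder and pattern are illustrative):

- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'VSTest'
    testResultsFiles: '**/TEST-*.trx'
    searchFolder: '$(Agent.TempDirectory)'
    mergeTestResults: true
    failTaskOnFailedTests: true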

Arguments
NOTE
Options specified below are applicable to the latest version of the task.

testRunner (Test result format)
(Required) Specify the format of the results files you want to publish. The following formats are supported: CTest, JUnit, NUnit 2, NUnit 3, Visual Studio Test (TRX), and xUnit 2.
Default value: JUnit
Argument alias: testResultsFormat

testResultsFiles (Test results files)
(Required) Use this to specify one or more test results files.
- You can use a single-folder wildcard ( * ) and recursive wildcards ( ** ). For example, **/TEST-*.xml searches for all the XML files whose names start with TEST- in all subdirectories. If using VSTest as the test result format, the file type should be changed to .trx, e.g. **/TEST-*.trx
- Multiple paths can be specified, separated by a semicolon.
- Additionally accepts minimatch patterns. For example, !TEST[1-3].xml excludes files named TEST1.xml, TEST2.xml, or TEST3.xml.
Default value: **/TEST-*.xml

searchFolder (Search folder)
(Optional) Folder to search for the test result files.
Default value: $(System.DefaultWorkingDirectory)

mergeTestResults (Merge test results)
When this option is selected, test results from all the files will be reported against a single test run. If this option is not selected, a separate test run will be created for each test result file.
Note: Use merge test results to combine files from the same test framework to ensure results mapping and duration are calculated correctly.
Default value: false

failTaskOnFailedTests (Fail if there are test failures)
(Optional) When selected, the task will fail if any of the tests in the results file is marked as failed. The default is false, which will simply publish the results from the results file.
Default value: false

testRunTitle (Test run title)
(Optional) Use this option to provide a name for the test run against which the results will be reported. Variable names declared in the build or release pipeline can be used.

platform (Build Platform)
(Optional) Build platform against which the test run should be reported. For example, x64 or x86. If you have defined a variable for the platform in your build task, use that here.
Argument alias: buildPlatform

configuration (Build Configuration)
Build configuration against which the test run should be reported. For example, Debug or Release. If you have defined a variable for configuration in your build task, use that here.
Argument alias: buildConfiguration

publishRunAttachments (Upload test results files)
(Optional) When selected, the task will upload all the test result files as attachments to the test run.
Default value: true
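
Putting several of these arguments together, a hedged example that merges all JUnit result files into a single, named test run and fails the task when any test fails:

- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '**/TEST-*.xml'
    searchFolder: '$(System.DefaultWorkingDirectory)'
    mergeTestResults: true
    failTaskOnFailedTests: true
    testRunTitle: 'Unit tests'   # any name; pipeline variables can also be used here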

Result formats mapping


This table lists the fields reported in the Tests tab in a build or release summary, and the corresponding
mapping with the attributes in the supported test result formats.
The mapping below is shown for the Visual Studio Test (TRX) format. The other supported formats are JUnit, NUnit 2, NUnit 3, xUnit 2, and CTest.

SCOPE: Test run
- Title: Test run title specified in the task
- Date started: /TestRun/Times.Attributes["start"].Value
- Date completed: /TestRun/Times.Attributes["finish"].Value
- Duration: Date completed - Date started
- Attachments: Refer to the Attachments support section below

SCOPE: Test result
- Title: /TestRun/Results/UnitTestResult.Attributes["testName"].Value Or /TestRun/Results/WebTestResult.Attributes["testName"].Value Or /TestRun/Results/TestResultAggregation.Attributes["testName"].Value
- Date started: /TestRun/Results/UnitTestResult.Attributes["startTime"].Value Or /TestRun/Results/WebTestResult.Attributes["startTime"].Value Or /TestRun/Results/TestResultAggregation.Attributes["startTime"].Value
- Date completed: /TestRun/Results/UnitTestResult.Attributes["startTime"].Value + /TestRun/Results/UnitTestResult.Attributes["duration"].Value Or /TestRun/Results/WebTestResult.Attributes["startTime"].Value + /TestRun/Results/WebTestResult.Attributes["duration"].Value Or /TestRun/Results/TestResultAggregation.Attributes["startTime"].Value + /TestRun/Results/TestResultAggregation.Attributes["duration"].Value
- Duration1: /TestRun/Results/UnitTestResult.Attributes["duration"].Value Or /TestRun/Results/WebTestResult.Attributes["duration"].Value Or /TestRun/Results/TestResultAggregation.Attributes["duration"].Value
- Owner: /TestRun/TestDefinitions/UnitTest/Owners/Owner.Attributes["name"].Value
- Outcome: /TestRun/Results/UnitTestResult.Attributes["outcome"].Value Or /TestRun/Results/WebTestResult.Attributes["outcome"].Value Or /TestRun/Results/TestResultAggregation.Attributes["outcome"].Value
- Error message: /TestRun/Results/UnitTestResult/Output/ErrorInfo/Message.InnerText Or /TestRun/Results/WebTestResultOutput/ErrorInfo/Message.InnerText Or /TestRun/Results/TestResultAggregation/Output/ErrorInfo/Message.InnerText
- Stack trace: /TestRun/Results/UnitTestResult/Output/ErrorInfo/StackTrace.InnerText Or /TestRun/Results/WebTestResultOutput/ErrorInfo/StackTrace.InnerText Or /TestRun/Results/TestResultAggregation/Output/ErrorInfo/StackTrace.InnerText
- Attachments: Refer to the Attachments support section below
- Console log: /TestRun/Results/UnitTestResult/Output/StdOut.InnerText Or /TestRun/Results/WebTestResultOutput/Output/StdOut.InnerText Or /TestRun/Results/TestResultAggregation/Output/StdOut.InnerText
- Console error log: /TestRun/Results/UnitTestResult/Output/StdErr.InnerText Or /TestRun/Results/WebTestResultOutput/Output/StdErr.InnerText Or /TestRun/Results/TestResultAggregation/Output/StdErr.InnerText
- Agent name: /TestRun/Results/UnitTestResult.Attributes["computerName"].Value Or /TestRun/Results/WebTestResult.Attributes["computerName"].Value Or /TestRun/Results/TestResultAggregation.Attributes["computerName"].Value
- Test file: /TestRun/TestDefinitions/UnitTest.Attributes["storage"].Value
- Priority: /TestRun/TestDefinitions/UnitTest.Attributes["priority"].Value

1 Duration is used only when Date started and Date completed are not available.
2 The fully qualified name format is Namespace.Testclass.Methodname with a character limit of 512. If the test is data driven and has parameters, the character limit will include the parameters.
Docker
For Docker-based apps there are many ways to build your application and run tests:
- Build and test in a build pipeline: the build and tests execute in the pipeline, and test results are published using the Publish Test Results task.
- Build and test with a multi-stage Dockerfile: the build and tests execute inside the container using a multi-stage Dockerfile, so test results are not published back to the pipeline.
- Build, test, and publish results with a Dockerfile: the build and tests execute inside the container and results are published back to the pipeline. See the example below.
Build, test, and publish results with a Dockerfile
In this approach, you build your code and run tests inside the container using a Dockerfile. The test results are then copied to the host to be published to the pipeline. To publish the test results to Azure Pipelines, you can use the Publish Test Results task. The final image will be published to Docker Hub or Azure Container Registry.
Get the code
1. Import into Azure DevOps or fork into GitHub the following repository. This sample code includes a
Dockerfile file at the root of the repository along with .vsts-ci.docker.yml file.

https://github.com/MicrosoftDocs/pipelines-dotnet-core

2. Create a Dockerfile.build file at the root of the directory with the following:

# Build and run tests inside the docker container
FROM microsoft/dotnet:2.1-sdk
WORKDIR /app
# copy the contents of agent working directory on host to workdir in container
COPY . ./
# dotnet commands to build, test, and publish
RUN dotnet restore
RUN dotnet build -c Release
RUN dotnet test dotnetcore-tests/dotnetcore-tests.csproj -c Release --logger "trx;LogFileName=testresults.trx"
RUN dotnet publish -c Release -o out
ENTRYPOINT dotnet dotnetcore-sample/out/dotnetcore-sample.dll

This file contains the instructions to build the code and run tests. The test results are then written to a file named testresults.trx inside the container.

3. To make the final image as small as possible, containing only the runtime and deployment artifacts,
replace the contents of the existing Dockerfile with the following:

# This Dockerfile creates the final image to be published to Docker or
# Azure Container Registry
# Create a container with the compiled asp.net core app
FROM microsoft/aspnetcore:2.0
# Create app directory
WORKDIR /app
# Copy only the deployment artifacts
COPY /out .
ENTRYPOINT ["dotnet", "dotnetcore-sample.dll"]

Define the build pipeline


YAML
1. If you have a Docker Hub account, and want to push the image to your Docker registry, replace the
contents of the .vsts-ci.docker.yml file with the following:

# Build Docker image for this app, to be published to Docker Registry
pool:
  vmImage: 'ubuntu-16.04'

variables:
  buildConfiguration: 'Release'

steps:
- script: |
    docker build -f Dockerfile.build -t $(dockerId)/dotnetcore-build:$BUILD_BUILDID .
    docker run --name dotnetcoreapp --rm -d $(dockerId)/dotnetcore-build:$BUILD_BUILDID
    docker cp dotnetcoreapp:app/dotnetcore-tests/TestResults $(System.DefaultWorkingDirectory)
    docker cp dotnetcoreapp:app/dotnetcore-sample/out $(System.DefaultWorkingDirectory)
    docker stop dotnetcoreapp

- task: PublishTestResults@2
  inputs:
    testRunner: VSTest
    testResultsFiles: '**/*.trx'
    failTaskOnFailedTests: true

- script: |
    docker build -f Dockerfile -t $(dockerId)/dotnetcore-sample:$BUILD_BUILDID .
    docker login -u $(dockerId) -p $pswd
    docker push $(dockerId)/dotnetcore-sample:$BUILD_BUILDID
  env:
    pswd: $(dockerPassword)

Alternatively, if you configure an Azure Container Registry and want to push the image to that registry,
replace the contents of the .vsts-ci.yml file with the following:

# Build Docker image for this app to be published to Azure Container Registry
pool:
  vmImage: 'ubuntu-16.04'

variables:
  buildConfiguration: 'Release'

steps:
- script: |
    docker build -f Dockerfile.build -t $(dockerId)/dotnetcore-build:$BUILD_BUILDID .
    docker run --name dotnetcoreapp --rm -d $(dockerId)/dotnetcore-build:$BUILD_BUILDID
    docker cp dotnetcoreapp:app/dotnetcore-tests/TestResults $(System.DefaultWorkingDirectory)
    docker cp dotnetcoreapp:app/dotnetcore-sample/out $(System.DefaultWorkingDirectory)
    docker stop dotnetcoreapp

- task: PublishTestResults@2
  inputs:
    testRunner: VSTest
    testResultsFiles: '**/*.trx'
    failTaskOnFailedTests: true

- script: |
    docker build -f Dockerfile -t $(dockerId).azurecr.io/dotnetcore-sample:$BUILD_BUILDID .
    docker login -u $(dockerId) -p $pswd $(dockerId).azurecr.io
    docker push $(dockerId).azurecr.io/dotnetcore-sample:$BUILD_BUILDID
  env:
    pswd: $(dockerPassword)

2. Push the change to the master branch in your repository.
3. If you use Azure Container Registry, ensure you have pre-created the registry in the Azure portal. Copy the admin user name and password shown in the Access keys section of the registry settings in the Azure portal.
4. Update your build pipeline with the following:
   - Agent pool: Hosted Ubuntu 1604
   - dockerId: Set the value to your Docker ID for DockerHub or the admin user name for Azure Container Registry.
   - dockerPassword: Set the value to your password for DockerHub or the admin password for Azure Container Registry.
   - YAML file path: /.vsts-ci.docker.yml
5. Queue a new build and watch it create and push a Docker image to your registry and publish the test results to Azure DevOps.
YAML builds are not yet available on TFS.
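If you prefer to keep the non-secret values from step 4 in the pipeline file instead of the pipeline settings UI, a minimal sketch is shown below; the value of dockerId is a placeholder, and dockerPassword should remain a secret variable defined in the pipeline settings rather than in YAML:

variables:
  buildConfiguration: 'Release'
  dockerId: 'your-docker-id'   # placeholder: your Docker ID or the ACR admin user name
  # dockerPassword is defined as a secret variable in the pipeline settings UI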

Attachments support
The Publish Test Results task provides support for attachments for both test run and test results for the
following formats. For public projects, we support 2GB of total attachments.
Visual Studio Test (TRX)

SCOPE: Test run
- Data Collector: /TestRun/ResultSummary/CollectorDataEntries/Collector/UriAttachments/UriAttachment/A.Attributes["href"].Value
- Test Result: /TestRun/ResultSummary/ResultFiles/ResultFile.Attributes["path"].Value
- Code Coverage: /TestRun/TestSettings/Execution/AgentRule/DataCollectors/DataCollector/Configuration/CodeCoverage/Regular/CodeCoverageItem.Attributes["binaryFile"].Value And /TestRun/TestSettings/Execution/AgentRule/DataCollectors/DataCollector/Configuration/CodeCoverage/Regular/CodeCoverageItem.Attributes["pdbFile"].Value

SCOPE: Test result
- Data Collectors: /TestRun/Results/UnitTestResult/CollectorDataEntries/Collector/UriAttachments/UriAttachment/A.Attributes["href"].Value Or /TestRun/Results/WebTestResult/CollectorDataEntries/Collector/UriAttachments/UriAttachment/A.Attributes["href"].Value Or /TestRun/Results/TestResultAggregation/CollectorDataEntries/Collector/UriAttachments/UriAttachment/A.Attributes["href"].Value
- Test Result: /TestRun/Results/UnitTestResult/ResultFiles/ResultFile.Attributes["path"].Value Or /TestRun/Results/WebTestResult/ResultFiles/ResultFile.Attributes["path"].Value Or /TestRun/Results/TestResultAggregation/ResultFiles/ResultFile.Attributes["path"].Value

NUnit 3

SCOPE: Test run
- /test-suite/attachments/attachment/filePath
- /test-suite[@type='Assembly']/test-case/attachments/attachment/filePath

NOTE
The option to upload the test results file as an attachment is a default option in the task, applicable to all formats.

Related tasks
Visual Studio Test
Publish Code Coverage Results

FAQ
What is the maximum permissible limit of FQN?
The maximum FQN limit is 512 characters.
Does the FQN Character limit also include properties and their values in case of data driven tests?
Yes, the FQN character limit includes properties and their values.
Will the FQN be different for sub-results?
Currently, sub-results from data-driven tests will not show up with their corresponding data.
Example: a test case "Add product to cart" is run with Data 1: Product = Apparel and Data 2: Product = Footwear.
All published test sub-results will only have the test case name and the data of the first row.

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and get support via our Support page
Run Functional Tests task
11/2/2020

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs are
called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

This task is deprecated in Azure Pipelines and TFS 2018 and later. Use version 2.x or higher of the Visual Studio Test task
together with jobs to run unit and functional tests on the universal agent.
For more details, see Testing with unified agents and jobs.

TFS 2017 and earlier


Use this task to run Coded UI tests, Selenium tests, and functional tests on a set of machines using the test agent. Use this
task when you want to run tests on remote machines, and you cannot run tests on the build machine.
Demands and prerequisites
This task must be preceded by a Visual Studio Test Agent Deployment task.

YAML snippet
# Run functional tests
# Deprecated: This task and its companion task (Visual Studio Test Agent Deployment) are deprecated. Use the 'Visual Studio Test' task instead. The VSTest task can run unit as well as functional tests. Run tests on one or more agents using the multi-agent job setting. Use the 'Visual Studio Test Platform' task to run tests without needing Visual Studio on the agent. VSTest task also brings new capabilities such as automatically rerunning failed tests.
- task: RunVisualStudioTestsusingTestAgent@1
  inputs:
    testMachineGroup:
    dropLocation:
    #testSelection: 'testAssembly' # Options: testAssembly, testPlan
    #testPlan: # Required when testSelection == TestPlan
    #testSuite: # Required when testSelection == TestPlan
    #testConfiguration: # Required when testSelection == TestPlan
    #sourcefilters: '**\*test*.dll' # Required when testSelection == TestAssembly
    #testFilterCriteria: # Optional
    #runSettingsFile: # Optional
    #overrideRunParams: # Optional
    #codeCoverageEnabled: false # Optional
    #customSlicingEnabled: false # Optional
    #testRunTitle: # Optional
    #platform: # Optional
    #configuration: # Optional
    #testConfigurations: # Optional
    #autMachineGroup: # Optional
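
Because this task must be preceded by a Visual Studio Test Agent Deployment task, a typical (and now deprecated) sequence is sketched below; the machine list, credentials, and drop location are placeholders taken from variables you would define yourself:

# (a Windows Machine File Copy or Azure File Copy task would normally copy the test binaries to the drop location first)
- task: DeployVisualStudioTestAgent@2
  inputs:
    testMachines: '$(testMachines)'
    adminUserName: '$(adminUser)'
    adminPassword: '$(adminPassword)'
    machineUserName: '$(testAgentUser)'
    machinePassword: '$(testAgentPassword)'

- task: RunVisualStudioTestsusingTestAgent@1
  inputs:
    testMachineGroup: '$(testMachines)'
    dropLocation: 'C:\Tests'        # placeholder: where the test binaries were copied
    sourcefilters: '**\*test*.dll'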

Arguments
Machines
A comma-separated list of machine FQDNs or IP addresses, optionally including the port number. The maximum is 32 machines (or 32 agents). Can be:
- The name of an Azure Resource Group.
- A comma-delimited list of machine names. Example: dbserver.fabrikam.com,dbserver_int.fabrikam.com:5986,192.168.34:5986
- An output variable from a previous task.

Test Drop Location
Required. The location on the test machine(s) where the test binaries have been copied by a Windows Machine File Copy or Azure File Copy task. System stage variables from the test agent machines can be used to specify the drop location. Examples: c:\tests and %systemdrive%\Tests

Test Selection
Required. Whether the tests are to be selected from test assemblies or from a test plan.

Test Assembly
Required when Test Selection is set to Test Assembly. The test assemblies from which the tests should be executed. Paths are relative to the sources directory.
- Separate multiple paths with a semicolon.
- Default is **\*test*.dll
- For JavaScript tests, enter the path and name of the .js files containing the tests.
- Wildcards can be used. Example: **\commontests\*test*.dll; **\frontendtests\*test*.dll

Test Filter criteria
Optional when Test Selection is set to Test Assembly. A filter to specify the tests to execute within the test assembly files. Works the same way as the /TestCaseFilter option of vstest.console.exe. Example: Priority=1 | Name=MyTestMethod

Test Plan
Required if Test Suite is not specified when Test Selection is set to Test Plan. Select a test plan already configured for this organization.

Test Suite
Required if Test Plan is not specified when Test Selection is set to Test Plan. Select a test suite from the selected test plan.

Test Configuration
Optional when Test Selection is set to Test Plan. Select a test configuration from the selected test plan.

Run Settings File
Optional. The path to a .runsettings or .testsettings file on the build machine. Can be the path to a file in the repository or a file on disk. Use $(Build.SourcesDirectory) to specify the project root folder.

Override Test Run Parameters
Optional. A string containing parameter overrides for parameters defined in the TestRunParameters section of the .runsettings file. Example: Platform=$(platform);Port=8080

Code Coverage Enabled
When set, the task will collect code coverage information during the run and upload the results to the server. Supported for .NET and C++ projects only.

Distribute tests by number of machines
When checked, distributes tests based on the number of machines, instead of distributing tests at the assembly level, irrespective of the container assemblies passed to the task.

Test Run Title
Optional. A name for this test run, used to identify it for reporting and in comparison with other test runs.

Platform
Optional. The build platform against which the test run should be reported. Used only for reporting.
- If you are using the Build - Visual Studio template, this is automatically defined, such as x64 or x86.
- If you have defined a variable for platform in your build task, use that here.

Configuration
Optional. The build configuration against which the test run should be reported. Used only for reporting.
- If you are using the Build - Visual Studio template, this is automatically defined, such as Debug or Release.
- If you have defined a variable for configuration in your build task, use that here.

Test Configurations
Optional. A string that contains the filter(s) to report the configuration on which the test case was run. Used only for reporting with Microsoft Test Manager.
- Syntax: {expression for test method name(s)} : {configuration ID from Microsoft Test Manager}
- Example: FullyQualifiedName~Chrome:12 to report all test methods that have Chrome in the Fully Qualified Name and map them to configuration ID 12 defined in Microsoft Test Manager.
- Use DefaultTestConfiguration:{Id} as a catch-all.

Application Under Test Machines
A list of the machines on which the Application Under Test (AUT) is deployed, or on which a specific process such as W3WP.exe is running. Used to collect code coverage data from these machines. Use this in conjunction with the Code Coverage Enabled setting. The list can be a comma-delimited list of machine names or an output variable from an earlier task.

Control options
See Control options

The task supports a maximum of 32 machines/agents.

Scenarios
Typical scenarios include:
Tests that require additional installations on the test machines, such as different browsers for Selenium tests
Coded UI tests
Tests that require a specific operating system configuration
To execute a large number of unit tests more quickly by using multiple test machines
Use this task to:
Run automated tests against on-premises standard environments
Run automated tests against existing Azure environments
Run automated tests against newly provisioned Azure environments
You can run unit tests, integration tests, functional tests - in fact any test that you can execute using the Visual Studio test
runner (vstest).
Using multiple machines in a Machine Group enables the task to run parallel distributed execution of tests. Parallelism is at
the test assembly level, not at individual test level.
These scenarios are supported for:
TFS on-premises and Azure Pipelines
Build agents
Hosted and on-premises agents.
The build agent must be able to communicate with all test machines. If the test machines are on-premises
behind a firewall, the hosted build agents cannot be used.
The build agent must have access to the Internet to download test agents. If this is not the case, the test agent
must be manually downloaded and deployed to a network location that is accessible by the build agent, and a
Visual Studio Test Agent Deployment task used with an appropriate path for the Test Agent Location
parameter. Automatic checking for new test agent versions is not supported in this topology.
CI/CD workflow
The Build-Deploy-Test (BDT) tasks are supported in both build and release pipelines.
Machine group configuration
Only Windows machines are supported when using BDT tasks inside a Machine Group. Using Linux, macOS, or
other platforms inside a Machine Group with BDT tasks is not supported.
Installing any version or release of Visual Studio on any of the test machines is not supported.
Installing an older version of the test agent on any of the test machines is not supported.
Test machine topologies
Azure-based test machines are fully supported, both existing test machines and newly provisioned machines.
Domain-joined test machines are supported.
Workgroup-joined test machines must have HTTPS authentication enabled and configured during creation of
the Machine Group.
Test agent machines must have network access to the Team Foundation Server instance. Test machines isolated
on the network are not supported.
Usage Error Conditions
Running tests across different Machine Groups, and running builds (with any BDT tasks) in parallel against these
Machine Groups is not supported.
Cancelling an in-progress build or release with BDT tasks is not supported. If you do so, subsequent builds may
not behave as expected.
Cancelling an in-progress test run queued through BDT tasks is not supported.
Configuring a test agent and running tests under a non-administrative account or under a service account is not
supported.
More information
Using the Visual Studio Agent Deployment task on machines not connected to the internet
Run continuous tests with your builds
Testing in Continuous Integration and Continuous Deployment Workflows
Related tasks
Deploy Azure Resource Group
Azure File Copy
Windows Machine File Copy
PowerShell on Target Machines
Visual Studio Test Agent Deployment
Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
How do I create an Azure Resource Group for testing?
See Using the Azure Portal to manage your Azure resources and Azure Resource Manager - Creating a Resource Group
and a VNET.
Where can I get more information about the Run Settings file?
See Configure unit tests by using a .runsettings file
Where can I get more information about overriding settings in the Run Settings file?
See Supplying Run Time Parameters to Tests
How can I customize code coverage analysis and manage inclusions and exclusions?
See Customize Code Coverage Analysis
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and get support via our Support page
Visual Studio Test task
11/2/2020

Azure Pipelines
Use this task to run unit and functional tests (Selenium, Appium, Coded UI test, and more) using the Visual Studio Test Runner. In addition to MSTest-based tests, test frameworks that have a Visual Studio test adapter, such as xUnit, NUnit, and Chutzpah, can also be executed.
Tests that target the .NET Core framework can be executed by specifying the appropriate target framework value in the .runsettings file.
Tests can be distributed on multiple agents using version 2 of this task. For more information, see Run tests
in parallel using the Visual Studio Test task.

Check prerequisites
If you're using a Windows self-hosted agent, be sure that your machine has this prerequisite installed:
.NET Framework 4.6.2 or a later version

Demands
The agent must have the following capability:
vstest
The vstest demand can be satisfied in two ways:
1. Visual Studio is installed on the agent machine.
2. By using the Visual Studio Test Platform Installer task in the pipeline definition.
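
For example, to satisfy the vstest demand without Visual Studio on the agent, a sketch combining the Visual Studio Test Platform Installer task with this task (using the toolsInstaller option described under Arguments below) might look like:

- task: VisualStudioTestPlatformInstaller@1

- task: VSTest@2
  inputs:
    testSelector: 'testAssemblies'
    testAssemblyVer2: |
      **\*test*.dll
      !**\*TestAdapter.dll
      !**\obj\**
    vsTestVersion: 'toolsInstaller'   # use the test platform acquired by the installer task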

YAML snippet
# Visual Studio Test
# Run unit and functional tests (Selenium, Appium, Coded UI test, etc.) using the Visual Studio Test (VsTest) runner. Test frameworks that have a Visual Studio test adapter such as MsTest, xUnit, NUnit, Chutzpah (for JavaScript tests using QUnit, Mocha and Jasmine), etc. can be run. Tests can be distributed on multiple agents using this task (version 2).
- task: VSTest@2
  inputs:
    #testSelector: 'testAssemblies' # Options: testAssemblies, testPlan, testRun
    #testAssemblyVer2: | # Required when testSelector == TestAssemblies
    #  **\*test*.dll
    #  !**\*TestAdapter.dll
    #  !**\obj\**
    #testPlan: # Required when testSelector == TestPlan
    #testSuite: # Required when testSelector == TestPlan
    #testConfiguration: # Required when testSelector == TestPlan
    #tcmTestRun: '$(test.RunId)' # Optional
    #searchFolder: '$(System.DefaultWorkingDirectory)'
    #testFiltercriteria: # Optional
    #runOnlyImpactedTests: False # Optional
    #runAllTestsAfterXBuilds: '50' # Optional
    #uiTests: false # Optional
    #vstestLocationMethod: 'version' # Optional. Options: version, location
    #vsTestVersion: 'latest' # Optional. Options: latest, 16.0, 15.0, 14.0, toolsInstaller
    #vstestLocation: # Optional
    #runSettingsFile: # Optional
    #overrideTestrunParameters: # Optional
    #pathtoCustomTestAdapters: # Optional
    #runInParallel: False # Optional
    #runTestsInIsolation: False # Optional
    #codeCoverageEnabled: False # Optional
    #otherConsoleOptions: # Optional
    #distributionBatchType: 'basedOnTestCases' # Optional. Options: basedOnTestCases, basedOnExecutionTime, basedOnAssembly
    #batchingBasedOnAgentsOption: 'autoBatchSize' # Optional. Options: autoBatchSize, customBatchSize
    #customBatchSizeValue: '10' # Required when distributionBatchType == BasedOnTestCases && BatchingBasedOnAgentsOption == CustomBatchSize
    #batchingBasedOnExecutionTimeOption: 'autoBatchSize' # Optional. Options: autoBatchSize, customTimeBatchSize
    #customRunTimePerBatchValue: '60' # Required when distributionBatchType == BasedOnExecutionTime && BatchingBasedOnExecutionTimeOption == CustomTimeBatchSize
    #dontDistribute: False # Optional
    #testRunTitle: # Optional
    #platform: # Optional
    #configuration: # Optional
    #publishRunAttachments: true # Optional
    #failOnMinTestsNotRun: false # Optional
    #minimumExpectedTests: '1' # Optional
    #diagnosticsEnabled: false # Optional
    #collectDumpOn: 'onAbortOnly' # Optional. Options: onAbortOnly, always, never
    #rerunFailedTests: False # Optional
    #rerunType: 'basedOnTestFailurePercentage' # Optional. Options: basedOnTestFailurePercentage, basedOnTestFailureCount
    #rerunFailedThreshold: '30' # Optional
    #rerunFailedTestCasesMaxLimit: '5' # Optional
    #rerunMaxAttempts: '3' # Optional

Arguments
testSelector (Select tests using)
(Required) Test assembly: Use this option to specify one or more test assemblies that contain your tests. You can optionally specify a filter criteria to select only specific tests.
Test plan: Use this option to run tests from your test plan that have an automated test method associated with it. To learn more about how to associate tests with a test case work item, see Associate automated tests with test cases.
Test run: Use this option when you are setting up an environment to run tests from test plans. This option should not be used when running tests in a continuous integration/continuous deployment (CI/CD) pipeline.
Default value: testAssemblies

testAssemblyVer2 (Test files)
(Required) Run tests from the specified files. Ordered tests and webtests can be run by specifying the .orderedtest and .webtest files respectively. To run .webtest, Visual Studio 2017 Update 4 or higher is needed. The file paths are relative to the search folder. Supports multiple lines of minimatch patterns. More Information
Default value: **\\*test*.dll\n!**\\*TestAdapter.dll\n!**\\obj\\**

testPlan (Test plan)
(Required) Select a test plan containing test suites with automated test cases.

testSuite (Test suite)
(Required) Select one or more test suites containing automated test cases. Test case work items must be associated with an automated test method. Learn more.

testConfiguration (Test configuration)
(Required) Select Test Configuration.

tcmTestRun (Test Run)
(Optional) Test run based selection is used when triggering automated test runs from test plans. This option cannot be used for running tests in the CI/CD pipeline.

searchFolder (Search folder)
(Required) Folder to search for the test assemblies.

testFiltercriteria (Test filter criteria)
(Optional) Additional criteria to filter tests from Test assemblies. For example: Priority=1|Name=MyTestMethod. More information

runOnlyImpactedTests (Run only impacted tests)
(Optional) Automatically select, and run only the tests needed to validate the code change. More information

runAllTestsAfterXBuilds (Number of builds after which all tests should be run)
(Optional) Number of builds after which to automatically run all tests. Test Impact Analysis stores the mapping between test cases and source code. It is recommended to regenerate the mapping by running all tests, on a regular basis.

uiTests (Test mix contains UI tests)
(Optional) To run UI tests, ensure that the agent is set to run in interactive mode with autologon enabled. Setting up an agent to run interactively must be done before queueing the build/release. Checking this box does not configure the agent in interactive mode automatically. This option in the task is to only serve as a reminder to configure the agent appropriately to avoid failures. Hosted Windows agents from the VS 2015 and 2017 pools can be used to run UI tests.

vstestLocationMethod (Select test platform using)
(Optional) Specify which test platform should be used.

vsTestVersion (Test platform version)
(Optional) The version of Visual Studio test to use. If latest is specified it chooses Visual Studio 2017 or Visual Studio 2015 depending on what is installed. Visual Studio 2013 is not supported. To run tests without needing Visual Studio on the agent, use the Installed by tools installer option in the UI or toolsInstaller in YAML. Be sure to include the 'Visual Studio Test Platform Installer' task to acquire the test platform from NuGet.

vstestLocation (Path to vstest.console.exe)
(Optional) Specify the path to VSTest.

runSettingsFile (Settings file)
(Optional) Path to runsettings or testsettings file to use with the tests. Starting with Visual Studio 15.7, it is recommended to use runsettings for all types of tests. To learn more about converting a .testsettings file to a .runsettings file, see this topic.

overrideTestrunParameters (Override test run parameters)
(Optional) Override parameters defined in the TestRunParameters section of the runsettings file or the Properties section of the testsettings file. For example: -key1 value1 -key2 value2. Note: Properties specified in the testsettings file can be accessed via the TestContext using Visual Studio 2017 Update 4 or higher.

pathtoCustomTestAdapters (Path to custom test adapters)
(Optional) Directory path to custom test adapters. Adapters residing in the same folder as the test assemblies are automatically discovered.

runInParallel (Run tests in parallel on multi-core machines)
(Optional) If set, tests will run in parallel leveraging available cores of the machine. This will override the MaxCpuCount if specified in your runsettings file. Click here to learn more about how tests are run in parallel.

runTestsInIsolation (Run tests in isolation)
(Optional) Runs the tests in an isolated process. This makes the vstest.console.exe process less likely to be stopped on an error in the tests, but tests might run slower. This option currently cannot be used when running with the multi-agent job setting.

codeCoverageEnabled (Code coverage enabled)
(Optional) Collect code coverage information from the test run.

otherConsoleOptions (Other console options)
(Optional) Other console options that can be passed to vstest.console.exe, as documented here. These options are not supported and will be ignored when running tests using the Multi agent parallel setting of an agent job or when running tests using the Test plan option. The options can be specified using a settings file instead.

distributionBatchType (Batch tests)
(Optional) A batch is a group of tests. A batch of tests runs its tests at the same time and results are published for the batch. If the job in which the task runs is set to use multiple agents, each agent picks up any available batches of tests to run in parallel.
Based on the number of tests and agents: Simple batching based on the number of tests and agents participating in the test run.
Based on past running time of tests: This batching considers past running time to create batches of tests such that each batch has approximately equal running time.
Based on test assemblies: Tests from an assembly are batched together.
Default value: basedOnTestCases

batchingBasedOnAgentsOption (Batch options)
(Optional) Simple batching based on the number of tests and agents participating in the test run. When the batch size is automatically determined, each batch contains (total number of tests / number of agents) tests. If a batch size is specified, each batch will contain the specified number of tests.
Default value: autoBatchSize

customBatchSizeValue (Number of tests per batch)
(Required) Specify batch size
Default value: 10

batchingBasedOnExecutionTimeOption (Batch options)
(Optional) This batching considers past running time to create batches of tests such that each batch has approximately equal running time. Quick running tests will be batched together, while longer running tests may belong to a separate batch. When this option is used with the multi-agent job setting, total test time is reduced to a minimum.
Default value: autoBatchSize

customRunTimePerBatchValue (Running time (sec) per batch)
(Required) Specify the running time (sec) per batch
Default value: 60

dontDistribute (Replicate tests instead of distributing when multiple agents are used in the job)
(Optional) Choosing this option will not distribute tests across agents when the task is running in a multi-agent job. Each of the selected test(s) will be repeated on each agent. The option is not applicable when the agent job is configured to run with no parallelism or with the multi-config option.
Default value: False

testRunTitle (Test run title)
(Optional) Provide a name for the test run.

platform (Build platform)
(Optional) Build platform against which the tests should be reported. If you have defined a variable for platform in your build task, use that here.

configuration (Build configuration)
(Optional) Build configuration against which the tests should be reported. If you have defined a variable for configuration in your build task, use that here.

publishRunAttachments (Upload test attachments)
(Optional) Opt in/out of publishing run level attachments.
Default value: true

failOnMinTestsNotRun (Fail the task if a minimum number of tests are not run)
(Optional) Use this option to fail the task if a minimum number of tests are not run. This may be useful if any changes to task inputs or underlying test adapter dependencies lead to only a subset of the desired tests being found.
Default value: False

minimumExpectedTests (Minimum # of tests)
(Optional) Specify the minimum # of tests that should be run for the task to succeed. Total tests run is calculated as the sum of passed, failed and aborted tests.
Default value: 1

diagnosticsEnabled (Collect advanced diagnostics in case of catastrophic failures)
(Optional) Use this option to turn on collection of diagnostic data to troubleshoot catastrophic failures such as a test crash. When this option is checked, a sequence XML file is generated and attached to the test run. The sequence file contains information about the sequence in which tests ran, so that a potentially culprit test can be identified.
Default value: false

collectDumpOn (Collect process dump and attach to test run report)
(Optional) Use this option to collect a mini-dump that can be used for further analysis.
On abort only: mini-dump will be collected only when the test run is aborted.
Always: mini-dump will always be collected regardless of whether the test run completes or not.
Never: mini-dump will not be collected regardless of whether the test run completes or not.

rerunFailedTests (Rerun failed tests)
(Optional) Selecting this option will rerun any failed tests until they pass or the maximum # of attempts is reached.
Default value: False

rerunType (Do not rerun if test failures exceed specified threshold)
(Optional) Use this option to avoid rerunning tests when the failure rate crosses the specified threshold. This is applicable if any environment issues lead to massive failures. You can specify % failures with basedOnTestFailurePercentage or # of failed tests as a threshold with basedOnTestFailureCount.
Default value: basedOnTestFailurePercentage

rerunFailedThreshold (% failure)
(Optional) Use this option to avoid rerunning tests when the failure rate crosses the specified threshold. This is applicable if any environment issues lead to massive failures.
Default value: 30

rerunFailedTestCasesMaxLimit (# of failed tests)
(Optional) Use this option to avoid rerunning tests when the number of failed test cases crosses the specified limit. This is applicable if any environment issues lead to massive failures and if rerunType is rerunFailedTestCasesMaxLimit.
Default value: 5

rerunMaxAttempts (Maximum # of attempts)
(Optional) Specify the maximum # of times a failed test should be retried. If a test passes before the maximum # of attempts is reached, it will not be rerun further.
Default value: 3

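As an illustration of how several of these options combine, a hedged example that runs tests in parallel on the agent, collects code coverage, and reruns failed tests up to three times:

- task: VSTest@2
  inputs:
    testSelector: 'testAssemblies'
    testAssemblyVer2: |
      **\*test*.dll
      !**\*TestAdapter.dll
      !**\obj\**
    runInParallel: true
    codeCoverageEnabled: true
    rerunFailedTests: true
    rerunType: 'basedOnTestFailurePercentage'
    rerunFailedThreshold: '30'
    rerunMaxAttempts: '3'
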
Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
How can I run tests that use TestCase as a data source?
To run automated tests that use TestCase as a data source, the following is needed:
1. You must have Visual Studio 2017.6 or higher on the agent machine. Visual Studio Test Platform
Installer task cannot be used to run tests that use TestCase as a data source.
2. Create a PAT that is authorized for the scope “Work Items (full)”.
3. Add a secure Build or Release variable called Test.TestCaseAccessToken with the value set to the PAT
created in the previous step.
I am running into issues when running data-driven xUnit and NUnit tests with some of the task options.
Are there known limitations?
Data-driven tests that use xUnit and NUnit test frameworks have some known limitations and cannot be
used with the following task options:
1. Rerun failed tests.
2. Distributing tests on multiple agents and batching options.
3. Test Impact Analysis
The above limitations are because of how the adapters for these test frameworks discover and report data-
driven tests.
Visual Studio Test Agent Deployment task
11/2/2020

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs are
called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

This task is deprecated in Azure Pipelines and TFS 2018 and later. Use version 2.x or higher of the Visual Studio Test task
together with jobs to run unit and functional tests on the universal agent. For more details, see Testing with unified agents
and jobs.

TFS 2017 and earlier


Use this task to deploy and configure the test agent to run tests on a set of machines. The test agent deployed by this task
can collect data or run distributed tests using the Visual Studio Test task.
Demands and prerequisites
This task requires the target computer to have:
Windows 7 Service Pack 1 or Windows 2008 R2 Service Pack 2 or higher
.NET 4.5 or higher
PSRemoting enabled by running the Enable-PSRemoting PowerShell script
Windows Remote Management (WinRM)
This task uses Windows Remote Management (WinRM) to access on-premises physical computers or virtual computers
that are domain-joined or workgroup-joined.
To set up WinRM for on-premises physical computers or virtual machines
Follow the steps described in domain-joined.
To set up WinRM for Microsoft Azure Virtual Machines
Azure Virtual Machines require WinRM to use the HTTPS protocol. You can use a self-signed Test Certificate. In this case, the automation agent will not validate the authenticity of the certificate as being issued by a trusted certification authority.
Azure Classic Virtual Machines. When you create a classic virtual machine from the Azure portal, the virtual machine is already set up for WinRM over HTTPS, with the default port 5986 already opened in the firewall and a self-signed certificate installed on the machine. These virtual machines can be accessed with no further configuration required. Existing Classic virtual machines can also be selected by using the Azure Resource Group Deployment task.
Azure Resource Group. If you have an Azure Resource Group already defined in the Azure portal, you must configure it to use the WinRM HTTPS protocol. You need to open port 5986 in the firewall, and install a self-signed certificate.
To dynamically deploy Azure Resource Groups that contain virtual machines, use the Azure Resource Group Deployment task. This task has a checkbox named Enable Deployment Prerequisites. Select this to automatically set up the WinRM HTTPS protocol on the virtual machines, open port 5986 in the firewall, and install a test certificate. The virtual machines are then ready for use in the deployment task.

YAML snippet
# Visual Studio test agent deployment
# Deprecated: Instead, use the 'Visual Studio Test' task to run unit and functional tests
- task: DeployVisualStudioTestAgent@2
  inputs:
    testMachines:
    adminUserName:
    adminPassword:
    #winRmProtocol: 'Http' # Options: http, https
    #testCertificate: true # Optional
    machineUserName:
    machinePassword:
    #runAsProcess: false # Optional
    #isDataCollectionOnly: false # Optional
    #testPlatform: '14.0' # Optional. Options: 15.0, 14.0
    #agentLocation: # Optional
    #updateTestAgent: false # Optional

Arguments
Machines
A comma-separated list of machine FQDNs or IP addresses, optionally including the port number. The maximum is 32 machines (or 32 agents). Can be:
- The name of an Azure Resource Group.
- A comma-delimited list of machine names. Example: dbserver.fabrikam.com,dbserver_int.fabrikam.com:5986,192.168.34:5986
- An output variable from a previous task.

Admin Login
The username of either a domain or a local administrative account on the target host(s). This parameter is required when used with a list of machines. It is optional when specifying a machine group and, if specified, overrides the credential settings defined for the machine group.
- Formats such as username, domain\username, machine-name\username, and .\username are supported.
- UPN formats such as [email protected] and built-in system accounts such as NT Authority\System are not supported.

Password
The password for the administrative account specified above. This parameter is required when used with a list of machines. It is optional when specifying a machine group and, if specified, overrides the credential settings defined for the machine group. Consider using a secret variable global to the build or release pipeline to hide the password. Example: $(passwordVariable)

Protocol
The protocol that will be used to connect to the target host, either HTTP or HTTPS.

Agent Configuration - Username
Required. The username that the test agent will use. Must be an account on the test machines that has administrative permissions.
- Formats such as username, domain\username, machine-name\username, and .\username are supported.
- UPN formats such as [email protected] and built-in system accounts such as NT Authority\System are not supported.

Agent Configuration - Password
Required. The password for the Username for the test agent. To protect the password, create a variable and use the "padlock" icon to hide it.

Agent Configuration - Run UI tests
When set, the test agent will run as an interactive process. This is required when interacting with UI elements or starting applications during the tests. For example, Coded UI or Selenium tests that are running on full fidelity browsers will require this option to be set.

Agent Configuration - Enable data collection only
When set, the test agent will return previously collected data and not re-run the tests. At present this is only available for Code Coverage. Also see the FAQ section below.

Advanced - Test agent version
The version of the test agent to use.

Advanced - Test agent location
Optional. The path to the test agent (vstf_testagent.exe) if different from the default path.
- If you use a copy of the test agent located on your local computer or network, specify the path to that instance.
- The location must be accessible by either the build agent (using the identity it is running under) or the test agent (using the identity configured above).
- For Azure test machines, the web location can be used.

Advanced - Update test agent
If set, and the test agent is already installed on the test machines, the task will check if a new version of the test agent is available.

Control options
See Control options

The task supports a maximum of 32 machines/agents.

Supported scenarios
Use this task for:
Running automated tests against on-premises standard environments
Running automated tests against existing Azure environments
Running automated tests against newly provisioned Azure environments
The supported options for these scenarios are:
TFS
On-premises and Azure Pipelines
Build and release agents
Hosted and on-premises agents are supported.
The agent must be able to communicate with all test machines. If the test machines are on-premises behind a
firewall, an Azure Pipelines Microsoft-hosted agent cannot be used because it will not be able to communicate
with the test machines.
The agent must have Internet access to download test agents. If this is not the case, the test agent must be
manually downloaded, uploaded to a network location accessible to the agent, and the Test Agent Location
parameter used to specify the location. The user must manually check for new versions of the agent and update
the test machines.
Continuous integration/continuous deployment workflows
Build/deploy/test tasks are supported in both build and release workflows.
Machine group configuration
Only Windows-based machines are supported inside a machine group for build/deploy/test tasks. Linux,
macOS, or other platforms are not supported inside a machine group.
Installing any version of Visual Studio on any of the test machines is not supported.
Installing any older version of the test agent on any of the test machines is not supported.
Test machine topologies
Azure-based test machines are fully supported, both existing test machines and newly provisioned test
machines.
Machines with the test agent installed must have network access to the TFS instance in use. Network-isolated
test machines are not supported.
Domain-joined test machines are supported.
Workgroup-joined test machines must use HTTPS authentication configured during machine group creation.
Usage Error Conditions
Using the same test machines across different machine groups, and running builds (with any build/deploy/test
tasks) in parallel against those machine groups is not supported.
Cancelling an in-progress build or release that contains any build/deploy/test tasks is not supported. If you do
cancel, behavior of subsequent builds may be unpredictable.
Cancelling an ongoing test run queued through build/deploy/test tasks is not supported.
Configuring the test agent and running tests as a non-administrator, or by using a service account, is not
supported.
Running tests for Universal Windows Platform apps is not supported. Use the Visual Studio Test task to run
these tests.
Example
Testing in Continuous Integration and Continuous Deployment Workflows
More information
Using the Visual Studio Agent Deployment task on machines not connected to the internet
Set up automated testing for your builds
Source code for this task
Related tasks
Visual Studio Test
Azure File Copy
Windows Machine File Copy
PowerShell on Target Machines
Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
When would I use the Enable Data Collection Only option?
An example would be in a client-server application model, where you deploy the test agent on the servers and use another
task to deploy the test agent to test machines. This enables you to collect data from both server and client machines
without triggering the execution of tests on the server machines.
How do I create an Azure Resource Group for testing?
See Using the Azure Portal to manage your Azure resources and Azure Resource Manager - Creating a Resource Group
and a VNET.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.

Help and support


See our troubleshooting page
Get advice on Stack Overflow, and get support via our Support page
CocoaPods task
6/2/2020

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to run CocoaPods pod install.
CocoaPods is the dependency manager for Swift and Objective-C Cocoa projects. This task optionally runs pod repo update and then runs pod install.

Demands
None

YAML snippet
# CocoaPods
# Install CocoaPods dependencies for Swift and Objective-C Cocoa projects
- task: CocoaPods@0
  inputs:
    #workingDirectory: # Optional
    forceRepoUpdate:
    #projectDirectory: # Optional

Arguments
cwd (Working directory)
(Optional) Specify the working directory in which to execute this task. If left empty, the repository directory will be used.
Argument alias: workingDirectory

forceRepoUpdate (Force repo update)
(Required) Selecting this option will force running 'pod repo update' before install.
Default value: false

projectDirectory (Project directory)
(Optional) Optionally specify the path to the root of the project directory. If left empty, the project specified in the Podfile will be used. If no project is specified, then a search for an Xcode project will be made. If more than one Xcode project is found, an error will occur.

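For example, a minimal usage that forces a repo update and points at a project folder (the directory name is a placeholder) might look like:

- task: CocoaPods@0
  inputs:
    forceRepoUpdate: true
    projectDirectory: 'MyApp'   # placeholder: root of the project directory containing the Podfile
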
Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
What other kinds of apps can I build?
Build and deploy your app examples
What other kinds of build tasks are available?
Build and release tasks catalog
How do we protect our codebase from build breaks?
Git: Improve code quality with branch policies with an option to require that code builds before it can be
merged to a branch. For GitHub repositories, similar policies are available in GitHub's repository settings
under Branches.
TFVC: Use gated check-in.
How do I modify other parts of my build pipeline?
Build and release tasks to run tests, scripts, and a wide range of other processes.
Specify build options such as specifying how completed builds are named, building multiple configurations,
creating work items on failure.
Supported source repositories to pick the source of the build and modify options such as how the agent
workspace is cleaned.
Set build triggers to modify how your CI builds run and to specify scheduled builds.
Specify build retention policies to automatically delete old builds.
I selected parallel multi-configuration, but only one build is running at a time.
If you're using Azure Pipelines, you might need more parallel jobs. See Parallel jobs in Azure Pipelines.
How do I see what has changed in my build pipeline?
View the change history of your build pipeline
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Conda Environment task
11/2/2020

Azure Pipelines
Use this task to create and activate a Conda environment.

NOTE
This task has been deprecated. Use conda directly in the bash task or batch script task as an alternative.

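As an illustration of the recommended alternative, a minimal sketch that drives Conda from script steps instead of this task (it assumes a Microsoft-hosted agent, where the CONDA variable points at the Conda installation; the environment and package names are placeholders):

steps:
# Conda is installed on Microsoft-hosted agents but is not on PATH by default,
# so prepend its bin folder using the CONDA variable.
- bash: echo "##vso[task.prependpath]$CONDA/bin"
  displayName: Add conda to PATH

- bash: |
    conda create --yes --quiet --name myEnvironment python=3 pytest
    source activate myEnvironment
    pytest
  displayName: Create environment and run tests with conda
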
This task will create a Conda environment and activate it for subsequent build tasks.
If the task finds an existing environment with the same name, the task will simply reactivate it. This is possible on
self-hosted agents. To recreate the environment and reinstall any of its packages, set the "Clean the environment"
option.
Running with the "Update to the latest Conda" option will attempt to update Conda before creating or activating
the environment. If you are running a self-hosted agent and have configured a Conda installation to work with the
task, this may result in your Conda installation being updated.

NOTE
Microsoft-hosted agents won't have Conda in their PATH by default. You will need to run this task in order to use Conda.

After running this task, PATH will contain the binary directory for the activated environment, followed by the
binary directories for the Conda installation itself. You can run scripts as subsequent build tasks that run Python,
Conda, or the command-line utilities from other packages you install. For example, you can run tests with pytest or
upload a package to Anaconda Cloud with the Anaconda client.

TIP
After running this task, the environment will be "activated," and packages you install by calling conda install will get
installed to this environment.

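For instance, once this task has activated an environment, subsequent script steps can call the installed tools directly; the environment and package names below are only illustrative:

- task: CondaEnvironment@1
  inputs:
    createCustomEnvironment: true
    environmentName: 'myEnvironment'
    packageSpecs: 'python=3 pytest'

- script: pytest --junitxml=TEST-results.xml
  displayName: Run tests with pytest from the activated environment
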
Demands
None

Prerequisites
A Microsoft-hosted agent, or a self-hosted agent with Anaconda or Miniconda installed.
If using a self-hosted agent, you must either add the conda executable to PATH or set the CONDA environment
variable to the root of the Conda installation.

YAML snippet
# Conda environment
# This task is deprecated. Use `conda` directly in script to work with Anaconda environments.
- task: CondaEnvironment@1
  inputs:
    #createCustomEnvironment: # Optional
    #environmentName: # Required when createCustomEnvironment == True
    #packageSpecs: 'python=3' # Optional
    #updateConda: true # Optional
    #installOptions: # Optional
    #createOptions: # Optional
    #cleanEnvironment: # Optional

Arguments
createCustomEnvironment (Create custom environment)
(Optional) Setting this to true creates or reactivates a Conda environment instead of using the base environment. This is recommended for self-hosted agents.
Default value: false

environmentName (Environment name)
(Required) Name of the Conda environment to create and activate.

packageSpecs (Package specs)
(Optional) Space-delimited list of packages to install when creating the environment.
Default value: python=3

updateConda (Update to the latest Conda)
(Optional) Update Conda to the latest version. This applies to the Conda installation found in PATH or at the path specified by the CONDA environment variable.
Default value: true

installOptions (Other options for conda install)
(Optional) Space-delimited list of additional arguments to pass to the conda install command.

createOptions (Other options for conda create)
(Optional) Space-delimited list of other options to pass to the conda create command.

cleanEnvironment (Clean the environment)
(Optional) Delete the environment and recreate it if it already exists. If not selected, the task will reactivate an existing environment.
Default value: false

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
How can I configure a self-hosted agent to use this task?
You can use this task either with a full Anaconda installation or a Miniconda installation. If using a self-hosted agent, you must add the conda executable to PATH. Alternatively, you can set the CONDA environment variable to the root of the Conda installation -- that is, the directory you specify as the "prefix" when installing Conda.
Package: Maven Authenticate

Provides credentials for Azure Artifacts feeds and external Maven repositories in the current user's settings.xml file.

YAML snippet
# Provides credentials for Azure Artifacts feeds and external Maven repositories.
- task: MavenAuthenticate@0
  #inputs:
    #artifactsFeeds: MyFeedInOrg1, MyFeedInOrg2 # Optional
    #mavenServiceConnections: serviceConnection1, serviceConnection2 # Optional

Arguments
artifactsFeeds (My feeds): (Optional) Comma-separated list of Azure Artifacts feed names to authenticate with Maven. If you only need authentication for external Maven repositories, leave this field blank.

mavenServiceConnections (Feeds from external organizations): (Optional) Comma-separated list of Maven service connection names from external organizations to authenticate with Maven. If you only need authentication for Azure Artifacts feeds, leave this field blank.

Examples
Authenticate Maven feeds inside your organization
In this example, we authenticate two Azure Artifacts feeds within our organization.
Task definition

- task: MavenAuthenticate@0
  displayName: 'Maven Authenticate'
  inputs:
    artifactsFeeds: MyFeedInOrg1,MyFeedInOrg2

The MavenAuthenticate task updates the settings.xml file present in the agent user's .m2 directory located at
{user.home}/.m2/settings.xml to add two entries inside the <servers> element.

settings.xml
<servers>
  <server>
    <id>MyFeedInOrg1</id>
    <username>AzureDevOps</username>
    <password>****</password>
  </server>
  <server>
    <id>MyFeedInOrg2</id>
    <username>AzureDevOps</username>
    <password>****</password>
  </server>
</servers>

Set the repositories in your project's pom.xml to use the same <id> as the name specified in the task so that Maven can authenticate correctly.
pom.xml
Project scoped feed

<repository>
  <id>MyFeedInOrg1</id>
  <url>https://pkgs.dev.azure.com/OrganizationName/ProjectName/_packaging/MyProjectScopedFeed1/Maven/v1</url>
  <releases>
    <enabled>true</enabled>
  </releases>
  <snapshots>
    <enabled>true</enabled>
  </snapshots>
</repository>

Organization scoped feed

<repository>
  <id>MyFeedInOrg1</id>
  <url>https://pkgs.dev.azure.com/OrganizationName/_packaging/MyOrgScopedFeed1/Maven/v1</url>
  <releases>
    <enabled>true</enabled>
  </releases>
  <snapshots>
    <enabled>true</enabled>
  </snapshots>
</repository>

The Artifacts feed URL may or may not contain the project. A URL for a project scoped feed must contain the project, and a URL for an organization scoped feed must not contain the project. Learn more.
Authenticate Maven feeds outside your organization
In this example, we authenticate two external Maven repositories.
Task definition

- task: MavenAuthenticate@0
  displayName: 'Maven Authenticate'
  inputs:
    mavenServiceConnections: central,MavenOrg

The MavenAuthenticate task updates the settings.xml file present in the agent user's .m2 directory located at
{user.home}/.m2/settings.xml to add two entries inside the <servers> element.

settings.xml
<servers>
  <server>
    <id>central</id>
    <username>centralUsername</username>
    <password>****</password>
  </server>
  <server>
    <id>MavenOrg</id>
    <username>mavenOrgUsername</username>
    <password>****</password>
  </server>
</servers>

Set the repositories in your project's pom.xml to use the same <id> as the name specified in the task so that Maven can authenticate correctly.
pom.xml

<repository>
  <id>central</id>
  <url>https://repo1.maven.org/maven2/</url>
  <releases>
    <enabled>true</enabled>
  </releases>
  <snapshots>
    <enabled>true</enabled>
  </snapshots>
</repository>

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Where is the settings.xml file which contains the authenticated repositories located?
The Maven Authenticate task searches for the settings.xml in the current user's home directory. For Linux and Mac, the path is $HOME/.m2/settings.xml; for Windows, the path is %USERPROFILE%\.m2\settings.xml. If the settings.xml file doesn't exist, a new one will be created at that path.
We use the mvn -s switch to specify our own settings.xml file. How do we authenticate Azure Artifacts feeds there?
The Maven Authenticate task doesn't have access to a custom settings.xml file specified using the -s switch. To add Azure Artifacts authentication for your custom settings.xml, add a server element inside your settings.xml like this:

<server>
  <id>feedName</id> <!-- Set this to the id of the <repository> element inside your pom.xml file. -->
  <username>AzureDevOps</username>
  <password>${env.SYSTEM_ACCESSTOKEN}</password>
</server>

The access token variable can be set in your pipelines using these instructions.
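For example, a minimal sketch (the settings.xml path and Maven goal are hypothetical placeholders) of a step that runs Maven with a custom settings file and makes the access token available to the ${env.SYSTEM_ACCESSTOKEN} placeholder might look like this:

# Sketch: run Maven with a custom settings.xml; the access token must be mapped explicitly.
- script: mvn -s $(Build.SourcesDirectory)/custom-settings.xml package
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)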
npm task

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to install and publish npm packages.

NOTE
Moving forward, the npm Authenticate task is the recommended way to use authenticated feeds within a pipeline.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are
called phases.

YAML snippet
# npm
# Install and publish npm packages, or run an npm command. Supports npmjs.com and authenticated registries like Azure Artifacts.
- task: Npm@1
  inputs:
    #command: 'install' # Options: install, publish, custom
    #workingDir: # Optional
    #verbose: # Optional
    #customCommand: # Required when command == Custom
    #customRegistry: 'useNpmrc' # Optional. Options: useNpmrc, useFeed
    #customFeed: # Required when customRegistry == UseFeed
    #customEndpoint: # Optional
    #publishRegistry: 'useExternalRegistry' # Optional. Options: useExternalRegistry, useFeed
    #publishFeed: # Required when publishRegistry == UseFeed
    #publishPackageMetadata: true # Optional
    #publishEndpoint: # Required when publishRegistry == UseExternalRegistry

Install npm packages


Demands
npm
Arguments
command (Command): (Required) npm command to run. Select install here.

workingDir (Working folder that contains package.json): Path to the folder containing the target package.json and .npmrc files. Select the folder, not the file, for example "/packages/mypackage".

verbose (Verbose logging): Select to print more information to the console on run.

customRegistry (Registries to use): You can either commit a .npmrc file to your source code repository and set its path, or select a registry from Azure Artifacts.
useNpmrc: Select this option to use feeds specified in a .npmrc file you've checked into source control. If no .npmrc file is present, the task defaults to using packages directly from npmjs. Use the "Credentials for registries outside this organization/collection" setting to inject credentials you've provided as an npm service connection into your .npmrc as the build runs.
useFeed: Select this option to use one Azure Artifacts feed in the same organization/collection as the build.
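
For example, a minimal sketch of an install step that restores packages from an Azure Artifacts feed in the same organization (the feed name and working folder are hypothetical placeholders) might look like this:

# Sketch: restore npm packages from an Azure Artifacts feed in this organization.
- task: Npm@1
  inputs:
    command: 'install'
    workingDir: 'src/my-package'   # hypothetical folder containing package.json
    customRegistry: 'useFeed'
    customFeed: 'my-org-feed'      # hypothetical feed name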

Publish npm packages


Demands
npm
Arguments
command (Command): (Required) npm command to run. Select publish here.

workingDir (Working folder that contains package.json): Path to the folder containing the target package.json and .npmrc files. Select the folder, not the file, for example "/packages/mypackage".

verbose (Verbose logging): Select to print more information to the console on run.

customRegistry (Registries to use): You can either commit a .npmrc file to your source code repository and set its path, or select a registry from Azure Artifacts.
useNpmrc: Select this option to use feeds specified in a .npmrc file you've checked into source control. If no .npmrc file is present, the task defaults to using packages directly from npmjs. Use the "Credentials for registries outside this organization/collection" setting to inject credentials you've provided as an npm service connection into your .npmrc as the build runs.
useFeed: Select this option to use one Azure Artifacts feed in the same organization/collection as the build.

See the YAML snippet above for the publish-specific inputs (publishRegistry, publishFeed, publishPackageMetadata, publishEndpoint).
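
For example, a minimal sketch of a publish step that pushes to an Azure Artifacts feed in the same organization (the feed name and working folder are hypothetical placeholders) might look like this:

# Sketch: publish an npm package to an Azure Artifacts feed in this organization.
- task: Npm@1
  inputs:
    command: 'publish'
    workingDir: 'src/my-package'   # hypothetical folder containing package.json
    publishRegistry: 'useFeed'
    publishFeed: 'my-org-feed'     # hypothetical feed name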

Custom npm command


Demands
npm
Arguments
command (Command): (Required) npm command to run. Select custom here.

workingDir (Working folder that contains package.json): Path to the folder containing the target package.json and .npmrc files. Select the folder, not the file, for example "/packages/mypackage".

customCommand (Command and arguments): (Required) Custom command to run, e.g. "dist-tag ls mypackage". If your arguments contain double quotes ("), escape them with a backslash (\) and surround the escaped string with double quotes ("). Example: to run npm run myTask -- --users='{"foo":"bar"}', provide this input: run myTask -- --users="{\"foo\":\"bar\"}".

customRegistry (Registries to use): You can either commit a .npmrc file to your source code repository and set its path, or select a registry from Azure Artifacts.
useNpmrc: Select this option to use feeds specified in a .npmrc file you've checked into source control. If no .npmrc file is present, the task defaults to using packages directly from npmjs. Use the "Credentials for registries outside this organization/collection" setting to inject credentials you've provided as an npm service connection into your .npmrc as the build runs.
useFeed: Select this option to use one Azure Artifacts feed in the same organization/collection as the build.
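
For example, a minimal sketch of a custom command step (the working folder is a hypothetical placeholder) might look like this:

# Sketch: run an arbitrary npm command through the task.
- task: Npm@1
  inputs:
    command: 'custom'
    workingDir: 'src/my-package'   # hypothetical folder containing package.json
    customCommand: 'dist-tag ls mypackage'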

Examples
Build: gulp
Build your Node.js app with gulp

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Where can I learn npm commands and arguments?
npm docs
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
Package: npm Authenticate task (for task runners)

Azure Pipelines
Use this task to provide npm credentials to an .npmrc file in your repository for the scope of the build. This
enables npm, as well as npm task runners like gulp and Grunt, to authenticate with private registries.

YAML snippet
# npm authenticate
# Don't use this task if you're also using the npm task. Provides npm credentials to an .npmrc file in your repository for the scope of the build. This enables npm and npm task runners like gulp and Grunt to authenticate with private registries.
- task: npmAuthenticate@0
  inputs:
    #workingFile:
    #customEndpoint: # Optional

Arguments
workingFile (.npmrc file to authenticate): Path to the .npmrc file that specifies the registries you want to work with. Select the file, not the folder, for example "/packages/mypackage.npmrc".

customEndpoint (Credentials for registries outside this organization/collection): (Optional) Comma-separated list of npm service connection names for registries outside this organization/collection. The specified .npmrc file must contain registry entries corresponding to the service connections. If you only need registries in this organization/collection, leave this blank; the build's credentials are used automatically.

Examples
Restore npm packages for your project from a registry within your organization
If the only authenticated registries you use are Azure Artifacts registries in your organization, you only need to
specify the path to an .npmrc file to the npmAuthenticate task.
.npmrc

registry=https://pkgs.dev.azure.com/{organization}/_packaging/{feed}/npm/registry/
always-auth=true

npm
- task: npmAuthenticate@0
  inputs:
    workingFile: .npmrc
- script: npm ci
# ...
- script: npm publish

Restore and publish npm packages outside your organization


If your .npmrc contains Azure Artifacts registries from a different organization or uses a third-party authenticated package repository, you'll need to set up npm service connections and specify them in the customEndpoint input. Registries within your Azure Artifacts organization will also be automatically authenticated.
.npmrc

registry=https://pkgs.dev.azure.com/{organization}/{project}/_packaging/{feed}/npm/registry/
@{scope}:registry=https://pkgs.dev.azure.com/{otherorganization}/_packaging/{feed}/npm/registry/
@{otherscope}:registry=https://{thirdPartyRepository}/npm/registry/
always-auth=true

The registry URL pointing to an Azure Artifacts feed may or may not contain the project. A URL for a project scoped feed must contain the project, and a URL for an organization scoped feed must not contain the project. Learn more.
npm

- task: npmAuthenticate@0
  inputs:
    workingFile: .npmrc
    customEndpoint: OtherOrganizationNpmConnection, ThirdPartyRepositoryNpmConnection
- script: npm ci
# ...
- script: npm publish --registry https://pkgs.dev.azure.com/{otherorganization}/_packaging/{feed}/npm/registry/

OtherOrganizationNpmConnection and ThirdPartyRepositoryNpmConnection are the names of npm service connections that have been configured and authorized for use in your pipeline, and have URLs that match those in the specified .npmrc file.
Control options

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
How does this task work?
This task searches the specified .npmrc file for registry entries, then appends authentication details for the
discovered registries to the end of the file. For all registries in the current organization/collection, the build's
credentials are used. For registries in a different organization or hosted by a third-party, the registry URIs will be
compared to the URIs of the npm service connections specified by the customEndpoint input, and the
corresponding credentials will be used. The .npmrc file will be reverted to its original state at the end of the
pipeline execution.
When in my pipeline should I run this task?
This task must run before you use npm, or an npm task runner, to install or push packages to an authenticated
npm repository such as Azure Artifacts. There are no other ordering requirements.
I have multiple npm projects. Do I need to run this task for each .npmrc file?
This task will only add authentication details to one .npmrc file at a time. If you need authentication for multiple
.npmrc files, you can run the task multiple times, once for each .npmrc file. Alternately, consider creating an
.npmrc file that specifies all registries used by your projects, running npmAuthenticate on this .npmrc file, then
setting an environment variable to designate this .npmrc file as the npm per-user configuration file.

- task: npmAuthenticate@0
  inputs:
    workingFile: $(agent.tempdirectory)/.npmrc
- script: echo ##vso[task.setvariable variable=NPM_CONFIG_USERCONFIG]$(agent.tempdirectory)/.npmrc
- script: npm ci
  workingDirectory: project1
- script: npm ci
  workingDirectory: project2

My agent is behind a web proxy. Will npmAuthenticate set up npm/gulp/Grunt to use my proxy?
The answer is no. While this task itself will work behind a web proxy your agent has been configured to use, it does not configure npm or npm task runners to use the proxy.
To do so, you can either:
- Set the environment variables http_proxy / https_proxy and optionally no_proxy to your proxy settings. See npm config for details. Note that these are commonly used variables which other non-npm tools (e.g. curl) may also use.
- Add the proxy settings to the npm configuration, either manually, by using npm config set, or by setting environment variables prefixed with NPM_CONFIG_.

Caution:
npm task runners may not be compatible with all methods of proxy configuration supported by npm.

Specify the proxy with a command line flag when calling npm

- script: npm ci --https-proxy $(agent.proxyurl)

If your proxy requires authentication, you may need to add an additional build step to construct an authenticated proxy URI.

- script: node -e "let url = require('url'); let u = url.parse(`$(agent.proxyurl)`); u.auth = `$(agent.proxyusername):$(agent.proxypassword)`; console.log(`##vso[task.setvariable variable=proxyAuthUri;issecret=true]` + url.format(u))"
- script: npm publish --https-proxy $(proxyAuthUri)
NuGet task

Version 2.
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018

NOTE
The NuGet Authenticate task is the new recommended way to authenticate with Azure Artifacts and other NuGet
repositories.

Use this task to install and update NuGet package dependencies, or package and publish NuGet packages. Uses
NuGet.exe and works with .NET Framework apps. For .NET Core and .NET Standard apps, use the .NET Core task.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

If your code depends on NuGet packages, make sure to add this step before your Visual Studio Build step. Also
make sure to clear the deprecated Restore NuGet Packages checkbox in that step.
If you are working with .NET Core or .NET Standard, use the .NET Core task, which has full support for all package scenarios and is currently supported by dotnet.

TIP
This version of the NuGet task uses NuGet 4.1.0 by default. To select a different version of NuGet, use the Tool Installer.

YAML snippet
# NuGet
# Restore, pack, or push NuGet packages, or run a NuGet command. Supports NuGet.org and authenticated feeds like Azure Artifacts and MyGet. Uses NuGet.exe and works with .NET Framework apps. For .NET Core and .NET Standard apps, use the .NET Core task.
- task: NuGetCommand@2
  inputs:
    #command: 'restore' # Options: restore, pack, push, custom
    #restoreSolution: '**/*.sln' # Required when command == Restore
    #feedsToUse: 'select' # Options: select, config
    #vstsFeed: # Required when feedsToUse == Select
    #includeNuGetOrg: true # Required when feedsToUse == Select
    #nugetConfigPath: # Required when feedsToUse == Config
    #externalFeedCredentials: # Optional
    #noCache: false
    #disableParallelProcessing: false
    restoreDirectory:
    #verbosityRestore: 'Detailed' # Options: quiet, normal, detailed
    #packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg;!$(Build.ArtifactStagingDirectory)/**/*.symbols.nupkg' # Required when command == Push
    #nuGetFeedType: 'internal' # Required when command == Push # Options: internal, external
    #publishVstsFeed: # Required when command == Push && NuGetFeedType == Internal
    #publishPackageMetadata: true # Optional
    #allowPackageConflicts: # Optional
    #publishFeedCredentials: # Required when command == Push && NuGetFeedType == External
    #verbosityPush: 'Detailed' # Options: quiet, normal, detailed
    #packagesToPack: '**/*.csproj' # Required when command == Pack
    #configuration: '$(BuildConfiguration)' # Optional
    #packDestination: '$(Build.ArtifactStagingDirectory)' # Optional
    #versioningScheme: 'off' # Options: off, byPrereleaseNumber, byEnvVar, byBuildNumber
    #includeReferencedProjects: false # Optional
    #versionEnvVar: # Required when versioningScheme == ByEnvVar
    #majorVersion: '1' # Required when versioningScheme == ByPrereleaseNumber
    #minorVersion: '0' # Required when versioningScheme == ByPrereleaseNumber
    #patchVersion: '0' # Required when versioningScheme == ByPrereleaseNumber
    #packTimezone: 'utc' # Required when versioningScheme == ByPrereleaseNumber # Options: utc, local
    #includeSymbols: false # Optional
    #toolPackage: # Optional
    #buildProperties: # Optional
    #basePath: # Optional, specify path to nuspec files
    #verbosityPack: 'Detailed' # Options: quiet, normal, detailed
    #arguments: # Required when command == Custom

Arguments
command (Command): The NuGet command to run. Select 'Custom' to add arguments or to use a different command. Options: restore, pack, push, custom

restoreSolution (Path to solution, packages.config, or project.json): The path to the solution, packages.config, or project.json file that references the packages to be restored.

feedsToUse (Feeds to use): You can either select a feed from Azure Artifacts and/or NuGet.org, or commit a nuget.config file to your source code repository and set its path here. Options: select, config

vstsFeed (Use packages from this Azure Artifacts/TFS feed): Include the selected feed in the generated NuGet.config. You must have Azure Artifacts installed and licensed to select a feed here.

includeNuGetOrg (Use packages from NuGet.org): Include NuGet.org in the generated NuGet.config. Default value: true. Required when feedsToUse == Select.

nugetConfigPath (Path to NuGet.config): The NuGet.config in your repository that specifies the feeds from which to restore packages. Required when feedsToUse == Config.

externalFeedCredentials (Credentials for feeds outside this organization/collection): Credentials to use for external registries located in the selected NuGet.config. This is the name of your NuGet service connection. For feeds in this organization/collection, leave this blank; the build's credentials are used automatically.

noCache (Disable local cache): Prevents NuGet from using packages from local machine caches.

disableParallelProcessing (Disable parallel processing): Prevents NuGet from installing multiple packages in parallel.

restoreDirectory (Destination directory): Specifies the folder in which packages are installed. If no folder is specified, packages are restored into a packages/ folder alongside the selected solution, packages.config, or project.json.

verbosityRestore (Verbosity): Specifies the amount of detail displayed in the output. Options: Quiet, Normal, Detailed

packagesToPush: Pattern of .nupkg files to push (see the default value in the YAML snippet above). Required when command == Push.

nuGetFeedType (Target feed location): Specifies whether the target feed is an internal feed/collection or an external NuGet server. Options: internal, external

publishVstsFeed (Target feed): Select a feed hosted in this account. You must have Azure Artifacts installed and licensed to select a feed here.

publishPackageMetadata (Publish pipeline metadata): If you continually publish a set of packages and only change the version number of the subset of packages that changed, use this option.

allowPackageConflicts: Allows the task to report success even if some of your packages are rejected with 409 Conflict errors; otherwise, if NuGet.exe encounters a conflict, the task will fail. This option will not work and publish will fail if you are within a proxy environment.

publishFeedCredentials (NuGet server): The NuGet service connection that contains the external NuGet server's credentials.

verbosityPush (Verbosity): Specifies the amount of detail displayed in the output. Options: Quiet, Normal, Detailed

packagesToPack (Path to csproj or nuspec file(s) to pack): Pattern to search for csproj directories to pack. You can separate multiple patterns with a semicolon, and you can make a pattern negative by prefixing it with '!'. Example: **\\*.csproj;!**\\*.Tests.csproj

configuration (Configuration to package): When using a csproj file, this specifies the configuration to package.

packDestination (Package folder): Folder where packages will be created. If empty, packages will be created at the source root.

versioningScheme (Automatic package versioning): Cannot be used with "include referenced projects". If you choose 'Use the date and time', this will generate a SemVer-compliant version formatted as X.Y.Z-ci-datetime, where you choose X, Y, and Z. If you choose 'Use an environment variable', you must select an environment variable and ensure it contains the version number you want to use. If you choose 'Use the build number', this will use the build number to version your package. Note: Under Options, set the build number format to be $(BuildDefinitionName)_$(Year:yyyy).$(Month).$(DayOfMonth)$(Rev:.r). Options: off, byPrereleaseNumber, byEnvVar, byBuildNumber

includeReferencedProjects: (Optional) Include referenced projects when packing. Cannot be used with automatic package versioning.

versionEnvVar (Environment variable): Enter the variable name without $, $env, or %.

majorVersion (Major): The 'X' in version X.Y.Z.

minorVersion (Minor): The 'Y' in version X.Y.Z.

patchVersion (Patch): The 'Z' in version X.Y.Z.

packTimezone (Time zone): Specifies the desired time zone used to produce the version of the package. Selecting UTC is recommended if you're using hosted build agents, as their date and time might differ. Options: utc, local

includeSymbols (Create symbols package): Specifies that the package contains sources and symbols. When used with a .nuspec file, this creates a regular NuGet package file and the corresponding symbols package.

toolPackage (Tool Package): Determines if the output files of the project should be in the tool folder.

buildProperties (Additional build properties): Specifies a list of token=value pairs, separated by semicolons, where each occurrence of $token$ in the .nuspec file will be replaced with the given value. Values can be strings in quotation marks.

basePath (Base path): The base path of the files defined in the nuspec file.

verbosityPack (Verbosity): Specifies the amount of detail displayed in the output. Options: Quiet, Normal, Detailed

arguments (Command and arguments): The command and arguments which will be passed to NuGet.exe for execution. If NuGet 3.5 or later is used, authenticated commands like list, restore, and publish against any feed in this organization/collection that the Project Collection Build Service has access to will be automatically authenticated.

Control options

Versioning schemes
For byPrereleaseNumber, the version will be set to whatever you choose for major, minor, and patch, plus the date and time in the format yyyymmdd-hhmmss.
For byEnvVar, the version will be set to the value of the environment variable you provide, e.g. MyVersion (no $, just the environment variable name). Make sure the environment variable is set to a proper SemVer, e.g. 1.2.3 or 1.2.3-beta1.
For byBuildNumber, the version will be set to the build number; ensure that your build number is a proper SemVer, e.g. 1.0.$(Rev:r). If you select byBuildNumber, the task will extract a dotted version, e.g. 1.2.3.4, and use only that, dropping any label. To use the build number as is, you should use byEnvVar as described above and set the environment variable to BUILD_BUILDNUMBER.
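
For instance, a minimal sketch of packing with byEnvVar (assuming a pipeline variable named PackageVersion that holds a valid SemVer value such as 1.2.3-beta1) might look like this:

# Sketch: pack using a version taken from an environment variable.
# The variable "PackageVersion" is a hypothetical pipeline variable defined elsewhere.
- task: NuGetCommand@2
  inputs:
    command: 'pack'
    packagesToPack: '**/*.csproj'
    versioningScheme: 'byEnvVar'
    versionEnvVar: 'PackageVersion'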

Examples
Restore
Restore all your solutions with packages from a selected feed.

# Restore from a project scoped feed in the same organization
- task: NuGetCommand@2
  inputs:
    command: 'restore'
    feedsToUse: 'select'
    vstsFeed: 'my-project/my-project-scoped-feed'
    includeNuGetOrg: false
    restoreSolution: '**/*.sln'

# Restore from an organization scoped feed in the same organization
- task: NuGetCommand@2
  inputs:
    command: 'restore'
    feedsToUse: 'select'
    vstsFeed: 'my-organization-scoped-feed'
    restoreSolution: '**/*.sln'

# Restore from a feed in a different organization
- task: NuGetCommand@2
  inputs:
    command: 'restore'
    feedsToUse: config
    nugetConfigPath: ./nuget.config
    restoreSolution: '**/*.sln'
    externalFeedCredentials: 'MyServiceConnectionName'
    noCache: true
  continueOnError: true

# Restore from feed(s) set in nuget.config
- task: NuGetCommand@2
  inputs:
    command: 'restore'
    feedsToUse: 'config'
    nugetConfigPath: 'nuget.config'

Package
Create a NuGet package in the destination folder.

# Package a project
- task: NuGetCommand@2
  inputs:
    command: 'pack'
    packagesToPack: '**/*.csproj'
    packDestination: '$(Build.ArtifactStagingDirectory)'

Push

NOTE
Pipeline artifacts are downloaded to the System.ArtifactsDirectory directory. The packagesToPush value can be set to $(System.ArtifactsDirectory)/**/*.nupkg in your release pipeline.

Push/Publish a package to a feed defined in your NuGet.config.

# Push a project
- task: NuGetCommand@2
  inputs:
    command: 'push'
    packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg'
    feedsToUse: 'config'
    nugetConfigPath: '$(Build.WorkingDirectory)/NuGet.config'

Push/Publish a package to a project scoped feed

# Push a project
- task: NuGetCommand@2
  inputs:
    command: 'push'
    feedsToUse: 'select'
    vstsFeed: 'my-project/my-project-scoped-feed'
    publishVstsFeed: 'myTestFeed'

Push/Publish a package to NuGet.org


# Push a project
- task: NuGetCommand@2
  inputs:
    command: 'push'
    feedsToUse: 'config'
    includeNugetOrg: 'true'

Custom
Run any other NuGet command besides the default ones: pack, push and restore.

# List local NuGet resources
- task: NuGetCommand@2
  displayName: 'list locals'
  inputs:
    command: custom
    arguments: 'locals all -list'

Open source
Check out the Azure Pipelines and Team Foundation Server out-of-the-box tasks on GitHub. Feedback and
contributions are welcome.

FAQ
Why should I check in a NuGet.Config?
Checking a NuGet.Config into source control ensures that a key piece of information needed to build your project,
the location of its packages, is available to every developer that checks out your code.
However, for situations where a team of developers works on a large range of projects, it's also possible to add an
Azure Artifacts feed to the global NuGet.Config on each developer's machine. In these situations, using the "Feeds I
select here" option in the NuGet task replicates this configuration.
Where can I learn about Azure Artifacts?
Azure Artifacts Documentation
Where can I learn more about NuGet?
NuGet Docs Overview
NuGet Create Packaging and publishing
NuGet Consume Setting up a solution to get dependencies
What other kinds of apps can I build?
Build and deploy your app examples
What other kinds of build tasks are available?
Build and release tasks catalog
How do we protect our codebase from build breaks?
Git: Improve code quality with branch policies with an option to require that code builds before it can be
merged to a branch. For GitHub repositories, similar policies are available in GitHub's repository settings
under Branches.
TFVC: Use gated check-in.
How do I modify other parts of my build pipeline?
Build and release tasks to run tests, scripts, and a wide range of other processes.
Specify build options such as specifying how completed builds are named, building multiple configurations,
creating work items on failure.
Supported source repositories to pick the source of the build and modify options such as how the agent
workspace is cleaned.
Set build triggers to modify how your CI builds run and to specify scheduled builds.
Specify build retention policies to automatically delete old builds.
I selected parallel multi-configuration, but only one build is running at a time.
If you're using Azure Pipelines, you might need more parallel jobs. See Parallel jobs in Azure Pipelines.
How do I see what has changed in my build pipeline?
View the change history of your build pipeline
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Package: NuGet Authenticate

Azure Pipelines
Configure NuGet tools to authenticate with Azure Artifacts and other NuGet repositories.

IMPORTANT
This task is only compatible with NuGet >= 4.8.0.5385, dotnet >= 2.1.400, or MSBuild >= 15.8.166.59604

YAML snippet
# Authenticate nuget.exe, dotnet, and MSBuild with Azure Artifacts and optionally other repositories
- task: NuGetAuthenticate@0
  #inputs:
    #nuGetServiceConnections: MyOtherOrganizationFeed, MyExternalPackageRepository # Optional
    #forceReinstallCredentialProvider: false # Optional

Arguments
nuGetServiceConnections (Service connection credentials for feeds outside this organization): (Optional) Comma-separated list of NuGet service connection names for feeds outside this organization/collection to additionally set up. If you only need feeds in this organization/collection, leave this blank; the build's credentials are used automatically.

forceReinstallCredentialProvider (Reinstall the credential provider even if already installed): (Optional) Reinstall the credential provider to the user profile directory even if already installed. This may upgrade (or potentially downgrade) the credential provider.

Control options

Examples
Restore and push NuGet packages within your organization
If all of the Azure Artifacts feeds you use are in the same organization as your pipeline, you can use the NuGetAuthenticate task without specifying any inputs. For project scoped feeds that are in a different project than the one the pipeline runs in, you must manually give that project and feed access to the build service of the pipeline's project.
nuget.config
<configuration>
  <packageSources>
    <!--
      Any Azure Artifacts feeds within your organization will automatically be authenticated. Both
      dev.azure.com and visualstudio.com domains are supported.
      Project scoped feed URL includes the project, organization scoped feed URL does not.
    -->
    <add key="MyProjectFeed1" value="https://pkgs.dev.azure.com/{organization}/{project}/_packaging/{feed}/nuget/v3/index.json" />
    <add key="MyProjectFeed2" value="https://{organization}.pkgs.visualstudio.com/{project}/_packaging/{feed}/nuget/v3/index.json" />
    <add key="MyOtherProjectFeed1" value="https://pkgs.dev.azure.com/{organization}/{project}/_packaging/{feed@view}/nuget/v3/index.json" />
    <add key="MyOrganizationFeed1" value="https://pkgs.dev.azure.com/{organization}/_packaging/{feed}/nuget/v3/index.json" />
  </packageSources>
</configuration>

nuget.exe

- task: NuGetAuthenticate@0
  inputs:
    nuGetServiceConnections: OtherOrganizationFeedConnection, ThirdPartyRepositoryConnection
- task: NuGetToolInstaller@1 # Optional if nuget.exe >= 4.8.5385 is already on the path
  inputs:
    versionSpec: '*'
    checkLatest: true
- script: nuget restore
# ...
- script: nuget push -ApiKey AzureArtifacts -Source "MyProjectFeed1" MyProject.*.nupkg

dotnet

- task: NuGetAuthenticate@0
  inputs:
    nuGetServiceConnections: OtherOrganizationFeedConnection, ThirdPartyRepositoryConnection
- task: UseDotNet@2 # Optional if the .NET Core SDK is already installed
- script: dotnet restore
# ...
- script: dotnet nuget push --api-key AzureArtifacts --source https://pkgs.dev.azure.com/{organization}/_packaging/{feed1}/nuget/v3/index.json MyProject.*.nupkg

In the above examples, OtherOrganizationFeedConnection and ThirdPartyRepositoryConnection are the names of NuGet service connections that have been configured and authorized for use in your pipeline, and have URLs that match those in your nuget.config or command line argument.
The package source URL pointing to an Azure Artifacts feed may or may not contain the project. A URL for a project scoped feed must contain the project, and a URL for an organization scoped feed must not contain the project. Learn more.
Restore and push NuGet packages outside your organization
If you use Azure Artifacts feeds from a different organization or use a third-party authenticated package
repository, you'll need to set up NuGet service connections and specify them in the nuGetServiceConnections
input. Feeds within your Azure Artifacts organization will also be automatically authenticated.
nuget.config
<configuration>
  <packageSources>
    <!-- Any Azure Artifacts feeds within your organization will automatically be authenticated -->
    <add key="MyProjectFeed1" value="https://pkgs.dev.azure.com/{organization}/{project}/_packaging/{feed}/nuget/v3/index.json" />
    <add key="MyOrganizationFeed" value="https://pkgs.dev.azure.com/{organization}/_packaging/{feed}/nuget/v3/index.json" />
    <!-- Any package source listed here whose URL matches the URL of a service connection in
         nuGetServiceConnections will also be authenticated.
         The key name here does not need to match the name of the service connection. -->
    <add key="OtherOrganizationFeed" value="https://pkgs.dev.azure.com/{otherorganization}/_packaging/{feed}/nuget/v3/index.json" />
    <add key="ThirdPartyRepository" value="https://{thirdPartyRepository}/index.json" />
  </packageSources>
</configuration>

nuget.exe

- task: NuGetAuthenticate@0
  inputs:
    nuGetServiceConnections: OtherOrganizationFeedConnection, ThirdPartyRepositoryConnection
- task: NuGetToolInstaller@1 # Optional if nuget.exe >= 4.8.5385 is already on the path
  inputs:
    versionSpec: '*'
    checkLatest: true
- script: nuget restore
# ...
- script: nuget push -ApiKey AzureArtifacts -Source "MyProjectFeed1" MyProject.*.nupkg

dotnet

- task: NuGetAuthenticate@0
  inputs:
    nuGetServiceConnections: OtherOrganizationFeedConnection, ThirdPartyRepositoryConnection
- task: UseDotNet@2 # Optional if the .NET Core SDK is already installed
- script: dotnet restore
# ...
- script: dotnet nuget push --api-key AzureArtifacts --source "MyProjectFeed1" MyProject.*.nupkg

OtherOrganizationFeedConnection and ThirdPartyRepositoryConnection are the names of NuGet service connections that have been configured and authorized for use in your pipeline, and have URLs that match those in your nuget.config or command line argument.
The package source URL pointing to an Azure Artifacts feed may or may not contain the project. A URL for a project scoped feed must contain the project, and a URL for an organization scoped feed must not contain the project. Learn more.

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
What tools are compatible with this task?
This task will configure tools that support NuGet cross platform plugins. Today, that includes nuget.exe, dotnet, and
recent versions of MSBuild with built-in support for restoring NuGet packages.
Specifically, this task will configure:
nuget.exe, version 4.8.5385 or higher
dotnet / .NET Core SDK, version 2.1.400 or higher
MSBuild, version 15.8.166.59604 or higher
However, upgrading to the latest stable version is recommended if you encounter any issues.
I get "A task was canceled" errors during a package restore. What should I do?
Known issues in NuGet and in the Azure Artifacts Credential Provider can cause this type of error and updating to
the latest nuget may help.
A known issue in some versions of nuget/dotnet can cause this error, especially during large restores on resource
constrained machines. This issue is fixed in NuGet 5.2, as well as .NET Core SDK 2.1.80X and 2.2.40X. If you are
using an older version, try upgrading your version of NuGet or dotnet. The .NET Core Tool Installer task can be
used to install a newer version of the .NET Core SDK.
There are also known issues with the Azure Artifacts Credential Provider (installed by this task), including artifacts-
credprovider/#77 and artifacts-credprovider/#108. If you experience these issues, ensure you have the latest
credential provider by setting the input forceReinstallCredentialProvider to true in the NuGet Authenticate task.
This will also ensure your credential provider is automatically updated as issues are resolved.
If neither of the above resolves the issue, please enable Plugin Diagnostic Logging and report the issue to NuGet
and the Azure Artifacts Credential Provider.
How is this task different than the NuGetCommand and DotNetCoreCLI tasks?
This task configures nuget.exe, dotnet, and MSBuild to authenticate with Azure Artifacts or other repositories that
require authentication. After this task runs, you can then invoke the tools in a later step (either directly or via a
script) to restore or push packages.
The NuGetCommand and DotNetCoreCLI tasks require using the task to restore or push packages, as
authentication to Azure Artifacts is only configured within the lifetime of the task. This can prevent you from
restoring or pushing packages within your own script. It may also prevent you from passing specific command
line arguments to the tool.
The NuGetAuthenticate task is the recommended way to use authenticated feeds within a pipeline.
When in my pipeline should I run this task?
This task must run before you use a NuGet tool to restore or push packages to an authenticated package source
such as Azure Artifacts. There are no other ordering requirements. For example, this task can safely run either
before or after a NuGet or .NET Core tool installer task.
How do I configure a NuGet package source that uses ApiKey ("NuGet API keys"), such as nuget.org?
Some package sources such as nuget.org use API keys for authentication when pushing packages, rather than
username/password credentials. Due to limitations in NuGet, this task cannot be used to set up a NuGet service
connection that uses an API key.
Instead:
1. Configure a secret variable containing the ApiKey.
2. Perform the package push using nuget push -ApiKey $(myNuGetApiKey) or dotnet nuget push --api-key $(myNuGetApiKey), assuming you named the variable myNuGetApiKey.
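
For example, a minimal sketch (assuming a secret pipeline variable named myNuGetApiKey and a hypothetical package path) might look like this:

# Sketch: push to nuget.org with an API key held in a secret variable.
# The variable "myNuGetApiKey" and the package path are hypothetical placeholders.
- script: dotnet nuget push $(Build.ArtifactStagingDirectory)/MyProject.1.2.3.nupkg --api-key $(myNuGetApiKey) --source https://api.nuget.org/v3/index.json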

My agent is behind a web proxy. Will NuGetAuthenticate set up nuget.exe, dotnet, and MSBuild to use my
proxy?
No. While this task itself will work behind a web proxy your agent has been configured to use, it does not
configure NuGet tools to use the proxy.
To do so, you can either:
Set the environment variable http_proxy and optionally no_proxy to your proxy settings. See NuGet CLI
environment variables for details. Please understand that these are commonly used variables which other
non-NuGet tools (e.g. curl) may also use.

Caution:
The http_proxy and no_proxy variables are case-sensitive on Linux and Mac operating systems and
must be lowercase. Attempting to use an Azure Pipelines variable to set the environment variable will
not work, as it will be converted to uppercase. Instead, set the environment variables on the self-hosted
agent's machine and restart the agent.

Add the proxy settings to the user-level nuget.config file, either manually or using nuget config -set as
described in the nuget.config reference documentation.

Caution:
The proxy settings (such as http_proxy ) must be added to the user-level config. They will be ignored if
specified in a different nuget.config file.

How do I debug if I have issues with this task?


To get verbose logs from the pipeline, set the pipeline variable system.debug to true.
How does this task work?
This task installs the Azure Artifacts Credential Provider into the NuGet plugins directory if it is not already
installed.
It then sets environment variables such as VSS_NUGET_URI_PREFIXES , VSS_NUGET_ACCESSTOKEN , and
VSS_NUGET_EXTERNAL_FEED_ENDPOINTS to configure the credential provider. These variables remain set for the lifetime
of the job.
When restoring or pushing packages, a NuGet tool executes the credential provider, which uses the above
variables to determine if it should return credentials back to the tool.
See the credential provider documentation for more details.
Package: PyPI Publisher task (deprecated)

Azure Pipelines
Use this task to create and upload an sdist or wheel to a PyPI-compatible index using Twine.
This task builds an sdist package by running python setup.py sdist using the Python instance in PATH. It can optionally build a universal wheel in addition to the sdist. Then, it will upload the package to a PyPI index using twine. The task will install the wheel and twine packages with python -m pip install --user.

Deprecated
WARNING
The PyPI Publisher task has been deprecated. You can now publish PyPI packages using twine authentication and custom
scripts.

Demands
None

Prerequisites
A generic service connection for a PyPI index.

TIP
To configure a new generic service connection, go to Settings -> Services -> New service connection -> Generic.
Connection Name: A friendly connection name of your choice
Server URL: PyPI package server (for example: https://upload.pypi.org/legacy/)
User name: username for your PyPI account
Password/Token Key: password for your PyPI account

YAML snippet
# PyPI publisher
# Create and upload an sdist or wheel to a PyPI-compatible index using Twine
- task: PyPIPublisher@0
  inputs:
    pypiConnection:
    packageDirectory:
    #alsoPublishWheel: false # Optional

Arguments
PyPI connection: A generic service connection for connecting to the package index.

Python package directory: The directory of the Python package to be created and published, where setup.py is present.

Also publish a wheel: Select whether to create and publish a universal wheel package (platform independent) in addition to an sdist package.

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
Package: Python Pip Authenticate

Version 1.* | Other versions


Azure Pipelines
Provides authentication for the pip client that can be used to install Python distributions.

YAML snippet
# Python pip authenticate V1
# Authentication task for the pip client used for installing Python distributions
- task: PipAuthenticate@1
  inputs:
    #artifactFeeds: 'MyFeed, MyTestFeed' # Optional
    #pythonDownloadServiceConnections: pypiOrgFeed, OtherOrganizationFeed # Optional
    #onlyAddExtraIndex: false # Optional

Arguments
artifactFeeds (My feeds): (Optional) Comma-separated list of Azure Artifacts feeds to authenticate with pip.

pythonDownloadServiceConnections (Feeds from external organizations): (Optional) Comma-separated list of pip service connection names from external organizations to authenticate with pip.

onlyAddExtraIndex (Don't set primary index URL): (Optional) Boolean value; if set to true, forces pip to get distributions from the official Python registry first. Default value: false

Control options

Examples
Download python distributions from Azure Artifacts feeds without consulting the official Python registry
In this example, we are setting up authentication for downloading from private Azure Artifacts feeds. The authenticate task creates the environment variables PIP_INDEX_URL and PIP_EXTRA_INDEX_URL that are required to download the distributions, and populates them with credentials it generates for the provided Artifacts feeds. 'HelloTestPackage' has to be present in either 'myTestFeed1' or 'myTestFeed2'; otherwise, the install will fail.
For project scoped feeds that are in a different project than the one the pipeline runs in, you must manually give that project and feed access to the build service of the pipeline's project.
- task: PipAuthenticate@1
  displayName: 'Pip Authenticate'
  inputs:
    # Provide list of feed names which you want to authenticate.
    # Project scoped feeds must include the project name in addition to the feed name.
    artifactFeeds: 'project1/myTestFeed1, myTestFeed2'

# Use command line tool to 'pip install'.
- script: |
    pip install HelloTestPackage

Download python distributions from Azure Artifacts feeds, consulting the official Python registry first
In this example, we are setting up authentication for downloading from a private Azure Artifacts feed, but PyPI is consulted first. The authenticate task creates an environment variable PIP_EXTRA_INDEX_URL which contains the auth credentials required to download the distributions. 'HelloTestPackage' will be downloaded from the authenticated feeds only if it's not present in PyPI.
For project scoped feeds that are in a different project than the one the pipeline runs in, you must manually give that project and feed access to the build service of the pipeline's project.

- task: PipAuthenticate@1
  displayName: 'Pip Authenticate'
  inputs:
    # Provide list of feed names which you want to authenticate.
    # Project scoped feeds must include the project name in addition to the feed name.
    artifactFeeds: 'project1/myTestFeed1, myTestFeed2'
    # Setting this variable to "true" will force pip to get distributions from the official python registry first and fall back to the feeds mentioned above if distributions are not found there.
    onlyAddExtraIndex: true

# Use command line tool to 'pip install'.
- script: |
    pip install HelloTestPackage

Download python distributions from other private python servers

In this example, we are setting up authentication for downloading from an external python distribution server. Create a pip service connection entry for the external service. The authenticate task uses the service connection to create an environment variable PIP_INDEX_URL which contains the auth credentials required to download the distributions. 'HelloTestPackage' has to be present in the index behind the 'pypitest' service connection; otherwise, the install will fail. If you want PyPI to be consulted first, set onlyAddExtraIndex to true.

- task: PipAuthenticate@1
  displayName: 'Pip Authenticate'
  inputs:
    # In this case, name of the service connection is "pypitest".
    pythonDownloadServiceConnections: pypitest

# Use command line tool to 'pip install'.
- script: |
    pip install HelloTestPackage

Task versions
Task: Pip Authenticate
Task version 1.*: Available in Azure Pipelines; not supported in TFS.
Task version 0.*: Available in Azure Pipelines; not supported in TFS.

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
When in my pipeline should I run this task?
This task must run before you use pip to download python distributions from an authenticated package source such as Azure Artifacts. There are no other ordering requirements. Multiple invocations of this task will not stack credentials; every run of the task will erase any previously stored credentials.
My agent is behind a web proxy. Will PipAuthenticate set up pip to use my proxy?
No. While this task itself will work behind a web proxy your agent has been configured to use, it does not configure
pip to use the proxy.
To do so, you can either:
Set the environment variables http_proxy, https_proxy, and optionally no_proxy to your proxy settings. See Pip's official guidelines for details. These are commonly used variables which other non-Python tools (e.g. curl) may also use.

Caution: The http_proxy and no_proxy variables are case-sensitive on Linux and Mac operating
systems and must be lowercase. Attempting to use an Azure Pipelines variable to set the environment
variable will not work, as it will be converted to uppercase. Instead, set the environment variables on the
self-hosted agent's machine and restart the agent.

Add the proxy settings to the pip config file using the proxy key.
Use the --proxy command-line option to specify the proxy in the form [user:passwd@]proxy.server:port.
Package: Python Twine Upload Authenticate

Version 1.* | Other versions


Azure Pipelines
Provides twine credentials to a PYPIRC_PATH environment variable for the scope of the build. This enables you to
publish Python packages to feeds with twine from your build.

YAML snippet
# Python twine upload authenticate V1
# Authenticate for uploading Python distributions using twine. Add '-r FeedName/EndpointName --config-file $(PYPIRC_PATH)' to your twine upload command. For feeds present in this organization, use the feed name as the repository (-r). Otherwise, use the endpoint name defined in the service connection.
- task: TwineAuthenticate@1
  inputs:
    #artifactFeed: MyTestFeed # Optional
    #pythonUploadServiceConnection: OtherOrganizationFeed # Optional

Arguments
artifactFeed (My feed): (Optional) An Azure Artifacts feed name to authenticate with twine.

pythonUploadServiceConnection (Feed from external organizations): (Optional) A twine service connection name from an external organization to authenticate with twine. The credentials stored in the endpoint must have package upload permissions.

Control options

Examples
Publish python distribution to Azure Artifacts feed
In this example, we are setting authentication for publishing to a private Azure Artifacts Feed. The authenticate task
creates a .pypirc file which contains the auth credentials required to publish a distribution to the feed.
# Install python distributions like wheel, twine etc
- script: |
    pip install wheel
    pip install twine

# Build the python distribution from source
- script: |
    python setup.py bdist_wheel

- task: TwineAuthenticate@1
  displayName: 'Twine Authenticate'
  inputs:
    # In this case, name of the feed is 'myTestFeed' in the project 'myTestProject'. Project is needed because the feed is project scoped.
    artifactFeed: myTestProject/myTestFeed

# Use command line script to 'twine upload'; use -r to pass the repository name and --config-file to pass the environment variable set by the authenticate task.
- script: |
    python -m twine upload -r myTestFeed --config-file $(PYPIRC_PATH) dist/*.whl

The 'artifactFeed' input will contain the project and the feed name if the feed is project scoped. If the feed is
organization scoped, only the feed name must be provided. Learn more.
Publish python distribution to the official python registry
In this example, we are setting up authentication for publishing to the official python registry. Create a twine service connection entry for pypi. The authenticate task uses that service connection to create a .pypirc file which contains the auth credentials required to publish the distribution.

# Install python distributions like wheel, twine etc
- script: |
    pip install wheel
    pip install twine

# Build the python distribution from source
- script: |
    python setup.py bdist_wheel

- task: TwineAuthenticate@1
  displayName: 'Twine Authenticate'
  inputs:
    # In this case, name of the service connection is "pypitest".
    pythonUploadServiceConnection: pypitest

# Use command line script to 'twine upload'; use -r to pass the repository name and --config-file to pass the environment variable set by the authenticate task.
- script: |
    python -m twine upload -r "pypitest" --config-file $(PYPIRC_PATH) dist/*.whl

Task versions
Task: Twine Authenticate
Task version 1.*: Available in Azure Pipelines; not supported in TFS.
Task version 0.*: Available in Azure Pipelines; not supported in TFS.


Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
When in my pipeline should I run this task?
This task must run before you use twine to upload python distributions to an authenticated package source such as Azure Artifacts. There are no other ordering requirements. Multiple invocations of this task will not stack credentials; every run of the task will erase any previously stored credentials.
My agent is behind a web proxy. Will TwineAuthenticate set up twine to use my proxy?
No. While this task itself will work behind a web proxy your agent has been configured to use, it does not
configure twine to use the proxy.
Universal Package task

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
Use this task to download, or package and publish Universal Packages.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

YAML snippet
# Universal packages
# Download or publish Universal Packages
- task: UniversalPackages@0
  inputs:
    #command: 'download' # Options: download, publish
    #downloadDirectory: '$(System.DefaultWorkingDirectory)' # Required when command == Download
    #feedsToUse: 'internal' # Options: internal, external
    #externalFeedCredentials: # Optional
    #vstsFeed: # Required when feedsToUse == Internal
    #vstsFeedPackage: # Required when feedsToUse == Internal
    #vstsPackageVersion: # Required when feedsToUse == Internal
    #feedDownloadExternal: # Required when feedsToUse == External
    #packageDownloadExternal: # Required when feedsToUse == External
    #versionDownloadExternal: # Required when feedsToUse == External
    #publishDirectory: '$(Build.ArtifactStagingDirectory)' # Required when command == Publish
    #feedsToUsePublish: 'internal' # Options: internal, external
    #publishFeedCredentials: # Required when feedsToUsePublish == External
    #vstsFeedPublish: # Required when feedsToUsePublish == Internal
    #publishPackageMetadata: true # Optional
    #vstsFeedPackagePublish: # Required when feedsToUsePublish == Internal
    #feedPublishExternal: # Required when feedsToUsePublish == External
    #packagePublishExternal: # Required when feedsToUsePublish == External
    #versionOption: 'patch' # Options: major, minor, patch, custom
    #versionPublish: # Required when versionOption == Custom
    packagePublishDescription:
    #verbosity: 'None' # Options: none, trace, debug, information, warning, error, critical
    #publishedPackageVar: # Optional

Arguments

command (Command)
    The command to run.
    Options: download, publish

downloadDirectory (Destination directory)
    Folder path where the package's contents are downloaded.

feedsToUse (Feed location)
    You can select a feed from either this collection or any other collection in Azure Artifacts.
    Options: internal, external

externalFeedCredentials (Credentials for feeds outside this organization (collection))
    Credentials to use for external registries located in the selected NuGet.config. For feeds in this
    organization (collection), leave this blank; the build's credentials are used automatically.

vstsFeed (Use packages from this Azure Artifacts/TFS feed)
    Include the selected feed. You must have Azure Artifacts installed and licensed to select a feed here.

vstsFeedPackage (Package name)
    Name of the package to download.

vstsPackageVersion (Package version)
    Select the package version or use a variable containing the version to download. This entry can also be
    a wildcard expression such as * to get the highest version, 1.* to get the highest version with major
    version 1, or 1.2.* to get the highest patch release with major version 1 and minor version 2.

feedDownloadExternal (Feed)
    Specifies the name of an external feed from which to download.

packageDownloadExternal (Package name)
    Specifies the package name to download.

versionDownloadExternal (Package version)
    Select the package version or use a variable containing the version to download. This entry can also be
    a wildcard expression, such as * , to get the highest version, 1.* to get the highest version with major
    version 1, or 1.2.* to get the highest patch release with major version 1 and minor version 2.

publishDirectory (Path to files to publish)
    Specifies the path to the list of files to be published.

feedsToUsePublish (Feed location)
    You can select a feed from either this collection or any other collection in Azure Artifacts.
    Options: internal, external

publishFeedCredentials (organization/collection connection)
    Credentials to use for external feeds.

vstsFeedPublish (Destination Feed)
    Specifies the project and feed's name/GUID to publish to.

publishPackageMetadata (Publish pipeline metadata)
    Associate this build and release pipeline's metadata (run #, source code information) with the package.

vstsFeedPackagePublish (Package name)
    Select a package ID to publish or type a new package ID, if you've never published a version of this
    package before. Package names must be lower case and can only use letters, numbers, and dashes (-).

feedPublishExternal (Feed)
    External feed name to publish to.

packagePublishExternal (Package name)
    Package name.

versionOption (Version)
    Select a version increment strategy, or select Custom to input your package version manually. For new
    packages, the first version is 1.0.0 if you select "Next major", 0.1.0 if you select "Next minor", and
    0.0.1 if you select "Next patch". For more information, see the Semantic Versioning spec.
    Options: major, minor, patch, custom

versionPublish (Custom version)
    Select the custom package version.

packagePublishDescription (Description)
    Description of the contents of this package and the changes made in this version of the package.

verbosity (Verbosity)
    Specifies the amount of detail displayed in the output.
    Options: None, Trace, Debug, Information, Warning, Error, Critical

publishedPackageVar (Package Output Variable)
    Provide a name for the variable that contains the published package name and version.
Control options

Example
The simplest way to get started with the Universal Package task is to use the Pipelines task editor to generate the
YAML. You can then copy the generated code into your project's azure-pipelines.yml file. This example demonstrates
how to quickly generate the YAML using a pipeline that builds a GatsbyJS progressive web app (PWA).
Universal Packages are a useful way to both encapsulate and version a web app. Packaging a web app into a
Universal Package enables quick rollbacks to a specific version of your site and eliminates the need to build the site
in the deployment pipeline.
This example pipeline demonstrates how to fetch a tool from a feed within your project. The pipeline uses the
Universal Package task to download the tool, runs a build, and then uses the Universal Package task again to
publish the entire compiled GatsbyJS PWA to a feed as a versioned Universal Package.
Download a package with the Universal Package task
The second task in the sample project uses the Universal Package task to fetch a tool, imagemagick, from a feed
that is within a different project in the same organization. The tool, imagemagick, is required by the subsequent
build step to resize images.
1. Add the Universal Package task by clicking the plus icon, typing "universal" in the search box, and clicking the
"Add" button to add the task to your pipeline.

2. Click the newly added Universal Package task and set the Command to Download.
3. Choose the Destination directory to use for the tool download.
4. Select a source Feed that contains the tool, set the Package name, and choose the Version of the imagemagick
tool from the source Feed.
5. After completing the fields, click View YAML to see the generated YAML.
6. The Universal Package task builder generates simplified YAML that contains non-default values. Copy the
generated YAML into your azure-pipelines.yml file at the root of your project's git repo as defined here.

# Download Universal Package

steps:
- task: UniversalPackages@0
  displayName: 'Universal download'
  inputs:
    downloadDirectory: Application
    vstsFeed: '00000000-0000-0000-0000-000000000000/00000000-0000-0000-0000-000000000001'
    vstsFeedPackage: imagemagick
    vstsPackageVersion: 1.0.0
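Because vstsPackageVersion accepts wildcard expressions, the same download can instead float to the newest release
within a major version. A minimal variant of the sketch above, assuming the same placeholder feed GUIDs:

# Download the highest 1.x version of the tool instead of pinning 1.0.0
steps:
- task: UniversalPackages@0
  displayName: 'Universal download (latest 1.x)'
  inputs:
    downloadDirectory: Application
    vstsFeed: '00000000-0000-0000-0000-000000000000/00000000-0000-0000-0000-000000000001'
    vstsFeedPackage: imagemagick
    vstsPackageVersion: '1.*'   # wildcard: highest version with major version 1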

Publish a package with the Universal Package task


The last step in this sample pipeline uses the Universal Package task to upload the production-ready Gatsby PWA
that was produced by the Run gatsby build step to a feed as a versioned Universal Package. Once in a feed, you
have a permanent copy of your complete site that can be deployed to a hosting provider and started with
gatsby serve .

1. Add another Universal Package task to the end of the pipeline by clicking the plus icon, typing "universal" in the
search box, and clicking the "Add" button to add the task to your pipeline. This task gathers all of the
production-ready assets produced by the Run gatsby build step, produces a versioned Universal Package, and
publishes the package to a feed.

2. Set the Command to Publish .


3. Set Path to file(s) to publish to the directory containing your GatsbyJS project's package.json .
4. Choose a destination feed, a package name, and set your versioning strategy.
5. After completing the required fields, click View YAML .
6. Copy the resulting YAML into your azure-pipelines.yml file as before. The YAML for this sample project is
shown below.

# Publish Universal Package

steps:
- task: UniversalPackages@0
  displayName: 'Universal publish'
  inputs:
    command: publish
    publishDirectory: Application
    vstsFeedPublish: '00000000-0000-0000-0000-000000000000/00000000-0000-0000-0000-000000000002'
    vstsFeedPackagePublish: mygatsbysite
    packagePublishDescription: 'A test package'
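If you want explicit control over the published version instead of the default increment strategy, the same publish
step can use a custom version and expose the published package name and version to later steps. This is a minimal
sketch; the version number '1.2.0' and the variable name 'publishedPackage' are illustrative assumptions, not part
of the sample project:

# Publish with an explicit custom version (illustrative sketch)
- task: UniversalPackages@0
  displayName: 'Universal publish (custom version)'
  inputs:
    command: publish
    publishDirectory: Application
    vstsFeedPublish: '00000000-0000-0000-0000-000000000000/00000000-0000-0000-0000-000000000002'
    vstsFeedPackagePublish: mygatsbysite
    versionOption: custom
    versionPublish: '1.2.0'                 # assumed example version
    packagePublishDescription: 'A test package'
    publishedPackageVar: publishedPackage   # later steps can read $(publishedPackage)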

This example demonstrated how to use the Pipelines task builder to quickly generate the YAML for the Universal
Package task, which can then be placed into your azure-pipelines.yml file. The Universal Package task builder
supports all of the advanced configurations that can be created with Universal Package task's arguments.

NOTE
All feeds created through the classic user interface are project-scoped feeds. For the vstsFeedPublish parameter, you can
also use the project and feed's names instead of their GUIDs like the following: '<projectName>/<feedName>' . See Publish
your Universal packages for more details.

Open-source on GitHub
These tasks are open source on GitHub. Feedback and contributions are welcome.
NuGet Installer task version 0.*
4/10/2020 • 2 minutes to read • Edit Online

Azure Pipelines (deprecated) | TFS 2017 (deprecated in 2017 Update 2)


Use this task to install and update NuGet package dependencies.

Demands
If your code depends on NuGet packages, make sure to add this task before your Visual Studio Build task. Also
make sure to clear the deprecated Restore NuGetPackages checkbox in that task.

Arguments

Path to Solution
    Copy the value from the Solution argument in your Visual Studio Build task and paste it here.

Path to NuGet.config
    If you are using a package source other than NuGet.org, you must check in a NuGet.config file and
    specify the path to it here.

Disable local cache
    Equivalent to nuget restore with the -NoCache option.

NuGet Arguments
    Additional arguments passed to nuget restore.

ADVANCED

Path to NuGet.exe
    (Optional) Path to your own instance of NuGet.exe. If you specify this argument, you must have your own
    strategy to handle authentication.

Control options

Examples
Install NuGet dependencies
You're building a Visual Studio solution that depends on a NuGet feed.

`-- ConsoleApplication1
|-- ConsoleApplication1.sln
|-- NuGet.config
`-- ConsoleApplication1
|-- ConsoleApplication1.csproj

Build tasks

Package: NuGet Installer
    Install your NuGet package dependencies.
    Path to Solution: *.sln
    Path to NuGet.config: ConsoleApplication1/NuGet.config

Build: Visual Studio Build
    Build your solution.
    Solution: *.sln
    Restore NuGet Packages: (Important) Make sure this option is cleared.
NuGet task
11/2/2020 • 10 minutes to read • Edit Online

Version 2.
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018

NOTE
The NuGet Authenticate task is the new recommended way to authenticate with Azure Artifacts and other NuGet
repositories.

Use this task to install and update NuGet package dependencies, or package and publish NuGet packages. Uses
NuGet.exe and works with .NET Framework apps. For .NET Core and .NET Standard apps, use the .NET Core task.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

If your code depends on NuGet packages, make sure to add this step before your Visual Studio Build step. Also
make sure to clear the deprecated Restore NuGet Packages checkbox in that step.
If you are working with .NET Core or .NET Standard, use the .NET Core task, which has full support for all package
scenarios and is currently supported by dotnet.

TIP
This version of the NuGet task uses NuGet 4.1.0 by default. To select a different version of NuGet, use the Tool Installer.
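For example, to pin the pipeline to a specific NuGet version, you can run the NuGet tool installer task before the
NuGet task. A minimal sketch; the '5.x' version range is an illustrative choice:

# Install a specific NuGet version, then restore with it
- task: NuGetToolInstaller@1
  inputs:
    versionSpec: '5.x'   # illustrative version range

- task: NuGetCommand@2
  inputs:
    command: 'restore'
    restoreSolution: '**/*.sln'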

YAML snippet
# NuGet
# Restore, pack, or push NuGet packages, or run a NuGet command. Supports NuGet.org and authenticated feeds
# like Azure Artifacts and MyGet. Uses NuGet.exe and works with .NET Framework apps. For .NET Core and .NET
# Standard apps, use the .NET Core task.
- task: NuGetCommand@2
  inputs:
    #command: 'restore' # Options: restore, pack, push, custom
    #restoreSolution: '**/*.sln' # Required when command == Restore
    #feedsToUse: 'select' # Options: select, config
    #vstsFeed: # Required when feedsToUse == Select
    #includeNuGetOrg: true # Required when feedsToUse == Select
    #nugetConfigPath: # Required when feedsToUse == Config
    #externalFeedCredentials: # Optional
    #noCache: false
    #disableParallelProcessing: false
    restoreDirectory:
    #verbosityRestore: 'Detailed' # Options: quiet, normal, detailed
    #packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg;!$(Build.ArtifactStagingDirectory)/**/*.symbols.nupkg' # Required when command == Push
    #nuGetFeedType: 'internal' # Required when command == Push # Options: internal, external
    #publishVstsFeed: # Required when command == Push && NuGetFeedType == Internal
    #publishPackageMetadata: true # Optional
    #allowPackageConflicts: # Optional
    #publishFeedCredentials: # Required when command == Push && NuGetFeedType == External
    #verbosityPush: 'Detailed' # Options: quiet, normal, detailed
    #packagesToPack: '**/*.csproj' # Required when command == Pack
    #configuration: '$(BuildConfiguration)' # Optional
    #packDestination: '$(Build.ArtifactStagingDirectory)' # Optional
    #versioningScheme: 'off' # Options: off, byPrereleaseNumber, byEnvVar, byBuildNumber
    #includeReferencedProjects: false # Optional
    #versionEnvVar: # Required when versioningScheme == ByEnvVar
    #majorVersion: '1' # Required when versioningScheme == ByPrereleaseNumber
    #minorVersion: '0' # Required when versioningScheme == ByPrereleaseNumber
    #patchVersion: '0' # Required when versioningScheme == ByPrereleaseNumber
    #packTimezone: 'utc' # Required when versioningScheme == ByPrereleaseNumber # Options: utc, local
    #includeSymbols: false # Optional
    #toolPackage: # Optional
    #buildProperties: # Optional
    #basePath: # Optional, specify path to nuspec files
    #verbosityPack: 'Detailed' # Options: quiet, normal, detailed
    #arguments: # Required when command == Custom

Arguments

command (Command)
    The NuGet command to run. Select 'Custom' to add arguments or to use a different command.
    Options: restore, pack, custom, push

restoreSolution (Path to solution, packages.config, or project.json)
    The path to the solution, packages.config, or project.json file that references the packages to be
    restored.

feedsToUse (Feeds to use)
    You can either select a feed from Azure Artifacts and/or NuGet.org, or commit a nuget.config file to
    your source code repository and set its path here.
    Options: select, config

vstsFeed (Use packages from this Azure Artifacts/TFS feed)
    Include the selected feed in the generated NuGet.config. You must have Azure Artifacts installed and
    licensed to select a feed here.

includeNuGetOrg (Use packages from NuGet.org)
    Include NuGet.org in the generated NuGet.config. Default value is true. Required when
    feedsToUse == Select.

nugetConfigPath (Path to NuGet.config)
    The NuGet.config in your repository that specifies the feeds from which to restore packages. Required
    when feedsToUse == Config.

externalFeedCredentials (Credentials for feeds outside this organization/collection)
    Credentials to use for external registries located in the selected NuGet.config. This is the name of
    your NuGet service connection. For feeds in this organization/collection, leave this blank; the build's
    credentials are used automatically.

noCache (Disable local cache)
    Prevents NuGet from using packages from local machine caches.

disableParallelProcessing (Disable parallel processing)
    Prevents NuGet from installing multiple packages in parallel.

restoreDirectory (Destination directory)
    Specifies the folder in which packages are installed. If no folder is specified, packages are restored
    into a packages/ folder alongside the selected solution, packages.config, or project.json.

verbosityRestore (Verbosity)
    Specifies the amount of detail displayed in the output.
    Options: Quiet, Normal, Detailed

packagesToPush (Path to NuGet package(s) to publish)
    The pattern to match, or path to, the nupkg files to be uploaded. Required when command == Push.

nuGetFeedType (Target feed location)
    Specifies whether the target feed is an internal feed/collection or an external NuGet server.
    Options: internal, external

publishVstsFeed (Target feed)
    Select a feed hosted in this account. You must have Azure Artifacts installed and licensed to select a
    feed here.

publishPackageMetadata (Publish pipeline metadata)
    If you continually publish a set of packages and only change the version number of the subset of
    packages that changed, use this option.

allowPackageConflicts
    Allows the task to report success even if some of your packages are rejected with 409 Conflict errors.
    Otherwise, if NuGet.exe encounters a conflict, the task will fail. This option will not work and publish
    will fail if you are within a proxy environment.

publishFeedCredentials (NuGet server)
    The NuGet service connection that contains the external NuGet server's credentials.

verbosityPush (Verbosity)
    Specifies the amount of detail displayed in the output.
    Options: Quiet, Normal, Detailed

packagesToPack (Path to csproj or nuspec file(s) to pack)
    Pattern to search for csproj directories to pack. You can separate multiple patterns with a semicolon,
    and you can make a pattern negative by prefixing it with '!'. Example: **\\*.csproj;!**\\*.Tests.csproj

configuration (Configuration to package)
    When using a csproj file, this specifies the configuration to package.

packDestination (Package folder)
    Folder where packages will be created. If empty, packages will be created at the source root.

versioningScheme (Automatic package versioning)
    Cannot be used with include referenced projects. If you choose 'Use the date and time', this will
    generate a SemVer-compliant version formatted as X.Y.Z-ci-datetime where you choose X, Y, and Z.
    If you choose 'Use an environment variable', you must select an environment variable and ensure it
    contains the version number you want to use.
    If you choose 'Use the build number', this will use the build number to version your package. Note:
    Under Options, set the build number format to
    $(BuildDefinitionName)_$(Year:yyyy).$(Month).$(DayOfMonth)$(Rev:.r).
    Options: off, byPrereleaseNumber, byEnvVar, byBuildNumber

includeReferencedProjects
    Include referenced projects. Cannot be used with automatic package versioning.

versionEnvVar (Environment variable)
    Enter the variable name without $, $env, or %.

majorVersion (Major)
    The 'X' in version X.Y.Z

minorVersion (Minor)
    The 'Y' in version X.Y.Z

patchVersion (Patch)
    The 'Z' in version X.Y.Z

packTimezone (Time zone)
    Specifies the desired time zone used to produce the version of the package. Selecting UTC is
    recommended if you're using hosted build agents as their date and time might differ.
    Options: utc, local

includeSymbols (Create symbols package)
    Specifies that the package contains sources and symbols. When used with a .nuspec file, this creates a
    regular NuGet package file and the corresponding symbols package.

toolPackage (Tool Package)
    Determines if the output files of the project should be in the tool folder.

buildProperties (Additional build properties)
    Specifies a list of token=value pairs, separated by semicolons, where each occurrence of $token$ in the
    .nuspec file will be replaced with the given value. Values can be strings in quotation marks.

basePath (Base path)
    The base path of the files defined in the nuspec file.

verbosityPack (Verbosity)
    Specifies the amount of detail displayed in the output.
    Options: Quiet, Normal, Detailed

arguments (Command and arguments)
    The command and arguments which will be passed to NuGet.exe for execution. If NuGet 3.5 or later is
    used, authenticated commands like list, restore, and publish against any feed in this
    organization/collection that the Project Collection Build Service has access to will be automatically
    authenticated.

Control options

Versioning schemes
For byPrereleaseNumber , the version will be set to whatever you choose for major, minor, and patch, plus the
date and time in the format yyyymmdd-hhmmss .
For byEnvVar , the version will be set as whatever environment variable, e.g. MyVersion (no $ , just the environment
variable name), you provide. Make sure the environment variable is set to a proper SemVer e.g. 1.2.3 or
1.2.3-beta1 .

For byBuildNumber , the version will be set to the build number, ensure that your build number is a proper
SemVer e.g. 1.0.$(Rev:r) . If you select byBuildNumber , the task will extract a dotted version, 1.2.3.4 and use
only that, dropping any label. To use the build number as is, you should use byEnvVar as described above, and set
the environment variable to BUILD_BUILDNUMBER .
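As a concrete illustration, a pack step that uses byPrereleaseNumber might look like the following sketch; the
major, minor, and patch values are arbitrary examples:

# Pack with an automatic prerelease version such as 2.1.0-CI-<datetime> (sketch)
- task: NuGetCommand@2
  inputs:
    command: 'pack'
    packagesToPack: '**/*.csproj'
    versioningScheme: 'byPrereleaseNumber'
    majorVersion: '2'
    minorVersion: '1'
    patchVersion: '0'
    packTimezone: 'utc'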

Examples
Restore
Restore all your solutions with packages from a selected feed.

# Restore from a project scoped feed in the same organization
- task: NuGetCommand@2
  inputs:
    command: 'restore'
    feedsToUse: 'select'
    vstsFeed: 'my-project/my-project-scoped-feed'
    includeNuGetOrg: false
    restoreSolution: '**/*.sln'

# Restore from an organization scoped feed in the same organization
- task: NuGetCommand@2
  inputs:
    command: 'restore'
    feedsToUse: 'select'
    vstsFeed: 'my-organization-scoped-feed'
    restoreSolution: '**/*.sln'

# Restore from a feed in a different organization
- task: NuGetCommand@2
  inputs:
    command: 'restore'
    feedsToUse: config
    nugetConfigPath: ./nuget.config
    restoreSolution: '**/*.sln'
    externalFeedCredentials: 'MyServiceConnectionName'
    noCache: true
  continueOnError: true

# Restore from feed(s) set in nuget.config
- task: NuGetCommand@2
  inputs:
    command: 'restore'
    feedsToUse: 'config'
    nugetConfigPath: 'nuget.config'

Package
Create a NuGet package in the destination folder.

# Package a project
- task: NuGetCommand@2
  inputs:
    command: 'pack'
    packagesToPack: '**/*.csproj'
    packDestination: '$(Build.ArtifactStagingDirectory)'

Push

NOTE
Pipeline artifacts are downloaded to the System.ArtifactsDirectory directory. The packagesToPush value can be set to
$(System.ArtifactsDirectory)/**/*.nupkg in your release pipeline.

Push/Publish a package to a feed defined in your NuGet.config.

# Push a project
- task: NuGetCommand@2
  inputs:
    command: 'push'
    packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg'
    feedsToUse: 'config'
    nugetConfigPath: '$(Build.WorkingDirectory)/NuGet.config'

Push/Publish a package to a project scoped feed

# Push a project
- task: NuGetCommand@2
  inputs:
    command: 'push'
    feedsToUse: 'select'
    vstsFeed: 'my-project/my-project-scoped-feed'
    publishVstsFeed: 'myTestFeed'

Push/Publish a package to NuGet.org

# Push a project
- task: NuGetCommand@2
  inputs:
    command: 'push'
    feedsToUse: 'config'
    includeNugetOrg: 'true'

Custom
Run any other NuGet command besides the default ones: pack, push, and restore.

# List local NuGet resources.
- task: NuGetCommand@2
  displayName: 'list locals'
  inputs:
    command: custom
    arguments: 'nuget locals all -list'

Open source
Check out the Azure Pipelines and Team Foundation Server out-of-the-box tasks on GitHub. Feedback and
contributions are welcome.

FAQ
Why should I check in a NuGet.Config?
Checking a NuGet.Config into source control ensures that a key piece of information needed to build your project,
the location of its packages, is available to every developer that checks out your code.
However, for situations where a team of developers works on a large range of projects, it's also possible to add an
Azure Artifacts feed to the global NuGet.Config on each developer's machine. In these situations, using the "Feeds I
select here" option in the NuGet task replicates this configuration.
Where can I learn about Azure Artifacts?
Azure Artifacts Documentation
Where can I learn more about NuGet?
NuGet Docs Overview
NuGet Create Packaging and publishing
NuGet Consume Setting up a solution to get dependencies
What other kinds of apps can I build?
Build and deploy your app examples
What other kinds of build tasks are available?
Build and release tasks catalog
How do we protect our codebase from build breaks?
Git: Improve code quality with branch policies with an option to require that code builds before it can be
merged to a branch. For GitHub repositories, similar policies are available in GitHub's repository settings
under Branches.
TFVC: Use gated check-in.
How do I modify other parts of my build pipeline?
Build and release tasks to run tests, scripts, and a wide range of other processes.
Specify build options such as specifying how completed builds are named, building multiple configurations,
creating work items on failure.
Supported source repositories to pick the source of the build and modify options such as how the agent
workspace is cleaned.
Set build triggers to modify how your CI builds run and to specify scheduled builds.
Specify build retention policies to automatically delete old builds.
I selected parallel multi-configuration, but only one build is running at a time.
If you're using Azure Pipelines, you might need more parallel jobs. See Parallel jobs in Azure Pipelines.
How do I see what has changed in my build pipeline?
View the change history of your build pipeline
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
NuGet Packager task version 0.*
4/10/2020 • 4 minutes to read • Edit Online

Azure Pipelines (deprecated) | TFS 2017 Update 2 and below (deprecated in TFS 2018)
Use this task to create a NuGet package from either a .csproj or .nuspec file.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs are
called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Demands
None

YAML snippet
# NuGet packager
# Deprecated: use the "NuGet" task instead. It works with the new Tool Installer framework so you can easily use
# new versions of NuGet without waiting for a task update, provides better support for authenticated feeds outside
# this organization/collection, and uses NuGet 4 by default.
- task: NuGetPackager@0
  inputs:
    #searchPattern: '**\*.csproj'
    #outputdir: # Optional
    #includeReferencedProjects: false # Optional
    #versionByBuild: 'false' # Options: false, byPrereleaseNumber, byEnvVar, true
    #versionEnvVar: # Required when versionByBuild == ByEnvVar
    #requestedMajorVersion: '1' # Required when versionByBuild == ByPrereleaseNumber
    #requestedMinorVersion: '0' # Required when versionByBuild == ByPrereleaseNumber
    #requestedPatchVersion: '0' # Required when versionByBuild == ByPrereleaseNumber
    #configurationToPack: '$(BuildConfiguration)' # Optional
    #buildProperties: # Optional
    #nuGetAdditionalArgs: # Optional
    #nuGetPath: # Optional

Arguments
ARGUMENT                              DESCRIPTION

Path/Pattern to nuspec files Specify .csproj files (for example, ***.csproj ) for simple
projects. In this case:
The packager compiles the .csproj files for packaging.
You must specify Configuration to Package (see below).
You do not have to check in a .nuspec file. If you do check
one in, the packager honors its settings and replaces
tokens such as $id$ and $description$.
Specify .nuspec files (for example, *.nuspec ) for more
complex projects, such as multi-platform scenarios in which
you need to compile and package in separate steps. In this
case:
The packager does not compile the .csproj files for
packaging.
Each project is packaged only if it has a .nuspec file
checked in.
The packager does not replace tokens in the .nuspec file
(except the <version/> element, see Use build
number to version package , below). You must supply
values for elements such as <id/> and <description/>
. The most common way to do this is to hardcode the
values in the .nuspec file.
To package a single file, click the ... button and select the file.
To package multiple files, use single-folder wildcards ( * ) and recursive wildcards ( ** ). For example,
specify **\*.csproj to package all .csproj files in all subdirectories in the repo.
You can use multiple patterns separated by a semicolon to create more complex queries. You can negate a
pattern by prefixing it with "-:". For example, specify **\*.csproj;-:**\*Tests.csproj to package all .csproj
files except those ending in 'Tests' in all subdirectories in the repo.

Use build number to version package Select if you want to use the build number to version your
package. If you select this option, for the pipeline options, set the
build number format to something like
$(BuildDefinitionName)_$(Year:yyyy).$(Month).$(DayOfMonth)$(Rev:.r)
The build number format must be
{some_characters}_0.0.0.0 . The characters and the
underscore character are omitted from the output. The
version number at the end must be a unique number in a
format such as 0.0.0.0 that is higher than the last
published number.
The version number is passed to nuget pack with the
-Version option.

Versions are shown prominently on NuGet servers. For


example they are listed on the Azure Artifacts feeds page and
on the NuGet.org package page.

Package Folder (Optional) Specify the folder where you want to put the packages.
You can use a variable such as
$(Build.StagingDirectory)\packages
If you leave it empty, the package will be created in the same
directory that contains the .csproj or .nuspec file.

ADVANCED
ARGUMENT                              DESCRIPTION

Configuration to Package If you are packaging a .csproj file, you must specify a configuration
that you are building and that you want to package. For example:
Release

Additional build properties Semicolon delimited list of properties used to build the package.
For example, you could replace
<description>$description$</description> in the .nuspec file
this way: Description="This is a great package"
Using this argument is equivalent to supplying properties
from nuget pack with the -Properties option.

NuGet Arguments (Optional) Additional arguments passed nuget pack.

Path to NuGet.exe (Optional) Path to your own instance of NuGet.exe. If you specify
this argument, you must have your own strategy to handle
authentication.

Control options

Examples
You want to package and publish some projects in a C# class library to your Azure Artifacts feed.

`-- Message
|-- Message.sln
`-- ShortGreeting
|-- ShortGreeting.csproj
|-- Class1.cs
`-- Properties
|-- AssemblyInfo.cs
`-- LongGreeting
|-- LongGreeting.csproj
|-- Class1.cs
`-- Properties
|-- AssemblyInfo.cs

Prepare
AssemblyInfo.cs
Make sure your AssemblyInfo.cs files contain the information you want shown in your packages. For example,
AssemblyCompanyAttribute will be shown as the author, and AssemblyDescriptionAttribute will be shown as the
description.
Variables tab

NAME                             VALUE

$(BuildConfiguration) release

$(BuildPlatform) any cpu

Options

SETTING                          VALUE

Build number format $(BuildDefinitionName)_$(Year:yyyy).$(Month).$(DayOfMonth)$(Rev:.r)

Publish to Azure Artifacts


Make sure you've prepared the build as described above.
Create the feed
See Create a feed.
Build tasks

Build your solution.


Solution: *.sln
Build: Visual Studio Build
Platform: $(BuildPlatform)
Configuration: $(BuildConfiguration)

Package your projects.


Path/Pattern to nuspec files: *.csproj
Package: NuGet Packager
Use Build number to version package: Selected
Advanced, Configuration to Package: Release

Publish your packages to Azure Artifacts.


Path/Pattern to nupkg: ***.nupkg
Package: NuGet Publisher
Feed type: Internal NuGet Feed
Internal feed URL: See Find your NuGet package source
URL.

Publish to NuGet.org
Make sure you've prepared the build as described above.
Register with NuGet.org
If you haven't already, register with NuGet.org.
Build tasks

Build your solution.


Solution: *.sln
Build: Visual Studio Build
Platform: $(BuildPlatform)
Configuration: $(BuildConfiguration)

Package your projects.


Path/Pattern to nuspec files: *.csproj
Package: NuGet Packager
Use Build number to version package: Selected
Advanced, Configuration to Package: Release
Publish your packages to NuGet.org.
Path/Pattern to nupkg: ***.nupkg
Package: NuGet Publisher
Feed type: External NuGet Feed
NuGet Server Endpoint:
1. Click "New service connection", and then click
Generic.
2. On the Add New Generic Connection dialog
box:
Connection Name: NuGet
Server URL: https://nuget.org/
User name: {your-name}
Password/Token Key: Paste API Key from
your NuGet account.
Package: NuGet Publisher task version 0.*
4/10/2020 • 2 minutes to read • Edit Online

Azure Pipelines (deprecated) | TFS 2017 Update 2 and below (deprecated in TFS 2018)
Use this task to publish your NuGet package to a server and update your feed.

Demands
None

YAML snippet
# NuGet publisher
# Deprecated: use the “NuGet” task instead. It works with the new Tool Installer framework so you can easily use new
versions of NuGet without waiting for a task update, provides better support for authenticated feeds outside this
organization/collection, and uses NuGet 4 by default.
- task: NuGetPublisher@0
inputs:
#searchPattern: '**/*.nupkg;-:**/packages/**/*.nupkg;-:**/*.symbols.nupkg'
#nuGetFeedType: 'external' # Options: external, internal
#connectedServiceName: # Required when nuGetFeedType == External
#feedName: # Required when nuGetFeedType == Internal
#nuGetAdditionalArgs: # Optional
#verbosity: '-' # Options: -, quiet, normal, detailed
#nuGetVersion: '3.3.0' # Options: 3.3.0, 3.5.0.1829, 4.0.0.2283, custom
#nuGetPath: # Optional
#continueOnEmptyNupkgMatch: # Optional

Arguments

Path/Pattern to nupkg
    Specify the packages you want to publish.
    Default value: **/*.nupkg;-:**/packages/**/*.nupkg;-:**/*.symbols.nupkg
    To publish a single package, click the ... button and select the file.
    Use single-folder wildcards ( * ) and recursive wildcards ( ** ) to publish multiple packages.
    Use variables to specify directories. For example, if you specified $(Build.StagingDirectory)\packages
    as the package folder in the NuGet Packager task, you could specify
    $(Build.StagingDirectory)\packages\*.nupkg here.

Feed type
    External NuGet Feed publishes to an external server such as NuGet or MyGet. After you select this
    option, you create and select a NuGet server endpoint.
    Internal NuGet Feed publishes to an internal or Azure Artifacts feed. After you select this option, you
    specify the internal feed URL.

ADVANCED

NuGet Arguments
    (Optional) Additional arguments passed to nuget push.

Path to NuGet.exe
    (Optional) Path to your own instance of NuGet.exe. If you specify this argument, you must have your own
    strategy to handle authentication.

Control options

Examples
You want to package and publish some projects in a C# class library to your Azure Artifacts feed.

`-- Message
|-- Message.sln
`-- ShortGreeting
|-- ShortGreeting.csproj
|-- Class1.cs
`-- Properties
|-- AssemblyInfo.cs
`-- LongGreeting
|-- LongGreeting.csproj
|-- Class1.cs
`-- Properties
|-- AssemblyInfo.cs

Prepare
AssemblyInfo.cs
Make sure your AssemblyInfo.cs files contain the information you want shown in your packages. For example,
AssemblyCompanyAttribute will be shown as the author, and AssemblyDescriptionAttribute will be shown as the
description.
Variables tab

NAME                             VALUE

$(BuildConfiguration) release

$(BuildPlatform) any cpu

Options

SETTING                          VALUE

Build number format $(BuildDefinitionName)_$(Year:yyyy).$(Month).$(DayOfMonth)$(Rev:.r)

Publish to Azure Artifacts


Make sure you've prepared the build as described above.
Create the feed
See Create a feed.
Build tasks
Build your solution.
Solution: *.sln
Build: Visual Studio Build
Platform: $(BuildPlatform)
Configuration: $(BuildConfiguration)

Package your projects.


Path/Pattern to nuspec files: *.csproj
Package: NuGet Packager
Use Build number to version package: Selected
Advanced, Configuration to Package: Release

Publish your packages to Azure Artifacts.


Path/Pattern to nupkg: ***.nupkg
Package: NuGet Publisher
Feed type: Internal NuGet Feed
Internal feed URL: See Find your NuGet package source
URL.

Publish to NuGet.org
Make sure you've prepared the build as described above.
Register with NuGet.org
If you haven't already, register with NuGet.org.
Build tasks

Build your solution.


Solution: *.sln
Build: Visual Studio Build
Platform: $(BuildPlatform)
Configuration: $(BuildConfiguration)

Package your projects.


Path/Pattern to nuspec files: *.csproj
Package: NuGet Packager
Use Build number to version package: Selected
Advanced, Configuration to Package: Release

Publish your packages to NuGet.org.


Path/Pattern to nupkg: ***.nupkg
Package: NuGet Publisher
Feed type: External NuGet Feed
NuGet Server Endpoint:
1. Click "New service connection", and then click
Generic.
2. On the Add New Generic Connection dialog
box:
Connection Name: NuGet
Server URL: https://nuget.org/
User name: {your-name}
Password/Token Key: Paste API Key from
your NuGet account.
Package: Python Pip Authenticate version 0.*
11/2/2020 • 2 minutes to read • Edit Online

Azure Pipelines
Provides authentication for the pip client that can be used to install Python distributions.

YAML snippet
# Python pip authenticate
# Authentication task for the pip client used for installing Python distributions
- task: PipAuthenticate@0
  inputs:
    artifactFeeds:
    #externalFeeds: # Optional

Arguments

artifactFeeds
    List of Azure Artifacts feeds to authenticate with pip.

externalFeeds
    List of service connections from external organizations to authenticate with pip.

Control options
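A typical pipeline runs the authenticate step immediately before pip installs packages from the feed. A minimal
sketch, assuming a feed named 'myTestFeed' and a requirements.txt file (both placeholders):

- task: PipAuthenticate@0
  displayName: 'Pip Authenticate'
  inputs:
    artifactFeeds: 'myTestFeed'   # placeholder feed name

# pip can now resolve packages from the authenticated feed
- script: |
    pip install -r requirements.txt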

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Package: Python Twine Upload Authenticate version
0.*
11/2/2020 • 2 minutes to read • Edit Online

Azure Pipelines
Provides twine credentials to a PYPIRC_PATH environment variable for the scope of the build. This enables you to
publish Python packages to feeds with twine from your build.

YAML snippet
# Python twine upload authenticate
# Authenticate for uploading Python distributions using twine. Add '-r FeedName/EndpointName --config-file
# $(PYPIRC_PATH)' to your twine upload command. For feeds present in this organization, use the feed name as the
# repository (-r). Otherwise, use the endpoint name defined in the service connection.
- task: TwineAuthenticate@0
  inputs:
    artifactFeeds:
    #externalFeeds: # Optional
    #publishPackageMetadata: true # Optional

Arguments

artifactFeeds
    List of Azure Artifacts feeds to authenticate with twine.

externalFeeds
    List of service connections from external organizations to authenticate with twine. The credentials
    stored in the endpoint must have package upload permissions.

Control options
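A typical pipeline runs this task and then calls twine with the repository name and the $(PYPIRC_PATH) config
file, as described in the snippet comment above. A minimal sketch, assuming a feed named 'myTestFeed' (a
placeholder):

- task: TwineAuthenticate@0
  displayName: 'Twine Authenticate'
  inputs:
    artifactFeeds: 'myTestFeed'   # placeholder feed name

- script: |
    python -m twine upload -r myTestFeed --config-file $(PYPIRC_PATH) dist/*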

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
App Center Distribute task
4/22/2020 • 4 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use this task to distribute app builds to testers and users through App Center.
Sign up with App Center first.
For details about using this task, see the App Center documentation article Deploy Azure DevOps Builds with
App Center.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

YAML snippet
# App Center distribute
# Distribute app builds to testers and users via Visual Studio App Center
- task: AppCenterDistribute@3
inputs:
serverEndpoint:
appSlug:
appFile:
#symbolsOption: 'Apple' # Optional. Options: apple, android
#symbolsPath: # Optional
#symbolsPdbFiles: '**/*.pdb' # Optional
#symbolsDsymFiles: # Optional
#symbolsIncludeParentDirectory: # Optional
#releaseNotesOption: 'input' # Options: input, file
#releaseNotesInput: # Required when releaseNotesOption == Input
#releaseNotesFile: # Required when releaseNotesOption == File
#isMandatory: false # Optional
#destinationType: 'groups' # Options: groups, store
#distributionGroupId: # Optional
#destinationStoreId: # Required when destinationType == store
#isSilent: # Optional

Arguments
ARGUMENT                              DESCRIPTION

serverEndpoint (Required) Select the service connection for App Center.


App Center service connection Create a new App Center service connection in Azure DevOps
project settings.

appSlug (Required) The app slug is in the format of


App slug {username}/{app_identifier} . To locate {username} and
{app_identifier} for an app, click on its name from
https://appcenter.ms/apps, and the resulting URL is in the
format of
https://appcenter.ms/users/{username}/apps/{app_identifier}. If
you are using orgs, the app slug is of the format
{orgname}/{app_identifier}.

app (Required) Relative path from the repo root to the APK or IPA
Binary file path file you want to publish
Argument alias: appFile

buildVersion (Optional) The build version of the uploading binary which


Build version needs to be specified for .zip and .msi . This value will be
ignored unless the platform is WPF or WinForms.

symbolsType (Optional) Include symbol files to receive symbolicated stack


Symbols type traces in App Center Diagnostics. Options:
Android, Apple, UWP .
Argument alias: symbolsOption

symbolsPath (Optional) Relative path from the repo root to the symbols
Symbols path folder.

appxsymPath (Optional) Relative path from the repo root to PDB symbols
Symbols path (*.appxsym) files. Path may contain wildcards.

dsymPath (Optional) Relative path from the repo root to dSYM folder.
dSYM path Path may contain wildcards.
Argument alias: symbolsDsymFiles

mappingTxtPath (Optional) Relative path from the repo root to Android's


Mapping file mapping.txt file
Argument alias: symbolsMappingTxtFile

nativeLibrariesPath (Optional) Relative path from the repo root to the additional
Native Library File Path native libraries you want to publish (e.g. .so files)

packParentFolder (Optional) Upload the selected symbols file or folder and all
Include all items in parent folder other items inside the same parent folder. This is required for
React Native apps.
Argument alias: symbolsIncludeParentDirectory

releaseNotesSelection (Required) Release notes will be attached to the release and


Create release notes shown to testers on the installation page. Options:
input, file .
Default value: input
Argument alias: releaseNotesOption

releaseNotesInput (Required) Release notes for this version.


Release notes
A RGUM EN T DESC RIP T IO N

releaseNotesFile (Required) Select a UTF-8 encoded text file which contains the
Release notes file Release Notes for this version.

isMandatory (Optional) App Center Distribute SDK required to mandate


Require users to update to this release update. Testers will automatically be prompted to update.
Default value: false

destinationType (Required) Each release will be distributed to either groups or


Release destination a store. Options: groups, store .

destinationGroupIds (Optional) IDs of the distribution groups to release to. Leave it


Destination IDs empty to use the default group and use commas or
semicolons to separate multiple IDs.
Argument alias: distributionGroupId

destinationStoreId (Required) ID of the distribution store to deploy to.


Destination ID

isSilent (Optional) Testers will not receive an email for new releases.
Do not notify testers. Release will still be available to install.

Example
This example pipeline builds an Android app, runs tests, and publishes the app using App Center Distribute.
# Android
# Build your Android project with Gradle.
# Add steps that test, sign, and distribute the APK, save build artifacts, and more:
# https://docs.microsoft.com/azure/devops/pipelines/ecosystems/android

pool:
vmImage: 'macOS-latest'
steps:

- script: sudo npm install -g appcenter-cli


- script: appcenter login --token {YOUR_TOKEN}

- task: Gradle@2
inputs:
workingDirectory: ''
gradleWrapperFile: 'gradlew'
gradleOptions: '-Xmx3072m'
publishJUnitResults: false
testResultsFiles: '**/TEST-*.xml'
tasks: build

- task: CopyFiles@2
inputs:
contents: '**/*.apk'
targetFolder: '$(build.artifactStagingDirectory)'

- task: PublishBuildArtifacts@1
inputs:
pathToPublish: '$(build.artifactStagingDirectory)'
artifactName: 'outputs'
artifactType: 'container'

# Run tests using the App Center CLI


- script: appcenter test run espresso --app "{APP_CENTER_SLUG}" --devices "{DEVICE}" --app-path {APP_FILE} --test-series "master" --locale "en_US" --build-dir {PAT_ESPRESSO} --debug

# Distribute the app


- task: AppCenterDistribute@3
inputs:
serverEndpoint: 'AppCenter'
appSlug: '$(APP_CENTER_SLUG)'
appFile: '$(APP_FILE)' # Relative path from the repo root to the APK or IPA file you want to publish
symbolsOption: 'Android'
releaseNotesOption: 'input'
releaseNotesInput: 'Here are the release notes for this version.'
destinationType: 'groups'

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure App Service Deploy task
11/2/2020 • 17 minutes to read

Azure Pipelines
Use this task to deploy to a range of App Services on Azure. The task works on cross-platform agents running
Windows, Linux, or Mac and uses several different underlying deployment technologies.
The task works for ASP.NET, ASP.NET Core, PHP, Java, Python, Go, and Node.js based web applications.
The task can be used to deploy to a range of Azure App Services such as:
Web Apps on both Windows and Linux
Web Apps for Containers
Function Apps on both Windows and Linux
Function Apps for Containers
WebJobs
Apps configured under Azure App Service Environments

Prerequisites for the task


The following prerequisites must be set up in the target machine(s) for the task to work correctly.
App Service instance. The task is used to deploy a Web App project or Azure Function project to an existing Azure App Service instance, which must exist before the task runs. The App Service instance can be created from the Azure portal and configured there. Alternatively, the Azure PowerShell task can be used to run AzureRM PowerShell scripts to provision and configure the Web App.
Azure Subscription. To deploy to Azure, an Azure subscription must be linked to the pipeline. The task does not work with the Azure Classic service connection, and it will not list these connections in the settings of the task.

Parameters

ConnectionType (Connection type): (Required) Select the service connection type to use to deploy the Web App. Default value: AzureRM
ConnectedServiceName (Azure subscription): (Required if ConnectionType = AzureRM) Select the Azure Resource Manager subscription for the deployment. Argument aliases: azureSubscription
PublishProfilePath (Publish profile path): (Required if ConnectionType = PublishProfile) The path to the file containing the publishing information. Default value: $(System.DefaultWorkingDirectory)/**/*.pubxml
PublishProfilePassword (Publish profile password): (Required if ConnectionType = PublishProfile) The password for the profile file. Consider storing the password in a secret variable and using that variable here. Example: $(Password)
WebAppKind (App Service type): (Required if ConnectionType = AzureRM) Choose from Web App On Windows, Web App On Linux, Web App for Containers, Function App, Function App on Linux, Function App for Containers, and Mobile App. Default value: webApp. Argument aliases: appType
WebAppName (App Service name): (Required if ConnectionType = AzureRM) Enter or select the name of an existing Azure App Service. Only App Services based on the selected app type will be listed.
DeployToSlotOrASEFlag (Deploy to Slot or App Service Environment): (Optional) Select the option to deploy to an existing deployment slot or Azure App Service Environment. For both targets, the task requires a Resource Group name. If the deployment target is a slot, by default the deployment is to the production slot; any other existing slot name can be provided. If the deployment target is an Azure App Service Environment, leave the slot name as production and specify just the Resource Group name. Default value: false. Argument aliases: deployToSlotOrASE
ResourceGroupName (Resource group): (Required if DeployToSlotOrASEFlag = true) The Resource Group name is required when the deployment target is either a deployment slot or an App Service Environment. Enter or select the Azure Resource Group that contains the Azure App Service specified above.
SlotName (Slot): (Required if DeployToSlotOrASEFlag = true) Enter or select an existing slot other than the production slot. Default value: production
DockerNamespace (Registry or Namespace): (Required if WebAppKind = webAppContainer or WebAppKind = functionAppContainer) A globally unique top-level domain name for your specific registry or namespace. Note: the fully-qualified image name will be of the format {registry or namespace}/{repository}:{tag}. For example, myregistry.azurecr.io/nginx:latest
DockerRepository (Image): (Required if WebAppKind = webAppContainer or WebAppKind = functionAppContainer) Name of the repository where the container images are stored. Note: the fully-qualified image name will be of the format {registry or namespace}/{repository}:{tag}. For example, myregistry.azurecr.io/nginx:latest
DockerImageTag (Tag): (Optional) Tags are optional, but are the mechanism that registries use to apply version information to Docker images. Note: the fully-qualified image name will be of the format {registry or namespace}/{repository}:{tag}. For example, myregistry.azurecr.io/nginx:latest
VirtualApplication (Virtual application): (Optional) Specify the name of the Virtual Application that has been configured in the Azure portal. This option is not required for deployments to the website root. The Virtual Application must have been configured before deployment of the web project.
Package (Package or folder): (Required if ConnectionType = PublishProfile, or WebAppKind = webApp, apiApp, functionApp, mobileApp, webAppLinux, or functionAppLinux) File path to the package, or to a folder containing App Service contents generated by MSBuild, or to a compressed zip or war file. Build variables (or release variables) and wildcards are supported. For example, $(System.DefaultWorkingDirectory)/**/*.zip or $(System.DefaultWorkingDirectory)/**/*.war. Default value: $(System.DefaultWorkingDirectory)/**/*.zip. Argument aliases: packageForLinux
RuntimeStack (Runtime Stack): (Optional) Select the framework and version. This is for Web App for Linux.
RuntimeStackFunction (Runtime Stack): (Optional) Select the framework and version. This is for Function App on Linux.
StartupCommand (Startup command): (Optional) Enter the startup command.
ScriptType (Deployment script type): (Optional) Customize the deployment by providing a script that runs on the Azure App Service after successful deployment. Choose inline deployment script or the path and name of a script file. Learn more.
InlineScript (Inline Script): (Required if ScriptType == Inline Script) The script to execute. You can provide your deployment commands here, one command per line. See this example.
ScriptPath (Deployment script path): (Required if ScriptType == File Path) The path and name of the script to execute.
WebConfigParameters (Generate web.config parameters for Python, Node.js, Go and Java apps): (Optional) A standard web.config will be generated and deployed to Azure App Service if the application does not have one. The values in web.config can be edited and will vary based on the application framework. For example, for Node.js applications, web.config will have startup file and iis_node module values. This edit feature is only for the generated web.config file. Learn more.
AppSettings (App settings): (Optional) Edit web app Application settings using the syntax -key value. Values containing spaces must be enclosed in double quotes. Examples: -Port 5000 -RequestTimeout 5000 and -WEBSITE_TIME_ZONE "Eastern Standard Time"
ConfigurationSettings (Configuration settings): (Optional) Edit web app configuration settings using the syntax -key value. Values containing spaces must be enclosed in double quotes. Example: -phpVersion 5.6 -linuxFxVersion: node|6.11
UseWebDeploy (Select deployment method): (Optional) If unchecked, the task auto-detects the best deployment method based on the app type, package format, and other parameters. Select the option to view the supported deployment methods, and choose one for deploying your app. Argument aliases: enableCustomDeployment
DeploymentType (Deployment method): (Required if UseWebDeploy == true) Choose the deployment method for the app. Default value: webDeploy
TakeAppOfflineFlag (Take App Offline): (Optional) Select this option to take the Azure App Service offline by placing an app_offline.htm file in the root directory before the synchronization operation begins. The file will be removed after the synchronization completes successfully. Default value: true
SetParametersFile (SetParameters file): (Optional) Location of the SetParameters.xml file to be used.
RemoveAdditionalFilesFlag (Remove additional files at destination): (Optional) Select the option to delete files on the Azure App Service that have no matching files in the App Service package or folder. Note: this will also remove all files related to any extensions installed on this Azure App Service. To prevent this, set the Exclude files from App_Data folder checkbox. Default value: false
ExcludeFilesFromAppDataFlag (Exclude files from the App_Data folder): (Optional) Select the option to prevent files in the App_Data folder from being deployed to or deleted from the Azure App Service. Default value: true
AdditionalArguments (Additional arguments): (Optional) Additional Web Deploy arguments following the syntax -key:value. These will be applied when deploying the Azure App Service. Example: -disableLink:AppPoolExtension -disableLink:ContentExtension. More examples. Default value: -retryAttempts:6 -retryInterval:10000
RenameFilesFlag (Rename locked files): (Optional) Select this option to enable the MSDeploy flag MSDEPLOY_RENAME_LOCKED_FILES=1 in the Azure App Service application settings. When set, it enables MSDeploy to rename files that are locked during app deployment. Default value: true
XmlTransformation (XML transformation): (Optional) The configuration transformations will be run for *.Release.config and *.{EnvironmentName}.config on the *.config files. Configuration transformations run before variable substitution. XML transformations are supported only for the Windows platform. Learn more. Default value: false. Argument aliases: enableXmlTransform
XmlVariableSubstitution (XML variable substitution): (Optional) Variables defined in the build or release pipeline will be matched against the key or name entries in the appSettings, applicationSettings, and connectionStrings sections of any configuration file and parameters.xml file. Variable substitution runs after configuration transformations. Note: if the same variables are defined in the release pipeline and in the stage, the stage variables will supersede the release pipeline variables. Learn more. Default value: false. Argument aliases: enableXmlVariableSubstitution
JSONFiles (JSON variable substitution): (Optional) Provide a newline-separated list of JSON files to substitute the variable values. Filenames must be relative to the root folder. To substitute JSON variables that are nested or hierarchical, specify them using JSONPath expressions. For example, to replace the value of ConnectionString in the sample below, define a variable named Data.DefaultConnection.ConnectionString in the build or release pipeline (or release pipeline stage).

{
  "Data": {
    "DefaultConnection": {
      "ConnectionString": "Server=(localdb)\SQLEXPRESS;Database=MyDB;Trusted_Connection=True"
    }
  }
}

Variable substitution runs after configuration transformations. Note: build and release pipeline variables are excluded from substitution. Learn more.

This YAML example deploys to an Azure Web App container (Linux).


pool:
vmImage: Ubuntu-16.04

variables:
azureSubscriptionEndpoint: Contoso
DockerNamespace: contoso.azurecr.io
DockerRepository: aspnetcore
WebAppName: containersdemoapp

steps:

- task: AzureRMWebAppDeployment@4
displayName: Azure App Service Deploy
inputs:
appType: webAppContainer
ConnectedServiceName: $(azureSubscriptionEndpoint)
WebAppName: $(WebAppName)
DockerNamespace: $(DockerNamespace)
DockerRepository: $(DockerRepository)
DockerImageTag: $(Build.BuildId)

To deploy to a specific app type, set appType to any of the following accepted values: webApp (Web App on
Windows), webAppLinux (Web App on Linux), webAppContainer (Web App for Containers - Linux),
functionApp (Function App on Windows), functionAppLinux (Function App on Linux),
functionAppContainer (Function App for Containers - Linux), apiApp (API App), mobileApp (Mobile App). If
not mentioned, webApp is taken as the default value.
To enable any advanced deployment options, add the parameter enableCustomDeployment: true and include the parameters below as needed (a short sketch follows the list).

# deploymentMethod: 'runFromPackage' # supports zipDeploy as well


# appOffline: boolean # Not applicable for 'runFromPackage'
# setParametersFile: string
# removeAdditionalFilesFlag: boolean
# additionalArguments: string
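
For illustration, a hedged sketch that enables the advanced options for a Windows web app zip deployment and selects Run From Package; the service connection, app name, and package path are placeholders, and this is only one valid combination of the inputs listed above:

- task: AzureRMWebAppDeployment@4
  inputs:
    ConnectionType: 'AzureRM'
    azureSubscription: '<ARM service connection>'      # placeholder
    appType: 'webApp'
    WebAppName: '<app service name>'                   # placeholder
    packageForLinux: '$(System.DefaultWorkingDirectory)/**/*.zip'
    enableCustomDeployment: true
    deploymentMethod: 'runFromPackage'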

Output Variables
Web App Hosted URL: Provide a name, such as FabrikamWebAppURL, for the variable populated with the Azure App Service hosted URL. The variable can be used as $(variableName.AppServiceApplicationUrl), for example $(FabrikamWebAppURL.AppServiceApplicationUrl), to refer to the hosted URL of the Azure App Service in subsequent tasks.
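
For instance, if the output variable name were set to FabrikamWebAppURL as suggested above, a later step could read the hosted URL like this (a sketch; the variable name is whatever you entered in the task):

- script: echo "Deployed to $(FabrikamWebAppURL.AppServiceApplicationUrl)"
  displayName: Show the App Service URL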

Usage notes
The task works with the Azure Resource Manager APIs only.
To ignore SSL errors, define a variable named VSTS_ARM_REST_IGNORE_SSL_ERRORS with value true in the release
pipeline.
For .NET apps targeting Web App on Windows, avoid deployment failure with the error ERROR_FILE_IN_USE by
ensuring that Rename locked files and Take App Offline settings are enabled. For zero downtime
deployment, use the slot swap option.
When deploying to an App Service that has Application Insights configured, and you have enabled Remove
additional files at destination , ensure you also enable Exclude files from the App_Data folder in order
to maintain the Application insights extension in a safe state. This is required because the Application Insights
continuous web job is installed into the App_Data folder.
Sample Post deployment script
The task provides an option to customize the deployment by providing a script that will run on the Azure App
Service after the app's artifacts have been successfully copied to the App Service. You can choose to provide either
an inline deployment script or the path and name of a script file in your artifact folder.
This is very useful when you want to restore your application dependencies directly on the App Service. Restoring
packages for Node, PHP, and Python apps helps to avoid timeouts when the application dependency results in a
large artifact being copied over from the agent to the Azure App Service.
An example of a deployment script is:

@echo off
if NOT exist requirements.txt (
echo No Requirements.txt found.
EXIT /b 0
)
if NOT exist "$(PYTHON_EXT)/python.exe" (
echo Python extension not available >&2
EXIT /b 1
)
echo Installing dependencies
call "$(PYTHON_EXT)/python.exe" -m pip install -U setuptools
if %errorlevel% NEQ 0 (
echo Failed to install setuptools >&2
EXIT /b 1
)
call "$(PYTHON_EXT)/python.exe" -m pip install -r requirements.txt
if %errorlevel% NEQ 0 (
echo Failed to install dependencies>&2
EXIT /b 1
)
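
A hedged sketch of attaching such a script to the task as an inline deployment script; the ScriptType value comes from the parameter table above, while the service connection, app name, package path, and the commands themselves are placeholders:

- task: AzureRMWebAppDeployment@4
  inputs:
    ConnectedServiceName: '<ARM service connection>'   # placeholder
    WebAppName: '<app service name>'                   # placeholder
    packageForLinux: '$(System.DefaultWorkingDirectory)/**/*.zip'
    ScriptType: 'Inline Script'
    InlineScript: |
      @echo off
      echo Restoring application dependencies on the App Service
      call npm install --production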

Deployment methods
Several deployment methods are available in this task. Web Deploy (msdeploy.exe) is the default. To change the
deployment option, expand Additional Deployment Options and enable Select deployment method to
choose from additional package-based deployment options.
Based on the type of Azure App Service and agent, the task chooses a suitable deployment technology. The
different deployment technologies used by the task are:
Web Deploy
Kudu REST APIs
Container Registry
Zip Deploy
Run From Package
War Deploy
By default, the task tries to select the appropriate deployment technology based on the input package type, App
Service type, and agent operating system.

Auto Detect Logic

For Windows-based agents:

WebApp on Linux or Function App on Linux: Folder/Zip/Jar packages use Zip Deploy; War packages use War Deploy.
WebApp for Containers (Linux) or Function App for Containers (Linux): the task updates the App settings; deployment method is NA.
WebApp on Windows, Function App on Windows, API App, or Mobile App: War packages use War Deploy; Jar packages use Zip Deploy; the MsBuild package type, or a deployment to a virtual application, uses Web Deploy; Folder/Zip packages use Zip Deploy if postDeploymentScript == true, otherwise Run From Package.

On non-Windows agents (for any App Service type), the task relies on Kudu REST APIs to deploy the app.
Web Deploy
Web Deploy (msdeploy.exe) can be used to deploy a Web App on Windows or a Function App to the Azure App
Service using a Windows agent. Web Deploy is feature-rich and offers options such as:
Rename locked files: Rename any file that is still in use by the web server by enabling the msdeploy flag
MSDEPLOY_RENAME_LOCKED_FILES=1 in the Azure App Service settings. This option, if set, enables
msdeploy to rename files that are locked during app deployment.
Remove additional files at destination: Deletes files in the Azure App Service that have no matching
files in the App Service artifact package or folder being deployed.
Exclude files from the App_Data folder : Prevent files in the App_Data folder (in the artifact
package/folder being deployed) being deployed to the Azure App Service
Additional Web Deploy arguments: Arguments that will be applied when deploying the Azure App
Service. Example: -disableLink:AppPoolExtension -disableLink:ContentExtension . For more examples of Web
Deploy operation settings, see Web Deploy Operation Settings.
Install Web Deploy on the agent using the Microsoft Web Platform Installer. Web Deploy 3.5 must be installed
without the bundled SQL support. There is no need to choose any custom settings when installing Web Deploy.
Web Deploy is installed at C:\Program Files (x86)\IIS\Microsoft Web Deploy V3.
Kudu REST APIs
Kudu REST APIs work on both Windows and Linux automation agents when the target is a Web App on Windows,
Web App on Linux (built-in source), or Function App. The task uses Kudu to copy files to the Azure App service.
Container Registry
Works on both Windows and Linux automation agents when the target is a Web App for Containers. The task
updates the app by setting the appropriate container registry, repository, image name, and tag information. You
can also use the task to pass a startup command for the container image.
Zip Deploy
Expects a .zip deployment package and deploys the file contents to the wwwroot folder of the App Service or
Function App in Azure. This option overwrites all existing contents in the wwwroot folder. For more information,
see Zip deployment for Azure Functions.
Run From Package
Expects the same deployment package as Zip Deploy. However, instead of deploying files to the wwwroot folder,
the entire package is mounted by the Functions runtime and files in the wwwroot folder become read-only. For
more information, see Run your Azure Functions from a package file.
War Deploy
Expects a .war deployment package and deploys the file content to the wwwroot folder or webapps folder of the
App Service in Azure.

Troubleshooting
Error: Could not fetch access token for Azure. Verify if the Service Principal used is valid and not expired.
The task uses the service principal in the service connection to authenticate with Azure. If the service principal has
expired or does not have permissions to the App Service, the task fails with the specified error. Verify validity of the
service principal used and that it is present in the app registration. For more details, see Use Role-Based Access
Control to manage access to your Azure subscription resources. This blog post also contains more information
about using service principal authentication.
SSL error
To use a certificate in App Service, the certificate must be signed by a trusted certificate authority. If your web app
gives you certificate validation errors, you're probably using a self-signed certificate. Set a variable named
VSTS_ARM_REST_IGNORE_SSL_ERRORS to the value true in the build or release pipeline to resolve the error.
A release hangs for long time and then fails
This may be because there is insufficient capacity on your App Service Plan. To resolve this, you can scale up the
App Service instance to increase available CPU, RAM, and disk space or try with a different App Service plan.
5xx Error Codes
If you are seeing a 5xx error, then check the status of your Azure service.
Error: No package found with specified pattern
Check if the package mentioned in the task is published as an artifact in the build or a previous stage and
downloaded in the current job.
Error: Publish using zip deploy option is not supported for msBuild package type
Web packages created using MSBuild task (with default arguments) have a nested folder structure that can only be
deployed correctly by Web Deploy. Publish to zip deploy option can not be used to deploy those packages. To
convert the packaging structure, follow the steps below (a YAML sketch of the two steps follows the list).
1. In the Build Solution task, change the MSBuild Arguments to /p:DeployOnBuild=true /p:DeployDefaultTarget=WebPublish /p:WebPublishMethod=FileSystem /p:DeleteExistingFiles=True /p:publishUrl="$(System.DefaultWorkingDirectory)\WebAppContent"
2. Add an Archive task and change its inputs as follows:
   Change Root folder or file to archive to $(System.DefaultWorkingDirectory)\WebAppContent
   Disable the Prepend root folder name to archive paths option
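
A hedged YAML sketch of those two steps; the task versions and the archive file name are assumptions, while the MSBuild arguments are exactly the ones listed above:

- task: VSBuild@1
  inputs:
    solution: '**/*.sln'
    msbuildArgs: '/p:DeployOnBuild=true /p:DeployDefaultTarget=WebPublish /p:WebPublishMethod=FileSystem /p:DeleteExistingFiles=True /p:publishUrl="$(System.DefaultWorkingDirectory)\WebAppContent"'

- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: '$(System.DefaultWorkingDirectory)\WebAppContent'
    includeRootFolder: false    # "Prepend root folder name to archive paths" disabled
    archiveFile: '$(Build.ArtifactStagingDirectory)/WebApp.zip'   # assumed output path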


Web app deployment on Windows is successful but the app is not working
This may be because web.config is not present in your app. You can either add a web.config file to your source or
auto-generate one using the File Transforms and Variable Substitution Options of the task.
1. Click on the task and go to Generate web.config parameters for Python, Node.js, Go and Java apps.
2. Click on the more button next to Generate web.config parameters for Python, Node.js, Go and Java apps to edit the parameters.
3. Select your application type from the drop-down.
4. Click OK. This will populate the web.config parameters required to generate web.config.
ERROR_FILE_IN_USE
When deploying .NET apps to Web App on Windows, deployment may fail with error code ERROR_FILE_IN_USE.
To resolve the error, ensure Rename locked files and Take App Offline options are enabled in the task. For zero
downtime deployments, use slot swap.
You can also use Run From Package deployment method to avoid resource locking.
Web Deploy Error
If you are using web deploy to deploy your app, in some error scenarios Web Deploy will show an error code in
the log. To troubleshoot a web deploy error see this.
Web app deployment on App Service Environment (ASE) is not working
Ensure that the Azure DevOps build agent is on the same VNET (subnet can be different) as the Internal Load
Balancer (ILB) of ASE. This will enable the agent to pull code from Azure DevOps and deploy to ASE.
If you are using Azure DevOps, the agent needn't be accessible from the internet; it only needs outbound access to connect to the Azure DevOps service.
If you are using TFS/Azure DevOps Server deployed in a Virtual Network, the agent can be completely isolated.
Build agent must be configured with the DNS configuration of the Web App it needs to deploy to. Since the
private resources in the Virtual Network don't have entries in Azure DNS, this needs to be added to the hosts
file on the agent machine.
If a self-signed certificate is used for the ASE configuration, the "-allowUntrusted" option needs to be set in the deploy task for MSDeploy. It is also recommended to set the variable VSTS_ARM_REST_IGNORE_SSL_ERRORS to true. If a certificate from a certificate authority is used for the ASE configuration, this should not be necessary.

FAQs
How should I configure my service connection?
This task requires an Azure Resource Manager service connection.
How should I configure Web Job Deployment with Azure Application Insights?
When deploying to an App Service with Application Insights configured and you have enabled “Remove additional
files at destination”, then you also need to enable “Exclude files from the App_Data folder” in order to keep the app
insights extension in a safe state. This is required because App Insights continuous web job gets installed into the
App_Data folder.
How should I configure my agent if it is behind a proxy while deploying to App Service?
When your self-hosted agent requires a web proxy, you can inform the agent about the proxy during
configuration. This allows your agent to connect to Azure Pipelines or TFS through the proxy. Learn more about
running a self-hosted agent behind a web proxy

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure App Service Manage task
11/2/2020 • 3 minutes to read

Azure Pipelines
Use this task to start, stop, restart, slot swap, Swap with Preview, install site extensions, or enable continuous
monitoring for an Azure App Service.

YAML snippet
# Azure App Service manage
# Start, stop, restart, slot swap, slot delete, install site extensions or enable continuous monitoring for an Azure App Service
- task: AzureAppServiceManage@0
  inputs:
    azureSubscription:
    #action: 'Swap Slots' # Optional. Options: swap Slots, start Azure App Service, stop Azure App Service, restart Azure App Service, delete Slot, install Extensions, enable Continuous Monitoring, start All Continuous Webjobs, stop All Continuous Webjobs
    webAppName:
    #specifySlotOrASE: false # Optional
    #resourceGroupName: # Required when action == Swap Slots || Action == Delete Slot || SpecifySlot == True
    #sourceSlot: # Required when action == Swap Slots
    #swapWithProduction: true # Optional
    #targetSlot: # Required when action == Swap Slots && SwapWithProduction == False
    #preserveVnet: false # Optional
    #slot: 'production' # Required when action == Delete Slot || SpecifySlot == True
    #extensionsList: # Required when action == Install Extensions
    #outputVariable: # Optional
    #appInsightsResourceGroupName: # Required when action == Enable Continuous Monitoring
    #applicationInsightsResourceName: # Required when action == Enable Continuous Monitoring
    #applicationInsightsWebTestName: # Optional

Arguments

ConnectedServiceName (Azure subscription): (Required) Select the Azure Resource Manager subscription. Argument alias: azureSubscription
Action (Action): (Optional) Action to be performed on the App Service. You can Start, Stop, Restart, Slot swap, Start Swap with Preview, Complete Swap with preview, Cancel Swap with preview, Install site extensions, or enable Continuous Monitoring for an Azure App Service. Default value: Swap Slots
WebAppName (App Service name): (Required) Enter or select the name of an existing Azure App Service.
SpecifySlot (Specify Slot or App Service Environment): (Optional)
ResourceGroupName (Resource group): (Required) Enter or select the Azure Resource Group that contains the Azure App Service specified above.
SourceSlot (Source Slot): (Required) The swap action directs the destination slot's traffic to the source slot.
SwapWithProduction (Swap with Production): (Optional) Select the option to swap the traffic of the source slot with production. If this option is not selected, then you will have to provide source and target slot names. Default value: true
TargetSlot (Target Slot): (Required) The swap action directs the destination slot's traffic to the source slot.
PreserveVnet (Preserve Vnet): (Optional) The swap action would overwrite the destination slot's network configuration with the source. Default value: false
Slot (Slot): (Required) Default value: production
ExtensionsList (Install Extensions): (Required) Site Extensions run on Microsoft Azure App Service. You can install a set of tools as site extensions and better manage your Azure App Service. The App Service will be restarted to make sure the latest changes take effect.
OutputVariable (Output variable): (Optional) Provide the variable name for the local installation path for the selected extension. This field is now deprecated and will be removed. Use the LocalPathsForInstalledExtensions variable from the Output Variables section in subsequent tasks.
AppInsightsResourceGroupName (Resource Group name for Application Insights): (Required) Enter or select the resource group where your Application Insights resource is available.
ApplicationInsightsResourceName (Application Insights resource name): (Required) Select the Application Insights resource where continuous monitoring data will be recorded. If your Application Insights resource is not listed here and you want to create a new resource, click the [+New] button. Once the resource is created in the Azure portal, come back here and click the refresh button.
ApplicationInsightsWebTestName (Application Insights web test name): (Optional) Enter the Application Insights web test name to be created or updated. If not provided, the default test name will be used.

What happens during a swap


When you swap two slots (usually from a staging slot into the production slot), make sure that the production slot
is always the target slot. This way, the swap operation doesn't affect your production app.
Also, at any point of the swap (or swap with preview) operation, all work of initializing the swapped apps happens on the source slot. The target slot remains online while the source slot is being prepared and warmed up, regardless of whether the swap succeeds or fails. Refer to Set up staging environments in Azure App Service for more details.
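
For example, a hedged sketch of a step that swaps a staging slot into production, using the inputs shown in the YAML snippet above (the service connection, app, and resource group names are placeholders):

- task: AzureAppServiceManage@0
  inputs:
    azureSubscription: '<ARM service connection>'   # placeholder
    action: 'Swap Slots'
    webAppName: '<app service name>'                # placeholder
    resourceGroupName: '<resource group>'           # placeholder
    sourceSlot: 'staging'
    swapWithProduction: true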
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure App Service Settings task
11/7/2020 • 2 minutes to read

Azure Pipelines
Use this task to configure App settings, connection strings and other general settings in bulk using JSON syntax on
your web app or any of its deployment slots. The task works on cross platform Azure Pipelines agents running
Windows, Linux or Mac. The task works for ASP.NET, ASP.NET Core, PHP, Java, Python, Go and Node.js based web
applications.

Arguments

ConnectedServiceName (Azure subscription): (Required) Name of the Azure Resource Manager service connection. Argument aliases: ConnectedServiceName
appName (App name): (Required) Name of an existing App Service.
resourceGroupName (Resource group): (Required) Name of the resource group.
slotName (Slot): (Optional) Name of the slot. Default value: production
appSettings (App settings): (Optional) Application settings to be entered using JSON syntax. Values containing spaces should be enclosed in double quotes.
generalSettings (General settings): (Optional) General settings to be entered using JSON syntax. Values containing spaces should be enclosed in double quotes. See the App Service SiteConfig object documentation for the available properties.
connectionStrings (Connection settings): (Optional) Connection strings to be entered using JSON syntax. Values containing spaces should be enclosed in double quotes.

Following is an example YAML snippet that deploys a web application to an Azure Web App running on Windows.

Example
variables:
azureSubscription: Contoso
WebApp_Name: sampleWebApp
# To ignore SSL error uncomment the below variable
# VSTS_ARM_REST_IGNORE_SSL_ERRORS: true

steps:

- task: AzureWebApp@1
displayName: Azure Web App Deploy
inputs:
azureSubscription: $(azureSubscription)
appName: $(WebApp_Name)
package: $(System.DefaultWorkingDirectory)/**/*.zip

- task: AzureAppServiceSettings@1
displayName: Azure App Service Settings
inputs:
azureSubscription: $(azureSubscription)
appName: $(WebApp_Name)
# To deploy the settings on a slot, provide the slot name as below. By default, the settings would be applied
# to the actual Web App (Production slot)
# slotName: staging
appSettings: |
[
{
"name": "APPINSIGHTS_INSTRUMENTATIONKEY",
"value": "$(Key)",
"slotSetting": false
},
{
"name": "MYSQL_DATABASE_NAME",
"value": "$(DB_Name)",
"slotSetting": false
}
]
generalSettings: |
[
{
"name": "WEBAPP_NAME",
"value": "$(WebApp_Name)",
"slotSetting": false
},
{
"name": "WEBAPP_PLAN_NAME",
"value": "$(WebApp_PlanName)",
"slotSetting": false
}
]
connectionStrings: |
[
{
"name": "MysqlCredentials",
"value": "$(MySQl_ConnectionString)",
"type": "MySql",
"slotSetting": false
}
]

Open Source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure CLI task
11/2/2020 • 3 minutes to read

Azure Pipelines
Use this task to run a shell or batch script containing Azure CLI commands against an Azure subscription.
This task is used to run Azure CLI commands on cross-platform agents running on Linux, macOS, or Windows
operating systems.
What's new in Version 2.0
Supports running PowerShell and PowerShell Core scripts.
PowerShell Core scripts work with cross-platform agents (Windows, Linux, or macOS); make sure the agent has PowerShell version 6 or higher.
PowerShell scripts work only with Windows agents; make sure the agent has PowerShell version 5 or lower.

Prerequisites
A Microsoft Azure subscription
Azure Resource Manager service connection to your Azure account
Microsoft hosted agents have Azure CLI pre-installed. However if you are using private agents, install Azure
CLI on the computer(s) that run the build and release agent. If an agent is already running on the machine
on which the Azure CLI is installed, restart the agent to ensure all the relevant stage variables are updated.

Task Inputs

azureSubscription (Azure subscription): (Required) Select an Azure Resource Manager subscription for the deployment. This parameter is shown only when the selected task version is 0.*, as Azure CLI task v1.0 supports only Azure Resource Manager (ARM) subscriptions.
scriptType (Script Type): (Required) Type of script: PowerShell/PowerShell Core/Bat/Shell script. Select a bash/pscore script when running on a Linux agent, or a batch/ps/pscore script when running on a Windows agent. A PowerShell Core script can run on cross-platform agents (Linux, macOS, or Windows).
scriptLocation (Script Location): (Required) Path to script: File path or Inline script. Default value: scriptPath
scriptPath (Script Path): (Required) Fully qualified path of the script (.ps1, .bat, or .cmd when using a Windows-based agent; .ps1 or .sh when using a Linux-based agent), or a path relative to the default working directory.
inlineScript (Inline Script): (Required) You can write your scripts inline here. When using a Windows agent, use PowerShell, PowerShell Core, or batch scripting; use PowerShell Core or shell scripting when using Linux-based agents. For batch files, use the prefix "call" before every azure command. You can also pass predefined and custom variables to this script using arguments. Example for PowerShell/PowerShell Core/shell: az --version and az account show. Example for batch: call az --version and call az account show
arguments (Script Arguments): (Optional) Arguments passed to the script.
powerShellErrorActionPreference (ErrorActionPreference): (Optional) Prepends the line $ErrorActionPreference = 'VALUE' at the top of your PowerShell/PowerShell Core script. Default value: stop
addSpnToEnvironment (Access service principal details in script): (Optional) Adds the service principal ID and key of the Azure endpoint you chose to the script's execution environment. You can use the variables $env:servicePrincipalId, $env:servicePrincipalKey and $env:tenantId in your script. This is honored only when the Azure endpoint has the Service Principal authentication scheme. Default value: false
useGlobalConfig (Use global Azure CLI configuration): (Optional) If this is false, this task will use its own separate Azure CLI configuration directory. This can be used to run Azure CLI tasks in parallel releases. Default value: false
workingDirectory (Working Directory): (Optional) Current working directory where the script is run. Empty is the root of the repo (build) or artifacts (release), which is $(System.DefaultWorkingDirectory).
failOnStandardError (Fail on Standard Error): (Optional) If this is true, this task will fail when any errors are written to the StandardError stream. Unselect the checkbox to ignore standard errors and rely on exit codes to determine the status. Default value: false
powerShellIgnoreLASTEXITCODE (Ignore $LASTEXITCODE): (Optional) If this is false, the line if ((Test-Path -LiteralPath variable:\\LASTEXITCODE)) { exit $LASTEXITCODE } is appended to the end of your script. This will cause the last exit code from an external command to be propagated as the exit code of PowerShell. Otherwise the line is not appended to the end of your script. Default value: false
Example
Following is an example of a YAML snippet which lists the version of Azure CLI and gets the details of the
subscription.
- task: AzureCLI@2
displayName: Azure CLI
inputs:
azureSubscription: <Name of the Azure Resource Manager service connection>
scriptType: ps
scriptLocation: inlineScript
inlineScript: |
az --version
az account show
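
The same task can run a script file instead of an inline script; a hedged sketch follows (the script path and argument are placeholders, and the scriptLocation value comes from the input table above):

- task: AzureCLI@2
  displayName: Azure CLI script file
  inputs:
    azureSubscription: <Name of the Azure Resource Manager service connection>
    scriptType: bash
    scriptLocation: scriptPath
    scriptPath: 'scripts/deploy-resources.sh'   # placeholder path in the repo
    arguments: '$(Build.BuildNumber)'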

Related tasks
Azure Resource Group Deployment
Azure Cloud Service Deployment
Azure Web App Deployment

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
Azure Cloud Service Deployment task
4/22/2020 • 2 minutes to read

Azure Pipelines
Use this task to deploy an Azure Cloud Service.

YAML snippet
# Azure Cloud Service deployment
# Deploy an Azure Cloud Service
- task: AzureCloudPowerShellDeployment@1
inputs:
azureClassicSubscription:
#storageAccount: # Required when enableAdvancedStorageOptions == False
serviceName:
serviceLocation:
csPkg:
csCfg:
#slotName: 'Production'
#deploymentLabel: '$(Build.BuildNumber)' # Optional
#appendDateTimeToLabel: false # Optional
#allowUpgrade: true
#simultaneousUpgrade: false # Optional
#forceUpgrade: false # Optional
#verifyRoleInstanceStatus: false # Optional
#diagnosticStorageAccountKeys: # Optional
#newServiceCustomCertificates: # Optional
#newServiceAdditionalArguments: # Optional
#newServiceAffinityGroup: # Optional
#enableAdvancedStorageOptions: false
#aRMConnectedServiceName: # Required when enableAdvancedStorageOptions == True
#aRMStorageAccount: # Required when enableAdvancedStorageOptions == True

Arguments

ConnectedServiceName (Azure subscription (Classic)): (Required) Azure Classic subscription to target for deployment. Argument alias: azureClassicSubscription
EnableAdvancedStorageOptions (Enable Azure Resource Manager storage support): (Required) Select to enable Azure Resource Manager storage support for this task.
StorageAccount (Storage account (Classic)): (Required) The storage account must exist prior to deployment.
ARMConnectedServiceName (Azure subscription (Azure Resource Manager)): (Required) Azure Resource Manager subscription.
ARMStorageAccount (Storage account (Azure Resource Manager)): (Required) Choose a pre-existing Azure Resource Manager storage account.
ServiceName (Service name): (Required) Select or enter an existing cloud service name.
ServiceLocation (Service location): (Required) Select a region for the new service deployment. Possible options are East US, East US 2, Central US, South Central US, West US, North Europe, West Europe, and others.
CsPkg: (Required) Path of CsPkg under the default artifact directory.
CsCfg: (Required) Path of CsCfg under the default artifact directory.
Slot (Environment (Slot)): (Required) Production or Staging. Default value: Production. Argument alias: slotName
DeploymentLabel (Deployment label): (Optional) Specifies the label name for the new deployment. If not specified, a Globally Unique Identifier (GUID) is used. Default value: $(Build.BuildNumber)
AppendDateTimeToLabel (Append current date and time): (Optional) Appends the current date and time to the deployment label. Default value: false
AllowUpgrade (Allow upgrade): (Required) When selected, allows an upgrade to the Microsoft Azure deployment. Default value: true
SimultaneousUpgrade (Simultaneous upgrade): (Optional) Updates all instances at once. Your cloud service will be unavailable during the update. Default value: false
ForceUpgrade (Force upgrade): (Optional) When selected, sets the upgrade to a forced upgrade, which could potentially cause loss of local data. Default value: false
VerifyRoleInstanceStatus (Verify role instance status): When selected, the task will wait until role instances are in the ready state.
DiagnosticStorageAccountKeys (Diagnostic storage account keys): (Optional) Provide storage keys for the diagnostics storage account in Role:Storagekey format. The diagnostics storage account name for each role will be obtained from the diagnostics config file (.wadcfgx). If the .wadcfgx file for a role is not found, diagnostics extensions won't be set for the role. If the storage account name is missing in the .wadcfgx file, the default storage account will be used for storing diagnostics results and the storage key parameters from the deployment task will be ignored. It's recommended to save <storage_account_key> as a secret variable unless there is no sensitive information in the diagnostics result for your stage. For example, WebRole: <WebRole_storage_account_key> WorkerRole: <WorkerRole_storage_account_key>
NewServiceCustomCertificates (Custom certificates to import): (Optional) Provide custom certificates in CertificatePfxBase64:CertificatePassword format. It's recommended to save <certificate_password> as a secret variable. For example, Certificate1: <Certificate1_password> Certificate2: <Certificate2_password>
NewServiceAdditionalArguments (Additional arguments): (Optional) Pass in additional arguments while creating a brand new service. These will be passed on to the New-AzureService cmdlet. For example: -Label 'MyTestService'
NewServiceAffinityGroup (Affinity group): (Optional) While creating a new service, this affinity group will be considered instead of using the service location.
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure File Copy task
11/2/2020 • 9 minutes to read

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015.3
Use this task to copy files to Microsoft Azure storage blobs or virtual machines (VMs).

NOTE
This task is written in PowerShell and thus works only when run on Windows agents. If your pipelines require Linux agents
and need to copy files to an Azure Storage Account, consider running az storage blob commands in the Azure CLI task
as an alternative.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

The task is used to copy application files and other artifacts that are required in order to install the app, such as PowerShell scripts, PowerShell-DSC modules, and more.

NOTE
If you are using Azure File copy task version 3 or below refer to this.

When the target is Azure VMs, the files are first copied to an automatically generated Azure blob container and
then downloaded into the VMs. The container is deleted after the files have been successfully copied to the VMs.
The task uses AzCopy, the command-line utility built for fast copying of data from and into Azure storage accounts. Version 4 of the Azure File Copy task uses AzCopy V10.
To dynamically deploy Azure Resource Groups that contain virtual machines, use the Azure Resource Group
Deployment task. This task has a sample template that can perform the required operations to set up the WinRM
HTTPS protocol on the virtual machines, open the 5986 port in the firewall, and install the test certificate.

NOTE
If you are deploying to Azure Static Websites as a container in blob storage, you must use Version 2 or higher of the task
in order to preserve the $web container name.

The task supports authentication based on Azure Active Directory. Authentication using a service principal and
managed identity are available. For managed identities, only system-wide managed identity is supported.

NOTE
For authorization, you will have to provide access to the security principal. The level of authorization required is described here.
YAML snippet
# Azure file copy
# Copy files to Azure Blob Storage or virtual machines
- task: AzureFileCopy@4
inputs:
sourcePath:
azureSubscription:
destination: # Options: azureBlob, azureVMs
storage:
#containerName: # Required when destination == AzureBlob
#blobPrefix: # Optional
#resourceGroup: # Required when destination == AzureVMs
#resourceFilteringMethod: 'machineNames' # Optional. Options: machineNames, tags
#machineNames: # Optional
#vmsAdminUserName: # Required when destination == AzureVMs
#vmsAdminPassword: # Required when destination == AzureVMs
#targetPath: # Required when destination == AzureVMs
#additionalArgumentsForBlobCopy: # Optional
#additionalArgumentsForVMCopy: # Optional
#enableCopyPrerequisites: false # Optional
#copyFilesInParallel: true # Optional
#cleanTargetBeforeCopy: false # Optional
#skipCACheck: true # Optional
#sasTokenTimeOutInMinutes: # Optional

Arguments

Source: Required. The source of the files to copy. YAML Pipelines and Classic Release support pre-defined system variables like Build.Repository.LocalPath as well. Release variables are supported only in classic releases. The wildcard symbol (*) is supported anywhere in the file path or file name.
Azure Subscription: Required. The name of an Azure Resource Manager service connection configured for the subscription where the target Azure service, virtual machine, or storage account is located. See Azure Resource Manager overview for more details.
Destination Type: Required. The type of target destination for the files. Choose Azure Blob or Azure VMs.
RM Storage Account: Required. The name of an existing storage account within the Azure subscription.
Container Name: Required if you select Azure Blob for the Destination Type parameter. The name of the container to which the files will be copied. If a container with this name does not exist, a new one will be created.
Blob Prefix: Optional if you select Azure Blob for the Destination Type parameter. A prefix for the blob names, which can be used to filter the blobs. For example, using the build number enables easy filtering when downloading all blobs with the same build number.
Resource Group: Required if you select Azure Resource Manager for the Azure Connection Type parameter and Azure VMs for the Destination Type parameter. The name of the Azure Resource Group in which the virtual machines run.
Select Machines By: Depending on how you want to specify the machines in the group when using the Filter Criteria parameter, choose Machine Names or Tags.
Filter Criteria: Optional. A list of machine names or tag names that identifies the machines that the task will target. The filter criteria can be: the name of an Azure Resource Group, an output variable from a previous task, or a comma-delimited list of tag names or machine names. The format when using machine names is a comma-separated list of the machine FQDNs or IP addresses. Specify tag names for a filter as {TagName}:{Value}. Example: Role:DB;OS:Win8.1
Admin Login: Required if you select Azure VMs for the Destination Type parameter. The user name of an account that has administrative permissions for all the target VMs. Formats such as username, domain\username, machine-name\username, and .\username are supported. UPN formats such as [email protected] and built-in system accounts such as NT Authority\System are not supported.
Password: Required if you select Azure VMs for the Destination Type parameter. The password for the account specified as the Admin Login parameter. Use the padlock icon for a variable defined in the Variables tab to protect the value, and insert the variable name here.
Destination Folder: Required if you select Azure VMs for the Destination Type parameter. The folder in the Azure VMs to which the files will be copied. Environment variables such as $env:windir and $env:systemroot are supported. Examples: $env:windir\FabrikamFiber\Web and c:\FabrikamFiber
Additional Arguments: Optional. Any arguments you want to pass to the AzCopy.exe program for use when uploading to the blob and downloading to the VMs. See Transfer data with the AzCopy Command-Line Utility for more details. If you are using a Premium storage account, which supports only Azure page blobs, then pass '--blob-type=PageBlob' as an additional argument. The default arguments are --log-level=INFO (default) and --recursive (only if the container name is not $root).
Enable Copy Prerequisites: Available if you select Azure Resource Manager for the Azure Connection Type parameter and Azure VMs for the Destination Type parameter. Setting this option configures the Windows Remote Management (WinRM) listener over the HTTPS protocol on port 5986, using a self-signed certificate. This configuration is required for performing the copy operation on Azure virtual machines. If the target virtual machines are accessed through a load balancer, ensure an inbound NAT rule is configured to allow access on port 5986. If the target virtual machines are associated with a Network Security Group (NSG), configure an inbound security rule to allow access on port 5986.
Copy in Parallel: Available if you select Azure VMs for the Destination Type parameter. Setting this option causes the process to execute in parallel for the copied files. This can considerably reduce the overall time taken.
Clean Target: Available if you select Azure VMs for the Destination Type parameter. Setting this option causes all of the files in the destination folder to be deleted before the copy process starts.
Test Certificate: Available if you select Azure VMs for the Destination Type parameter. WinRM requires a certificate for the HTTPS transfer when copying files from the intermediate storage blob into the Azure VMs. If you use a self-signed certificate, set this option to prevent the process from validating the certificate with a trusted certificate authority (CA).
SAS Token Expiration Period In Minutes: Optional. Provide the time in minutes after which the SAS token will expire. Valid only when the selected destination is Azure Blob. The default is 4 hours.
Control options: See Control options.
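
For example, a hedged sketch of copying build output to a blob container, using the arguments described above (the storage account and container names are placeholders):

- task: AzureFileCopy@4
  inputs:
    sourcePath: '$(Build.ArtifactStagingDirectory)/drop'
    azureSubscription: '<ARM service connection>'   # placeholder
    destination: 'azureBlob'
    storage: '<storage account name>'               # placeholder
    containerName: 'releases'                       # placeholder
    blobPrefix: '$(Build.BuildNumber)'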

Related tasks
Azure Resource Group Deployment
Azure Cloud Service Deployment
Azure Web App Deployment

FAQ
What are the Azure PowerShell prerequisites for using this task?
The task requires Azure PowerShell to be installed on the machine running the automation agent. The
recommended version is 1.0.2, but the task will work with version 0.9.8 and higher. You can use the Azure
PowerShell Installer v1.0.2 to obtain this.
What are the WinRM prerequisites for this task?
The task uses Windows Remote Management (WinRM) HTTPS protocol to copy the files from the storage blob
container to the Azure VMs. This requires the WinRM HTTPS service to be configured on the VMs, and a suitable
certificate installed.
Configure WinRM after virtual machine creation
If the VMs have been created without opening the WinRM HTTPS ports, follow these steps:
1. Configure an inbound access rule to allow HTTPS on port 5986 of each VM.
2. Disable UAC remote restrictions.
3. Specify the credentials for the task to access the VMs using an administrator-level login in the simple form
username without any domain part.
4. Install a certificate on the machine that runs the automation agent.
5. Set the Test Cer tificate parameter of the task if you are using a self-signed certificate.
What type of service connection should I choose?
For Azure Resource Manager storage accounts and Azure Resource Manager VMs, use an Azure Resource
Manager service connection type. See more details at Automating Azure Resource Group deployment
using a Service Principal.
While using an Azure Resource Manager service connection type, the task automatically filters
appropriate newer Azure Resource Manager storage accounts, and other fields. For example, the Resource
Group or cloud service, and the virtual machines.
How do I create a school or work account for use with this task?
A suitable account can be easily created for use in a service connection:
1. Use the Azure portal to create a new user account in Azure Active Directory.
2. Add the Azure Active Directory user account to the co-administrators group in your Azure subscription.
3. Sign into the Azure portal with this user account and change the password.
4. Use the username and password of this account in the service connection. Deployments will be processed
using this account.
If the task fails, will the copy resume?
Since AzCopy V10 does not support journal files, the task cannot resume the copy. You will have to run the task
again to copy all the files.
Are the log files and plan files cleaned after the copy?
The log and plan files are not deleted by the task. To explicitly clean up the files you can add a CLI step in the
workflow using this command.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure Function App task
11/2/2020 • 7 minutes to read • Edit Online

Azure Pipelines
Use the Azure Function App task to deploy Functions to Azure.

Arguments
azureSubscription (Azure subscription): (Required) Name of the Azure Resource Manager service connection.

appType (App type): (Required) Function App type.

appName (App name): (Required) Name of an existing App Service.

deployToSlotOrASE (Deploy to Slot or App Service Environment): (Optional) Select the option to deploy to an existing deployment slot or Azure App Service Environment. For both targets, the task needs a resource group name. If the deployment target is a slot, by default the deployment is done to the production slot; any other existing slot name can also be provided. If the deployment target is an Azure App Service Environment, specify the resource group name. Default value: false

resourceGroupName (Resource group): (Required if deployToSlotOrASE is true) Name of the resource group.

slotName (Slot): (Required if deployToSlotOrASE is true) Name of the slot. Default value: production

package (Package or folder): (Required) File path to the package, or to a folder containing App Service contents generated by MSBuild, or to a compressed zip or war file. Variables (Build | Release) and wildcards are supported. For example, $(System.DefaultWorkingDirectory)/**/*.zip or $(System.DefaultWorkingDirectory)/**/*.war

runtimeStack (Runtime stack): (Optional) Web App on Linux offers two different options to publish your application: custom image deployment (Web App for Containers) and app deployment with a built-in platform image (Web App on Linux). You will see this parameter only when you select Linux Web App in the app type selection option in the task. A list of supported built-in images is available.

startUpCommand (Startup command): (Optional; relevant if appType == webAppLinux) Startup command to be run post deployment.

customWebConfig (Generate web.config parameters for Python, Node.js, Go and Java apps): (Optional) A standard web.config will be generated and deployed to Azure App Service if the application does not have one. The values in web.config can be edited and vary based on the application framework. For example, for a Node.js application, web.config will have startup file and iis_node module values. This edit feature is only for the generated web.config. Learn more

appSettings (App settings): (Optional) Application settings to be entered using the syntax '-key value'. Values containing spaces should be enclosed in double quotes. Example: -Port 5000 -RequestTimeout 5000 -WEBSITE_TIME_ZONE "Eastern Standard Time"

configurationStrings (Configuration settings): (Optional) Configuration strings to be entered using the syntax '-key value'. Values containing spaces should be enclosed in double quotes. Example: -phpVersion 5.6 -linuxFxVersion: node|6.11

deploymentMethod (Deployment method): (Required) Deployment method for the app. Default value: auto

Following is an example YAML snippet to deploy Azure Functions on Windows.

Example

variables:
  azureSubscription: Contoso
  # To ignore SSL error uncomment the below variable
  # VSTS_ARM_REST_IGNORE_SSL_ERRORS: true

steps:
- task: AzureFunctionApp@1
  displayName: Azure Function App Deploy
  inputs:
    azureSubscription: $(azureSubscription)
    appName: samplefunctionapp
    package: $(System.DefaultWorkingDirectory)/**/*.zip

To deploy Function on Linux, add the appType parameter and set it to appType: functionAppLinux . If not mentioned,
functionApp is taken as the default value.

To explicitly specify the deployment method as Zip Deploy, add the parameter deploymentMethod: zipDeploy. The other supported value for this parameter is runFromPackage. If not specified, auto is used as the default value.
For an end-to-end CI/CD walkthrough, see Build and deploy Java to Azure Functions.
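
For example, a minimal sketch combining both options for a Linux function app (the app name is illustrative):

- task: AzureFunctionApp@1
  displayName: Azure Function App Deploy (Linux, Zip Deploy)
  inputs:
    azureSubscription: $(azureSubscription)
    appType: functionAppLinux
    appName: samplefunctionapp-linux   # hypothetical app name
    package: $(System.DefaultWorkingDirectory)/**/*.zip
    deploymentMethod: zipDeploy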

Deployment methods
Several deployment methods are available in this task. Auto is the default option.
To change the deployment option in designer task, expand Additional Deployment Options and enable Select
deployment method to choose from additional package-based deployment options.
Based on the type of Azure App Service and Azure Pipelines agent, the task chooses a suitable deployment
technology. The different deployment technologies used by the task are:
Kudu REST APIs
Zip Deploy
RunFromPackage
By default the task tries to select the appropriate deployment technology given the input package, app service type
and agent OS.
When a post-deployment script is provided, use Zip Deploy
When the App Service type is Web App on Linux, use Zip Deploy
If a War file is provided, use War Deploy
If a Jar file is provided, use Run From Zip
For all others, use Run From Package (via Zip Deploy)
On non-Windows agent (for any App service type), the task relies on Kudu REST APIs to deploy the Web App.
Kudu REST APIs
Works on Windows as well as Linux automation agent when the target is a Web App on Windows or Web App on
Linux (built-in source) or Function App. The task uses Kudu to copy over files to the Azure App service.
Zip Deploy
Creates a .zip deployment package of the chosen Package or folder and deploys the file contents to the wwwroot folder of the function app named in App name in Azure. This option overwrites all existing contents in the wwwroot folder. For more information, see Zip deployment for Azure Functions.
RunFromPackage
Creates the same deployment package as Zip Deploy. However, instead of deploying files to the wwwroot folder,
the entire package is mounted by the Functions runtime. With this option, files in the wwwroot folder become
read-only. For more information, see Run your Azure Functions from a package file.

Troubleshooting
Error: Could not fetch access token for Azure. Verify if the Service Principal used is valid and not expired.
The task uses the service principal in the service connection to authenticate with Azure. If the service principal has
expired or does not have permissions to the App Service, the task fails with the specified error. Verify validity of the
service principal used and that it is present in the app registration. For more details, see Use Role-Based Access
Control to manage access to your Azure subscription resources. This blog post also contains more information
about using service principal authentication.
SSL error
To use a certificate in App Service, the certificate must be signed by a trusted certificate authority. If your web app
gives you certificate validation errors, you're probably using a self-signed certificate. Set a variable named
VSTS_ARM_REST_IGNORE_SSL_ERRORS to the value true in the build or release pipeline to resolve the error.
A release hangs for long time and then fails
This may be because there is insufficient capacity on your App Service Plan. To resolve this, you can scale up the
App Service instance to increase available CPU, RAM, and disk space or try with a different App Service plan.
5xx Error Codes
If you are seeing a 5xx error, then check the status of your Azure service.
Error: No package found with specified pattern
Check if the package mentioned in the task is published as an artifact in the build or a previous stage and
downloaded in the current job.
Error: Publish using zip deploy option is not supported for msBuild package type
Web packages created using the MSBuild task (with default arguments) have a nested folder structure that can only be deployed correctly by Web Deploy. The publish using zip deploy option cannot be used to deploy those packages. To convert the packaging structure, follow these steps:
1. In the Build Solution task, change the MSBuild Arguments to /p:DeployOnBuild=true /p:DeployDefaultTarget=WebPublish /p:WebPublishMethod=FileSystem /p:DeleteExistingFiles=True /p:publishUrl="$(System.DefaultWorkingDirectory)\WebAppContent"
2. Add an Archive task and change its inputs as follows:
   - Change Root folder or file to archive to $(System.DefaultWorkingDirectory)\WebAppContent
   - Disable the Prepend root folder name to archive paths option

Function app deployment on Windows is successful but the app is not working
This may be because web.config is not present in your app. You can either add a web.config file to your source or
auto-generate one using the Application and Configuration Settings of the task.
Click on the task and go to Generate web.config parameters for Python, Node.js, Go and Java apps.
Click on the more button (...) next to Generate web.config parameters for Python, Node.js, Go and Java apps to edit the parameters.
Select your application type from the drop-down.
Click OK. This will populate the web.config parameters required to generate web.config.

FAQs
How should I configure my service connection?
This task requires an Azure Resource Manager service connection.
How should I configure Web Job Deployment with Azure Application Insights?
When deploying to an App Service with Application Insights configured and you have enabled “Remove additional
files at destination”, then you also need to enable “Exclude files from the App_Data folder” in order to keep the app
insights extension in a safe state. This is required because App Insights continuous web job gets installed into the
App_Data folder.
How should I configure my agent if it is behind a proxy while deploying to App Service?
When your self-hosted agent requires a web proxy, you can inform the agent about the proxy during configuration.
This allows your agent to connect to Azure Pipelines or TFS through the proxy. Learn more about running a self-
hosted agent behind a web proxy

Open Source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure Function App for Container task
11/2/2020 • 3 minutes to read • Edit Online

Azure Pipelines
Use this task to deploy an Azure Function on Linux using a custom image.

Task Inputs
azureSubscription (Azure subscription): (Required) Name of an Azure Resource Manager service connection.

appName (App name): (Required) Name of the Function App for Containers.

deployToSlotOrASE (Deploy to Slot or App Service Environment): (Optional) Set to true to deploy to an existing deployment slot or Azure App Service Environment. For both targets, the task needs a resource group name. For the deployment slot option, the default is to deploy to the production slot, or you can specify any other existing slot name. If the deployment target is an Azure App Service Environment, leave the slot name as production and just specify the resource group name. Default value: false

resourceGroupName (Resource group): (Required if deployToSlotOrASE is true) Name of the resource group containing the Function App for Containers.

slotName (Slot): (Required) Enter or select an existing slot other than the production slot. Default value: production

imageName (Image name): (Required) Image to be used for deployment. Example: myregistry.azurecr.io/nginx:latest

containerCommand (Startup command): (Optional) Startup command to be executed after deployment.

appSettings (App settings): (Optional) Application settings to be entered using the syntax '-key value'. Values containing spaces should be enclosed in double quotes. Example: -Port 5000 -RequestTimeout 5000 -WEBSITE_TIME_ZONE "Eastern Standard Time"

configurationStrings (Configuration settings): (Optional) Configuration strings to be entered using the syntax '-key value'. Values containing spaces should be enclosed in double quotes. Example: -phpVersion 5.6 -linuxFxVersion: node|6.11

Example
This example deploys Azure Functions on Linux using containers:

variables:
  imageName: contoso.azurecr.io/azurefunctions-containers:$(build.buildId)
  azureSubscription: Contoso
  # To ignore SSL error uncomment the following variable
  # VSTS_ARM_REST_IGNORE_SSL_ERRORS: true

steps:
- task: AzureFunctionAppContainer@1
  displayName: Azure Function App on Container deploy
  inputs:
    azureSubscription: $(azureSubscription)
    appName: functionappcontainers
    imageName: $(imageName)
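
If you need application settings or want to target a non-production slot, a sketch using the documented inputs (the resource group and slot names are hypothetical) might look like:

- task: AzureFunctionAppContainer@1
  displayName: Azure Function App on Container deploy to slot
  inputs:
    azureSubscription: $(azureSubscription)
    appName: functionappcontainers
    imageName: $(imageName)
    deployToSlotOrASE: true
    resourceGroupName: my-resource-group   # hypothetical resource group
    slotName: staging                      # hypothetical existing slot
    appSettings: '-WEBSITE_TIME_ZONE "Eastern Standard Time"'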

Troubleshooting
Error: Could not fetch access token for Azure. Verify if the Service Principal used is valid and not expired.
The task uses the service principal in the service connection to authenticate with Azure. If the service principal has
expired or does not have permissions to the App Service, the task fails with the specified error. Verify validity of the
service principal used and that it is present in the app registration. For more details, see Use Role-Based Access
Control to manage access to your Azure subscription resources. This blog post also contains more information
about using service principal authentication.
SSL error
To use a certificate in App Service, the certificate must be signed by a trusted certificate authority. If your web app
gives you certificate validation errors, you're probably using a self-signed certificate. Set a variable named
VSTS_ARM_REST_IGNORE_SSL_ERRORS to the value true in the build or release pipeline to resolve the error.
A release hangs for long time and then fails
This may be because there is insufficient capacity on your App Service Plan. To resolve this, you can scale up the
App Service instance to increase available CPU, RAM, and disk space or try with a different App Service plan.
5xx Error Codes
If you are seeing a 5xx error, then check the status of your Azure service.

FAQs
How should I configure my service connection?
This task requires an Azure Resource Manager service connection.
How should I configure Web Job Deployment with Azure Application Insights?
When deploying to an App Service with Application Insights configured and you have enabled “Remove additional
files at destination”, then you also need to enable “Exclude files from the App_Data folder” in order to keep the app
insights extension in a safe state. This is required because App Insights continuous web job gets installed into the
App_Data folder.
How should I configure my agent if it is behind a proxy while deploying to App Service?
When your self-hosted agent requires a web proxy, you can inform the agent about the proxy during configuration.
This allows your agent to connect to Azure Pipelines or TFS through the proxy. Learn more about running a self-
hosted agent behind a web proxy

Open Source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure Key Vault task
11/2/2020 • 3 minutes to read • Edit Online

Azure Pipelines
Overview
Use this task to download secrets such as authentication keys, storage account keys, data encryption keys, .PFX
files, and passwords from an Azure Key Vault instance. The task can be used to fetch the latest values of all or a
subset of secrets from the vault, and set them as variables that can be used in subsequent tasks of a pipeline. The
task is Node-based, and works with agents on Linux, macOS, and Windows.

Prerequisites
The task has the following Prerequisites:
An Azure subscription linked to Azure Pipelines or Team Foundation Server using the Azure Resource
Manager service connection.
An Azure Key Vault containing the secrets.
You can create a key vault:
In the Azure portal
By using Azure PowerShell
By using the Azure CLI
Add secrets to a key vault:
By using the PowerShell cmdlet Set-AzureKeyVaultSecret. If the secret does not exist, this cmdlet creates it. If
the secret already exists, this cmdlet creates a new version of that secret.
By using the Azure CLI. To add a secret to a key vault, for example a secret named SQLPassword with the
value Pa$$w0rd , type:

az keyvault secret set --vault-name 'ContosoKeyVault' --name 'SQLPassword' --value 'Pa$$w0rd'

When you want to access secrets:


Ensure the Azure service connection has at least Get and List permissions on the vault. You can set these
permissions in the Azure portal:
Open the Settings blade for the vault, choose Access policies , then Add new .
In the Add access policy blade, choose Select principal and select the service principal for your
client account.
In the Add access policy blade, choose Secret permissions and ensure that Get and List are
checked (ticked).
Choose OK to save the changes.
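
As an alternative to the portal steps above, the same permissions can be granted from the Azure CLI; a minimal sketch, assuming you know the service principal's application ID (shown as a placeholder):

az keyvault set-policy --name 'ContosoKeyVault' --spn <service-principal-app-id> --secret-permissions get list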

YAML snippet
# Azure Key Vault
# Download Azure Key Vault secrets
- task: AzureKeyVault@1
  inputs:
    azureSubscription:
    keyVaultName:
    secretsFilter: '*'
    runAsPreJob: false # Azure DevOps Services only

Arguments
ConnectedServiceName (Azure Subscription): (Required) Select the service connection for the Azure subscription containing the Azure Key Vault instance, or create a new connection. Learn more

KeyVaultName (Key Vault): (Required) Select the name of the Azure Key Vault from which the secrets will be downloaded.

SecretsFilter (Secrets filter): (Required) A comma-separated list of secret names to be downloaded. Default value: *

RunAsPreJob (Make secrets available to whole job): (Required) Run the task before job execution begins. Exposes secrets to all tasks in the job, not just tasks that follow this one. Default value: false


NOTE
Values are retrieved as strings. For example, if there is a secret named connectionString , a task variable
connectionString is created with the latest value of the respective secret fetched from Azure key vault. This variable is
then available in subsequent tasks.
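
For example, a minimal sketch (assuming the vault contains a secret named connectionString; the service connection name is illustrative) of consuming the downloaded value in a later step:

- task: AzureKeyVault@1
  inputs:
    azureSubscription: 'my-arm-service-connection'   # hypothetical service connection
    keyVaultName: 'ContosoKeyVault'
    secretsFilter: 'connectionString'

- script: echo "The connection string is available to this step"
  env:
    CONNECTION_STRING: $(connectionString)   # secret variables must be mapped explicitly into scripts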

If the value fetched from the vault is a certificate (for example, a PFX file), the task variable will contain the contents
of the PFX in string format. You can use the following PowerShell code to retrieve the PFX file from the task
variable:
$kvSecretBytes = [System.Convert]::FromBase64String($(PfxSecret))
$certCollection = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2Collection
$certCollection.Import($kvSecretBytes, $null, [System.Security.Cryptography.X509Certificates.X509KeyStorageFlags]::Exportable)

If the certificate file will be stored locally on the machine, it is good practice to encrypt it with a password:

# Export the certificate collection created above as a password-protected PFX file
$password = 'your password'
$protectedCertificateBytes = $certCollection.Export([System.Security.Cryptography.X509Certificates.X509ContentType]::Pkcs12, $password)
$pfxPath = [Environment]::GetFolderPath("Desktop") + "\MyCert.pfx"
[System.IO.File]::WriteAllBytes($pfxPath, $protectedCertificateBytes)

For more details, see Get started with Azure Key Vault certificates.

Contact Information
Contact [email protected] if you discover issues using the task, to share feedback about the
task, or to suggest new features that you would like to see.

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
Azure Monitor Alerts task
4/22/2020 • 2 minutes to read • Edit Online

Azure Pipelines
Use this task to configure alerts on available metrics for an Azure resource.

YAML snippet
# Azure Monitor alerts
# Configure alerts on available metrics for an Azure resource
- task: AzureMonitorAlerts@0
  inputs:
    azureSubscription:
    resourceGroupName:
    #resourceType: 'Microsoft.Insights/components' # Options: microsoft.Insights/Components, microsoft.Web/Sites, microsoft.Storage/StorageAccounts, microsoft.Compute/VirtualMachines
    resourceName:
    alertRules:
    #notifyServiceOwners: # Optional
    #notifyEmails: # Optional

Arguments
ConnectedServiceName (Azure Subscription): (Required) Select the Azure Resource Manager subscription. Note: To configure a new service connection, select the Azure subscription from the list and click Authorize. If your subscription is not listed or if you want to use an existing Service Principal, you can set up an Azure service connection using the 'Add' or 'Manage' button. Argument alias: azureSubscription

ResourceGroupName (Resource Group): (Required) Select the Azure Resource Group that contains the Azure resource where you want to configure an alert.

ResourceType (Resource Type): (Required) Select the Azure resource type. Options: Microsoft.Insights/components, Microsoft.Web/sites, Microsoft.Storage/storageAccounts, Microsoft.Compute/virtualMachines. Default value: Microsoft.Insights/components

ResourceName (Resource name): (Required) Select the name of the Azure resource where you want to configure an alert.

AlertRules (Alert rules): (Required) List of Azure Monitor alerts configured on the selected Azure resource. To add or modify alerts, click on the [...] button.

NotifyServiceOwners (Subscription owners, contributors and readers): (Optional) Send an email notification to everyone who has access to this resource group.

NotifyEmails (Additional administrator emails): (Optional) Add additional email addresses separated by semicolons (;) if you want to send an email notification to additional people (whether or not you checked the "subscription owners..." box).

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure Database for MySQL Deployment task
5/3/2020 • 2 minutes to read • Edit Online

Azure Pipelines
Use this task to run your scripts and make changes to your Azure Database for MySQL. Note that this is an early preview version.

YAML snippet
# Azure Database for MySQL deployment
# Run your scripts and make changes to your Azure Database for MySQL
- task: AzureMysqlDeployment@1
  inputs:
    azureSubscription:
    serverName:
    #databaseName: # Optional
    sqlUsername:
    sqlPassword:
    #taskNameSelector: 'SqlTaskFile' # Optional. Options: sqlTaskFile, inlineSqlTask
    #sqlFile: # Required when taskNameSelector == SqlTaskFile
    #sqlInline: # Required when taskNameSelector == InlineSqlTask
    #sqlAdditionalArguments: # Optional
    #ipDetectionMethod: 'AutoDetect' # Options: autoDetect, iPAddressRange
    #startIpAddress: # Required when ipDetectionMethod == IPAddressRange
    #endIpAddress: # Required when ipDetectionMethod == IPAddressRange
    #deleteFirewallRule: true # Optional

Arguments
ConnectedServiceName (Azure Subscription): (Required) This is needed to connect to your Azure account. To configure a new service connection, select the Azure subscription from the list and click 'Authorize'. If your subscription is not listed or if you want to use an existing Service Principal, you can set up an Azure service connection using the 'Add' or 'Manage' button. Argument alias: azureSubscription

ServerName (Host Name): (Required) Server name of the Azure Database for MySQL. Example: fabrikam.mysql.database.azure.com. When you connect using MySQL Workbench, this is the same value that is used for 'Hostname' in 'Parameters'.

DatabaseName (Database Name): (Optional) The name of the database, if you already have one, on which the script below needs to be run; otherwise, the script itself can be used to create the database.

SqlUsername (Server Admin Login): (Required) Azure Database for MySQL server supports native MySQL authentication. You can connect and authenticate to a server with the server's admin login. Example: bbo1. When you connect using MySQL Workbench, this is the same value that is used for 'Username' in 'Parameters'.

SqlPassword (Password): (Required) Administrator password for the Azure Database for MySQL. In case you don't recall the password, you can change it from the Azure portal. It can be a variable defined in the pipeline. Example: $(password). You may also mark the variable type as 'secret' to secure it.

TaskNameSelector (Type): (Optional) Select one of the options between Script File and Inline Script.

SqlFile (MySQL Script): (Required) Full path of the script file on the automation agent or on a UNC path accessible to the automation agent, like \\BudgetIT\DeployBuilds\script.sql. Predefined system variables like $(agent.releaseDirectory) can also be used here. A file containing SQL statements can be used here.

SqlInline (Inline MySQL Script): (Required) Enter the MySQL script to execute on the database selected above.

SqlAdditionalArguments (Additional MySQL Arguments): (Optional) Additional options supported by the mysql simple SQL shell. These options will be applied when executing the given file on the Azure Database for MySQL. Example: you can change the default tab-separated output format to HTML or even XML format, or, if you have problems due to insufficient memory for large result sets, use the --quick option.

IpDetectionMethod (Specify Firewall Rules Using): (Required) For successful execution of the task, administrators must be able to access the Azure Database for MySQL server from the IP address of the automation agent. By selecting auto-detect you can automatically add a firewall exception for the range of possible IP addresses of the automation agent, or you can specify the range explicitly.

StartIpAddress (Start IP Address): (Required) The starting IP address of the automation agent machine pool, like 196.21.30.50.

EndIpAddress (End IP Address): (Required) The ending IP address of the automation agent machine pool, like 196.21.30.65.

DeleteFirewallRule (Delete Rule After Task Ends): (Optional) If selected, the exception added for the IP addresses of the automation agent will be removed from the corresponding Azure Database for MySQL.
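
For example, a minimal sketch (server, database, and credential values are hypothetical) that runs an inline script against an existing database:

- task: AzureMysqlDeployment@1
  displayName: Run schema script on Azure Database for MySQL
  inputs:
    azureSubscription: 'my-arm-service-connection'   # hypothetical service connection
    serverName: 'fabrikam.mysql.database.azure.com'
    databaseName: 'fabrikamdb'                       # hypothetical database
    sqlUsername: 'bbo1'
    sqlPassword: '$(mysqlAdminPassword)'             # secret pipeline variable
    taskNameSelector: 'InlineSqlTask'
    sqlInline: 'CREATE TABLE IF NOT EXISTS orders (id INT PRIMARY KEY, created_at DATETIME);'
    ipDetectionMethod: 'AutoDetect'
    deleteFirewallRule: true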

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Security and Compliance Assessment task
11/2/2020 • 2 minutes to read • Edit Online

Azure Policy allows you to assess and enforce resource compliance against defined IT policies. Use this task in a
gate to identify, analyze and evaluate the security risks, and determine the mitigation measures required to reduce
the risks.

Demands
Can be used only as a gate. This task is not supported in a build or release pipeline.

YAML snippet
# Check Azure Policy compliance
# Security and compliance assessment for Azure Policy
- task: AzurePolicyCheckGate@0
  inputs:
    azureSubscription:
    #resourceGroupName: # Optional
    #resources: # Optional

Arguments
Azure subscription: (Required) Select the Azure Resource Manager subscription on which to enforce the policies.

Resource group: Select the Resource Group or specify a variable name.

Resource name: Select the name of the Azure resources for which you want to check policy compliance.
Azure PowerShell task
11/2/2020 • 3 minutes to read • Edit Online

Azure Pipelines
Use this task to run a PowerShell script within an Azure environment. The Azure context is authenticated with the
provided Azure Resource Manager service connection.

YAML snippet
# Azure PowerShell
# Run a PowerShell script within an Azure environment
- task: AzurePowerShell@4
  inputs:
    #azureSubscription: Required. Name of Azure Resource Manager service connection
    #scriptType: 'FilePath' # Optional. Options: filePath, inlineScript
    #scriptPath: # Optional
    #inline: '# You can write your Azure PowerShell scripts inline here. # You can also pass predefined and custom variables to this script using arguments' # Optional
    #scriptArguments: # Optional
    #errorActionPreference: 'stop' # Optional. Options: stop, continue, silentlyContinue
    #failOnStandardError: false # Optional
    #azurePowerShellVersion: 'OtherVersion' # Required. Options: latestVersion, otherVersion
    #preferredAzurePowerShellVersion: # Required when azurePowerShellVersion == OtherVersion

Arguments
ConnectedServiceNameARM (Azure Subscription): (Required) Name of an Azure Resource Manager service connection for authentication. Argument alias: azureSubscription

ScriptType (Script Type): (Optional) Type of the script: filePath or inlineScript. Default value: FilePath

ScriptPath (Script Path): (Optional) Path of the script. Should be a fully qualified path or a path relative to the default working directory.

Inline (Inline Script): (Optional) Enter the script to execute. Default value: "# You can write your Azure PowerShell scripts inline here. # You can also pass predefined and custom variables to this script using arguments"

ScriptArguments (Script Arguments): (Optional) Additional parameters to pass to PowerShell. Can be either ordinal or named parameters. Not applicable for the inline script option.

errorActionPreference (ErrorActionPreference): (Optional) Select the value of the ErrorActionPreference variable for executing the script. Default value: stop

FailOnStandardError (Fail on Standard Error): (Optional) If this is true, the task will fail if any errors are written to the error pipeline, or if any data is written to the Standard Error stream. Default value: false

TargetAzurePs (Azure PowerShell Version): (Required) For Microsoft-hosted agents, the supported Azure PowerShell version. To pick the latest version available on the agent, select Latest installed version. For self-hosted agents, you can specify a preferred version of Azure PowerShell using "Specify version". Default value: OtherVersion. Argument alias: azurePowerShellVersion

CustomTargetAzurePs (preferredAzurePowerShellVersion): (Required when TargetAzurePs == OtherVersion) The preferred Azure PowerShell version needs to be a proper semantic version, for example 1.2.3. Regex such as 2.* or 2.3.* is not supported. Argument alias: preferredAzurePowerShellVersion

Samples
- task: AzurePowerShell@4
  inputs:
    azureSubscription: my-arm-service-connection
    scriptType: filePath
    scriptPath: $(Build.SourcesDirectory)\myscript.ps1
    scriptArguments:
      -Arg1 val1 `
      -Arg2 val2 `
      -Arg3 val3
    azurePowerShellVersion: latestVersion
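
An inline-script variant that pins a specific Az module version on the agent might look like this (the version shown is illustrative and must be one of the versions listed in the table below):

- task: AzurePowerShell@4
  inputs:
    azureSubscription: my-arm-service-connection
    scriptType: inlineScript
    inline: |
      # List the resource groups visible to the service connection
      Get-AzResourceGroup | Select-Object ResourceGroupName, Location
    azurePowerShellVersion: otherVersion
    preferredAzurePowerShellVersion: '3.1.0'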

Troubleshooting
Script worked locally, but failed in the pipeline
This typically occurs when the service connection used in the pipeline has insufficient permissions to run the
script. Locally, the script runs with your credentials and would succeed as you may have the required access.
To resolve this issue, ensure the service principal or authentication credentials have the required permissions. For
more information, see Use Role-Based Access Control to manage access to your Azure subscription resources.
Error: Could not find the modules: '' with Version: ''. If the module was recently installed, retry after restarting
the Azure Pipelines task agent
Azure PowerShell task uses Azure/AzureRM/Az PowerShell Module to interact with Azure Subscription. This issue
occurs when the PowerShell module is not available on the Hosted Agent. Hence, for a particular task version,
Preferred Azure PowerShell version must be specified in the Azure PowerShell version options from the
following available list of versions.

Task version: Available versions of PowerShell modules

2.*: Choose from either of the two lists:
  Azure: 2.1.0, 3.8.0, 4.2.1, 5.1.1
  AzureRM: 2.1.0, 3.8.0, 4.2.1, 5.1.1, 6.7.0

3.*: Choose from either of the two lists:
  Azure: 2.1.0, 3.8.0, 4.2.1, 5.1.1
  AzureRM: 2.1.0, 3.8.0, 4.2.1, 5.1.1, 6.7.0

4.*: Az Module: 1.0.0, 1.6.0, 2.3.2, 2.6.0, 3.1.0, 3.5.0

5.* (preview): Az Module: 1.0.0, 1.6.0, 2.3.2, 2.6.0, 3.1.0

Service Connection Issues


To troubleshoot issues related to service connections, see Service Connection troubleshooting

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure Resource Group Deployment task
11/2/2020 • 8 minutes to read • Edit Online

Azure Pipelines
Use this task to deploy, start, stop, and delete Azure Resource Groups.

YAML snippet
# Azure resource group deployment
# Deploy an Azure Resource Manager (ARM) template to a resource group and manage virtual machines
- task: AzureResourceGroupDeployment@2
  inputs:
    azureSubscription:
    #action: 'Create Or Update Resource Group' # Options: create Or Update Resource Group, select Resource Group, start, stop, stopWithDeallocate, restart, delete, deleteRG
    resourceGroupName:
    #location: # Required when action == Create Or Update Resource Group
    #templateLocation: 'Linked artifact' # Options: linked Artifact, uRL Of The File
    #csmFileLink: # Required when templateLocation == URL Of The File
    #csmParametersFileLink: # Optional
    #csmFile: # Required when TemplateLocation == Linked Artifact
    #csmParametersFile: # Optional
    #overrideParameters: # Optional
    #deploymentMode: 'Incremental' # Options: Incremental, Complete, Validation
    #enableDeploymentPrerequisites: 'None' # Optional. Options: none, configureVMwithWinRM, configureVMWithDGAgent
    #teamServicesConnection: # Required when enableDeploymentPrerequisites == ConfigureVMWithDGAgent
    #teamProject: # Required when enableDeploymentPrerequisites == ConfigureVMWithDGAgent
    #deploymentGroupName: # Required when enableDeploymentPrerequisites == ConfigureVMWithDGAgent
    #copyAzureVMTags: true # Optional
    #runAgentServiceAsUser: # Optional
    #userName: # Required when enableDeploymentPrerequisites == ConfigureVMWithDGAgent && RunAgentServiceAsUser == True
    #password: # Optional
    #outputVariable: # Optional
    #deploymentName: # Optional
    #deploymentOutputs: # Optional
    #addSpnToEnvironment: false # Optional

Arguments
ConnectedServiceName (Azure subscription): (Required) Select the Azure Resource Manager subscription for the deployment. Argument alias: azureSubscription

action (Action): (Required) Action to be performed on the Azure resources or resource group. Default value: Create Or Update Resource Group

resourceGroupName (Resource group): (Required) Provide the name of a resource group.

location (Location): (Required) Location for deploying the resource group. If the resource group already exists in the subscription, then this value will be ignored.

templateLocation (Template location): (Required) Select either Linked artifact or URL of the file. Default value: Linked artifact

csmFileLink (Template link): (Required) Specify the URL of the template file. Example: https://raw.githubusercontent.com/Azure/... To deploy a template stored in a private storage account, retrieve and include the shared access signature (SAS) token in the URL of the template. Example: <blob_storage_url>/template.json?. To upload a template file (or a linked template) to a storage account and generate a SAS token, you could use the Azure file copy task or follow the steps using PowerShell or Azure CLI. To view the template parameters in a grid, click on ... next to the Override template parameters text box. This feature requires that CORS rules are enabled at the source. If templates are in an Azure storage blob, refer to this to enable CORS.

csmParametersFileLink (Template parameters link): (Optional) Specify the URL of the parameters file. Example: https://raw.githubusercontent.com/Azure/... To use a file stored in a private storage account, retrieve and include the shared access signature (SAS) token in the URL of the template. Example: <blob_storage_url>/template.json?. To upload a parameters file to a storage account and generate a SAS token, you could use the Azure file copy task or follow the steps using PowerShell or Azure CLI. To view the template parameters in a grid, click on ... next to the Override template parameters text box. This feature requires that CORS rules are enabled at the source. If templates are in an Azure storage blob, refer to this to enable CORS.

csmFile (Template): (Required) Specify the path or a pattern pointing to the Azure Resource Manager template. For more information about the templates, see https://aka.ms/azuretemplates. To get started immediately, use the template https://aka.ms/sampletemplate.

csmParametersFile (Template parameters): (Optional) Specify the path or a pattern pointing to the parameters file for the Azure Resource Manager template.

overrideParameters (Override template parameters): (Optional) To view the template parameters in a grid, click on ... next to the Override Parameters textbox. This feature requires that CORS rules are enabled at the source. If templates are in an Azure storage blob, refer to this to enable CORS. Or type the template parameters to override in the textbox. Example: -storageName fabrikam -adminUsername $(vmusername) -adminPassword $(password) -azureKeyVaultName $(fabrikamFibre). If the parameter value you're using has multiple words, enclose them in quotes, even if you're passing them using variables. For example, -name "parameter value" -name2 "$(var)". To override object type parameters, use stringified JSON objects. For example, -options ["option1"] -map {"key1": "value1"}.

deploymentMode (Deployment mode): (Required) Incremental mode handles deployments as incremental updates to the resource group. It leaves unchanged resources that exist in the resource group but are not specified in the template. Complete mode deletes resources that are not in your template. Validate mode enables you to find problems with the template before creating actual resources. Default value: Incremental

enableDeploymentPrerequisites (Enable prerequisites): (Optional) These options are applicable only when the resource group contains virtual machines. Choosing the Deployment Group option configures the Deployment Group agent on each of the virtual machines. Selecting the WinRM option configures the Windows Remote Management (WinRM) listener over HTTPS protocol on port 5986, using a self-signed certificate. This configuration is required for performing deployment operations on Azure machines. If the target virtual machines are backed by a load balancer, ensure inbound NAT rules are configured for the target port (5986). Default value: None

deploymentGroupEndpoint (Azure Pipelines service connection): (Required) Specify the service connection to connect to an Azure DevOps organization or collection for agent registration. You can create a service connection using +New, and select Token-based authentication. You need a personal access token (PAT) to set up a service connection. Click Manage to update the service connection details. Argument alias: teamServicesConnection

project (Team project): (Required) Specify the team project which has the Deployment Group defined in it. Argument alias: teamProject

deploymentGroupName (Deployment Group): (Required) Specify the Deployment Group against which the agent(s) will be registered. For more guidance, refer to Deployment Groups.

copyAzureVMTags (Copy Azure VM tags to agents): (Optional) Choose whether the tags configured on the Azure VM need to be copied to the corresponding Deployment Group agent. By default all Azure tags will be copied following the format Key: Value. Example: an Azure tag "Role : Web" would be copied as-is to the agent machine. For more information on how to tag Azure resources, refer to the link.

runAgentServiceAsUser (Run agent service as a user): (Optional) Decide whether to run the agent service as a user other than the default. The default user is NT AUTHORITY\SYSTEM in Windows and root in Linux.

userName (User name): (Required) The username to run the agent service on the virtual machines. For domain users, please enter values as domain\username or [email protected]. For local users, enter just the user name. It is assumed that the same domain user, or a local user with the same name, respectively, is present on all the virtual machines in the resource group.

password (Password): The password for the user to run the agent service on the Windows VMs. It is assumed that the password is the same for the specified user on all the VMs. It can accept a variable defined in build or release pipelines, such as $(passwordVariable). You may mark the variable as secret to secure it. For Linux VMs, a password is not required and will be ignored.

outputVariable (VM details for WinRM): (Optional) Provide a name for the variable for the resource group. The variable can be used as $(variableName) to refer to the resource group in subsequent tasks, such as the PowerShell on Target Machines task for deploying applications. Valid only when the selected action is Create, Update or Select, and required when an existing resource group is selected.

deploymentName (Deployment name): (Optional) Specifies the name of the resource group deployment to create.

deploymentOutputs (Deployment outputs): (Optional) Provide a name for the output variable which will contain the outputs section of the current deployment object in string format. You can use the ConvertFrom-Json PowerShell cmdlet to parse the JSON object and access the individual output values (see the sketch after this table).

addSpnToEnvironment (Access service principal details in override parameters): Adds the service principal ID and key of the Azure endpoint you chose to the script's execution environment. You can use the variables $servicePrincipalId and $servicePrincipalKey in your override parameters, like -key $servicePrincipalKey.
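
For example, a minimal sketch (the output variable name and the template output storageAccountName are hypothetical) of parsing deployment outputs in a later step:

- task: AzureResourceGroupDeployment@2
  inputs:
    azureSubscription: 'my-arm-service-connection'   # hypothetical service connection
    resourceGroupName: 'my-resource-group'
    location: 'West US 2'
    csmFile: 'azuredeploy.json'
    deploymentOutputs: 'armOutputs'                  # hypothetical output variable name

- powershell: |
    # Parse the stringified outputs section and read one output value
    $outputs = '$(armOutputs)' | ConvertFrom-Json
    Write-Host "Storage account: $($outputs.storageAccountName.value)"
  displayName: Read ARM deployment outputs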

Troubleshooting
Error: Internal Server Error
These issues are mostly transient in nature. There are multiple reasons why they could be happening:
One of the Azure services you're trying to deploy to is undergoing maintenance in the region you're deploying to. Keep an eye on https://status.azure.com/ to check for downtimes of Azure services.
The Azure Pipelines service itself is going through maintenance. Keep an eye on https://status.dev.azure.com/ for downtimes.
However, we've seen some instances where this is due to an error in the ARM template, such as the Azure service
you're trying to deploy doesn't support the region you've chosen for the resource.
Error: Timeout
Timeout issues could be coming from two places:
Azure Pipelines Agent
Portal Deployment
You can identify if the timeout is from portal, by checking for the portal deployment link that'll be in the task logs.
If there's no link, this is likely due to Azure Pipelines agent. If there's a link, follow the link to see if there's a timeout
that has happened in the portal deployment.
Azure Pipelines Agent
If the issue is coming from the Azure Pipelines agent, you can increase the timeout by setting the timeoutInMinutes key in the YAML to 0 (a sketch follows). Check out this article for more details:
https://docs.microsoft.com/azure/devops/pipelines/process/phases?tabs=yaml.
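A minimal sketch of where the timeoutInMinutes key goes in the YAML (task and input values are illustrative):

jobs:
- job: DeployResourceGroup
  timeoutInMinutes: 0   # 0 lets the job run up to the maximum the agent type allows
  steps:
  - task: AzureResourceGroupDeployment@2
    inputs:
      azureSubscription: 'my-arm-service-connection'   # hypothetical service connection
      resourceGroupName: 'my-resource-group'
      location: 'West US 2'
      csmFile: 'azuredeploy.json'
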
Portal Deployment
Check out this doc on how to identify if the error came from the Azure portal:
https://docs.microsoft.com/azure/azure-resource-manager/templates/deployment-history?tabs=azure-portal.
In case of portal deployment, try setting "timeoutInMinutes" in the ARM template to "0". If not specified, the value
assumed is 60 minutes. 0 makes sure the deployment will run for as long as it can to succeed.
This could also be happening because of transient issues in the system. Keep an eye on
https://status.dev.azure.com/ to check if there's a downtime in Azure Pipelines service.
Error: Azure Resource Manager (ARM ) template failed validation
This issue happens mostly because of an invalid parameter in the ARM Template, such as an unsupported SKU or
Region. If the validation has failed, please check the error message. It should point you to the resource and
parameter that is invalid.
In addition, refer to this article regarding structure and syntax of ARM Templates:
https://docs.microsoft.com/azure/azure-resource-manager/templates/template-syntax.

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure SQL Database Deployment task
11/2/2020 • 5 minutes to read • Edit Online

Azure Pipelines
Use this task to deploy to Azure SQL DB using a DACPAC or run scripts using SQLCMD.

IMPORTANT
This task is supported only in a Windows environment. If you are trying to use Azure Active Directory (Azure AD) integrated
authentication, you must create a private agent. Azure AD integrated authentication is not supported for hosted agents.

YAML snippet
# Azure SQL Database deployment
# Deploy an Azure SQL Database using DACPAC or run scripts using SQLCMD
- task: SqlAzureDacpacDeployment@1
  inputs:
    #azureConnectionType: 'ConnectedServiceNameARM' # Optional. Options: connectedServiceName, connectedServiceNameARM
    #azureClassicSubscription: # Required when azureConnectionType == ConnectedServiceName
    #azureSubscription: # Required when azureConnectionType == ConnectedServiceNameARM
    #authenticationType: 'server' # Options: server, aadAuthenticationPassword, aadAuthenticationIntegrated, connectionString
    #serverName: # Required when authenticationType == Server || AuthenticationType == AadAuthenticationPassword || AuthenticationType == AadAuthenticationIntegrated
    #databaseName: # Required when authenticationType == Server || AuthenticationType == AadAuthenticationPassword || AuthenticationType == AadAuthenticationIntegrated
    #sqlUsername: # Required when authenticationType == Server
    #sqlPassword: # Required when authenticationType == Server
    #aadSqlUsername: # Required when authenticationType == AadAuthenticationPassword
    #aadSqlPassword: # Required when authenticationType == AadAuthenticationPassword
    #connectionString: # Required when authenticationType == ConnectionString
    #deployType: 'DacpacTask' # Options: dacpacTask, sqlTask, inlineSqlTask
    #deploymentAction: 'Publish' # Required when deployType == DacpacTask. Options: publish, extract, export, import, script, driftReport, deployReport
    #dacpacFile: # Required when deploymentAction == Publish || DeploymentAction == Script || DeploymentAction == DeployReport
    #bacpacFile: # Required when deploymentAction == Import
    #sqlFile: # Required when deployType == SqlTask
    #sqlInline: # Required when deployType == InlineSqlTask
    #publishProfile: # Optional
    #additionalArguments: # Optional
    #sqlAdditionalArguments: # Optional
    #inlineAdditionalArguments: # Optional
    #ipDetectionMethod: 'AutoDetect' # Options: autoDetect, iPAddressRange
    #startIpAddress: # Required when ipDetectionMethod == IPAddressRange
    #endIpAddress: # Required when ipDetectionMethod == IPAddressRange
    #deleteFirewallRule: true # Optional

Arguments
ConnectedServiceNameSelector (Azure Connection Type): (Optional) Argument alias: azureConnectionType. Default value: ConnectedServiceNameARM

ConnectedServiceName (Azure Classic Subscription): (Required) Target Azure Classic subscription for deploying SQL files. Argument alias: azureClassicSubscription

ConnectedServiceNameARM (Azure Subscription): (Required) Target Azure Resource Manager subscription for deploying SQL files. Argument alias: azureSubscription

AuthenticationType (Authentication Type): (Required) Type of database authentication; can be SQL Server Authentication, Active Directory - Integrated, Active Directory - Password, or Connection String. Integrated authentication means that the agent will access the database using its current Active Directory account context. Default value: server

ServerName (Azure SQL Server): (Required except when Authentication Type is Connection String) Azure SQL Server name, like Fabrikam.database.windows.net,1433 or Fabrikam.database.windows.net.

DatabaseName (Database): (Required) Name of the Azure SQL Database where the files will be deployed.

SqlUsername (Login): (Required when Authentication Type is SQL Server Authentication or Active Directory - Password) Specify the Azure SQL Server administrator login or Active Directory user name.

SqlPassword (Password): (Required when Authentication Type is SQL Server Authentication or Active Directory - Password) Password for the Azure SQL Server administrator or Active Directory user. It can accept variables defined in build/release pipelines, such as '$(passwordVariable)'. You may mark the variable type as 'secret' to secure it.

ConnectionString (Connection String): (Required when Authentication Type is Connection String) The connection string, including authentication information, for the Azure SQL Server.

TaskNameSelector (Deploy Type): (Optional) Specify the type of artifact: SQL DACPAC file, SQL Script file, or Inline SQL Script. Argument alias: deployType. Default value: DacpacTask

DeploymentAction (Action): (Required) Choose one of the SQL actions from the list: Publish, Extract, Export, Import, Script, Drift Report, Deploy Report. For more details, refer to the link. Default value: Publish

DacpacFile (DACPAC File): (Required when Deploy Type is SQL DACPAC file) Location of the DACPAC file on the automation agent or on a UNC path accessible to the automation agent, like \\BudgetIT\Web\Deploy\FabrikamDB.dacpac. Predefined system variables like $(agent.releaseDirectory) can also be used here.

BacpacFile (BACPAC File): (Required) Location of the BACPAC file on the automation agent or on a UNC path accessible to the automation agent, like \\BudgetIT\Web\Deploy\FabrikamDB.bacpac. Predefined system variables like $(agent.releaseDirectory) can also be used here.

SqlFile (SQL Script): (Required when Deploy Type is SQL Script file) Location of the SQL script file on the automation agent or on a UNC path accessible to the automation agent, like \\BudgetIT\Web\Deploy\FabrikamDB.sql. Predefined system variables like $(agent.releaseDirectory) can also be used here.

SqlInline (Inline SQL Script): (Required when Deploy Type is Inline SQL Script) Enter the SQL script to execute on the database selected above.

PublishProfile (Publish Profile): (Optional) A publish profile provides fine-grained control over Azure SQL Database creation or upgrades. Specify the path to the publish profile XML file on the agent machine or a UNC share. If the publish profile contains secrets like credentials, upload it to the secure files library where it is securely stored with encryption. Then use the Download secure file task at the start of your pipeline to download it to the agent machine when the pipeline runs, and delete it when the pipeline is complete. Predefined system variables like $(agent.buildDirectory) or $(agent.releaseDirectory) can also be used in this field.

AdditionalArguments (Additional SqlPackage.exe Arguments): (Optional) Additional SqlPackage.exe arguments that will be applied when deploying the Azure SQL Database, in case the DACPAC option is selected, like /p:IgnoreAnsiNulls=True /p:IgnoreComments=True. These arguments will override the settings in the publish profile XML file (if provided).

SqlAdditionalArguments (Additional Invoke-Sqlcmd Arguments): (Optional) Additional Invoke-Sqlcmd arguments that will be applied when executing the given SQL query on the Azure SQL Database, like -ConnectionTimeout 100 -OutputSqlErrors

InlineAdditionalArguments (Additional Invoke-Sqlcmd Arguments): (Optional) Additional Invoke-Sqlcmd arguments that will be applied when executing the given SQL query on the Azure SQL Database, like -ConnectionTimeout 100 -OutputSqlErrors

IpDetectionMethod (Specify Firewall Rules Using): (Required) For the task to run, the IP address of the automation agent has to be added to the 'Allowed IP Addresses' in the Azure SQL Server's firewall. Select auto-detect to automatically add a firewall exception for the range of possible IP addresses of the automation agent, or specify the range explicitly. Default value: AutoDetect

StartIpAddress (Start IP Address): (Required) The starting IP address of the automation agent machine pool, like 196.21.30.50.

EndIpAddress (End IP Address): (Required) The ending IP address of the automation agent machine pool, like 196.21.30.65.

DeleteFirewallRule (Delete Rule After Task Ends): (Optional) If selected, then after the task ends, the IP addresses specified here are deleted from the 'Allowed IP Addresses' list of the Azure SQL Server's firewall.
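
For example, a minimal sketch (server, database, and credential values are hypothetical) that publishes a DACPAC using SQL Server authentication:

- task: SqlAzureDacpacDeployment@1
  displayName: Publish DACPAC to Azure SQL Database
  inputs:
    azureSubscription: 'my-arm-service-connection'   # hypothetical service connection
    authenticationType: 'server'
    serverName: 'fabrikam.database.windows.net'
    databaseName: 'FabrikamDB'
    sqlUsername: 'sqladmin'                          # hypothetical admin login
    sqlPassword: '$(sqlAdminPassword)'               # secret pipeline variable
    deployType: 'DacpacTask'
    deploymentAction: 'Publish'
    dacpacFile: '$(System.DefaultWorkingDirectory)/**/*.dacpac'
    ipDetectionMethod: 'AutoDetect'
    deleteFirewallRule: true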

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure Web App task
11/2/2020 • 8 minutes to read • Edit Online

Azure Pipelines
Use this task to deploy web applications to Azure App service.

Arguments
azureSubscription (Azure subscription): (Required) Name of the Azure Resource Manager service connection.

appType (App type): (Optional) Web App type.

appName (App name): (Required) Name of an existing App Service.

deployToSlotOrASE (Deploy to Slot or App Service Environment): (Optional) Select the option to deploy to an existing deployment slot or Azure App Service Environment. For both targets, the task needs a resource group name. If the deployment target is a slot, by default the deployment is done to the production slot; any other existing slot name can also be provided. If the deployment target is an Azure App Service Environment, specify the resource group name. Default value: false

resourceGroupName (Resource group): (Required if deployToSlotOrASE is true) Name of the resource group.

slotName (Slot): (Required if deployToSlotOrASE is true) Name of the slot. Default value: production

package (Package or folder): (Required) File path to the package, or to a folder containing App Service contents generated by MSBuild, or to a compressed zip or war file. Build variables or release variables and wildcards are supported. For example, $(System.DefaultWorkingDirectory)/**/*.zip or $(System.DefaultWorkingDirectory)/**/*.war

runtimeStack (Runtime stack): (Optional) Web App on Linux offers two different options to publish your application: custom image deployment (Web App for Containers) and app deployment with a built-in platform image (Web App on Linux). You will see this parameter only when you select Linux Web App in the app type selection option in the task. A list of supported built-in images is available.

startUpCommand (Startup command): (Optional; relevant if appType == webAppLinux) Startup command to be run post deployment.

customWebConfig (Generate web.config parameters for Python, Node.js, Go and Java apps): (Optional) A standard web.config will be generated and deployed to Azure App Service if the application does not have one. The values in web.config can be edited and vary based on the application framework. For example, for a Node.js application, web.config will have startup file and iis_node module values. This edit feature is only for the generated web.config. Learn more

appSettings (App settings): (Optional) Application settings to be entered using the syntax '-key value'. Values containing spaces should be enclosed in double quotes. Example: -Port 5000 -RequestTimeout 5000 -WEBSITE_TIME_ZONE "Eastern Standard Time"

configurationStrings (Configuration settings): (Optional) Configuration strings to be entered using the syntax '-key value'. Values containing spaces should be enclosed in double quotes. Example: -phpVersion 5.6 -linuxFxVersion: node|6.11

deploymentMethod (Deployment method): (Required) Deployment method for the app. Acceptable values are auto, zipDeploy, and runFromPackage. Default value: auto

Following is an example YAML snippet to deploy a web application to the Azure Web App service running on Windows.

Example
variables:
azureSubscription: Contoso
# To ignore SSL error uncomment the below variable
# VSTS_ARM_REST_IGNORE_SSL_ERRORS: true

steps:

- task: AzureWebApp@1
displayName: Azure Web App Deploy
inputs:
azureSubscription: $(azureSubscription)
appName: samplewebapp
package: $(System.DefaultWorkingDirectory)/**/*.zip

To deploy a Web App on Linux, add the appType parameter and set it to appType: webAppLinux .
To specify the deployment method as Zip Deploy, add the parameter deploymentMethod: zipDeploy . The other
supported value for this parameter is runFromPackage . If not specified, auto is taken as the default value.
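As a sketch, the same deployment targeting a Linux web app with the Zip Deploy method might look like the following (the samplewebapp-linux app name is a hypothetical placeholder; the service connection variable is reused from the example above):

steps:
- task: AzureWebApp@1
  displayName: Azure Web App Deploy (Linux)
  inputs:
    azureSubscription: $(azureSubscription)
    appType: webAppLinux
    appName: samplewebapp-linux
    package: $(System.DefaultWorkingDirectory)/**/*.zip
    deploymentMethod: zipDeploy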

Deployment methods
Several deployment methods are available in this task. Auto is the default option.
To change the deployment option in the designer task, expand Additional Deployment Options and enable Select
deployment method to choose from additional package-based deployment options.
Based on the type of Azure App Service and Azure Pipelines agent, the task chooses a suitable deployment
technology. The different deployment technologies used by the task are:
Kudu REST APIs
Zip Deploy
RunFromPackage
By default the task tries to select the appropriate deployment technology given the input package, app service type
and agent OS.
When the App Service type is Web App on Linux App, use Zip Deploy
If War file is provided, use War Deploy
If Jar file is provided, use Run From package
For all others, use Run From Zip (via Zip Deploy)
On non-Windows agents (for any App Service type), the task relies on Kudu REST APIs to deploy the web app.
Kudu REST APIs
Works on Windows as well as Linux automation agent when the target is Web App on Windows or Web App on
Linux (built-in source) or Function App. The task uses Kudu to copy files to the Azure App service.
Zip Deploy
Creates a .zip deployment package of the chosen Package or folder and deploys the file contents to the wwwroot
folder of the App Service or function app named above in Azure. This option overwrites all existing contents in the wwwroot
folder. For more information, see Zip deployment for Azure Functions.
RunFromPackage
Creates the same deployment package as Zip Deploy. However, instead of deploying files to the wwwroot folder,
the entire package is mounted by the Functions runtime. With this option, files in the wwwroot folder become
read-only. For more information, see Run your Azure Functions from a package file.

Troubleshooting
Error: Could not fetch access token for Azure. Verify if the Service Principal used is valid and not expired.
The task uses the service principal in the service connection to authenticate with Azure. If the service principal has
expired or does not have permissions to the App Service, the task fails with the specified error. Verify validity of the
service principal used and that it is present in the app registration. For more details, see Use Role-Based Access
Control to manage access to your Azure subscription resources. This blog post also contains more information
about using service principal authentication.
SSL error
To use a certificate in App Service, the certificate must be signed by a trusted certificate authority. If your web app
gives you certificate validation errors, you're probably using a self-signed certificate. Set a variable named
VSTS_ARM_REST_IGNORE_SSL_ERRORS to the value true in the build or release pipeline to resolve the error.
A release hangs for long time and then fails
This may be because there is insufficient capacity on your App Service Plan. To resolve this, you can scale up the
App Service instance to increase available CPU, RAM, and disk space or try with a different App Service plan.
5xx Error Codes
If you are seeing a 5xx error, then check the status of your Azure service.
Error: No package found with specified pattern
Check if the package mentioned in the task is published as an artifact in the build or a previous stage and
downloaded in the current job.
Error: Publish using zip deploy option is not supported for msBuild package type
Web packages created using the MSBuild task (with default arguments) have a nested folder structure that can only be
deployed correctly by Web Deploy. The zip deploy publish option cannot be used to deploy those packages. To
convert the packaging structure, follow the steps below (a YAML sketch follows the steps).
In Build Solution task, change the MSBuild Arguments to /p:DeployOnBuild=true
/p:DeployDefaultTarget=WebPublish /p:WebPublishMethod=FileSystem /p:DeleteExistingFiles=True
/p:publishUrl="$(System.DefaultWorkingDirectory)\WebAppContent"
Add Archive Task and change the inputs as follows:
Change Root folder or file to archive to $(System.DefaultWorkingDirectory)\WebAppContent

Disable Prepend root folder name to archive paths option
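These two steps can be sketched in YAML roughly as follows (the WebAppContent folder and the archive file name are hypothetical, and the exact MSBuild arguments should match your solution):

- task: VSBuild@1
  displayName: Build solution with file-system publish
  inputs:
    solution: '**/*.sln'
    msbuildArgs: '/p:DeployOnBuild=true /p:DeployDefaultTarget=WebPublish /p:WebPublishMethod=FileSystem /p:DeleteExistingFiles=True /p:publishUrl="$(System.DefaultWorkingDirectory)\WebAppContent"'
- task: ArchiveFiles@2
  displayName: Archive published content
  inputs:
    rootFolderOrFile: '$(System.DefaultWorkingDirectory)\WebAppContent'
    includeRootFolder: false    # equivalent to disabling "Prepend root folder name to archive paths"
    archiveFile: '$(Build.ArtifactStagingDirectory)/WebApp.zip'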

Web app deployment on Windows is successful but the app is not working
This may be because web.config is not present in your app. You can either add a web.config file to your source or
auto-generate one using the Application and Configuration Settings of the task.
Click on the task and go to Generate web.config parameters for Python, Node.js, Go and Java apps, then click the
more button next to that field to edit the parameters.

Select your application type from the drop down.


Click on OK. This will populate web.config parameters required to generate web.config.
Web app deployment on App Service Environment (ASE) is not working
Ensure that the Azure DevOps build agent is on the same VNET (subnet can be different) as the Internal Load
Balancer (ILB) of ASE. This will enable the agent to pull code from Azure DevOps and deploy to ASE.
If you are using Azure DevOps, the agent needn't be accessible from the internet; it only needs outbound access to
connect to the Azure DevOps service.
If you are using TFS/Azure DevOps server deployed in a Virtual Network, the agent can be completely isolated.
Build agent must be configured with the DNS configuration of the Web App it needs to deploy to. Since the
private resources in the Virtual Network don't have entries in Azure DNS, this needs to be added to the hosts
file on the agent machine.
If a self-signed certificate is used for the ASE configuration, the "-allowUntrusted" option needs to be set in the
deploy task for MSDeploy. It is also recommended to set the variable VSTS_ARM_REST_IGNORE_SSL_ERRORS
to true. If a certificate from a certificate authority is used for the ASE configuration, this should not be necessary.

FAQs
How should I configure my service connection?
This task requires an Azure Resource Manager service connection.
How should I configure Web Job Deployment with Azure Application Insights?
When deploying to an App Service with Application Insights configured and you have enabled “Remove additional
files at destination”, then you also need to enable “Exclude files from the App_Data folder” in order to keep the app
insights extension in a safe state. This is required because App Insights continuous web job gets installed into the
App_Data folder.
How should I configure my agent if it is behind a proxy while deploying to App Service?
When your self-hosted agent requires a web proxy, you can inform the agent about the proxy during
configuration. This allows your agent to connect to Azure Pipelines or TFS through the proxy. Learn more about
running a self-hosted agent behind a web proxy

Open Source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure virtual machine scale set Deployment task
4/10/2020 • 2 minutes to read • Edit Online

Azure Pipelines
Use this task to deploy a virtual machine scale set image.

YAML snippet
# Azure VM scale set deployment
# Deploy a virtual machine scale set image
- task: AzureVmssDeployment@0
inputs:
azureSubscription:
#action: 'Update image' # Options: update Image, configure Application Startup
vmssName:
vmssOsType: # Options: windows, linux
imageUrl:
#customScriptsDirectory: # Optional
#customScript: # Optional
#customScriptArguments: # Optional
#customScriptsStorageAccount: # Optional
#skipArchivingCustomScripts: # Optional

Arguments
ARGUMENT | DESCRIPTION

Azure subscription: (Required) Select the Azure Resource Manager subscription for the scale set.

Action: (Required) Choose between updating a virtual machine scale set by using a VHD image and/or by running deployment/install scripts using the Custom Script VM extension. The VHD image approach is better for scaling quickly and doing rollback. The extension approach is useful for post-deployment configuration, software installation, or any other configuration/management task. You can use a VHD image to update a virtual machine scale set only when it was created by using a custom image; the update will fail if the virtual machine scale set was created by using a platform/gallery image available in Azure. The Custom Script VM extension approach can be used for a virtual machine scale set created by using either a custom image or a platform/gallery image.

Virtual machine scale set name: (Required) Name of the virtual machine scale set that you want to update by using either a VHD image or the Custom Script VM extension.

OS type: (Required) Select the operating system type of the virtual machine scale set.

Image url: (Required) Specify the URL of the VHD image. If it is an Azure storage blob URL, the storage account location should be the same as the scale set location.

Custom script directory: (Optional) Path to the directory containing the custom script(s) that will be run by using the Custom Script VM extension. The extension approach is useful for post-deployment configuration, application/software installation, or any other application configuration/management task. For example, the script can set a machine-level stage variable which the application uses, like a database connection string.

Command: (Optional) The script that will be run by using the Custom Script VM extension. This script can invoke other scripts in the directory. The script will be invoked with the arguments passed below. This script, in conjunction with such arguments, can be used to execute commands. For example:
1. Update-DatabaseConnectionStrings.ps1 -clusterType dev -user $(dbUser) -password $(dbUserPwd) will update the connection string in web.config of the web application.
2. install-secrets.sh --key-vault-type prod -key serviceprincipalkey will create an encrypted file containing the service principal key.

Arguments: (Optional) The custom script will be invoked with the arguments passed. Build/Release variables can be used, which makes it easy to use secrets.

Azure storage account where custom scripts will be uploaded: (Optional) The Custom Script Extension downloads and executes the scripts provided by you on each virtual machine in the virtual machine scale set. These scripts will be stored in the storage account specified here. Specify a pre-existing ARM storage account.

Skip Archiving custom scripts: (Optional) By default, this task creates a compressed archive of the directory containing the custom scripts. This improves performance and reliability while uploading to Azure storage. If not selected, archiving will not be done and all files will be uploaded individually.
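As a sketch, an image update using the arguments above might look like the following YAML (the contosoVmss scale set, the Contoso service connection, and the vhdImageUrl variable holding the VHD blob URL are hypothetical placeholders):

- task: AzureVmssDeployment@0
  displayName: Update VM scale set image
  inputs:
    azureSubscription: Contoso
    action: 'Update image'
    vmssName: contosoVmss
    vmssOsType: windows
    imageUrl: $(vhdImageUrl)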

CONTROL OPTIONS

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure Web App for Container task
11/2/2020 • 3 minutes to read • Edit Online

Azure Pipelines
Use this task to deploy Web Apps, Azure Functions, and WebJobs to Azure App Services using a custom Docker
image.

Task Inputs
PARAMETERS | DESCRIPTION

azureSubscription (Azure subscription): (Required) Name of the Azure Resource Manager service connection.

appName (App name): (Required) Name of the Web App for Containers.

deployToSlotOrASE (Deploy to Slot or App Service Environment): (Optional) Set to true to deploy to an existing deployment slot or Azure App Service Environment. For both targets, the task needs a Resource Group name. For the deployment slot option, the default is to deploy to the production slot, or you can specify any other existing slot name. If the deployment target is an Azure App Service Environment, leave the slot name as production and just specify the Resource Group name. Default value: false

resourceGroupName (Resource group): (Required if deployToSlotOrASE is true) Name of the Resource Group containing the Web App for Containers.

slotName (Slot): (Required) Enter or select an existing slot other than the production slot. Default value: production

imageName (Image name): (Required) Image to be used for deployment. Example: myregistry.azurecr.io/nginx:latest

containerCommand (Startup command): (Optional) Startup command to be executed after deployment.

appSettings (App settings): (Optional) Application settings to be entered using the syntax '-key value'. Values containing spaces must be enclosed in double quotes. Example: -Port 5000 -RequestTimeout 5000 -WEBSITE_TIME_ZONE "Eastern Standard Time"

configurationStrings (Configuration settings): (Optional) Configuration strings to be entered using the syntax '-key value'. Values containing spaces must be enclosed in double quotes. Example: -phpVersion 5.6 -linuxFxVersion: node|6.11
Example
This example deploys a Web App on Linux using containers:

variables:
imageName: contoso.azurecr.io/aspnetcore:$(build.buildId)
azureSubscription: Contoso
# To ignore SSL error uncomment the following variable
# VSTS_ARM_REST_IGNORE_SSL_ERRORS: true

steps:
- task: AzureWebAppContainer@1
displayName: Azure Web App on Container Deploy
inputs:
appName: webappforcontainers
azureSubscription: $(azureSubscription)
imageName: $(imageName)
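As a variation, the following sketch deploys the same image to a deployment slot and passes an app setting (the staging slot, the myContainerRG resource group, and the WEBSITES_PORT value are hypothetical placeholders):

- task: AzureWebAppContainer@1
  displayName: Azure Web App on Container Deploy (staging slot)
  inputs:
    appName: webappforcontainers
    azureSubscription: $(azureSubscription)
    imageName: $(imageName)
    deployToSlotOrASE: true
    resourceGroupName: myContainerRG
    slotName: staging
    appSettings: -WEBSITES_PORT 8080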

Troubleshooting
Error: Could not fetch access token for Azure. Verify if the Service Principal used is valid and not expired.
The task uses the service principal in the service connection to authenticate with Azure. If the service principal has
expired or does not have permissions to the App Service, the task fails with the specified error. Verify validity of the
service principal used and that it is present in the app registration. For more details, see Use Role-Based Access
Control to manage access to your Azure subscription resources. This blog post also contains more information
about using service principal authentication.
SSL error
To use a certificate in App Service, the certificate must be signed by a trusted certificate authority. If your web app
gives you certificate validation errors, you're probably using a self-signed certificate. Set a variable named
VSTS_ARM_REST_IGNORE_SSL_ERRORS to the value true in the build or release pipeline to resolve the error.
A release hangs for long time and then fails
This may be because there is insufficient capacity on your App Service Plan. To resolve this, you can scale up the
App Service instance to increase available CPU, RAM, and disk space or try with a different App Service plan.
5xx Error Codes
If you are seeing a 5xx error, then check the status of your Azure service.

FAQs
How should I configure my service connection?
This task requires an Azure Resource Manager service connection.
How should I configure Web Job Deployment with Azure Application Insights?
When deploying to an App Service with Application Insights configured and you have enabled “Remove additional
files at destination”, then you also need to enable “Exclude files from the App_Data folder” in order to keep the app
insights extension in a safe state. This is required because App Insights continuous web job gets installed into the
App_Data folder.
How should I configure my agent if it is behind a proxy while deploying to App Service?
When your self-hosted agent requires a web proxy, you can inform the agent about the proxy during configuration.
This allows your agent to connect to Azure Pipelines or TFS through the proxy. Learn more about running a self-
hosted agent behind a web proxy
Open Source
This task is open source on GitHub. Feedback and contributions are welcome.
Build Machine Image task
4/10/2020 • 3 minutes to read • Edit Online

Azure Pipelines
Use this task to build a machine image using Packer. This image can be used for Azure Virtual machine scale set
deployment.

YAML snippet
# Build machine image
# Build a machine image using Packer, which may be used for Azure Virtual machine scale set deployment
- task: PackerBuild@1
inputs:
#templateType: 'builtin' # Options: builtin, custom
#customTemplateLocation: # Required when templateType == Custom
#customTemplateParameters: '{}' # Optional
connectedServiceName:
#isManagedImage: true
#managedImageName: # Required when isManagedImage == True
location:
storageAccountName:
azureResourceGroup:
#baseImageSource: 'default' # Options: default, customVhd
#baseImage: 'MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:windows' # Required when
baseImageSource == Default# Options: microsoftWindowsServer:WindowsServer:2012-R2-Datacenter:Windows,
microsoftWindowsServer:WindowsServer:2016-Datacenter:Windows, microsoftWindowsServer:WindowsServer:2012-
Datacenter:Windows, microsoftWindowsServer:WindowsServer:2008-R2-SP1:Windows, canonical:UbuntuServer:14.04.4-
LTS:Linux, canonical:UbuntuServer:16.04-LTS:Linux, redHat:RHEL:7.2:Linux, redHat:RHEL:6.8:Linux,
openLogic:CentOS:7.2:Linux, openLogic:CentOS:6.8:Linux, credativ:Debian:8:Linux, credativ:Debian:7:Linux,
sUSE:OpenSUSE-Leap:42.2:Linux, sUSE:SLES:12-SP2:Linux, sUSE:SLES:11-SP4:Linux
#customImageUrl: # Required when baseImageSource == CustomVhd
#customImageOSType: 'windows' # Required when baseImageSource == CustomVhd# Options: windows, linux
packagePath:
deployScriptPath:
#deployScriptArguments: # Optional
#additionalBuilderParameters: '{vm_size:Standard_D3_v2}' # Optional
#skipTempFileCleanupDuringVMDeprovision: true # Optional
#packerVersion: # Optional
#imageUri: # Optional
#imageId: # Optional

Arguments
ARGUMENT | DESCRIPTION

Packer template: (Required) Select whether you want the task to auto-generate a Packer template or use a custom template provided by you.

Packer template location: (Required) Path to a custom user-provided template.

Template parameters: (Optional) Specify parameters which will be passed to Packer for building the custom template. This should map to the "variables" section in your custom template. For example, if the template has a variable named "drop-location", then add a parameter here with the name "drop-location" and a value which you want to use. You can link the value to a release variable as well. To view/edit the additional parameters in a grid, click on "…" next to the text box.

Azure subscription: (Required) Select the Azure Resource Manager subscription for baking and storing the machine image.

Storage location: (Required) Location for storing the built machine image. This location will also be used to create a temporary VM for the purpose of building the image.

Storage account: (Required) Storage account for storing the built machine image. This storage account must be pre-existing in the location selected.

Resource group: (Required) Azure resource group that contains the selected storage account.

Base image source: (Required) Select the source of the base image. You can either choose from a curated gallery of OS images or provide the URL of your custom image.

Base image: (Required) Choose from the curated list of OS images. This will be used for installing prerequisite(s) and application(s) before capturing the machine image.

Base image URL: (Required) Specify the URL of the base image. This will be used for installing prerequisite(s) and application(s) before capturing the machine image.

Base image OS: (Required) Operating system type of the base image (windows or linux).

Deployment Package: (Required) Specify the path of the deployment package directory relative to $(System.DefaultWorkingDirectory). Supports minimatch patterns. Example path: FrontendWebApp/**/GalleryApp

Deployment script: (Required) Specify the relative path to the PowerShell script (for Windows) or shell script (for Linux) which deploys the package. This script should be contained within the package path selected above. Supports minimatch patterns. Example path: deploy/**/scripts/windows/deploy.ps1

Deployment script arguments: (Optional) Specify the arguments to be passed to the deployment script.

Additional Builder parameters: (Optional) In auto-generated Packer template mode, the task creates a Packer template with an Azure builder. This builder is used to generate a machine image. You can add keys to the Azure builder to customize the generated Packer template, for example setting ssh_tty=true in case you are using a CentOS base image and you need a tty to run sudo. To view/edit the additional parameters in a grid, click on "…" next to the text box.

Skip temporary file cleanup during deprovision: (Optional) During deprovisioning of the VM, skip clean-up of temporary files uploaded to the VM. Refer here.

Image URL: (Optional) Provide a name for the output variable which will store the generated machine image URL.
CONTROL OPTIONS

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Chef task
4/10/2020 • 2 minutes to read • Edit Online

Azure Pipelines
Use this task to deploy to Chef environments by editing environment attributes.

YAML snippet
# Chef
# Deploy to Chef environments by editing environment attributes
- task: Chef@1
inputs:
connectedServiceName:
environment:
attributes:
#chefWaitTime: '30'

Arguments
ARGUMENT | DESCRIPTION

Chef Connection: (Required) Name of the Chef subscription.

Environment: (Required) Name of the Chef environment to be used for deployment. The attributes of that environment will be edited.

Environment Attributes: (Required) Specify the value of the leaf node attribute(s) to be updated. Example: { "default_attributes.connectionString" : "$(connectionString)", "override_attributes.buildLocation" : "https://sample.blob.core.windows.net/build" }. The task fails if the leaf node does not exist.

Wait Time: (Required) The amount of time (in minutes) to wait for this task to complete. Default value: 30 minutes
CONTROL OPTIONS

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Chef Knife task
4/10/2020 • 2 minutes to read • Edit Online

Azure Pipelines
Use this task to run scripts with Knife commands on your Chef workstation.

YAML snippet
# Chef Knife
# Run scripts with Knife commands on your Chef workstation
- task: ChefKnife@1
inputs:
connectedServiceName:
scriptPath:
#scriptArguments: # Optional

Arguments
ARGUMENT | DESCRIPTION

Chef Subscription: (Required) Chef subscription to configure before running knife commands.

Script Path: (Required) Path of the script. Should be a fully qualified path or relative to the default working directory.

Script Arguments: (Optional) Additional parameters to pass to the script. Can be either ordinal or named parameters.

CONTROL OPTIONS

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Copy Files Over SSH task
11/2/2020 • 3 minutes to read • Edit Online

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use this task to copy files from a source folder to a target folder on a remote machine over SSH.
This task allows you to connect to a remote machine using SSH and copy files matching a set of minimatch
patterns from specified source folder to target folder on the remote machine. Supported protocols for file transfer
are SFTP and SCP via SFTP. In addition to Linux, macOS is partially supported (see FAQ).

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Prerequisites
The task supports use of an SSH key pair to connect to the remote machine(s).
The public key must be pre-installed or copied to the remote machine(s).

YAML snippet
# Copy files over SSH
# Copy files or build artifacts to a remote machine over SSH
- task: CopyFilesOverSSH@0
inputs:
sshEndpoint:
#sourceFolder: # Optional
#contents: '**'
#targetFolder: # Optional
#cleanTargetFolder: false # Optional
#overwrite: true # Optional
#failOnEmptySource: false # Optional
#flattenFolders: false # Optional

Arguments
ARGUMENT | DESCRIPTION

SSH endpoint: The name of an SSH service connection containing connection details for the remote machine.
- The hostname or IP address of the remote machine, the port number, and the user name are required to create an SSH service connection.
- The private key and the passphrase must be specified for authentication.

Source folder: The source folder for the files to copy to the remote machine. If omitted, the root of the repository is used. Names containing wildcards such as *.zip are not supported. Use variables if files are not in the repository. Example: $(Agent.BuildDirectory)

Contents: File paths to include as part of the copy. Supports multiple lines of minimatch patterns. Default is ** which includes all files (including sub folders) under the source folder.
- Example: **/*.jar \n **/*.war includes all jar and war files (including sub folders) under the source folder.
- Example: ** \n !**/*.xml includes all files (including sub folders) under the source folder but excludes xml files.

Target folder: Target folder on the remote machine to where files will be copied. Example: /home/user/MySite . Preface with a tilde (~) to specify the user's home directory.

Advanced - Clean target folder: If this option is selected, all existing files in the target folder will be deleted before copying.

Advanced - Overwrite: If this option is selected (the default), existing files in the target folder will be replaced.

Advanced - Flatten folders: If this option is selected, the folder structure is not preserved and all the files will be copied into the specified target folder on the remote machine.

Control options: See Control options
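A filled-in sketch of the snippet above might look like this (the mySshEndpoint service connection and the target folder are hypothetical placeholders):

- task: CopyFilesOverSSH@0
  displayName: Copy site over SSH
  inputs:
    sshEndpoint: mySshEndpoint
    sourceFolder: $(Build.ArtifactStagingDirectory)
    contents: '**'
    targetFolder: /home/user/MySite
    cleanTargetFolder: false
    overwrite: true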

Supported algorithms
Key pair algorithms
RSA
DSA
Encryption algorithms
aes256-cbc
aes192-cbc
aes128-cbc
blowfish-cbc
3des-cbc
arcfour256
arcfour128
cast128-cbc
arcfour
For OpenSSL v1.0.1 and higher (on agent):
aes256-ctr
aes192-ctr
aes128-ctr
For OpenSSL v1.0.1 and higher, NodeJS v0.11.12 and higher (on agent):
aes128-gcm
aes128-gcm@openssh.com
aes256-gcm
aes256-gcm@openssh.com

See also
Install SSH Key task
SSH task
Blog post SSH build task

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
What key formats are supported for the SSH tasks?
The Azure Pipelines SSH tasks use the Node.js ssh2 package for SSH connections. Ensure that you are using the
latest version of the SSH tasks. Older versions may not support the OpenSSH key format.
If you run into an "Unsupported key format" error, then you may need to add the -m PEM flag to your ssh-keygen
command so that the key is in a supported format.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
Is this task supported for target machines running operating systems other than Linux?
This task is intended for target machines running Linux.
For copying files to a macOS machine, this task may be used, but authenticating with a password is not
supported.
For copying files to a Windows machine, consider using Windows Machine File Copy.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Docker task
11/7/2020 • 4 minutes to read • Edit Online

Use this task to build and push Docker images to any container registry using a Docker registry service connection.

Overview
The following are the key benefits of using the Docker task compared to directly using the docker client binary in a script:
Integration with Docker registry service connection - The task makes it easy to use a Docker
registry service connection for connecting to any container registry. Once logged in, the user can author
follow-up tasks to execute any tasks/scripts by leveraging the login already done by the Docker task. For
example, you can use the Docker task to sign in to any Azure Container Registry and then use a
subsequent task/script to build and push an image to this registry.
Metadata added as labels - The task adds traceability-related metadata to the image in the form of the
following labels -
com.azure.dev.image.build.buildnumber
com.azure.dev.image.build.builduri
com.azure.dev.image.build.definitionname
com.azure.dev.image.build.repository.name
com.azure.dev.image.build.repository.uri
com.azure.dev.image.build.sourcebranchname
com.azure.dev.image.build.sourceversion
com.azure.dev.image.release.definitionname
com.azure.dev.image.release.releaseid
com.azure.dev.image.release.releaseweburl
com.azure.dev.image.system.teamfoundationcollectionuri
com.azure.dev.image.system.teamproject

Task Inputs
PARAMETERS | DESCRIPTION

command (Command): (Required) Possible values: buildAndPush , build , push , login , logout . Added in version 2.173.0: start , stop . Default value: buildAndPush

containerRegistry (Container registry): (Optional) Name of the Docker registry service connection.

repository (Repository): (Optional) Name of the repository within the container registry corresponding to the Docker registry service connection specified as input for containerRegistry.

container (Container): (Required for the start and stop commands) The container resource to start or stop.

tags (Tags): (Optional) Multiline input where each line contains a tag to be used in the build , push , or buildAndPush commands. Default value: $(Build.BuildId)

Dockerfile (Dockerfile): (Optional) Path to the Dockerfile. The task will use the first Dockerfile it finds to build the image. Default value: **/Dockerfile

buildContext (Build context): (Optional) Path to the build context. Default value: **

arguments (Arguments): (Optional) Additional arguments to be passed on to the docker client. Be aware that if you use the value buildAndPush for the command parameter, then the arguments property will be ignored.

addPipelineData (Add Pipeline Data): (Optional) Adds the above-mentioned metadata as labels to the image. Possible values: true , false . Default value: true

Login
The following YAML snippet showcases container registry login using a Docker registry service connection:

- task: Docker@2
displayName: Login to ACR
inputs:
command: login
containerRegistry: dockerRegistryServiceConnection1

Build and Push


A convenience command called buildAndPush allows for build and push of images to container registry in a
single command. The following YAML snippet is an example of building and pushing multiple tags of an image to
multiple registries -
steps:
- task: Docker@2
displayName: Login to ACR
inputs:
command: login
containerRegistry: dockerRegistryServiceConnection1
- task: Docker@2
displayName: Login to Docker Hub
inputs:
command: login
containerRegistry: dockerRegistryServiceConnection2
- task: Docker@2
displayName: Build and Push
inputs:
command: buildAndPush
repository: contosoRepository
tags: |
tag1
tag2

In the above snippet, the images contosoRepository:tag1 and contosoRepository:tag2 are built and pushed to
the container registries corresponding to dockerRegistryServiceConnection1 and
dockerRegistryServiceConnection2 .

If one wants to build and push to a specific authenticated container registry instead of building and pushing to all
authenticated container registries at once, the containerRegistry input can be explicitly specified along with
command: buildAndPush as shown below -

steps:
- task: Docker@2
displayName: Build and Push
inputs:
command: buildAndPush
containerRegistry: dockerRegistryServiceConnection1
repository: contosoRepository
tags: |
tag1
tag2

Logout
The following YAML snippet showcases container registry logout using a Docker registry service connection:

- task: Docker@2
displayName: Logout of ACR
inputs:
command: logout
containerRegistry: dockerRegistryServiceConnection1

Start/stop
This task can also be used to control job and service containers. This usage is uncommon, but occasionally used
for unique circumstances.
resources:
containers:
- container: builder
image: ubuntu:18.04
steps:
- script: echo "I can run inside the container (it starts by default)"
target:
container: builder
- task: Docker@2
inputs:
command: stop
container: builder
# any task beyond this point would not be able to target the builder container
# because it's been stopped

Other commands and arguments


The command and arguments inputs can be used to pass additional arguments for build or push commands
using the docker client binary, as shown below:

steps:
- task: Docker@2
displayName: Login to ACR
inputs:
command: login
containerRegistry: dockerRegistryServiceConnection1
- task: Docker@2
displayName: Build
inputs:
command: build
repository: contosoRepository
tags: tag1
arguments: --secret id=mysecret,src=mysecret.txt

NOTE
The arguments input is evaluated for all commands except buildAndPush . As buildAndPush is a convenience command
( build followed by push ), arguments input is ignored for this command.

Troubleshooting
Why does the Docker task ignore arguments passed to the buildAndPush command?
The Docker task configured with the buildAndPush command ignores the arguments passed, since they become
ambiguous to the build and push commands that are run internally. You can split your command into separate
build and push steps and pass the suitable arguments, as sketched below. See this Stack Overflow post for an example.
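A sketch of that split might look like the following (the contosoRepository repository is reused from the earlier examples; the --build-arg value is a hypothetical example, and a prior login step is assumed):

- task: Docker@2
  displayName: Build
  inputs:
    command: build
    repository: contosoRepository
    tags: tag1
    arguments: --build-arg HTTP_PROXY=http://proxy.example.com
- task: Docker@2
  displayName: Push
  inputs:
    command: push
    repository: contosoRepository
    tags: tag1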
DockerV2 only supports a Docker registry service connection and does not support an ARM service connection. How
can I use an existing Azure service principal (SPN) for authentication in the Docker task?
You can create a Docker registry service connection using your Azure SPN credentials. Choose Others from
Registry type and provide the details as follows:

Docker Registry: Your container registry URL (e.g., https://myacr.azurecr.io)
Docker ID: Service principal client ID
Password: Service principal key
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Docker Compose task
11/7/2020 • 11 minutes to read • Edit Online

Azure Pipelines
Use this task to build, push or run multi-container Docker applications. This task can be used with a Docker registry
or an Azure Container Registry.

Container registry types


Azure Container Registry
PARAMETERS | DESCRIPTION

containerregistrytype (Container registry type): (Optional) Azure Container Registry if using ACR, or Container Registry if using any other container registry. Default value: Azure Container Registry

azureSubscriptionEndpoint (Azure subscription): (Required) Name of the Azure Service Connection. See Azure Resource Manager service connection to manually set up the connection. Argument aliases: azureSubscription

azureContainerRegistry (Azure container registry): (Required) Name of the Azure Container Registry. Example: Contoso.azurecr.io

This YAML example specifies the inputs for Azure Container Registry:

variables:
azureContainerRegistry: Contoso.azurecr.io
azureSubscriptionEndpoint: Contoso
steps:
- task: DockerCompose@0
displayName: Container registry login
inputs:
containerregistrytype: Azure Container Registry
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)

Other container registries


The containerregistrytype value is required when using any container registry other than ACR. Use
containerregistrytype: Container Registry in this case.

PARAMETERS | DESCRIPTION

containerregistrytype (Container registry type): (Required) Azure Container Registry if using ACR, or Container Registry if using any other container registry. Default value: Azure Container Registry

dockerRegistryEndpoint (Docker registry service connection): (Required) Docker registry service connection.

This YAML example specifies a container registry other than ACR where Contoso is the name of the Docker
registry service connection for the container registry:

- task: DockerCompose@0
displayName: Container registry login
inputs:
containerregistrytype: Container Registry
dockerRegistryEndpoint: Contoso

Build service images


PARAMETERS | DESCRIPTION

containerregistrytype (Required) Azure Container Registry if using ACR or Container


(Container Registry Type) Registry if using any other container registry.
Default value: Azure Container Registry

azureSubscriptionEndpoint (Required) Name of the Azure Service Connection.


(Azure subscription)

azureContainerRegistry (Required) Name of the Azure Container Registry.


(Azure Container Registry)

dockerComposeFile (Required) Path to the primary Docker Compose file to use.


(Docker Compose File) Default value: **/docker-compose.yml

additionalDockerComposeFiles (Optional) Additional Docker Compose files to be combined


(Additional Docker Compose Files) with the primary Docker Compose file. Relative paths are
resolved relative to the directory containing the primary
Docker Compose file. If a specified file is not found, it is
ignored. Specify each file path on a new line.

dockerComposeFileArgs (Optional) Environment variables to be set up during the


(Environment Variables) command. Specify each name=value pair on a new line. You
need to use the | operator in YAML to indicate that newlines
should be preserved.
Example: dockerComposeFileArgs: -f --verbose

projectName (Optional) Project name used for default naming of images


(Project Name) and containers.
Default value: $(Build.Repository.Name)

qualifyImageNames (Optional) Qualify image names for built services with the
(Qualify Image Names) Docker registry service connection's hostname if not
otherwise specified.
Default value: true

action (Required) Select a Docker Compose action.


(Action) Default value: Run a Docker Compose command

additionalImageTags (Optional) Additional tags for the Docker images being built
(Additional Image Tags) or pushed.

includeSourceTags (Optional) Include Git tags when building or pushing Docker


(Include Source Tags) images.
Default value: false

includeLatestTag (Optional) Include the latest tag when building or pushing


(Include Latest Tag) Docker images.
Default value: false

This YAML example builds the image where the image name is qualified on the basis of the inputs related to Azure
Container Registry:

- task: DockerCompose@0
displayName: Build services
inputs:
action: Build services
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
additionalImageTags: $(Build.BuildId)
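The dockerComposeFileArgs input described above can be supplied as a multiline value using the | operator; a sketch (the firstTag and secondTag names are hypothetical variables consumed by the Compose file) might look like this:

- task: DockerCompose@0
  displayName: Build services with environment variables
  inputs:
    action: Build services
    azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
    azureContainerRegistry: $(azureContainerRegistry)
    dockerComposeFile: docker-compose.yml
    dockerComposeFileArgs: |
      firstTag=$(Build.BuildId)
      secondTag=latest
    projectName: $(Build.Repository.Name)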

Push service images


PARAMETERS | DESCRIPTION

containerregistrytype (Required) Azure Container Registry if using ACR or Container


(Container Registry Type) Registry if using any other container registry.
Default value: Azure Container Registry

azureSubscriptionEndpoint (Required) Name of the Azure Service Connection.


(Azure subscription)

azureContainerRegistry (Required) Name of the Azure Container Registry.


(Azure Container Registry)

dockerComposeFile (Required) Path to the primary Docker Compose file to use.


(Docker Compose File) Default value: **/docker-compose.yml

additionalDockerComposeFiles (Optional) Additional Docker Compose files to be combined


(Additional Docker Compose Files) with the primary Docker Compose file. Relative paths are
resolved relative to the directory containing the primary
Docker Compose file. If a specified file is not found, it is
ignored. Specify each file path on a new line.

dockerComposeFileArgs (Optional) Environment variables to be set up during the


(Environment Variables) command. Specify each name=value pair on a new line.

projectName (Optional) Project name used for default naming of images


(Project Name) and containers.
Default value: $(Build.Repository.Name)

qualifyImageNames (Optional) Qualify image names for built services with the
(Qualify Image Names) Docker registry service connection's hostname if not
otherwise specified.
Default value: true

action (Required) Select a Docker Compose action.


(Action) Default value: Run a Docker Compose command

additionalImageTags (Optional) Additional tags for the Docker images being built
(Additional Image Tags) or pushed.

includeSourceTags (Optional) Include Git tags when building or pushing Docker


(Include Source Tags) images.
Default value: false

includeLatestTag (Optional) Include the latest tag when building or pushing


(Include Latest Tag) Docker images.
Default value: false

This YAML example pushes an image to a container registry:

- task: DockerCompose@0
displayName: Push services
inputs:
action: Push services
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
additionalImageTags: $(Build.BuildId)

Run service images


PARAMETERS | DESCRIPTION

dockerComposeFile (Required) Path to the primary Docker Compose file to use.


(Docker Compose File) Default value: **/docker-compose.yml

additionalDockerComposeFiles (Optional) Additional Docker Compose files to be combined


(Additional Docker Compose Files) with the primary Docker Compose file. Relative paths are
resolved relative to the directory containing the primary
Docker Compose file. If a specified file is not found, it is
ignored. Specify each file path on a new line.

dockerComposeFileArgs (Optional) Environment variables to be set up during the


(Environment Variables) command. Specify each name=value pair on a new line.

projectName (Optional) Project name used for default naming of images


(Project Name) and containers.
Default value: $(Build.Repository.Name)

qualifyImageNames (Optional) Qualify image names for built services with the
(Qualify Image Names) Docker registry service connection's hostname if not
otherwise specified.
Default value: true

action (Required) Select a Docker Compose action.


(Action) Default value: Run a Docker Compose command

buildImages (Optional) Build images before starting service containers.


(Build Images) Default value: true

detached (Optional) Run the service containers in the background.


(Run in Background) Default value: true

This YAML example runs services:

- task: DockerCompose@0
displayName: Run services
inputs:
action: Run services
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.ci.build.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
buildImages: true
abortOnContainerExit: true
detached: false

Run a specific service image


PARAMETERS | DESCRIPTION

dockerComposeFile (Required) Path to the primary Docker Compose file to use.


(Docker Compose File) Default value: **/docker-compose.yml

additionalDockerComposeFiles (Optional) Additional Docker Compose files to be combined


(Additional Docker Compose Files) with the primary Docker Compose file. Relative paths are
resolved relative to the directory containing the primary
Docker Compose file. If a specified file is not found, it is
ignored. Specify each file path on a new line.

dockerComposeFileArgs (Optional) Environment variables to be set up during the


(Environment Variables) command. Specify each name=value pair on a new line.

projectName (Optional) Project name used for default naming of images


(Project Name) and containers.
Default value: $(Build.Repository.Name)

qualifyImageNames (Optional) Qualify image names for built services with the
(Qualify Image Names) Docker registry service connection's hostname if not
otherwise specified.
Default value: true

action (Required) Select a Docker Compose action.


(Action) Default value: Run a Docker Compose command

serviceName (Required) Name of the specific service to run.


(Service Name)

containerName (Optional) Name of the specific service container to run.


(Container Name)

ports (Optional) Ports in the specific service container to publish to


(Ports) the host. Specify each host-port:container-port binding on a
new line.

workDir (Optional) The working directory for the specific service


(Working Directory) container.
Argument aliases: workingDirectory

entrypoint (Optional) Override the default entry point for the specific
(Entry Point Override) service container.

containerCommand (Optional) Command to run in the specific service container.


(Command) For example, if the image contains a simple Python Flask web
application you can specify python app.py to launch the web
application.

detached (Optional) Run the service containers in the background.


(Run in Background) Default value: true

This YAML example runs a specific service:

- task: DockerCompose@0
displayName: Run a specific service
inputs:
action: Run a specific service
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
serviceName: myhealth.web
ports: 80
detached: true

Lock service images


PARAMETERS | DESCRIPTION

dockerComposeFile (Required) Path to the primary Docker Compose file to use.


(Docker Compose File) Default value: **/docker-compose.yml

additionalDockerComposeFiles (Optional) Additional Docker Compose files to be combined


(Additional Docker Compose Files) with the primary Docker Compose file. Relative paths are
resolved relative to the directory containing the primary
Docker Compose file. If a specified file is not found, it is
ignored. Specify each file path on a new line.

dockerComposeFileArgs (Optional) Environment variables to be set up during the


(Environment Variables) command. Specify each name=value pair on a new line.

projectName (Optional) Project name used for default naming of images


(Project Name) and containers.
Default value: $(Build.Repository.Name)

qualifyImageNames (Optional) Qualify image names for built services with the
(Qualify Image Names) Docker registry service connection's hostname if not
otherwise specified.
Default value: true

action (Required) Select a Docker Compose action.


(Action) Default value: Run a Docker Compose command

removeBuildOptions (Optional) Remove the build options from the output Docker
(Remove Build Options) Compose file.
Default value: false

baseResolveDirectory (Optional) The base directory from which relative paths in the
(Base Resolve Directory) output Docker Compose file should be resolved.

outputDockerComposeFile (Required) Path to an output Docker Compose file.


(Output Docker Compose File) Default value: $(Build.StagingDirectory)/docker-compose.yml

This YAML example locks services:

- task: DockerCompose@0
displayName: Lock services
inputs:
action: Lock services
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
outputDockerComposeFile: $(Build.StagingDirectory)/docker-compose.yml

Write service image digests


PARAMETERS | DESCRIPTION

dockerComposeFile (Required) Path to the primary Docker Compose file to use.


(Docker Compose File) Default value: **/docker-compose.yml

additionalDockerComposeFiles (Optional) Additional Docker Compose files to be combined


(Additional Docker Compose Files) with the primary Docker Compose file. Relative paths are
resolved relative to the directory containing the primary
Docker Compose file. If a specified file is not found, it is
ignored. Specify each file path on a new line.

dockerComposeFileArgs (Optional) Environment variables to be set up during the


(Environment Variables) command. Specify each name=value pair on a new line.

projectName (Optional) Project name used for default naming of images


(Project Name) and containers.
Default value: $(Build.Repository.Name)

qualifyImageNames (Optional) Qualify image names for built services with the
(Qualify Image Names) Docker registry service connection's hostname if not
otherwise specified.
Default value: true

action (Required) Select a Docker Compose action.


(Action) Default value: Run a Docker Compose command

imageDigestComposeFile (Required) Path to a Docker Compose file that is created and


(Image Digest Compose File) populated with the full image repository digests of each
service's Docker image.
Default value: $(Build.StagingDirectory)/docker-
compose.images.yml

This YAML example writes service image digests:

- task: DockerCompose@0
displayName: Write service image digests
inputs:
action: Write service image digests
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
imageDigestComposeFile: $(Build.StagingDirectory)/docker-compose.images.yml

Combine configuration
PARAMETERS | DESCRIPTION

dockerComposeFile (Required) Path to the primary Docker Compose file to use.


(Docker Compose File) Default value: **/docker-compose.yml

additionalDockerComposeFiles (Optional) Additional Docker Compose files to be combined


(Additional Docker Compose Files) with the primary Docker Compose file. Relative paths are
resolved relative to the directory containing the primary
Docker Compose file. If a specified file is not found, it is
ignored. Specify each file path on a new line.

dockerComposeFileArgs (Optional) Environment variables to be set up during the


(Environment Variables) command. Specify each name=value pair on a new line

projectName (Optional) Project name used for default naming of images


(Project Name) and containers.
Default value: $(Build.Repository.Name)

qualifyImageNames (Optional) Qualify image names for built services with the
(Qualify Image Names) Docker registry service connection's hostname if not
otherwise specified.
Default value: true

action (Required) Select a Docker Compose action.


(Action) Default value: Run a Docker Compose command

removeBuildOptions (Optional) Remove the build options from the output Docker
(Remove Build Options) Compose file.
Default value: false

baseResolveDirectory (Optional) The base directory from which relative paths in the
(Base Resolve Directory) output Docker Compose file should be resolved.

outputDockerComposeFile (Required) Path to an output Docker Compose file.


(Output Docker Compose File) Default value: $(Build.StagingDirectory)/docker-compose.yml

This YAML example combines configurations:

- task: DockerCompose@0
displayName: Combine configuration
inputs:
action: Combine configuration
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
additionalDockerComposeFiles: docker-compose.override.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
outputDockerComposeFile: $(Build.StagingDirectory)/docker-compose.yml

Run a Docker Compose command


PARAMETERS | DESCRIPTION

dockerComposeFile (Docker Compose File) (Required) Path to the primary Docker Compose file to use.
Default value: **/docker-compose.yml

additionalDockerComposeFiles (Optional) Additional Docker Compose files to be combined


(Additional Docker Compose Files) with the primary Docker Compose file. Relative paths are
resolved relative to the directory containing the primary
Docker Compose file. If a specified file is not found, it is
ignored. Specify each file path on a new line.

dockerComposeFileArgs (Optional) Environment variables to be set up during the


(Environment Variables) command. Specify each name=value pair on a new line.

projectName (Optional) Project name used for default naming of images


(Project Name) and containers.
Default value: $(Build.Repository.Name)

qualifyImageNames (Optional) Qualify image names for built services with the
(Qualify Image Names) Docker registry service connection's hostname if not
otherwise specified.
Default value: true

action (Required) Select a Docker Compose action.


(Action) Default value: Run a Docker Compose command

dockerComposeCommand (Required) Docker Compose command to execute with the


(Command) help of arguments. For example, rm to remove all stopped
service containers.

This YAML example runs a docker Compose command:

- task: DockerCompose@0
displayName: Run a Docker Compose command
inputs:
action: Run a Docker Compose command
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
dockerComposeCommand: rm

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Package and Deploy Helm Charts task
11/7/2020 • 7 minutes to read • Edit Online

Azure Pipelines
Use this task to deploy, configure, or update a Kubernetes cluster in Azure Container Service by running Helm
commands. Helm is a tool that streamlines deploying and managing Kubernetes apps using a packaging format
called charts.
You can define, version, share, install, and upgrade even the most complex Kubernetes app by using Helm.
Helm helps you combine multiple Kubernetes manifests (yaml) such as service, deployments, configmaps, and
more into a single unit called Helm Charts. You don't need to either invent or use a tokenization or a templating
tool.
Helm Charts help you manage application dependencies and deploy as well as rollback as a unit. They are also
easy to create, version, publish, and share with other partner teams.
Azure Pipelines has built-in support for Helm charts:
The Helm Tool installer task can be used to install the correct version of Helm onto the agents.
The Helm package and deploy task can be used to package the app and deploy it to a Kubernetes cluster. You
can use the task to install or update Tiller to a Kubernetes namespace, to securely connect to Tiller over TLS for
deploying charts, or to run any Helm command such as lint .
The Helm task supports connecting to an Azure Kubernetes Service by using an Azure service connection. You
can connect to any Kubernetes cluster by using kubeconfig or a service account.
Helm deployments can be supplemented by using the Kubectl task; for example, create/update,
imagepullsecret, and others.

Service Connection
The task works with two service connection types: Azure Resource Manager and Kubernetes Ser vice
Connection .

NOTE
A service connection isn't required if an environment resource that points to a Kubernetes cluster has already been specified
in the pipeline's stage.

Azure Resource Manager


PARAMETERS | DESCRIPTION

connectionType (Required unless an environment resource is already present)


(Service connection type) Azure Resource Manager to use Azure Kubernetes Service.
Kubernetes Ser vice Connection for any other cluster.
Default value: Azure Resource Manager

azureSubscriptionEndpoint (Required) Name of the Azure service connection.


(Azure subscription)

azureResourceGroup (Required) Name of the resource group within the


(Resource group) subscription.

kubernetesCluster (Required) Name of the AKS cluster.


(Kubernetes cluster)

namespace (Optional) The namespace on which the kubectl commands


(Namespace) are run. If not specified, the default namespace is used.

This YAML example shows how Azure Resource Manager is used to refer to the Kubernetes cluster. This is
used with one of the helm commands and the appropriate values required for the command:

variables:
azureSubscriptionEndpoint: Contoso
azureContainerRegistry: contoso.azurecr.io
azureResourceGroup: Contoso
kubernetesCluster: Contoso

- task: HelmDeploy@0
displayName: Helm deploy
inputs:
connectionType: Azure Resource Manager
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureResourceGroup: $(azureResourceGroup)
kubernetesCluster: $(kubernetesCluster)

Kubernetes Service Connection


PARAMETERS | DESCRIPTION

kubernetesServiceEndpoint (Required unless an environment resource is already present)


(Kubernetes service connection) Select a Kubernetes service connection.

namespace (Optional) The namespace on which the kubectl commands


(Namespace) are run. If not specified, the default namespace is used.

This YAML example shows how a Kubernetes service connection is used to refer to the Kubernetes cluster. It is
used with one of the helm commands and the appropriate values required for the command:

- task: HelmDeploy@0
displayName: Helm deploy
inputs:
connectionType: Kubernetes Service Connection
kubernetesServiceEndpoint: Contoso

Command values
The command input accepts one of the following helm commands:
create/delete/expose/get/init/install/login/logout/ls/package/rollback/upgrade.
PARAMETERS    DESCRIPTION

command (Command): (Required) Select a helm command. Default value: ls

arguments (Arguments): Helm command options.

This YAML example demonstrates the ls command:

- task: HelmDeploy@0
displayName: Helm list
inputs:
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureResourceGroup: $(azureResourceGroup)
kubernetesCluster: $(kubernetesCluster)
command: ls
arguments: --all

init command
PARAMETERS    DESCRIPTION

command (Command): (Required) Select a helm command. Default value: ls

canaryimage (Use canary image version): Use the canary Tiller image, the latest pre-release version of Tiller. Default value: false

upgradetiller (Upgrade Tiller): Upgrade Tiller if it is already installed. Default value: true

waitForExecution (Wait): Block until the command execution completes. Default value: true

arguments (Arguments): Helm command options.

This YAML example demonstrates the init command:

- task: HelmDeploy@0
displayName: Helm init
inputs:
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureResourceGroup: $(azureResourceGroup)
kubernetesCluster: $(kubernetesCluster)
command: init
upgradetiller: true
waitForExecution: true
arguments: --client-only

install command
PARAMETERS    DESCRIPTION

command (Command): (Required) Select a helm command. Default value: ls

chartType (Chart Type): (Required) Select how you want to enter chart information. You can provide either the name of the chart or the folder/file path to the chart. Available options: Name, FilePath. Default value: Name

chartName (Chart Name): (Required) Chart reference to install; this can be a URL or a chart name. For example, if the chart name is stable/mysql, the task will run helm install stable/mysql.

releaseName (Release Name): (Optional) Release name. If not specified, it will be autogenerated. The releaseName input is only valid for the 'install' and 'upgrade' commands.

overrideValues (Set Values): (Optional) Set values on the command line. You can specify multiple values by separating them with commas, for example key1=val1,key2=val2, or by delimiting them with newlines:
key1=val1
key2=val2
Note that if a value itself contains newlines, use the valueFile option instead; otherwise the task treats the newline as a delimiter. The task constructs the helm command by using these set values. For example: helm install --set key1=val1 ./redis

valueFile (Value File): (Optional) Specify values in a YAML file or a URL. For example, specifying myvalues.yaml will result in helm install --values=myvalues.yaml

updatedependency (Update Dependency): (Optional) Run helm dependency update before installing the chart. Updates dependencies from requirements.yaml to the charts/ directory before packaging. Default value: false

waitForExecution (Wait): (Optional) Block until command execution completes. Default value: true

arguments (Arguments): Helm command options.

This YAML example demonstrates the install command:

- task: HelmDeploy@0
displayName: Helm install
inputs:
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureResourceGroup: $(azureResourceGroup)
kubernetesCluster: $(kubernetesCluster)
command: install
chartType: FilePath
chartPath: Application/charts/sampleapp
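
To override chart values at install time, the releaseName and overrideValues inputs can be added to the same step. The following is a minimal sketch; the release name and keys shown are illustrative assumptions, not values from this article:

- task: HelmDeploy@0
  displayName: Helm install with overrides
  inputs:
    azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
    azureResourceGroup: $(azureResourceGroup)
    kubernetesCluster: $(kubernetesCluster)
    command: install
    chartType: FilePath
    chartPath: Application/charts/sampleapp
    releaseName: sampleapp-dev            # hypothetical release name
    overrideValues: |
      image.tag=$(Build.BuildId)
      replicaCount=2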

package command
PARAMETERS    DESCRIPTION

command (Command): (Required) Select a helm command. Default value: ls

chartPath (Chart Path): (Required) Path to the chart to install. This can be a path to a packaged chart or a path to an unpacked chart directory. For example, if ./redis is specified, the task will run helm install ./redis. If you are consuming a chart that is published as an artifact, the path will be $(System.DefaultWorkingDirectory)/ARTIFACT-NAME/Charts/CHART-NAME

version (Version): (Optional) Specify the exact chart version to install. If this is not specified, the latest version is installed. Sets the version on the chart to this semver version.

destination (Destination): (Optional) The destination directory to which the packaged chart is written. Default value: $(Build.ArtifactStagingDirectory)

updatedependency (Update Dependency): (Optional) Run helm dependency update before installing the chart. Updates dependencies from requirements.yaml to the charts/ directory before packaging. Default value: false

save (Save): (Optional) Save the packaged chart to the local chart repository. Default value: true

arguments (Arguments): Helm command options.

This YAML example demonstrates the package command:

- task: HelmDeploy@0
displayName: Helm package
inputs:
command: package
chartPath: Application/charts/sampleapp
destination: $(Build.ArtifactStagingDirectory)

upgrade command
PARAMETERS    DESCRIPTION

command (Command): (Required) Select a helm command. Default value: ls

chartType (Chart Type): (Required) Select how you want to enter chart information. You can provide either the name of the chart or the folder/file path to the chart. Available options: Name, FilePath. Default value: Name

chartName (Chart Name): (Required) Chart reference to install; this can be a URL or a chart name. For example, if the chart name is stable/mysql, the task will run helm install stable/mysql.

releaseName (Release Name): (Optional) Release name. If not specified, it will be autogenerated.

overrideValues (Set Values): (Optional) Set values on the command line. You can specify multiple values by separating them with commas, for example key1=val1,key2=val2, or by delimiting them with newlines:
key1=val1
key2=val2
Note that if a value itself contains newlines, use the valueFile option instead; otherwise the task treats the newline as a delimiter. The task constructs the helm command by using these set values. For example: helm install --set key1=val1 ./redis

valueFile (Value File): (Optional) Specify values in a YAML file or a URL. For example, specifying myvalues.yaml will result in helm install --values=myvalues.yaml

install (Install if release not present): (Optional) If a release by this name does not already exist, start an installation. Default value: true

recreate (Recreate Pods): (Optional) Performs a pod restart for the resource, if applicable. Default value: false

resetValues (Reset Values): (Optional) Reset the values to the ones built into the chart. Default value: false

force (Force): (Optional) Force a resource update through delete/recreate if required. Default value: false

waitForExecution (Wait): (Optional) Block until command execution completes. Default value: true

arguments (Arguments): Helm command options.

This YAML example demonstrates the upgrade command:


- task: HelmDeploy@0
displayName: Helm upgrade
inputs:
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureResourceGroup: $(azureResourceGroup)
kubernetesCluster: $(kubernetesCluster)
command: upgrade
chartType: filepath
chartPath: $(Build.ArtifactStagingDirectory)/sampleapp-v0.2.0.tgz
releaseName: azuredevopsdemo
install: true
waitForExecution: false

Troubleshooting
HelmDeploy task throws error 'unknown flag: --wait' while running 'helm init --wait --client-only' on Helm 3.0.2
version.
There are some breaking changes between Helm 2 and Helm 3. One of them is the removal of Tiller, which means
the helm init command is no longer supported. Remove the command: init input when you use Helm 3.0 or later.

When using Helm 3, if System.debug is set to true and Helm upgrade is the command being used, the pipeline
fails even though the upgrade was successful.
This is a known issue with Helm 3, which writes some logs to stderr. The Helm Deploy task is marked as failed if there
are logs on stderr or the exit code is non-zero. Set the task input failOnStderr: false to ignore the logs printed to stderr.
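
For example, the earlier upgrade step could be adjusted as follows when running against Helm 3; this sketch reuses the variables defined above and only adds the failOnStderr input:

- task: HelmDeploy@0
  displayName: Helm upgrade (Helm 3)
  inputs:
    azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
    azureResourceGroup: $(azureResourceGroup)
    kubernetesCluster: $(kubernetesCluster)
    command: upgrade
    chartType: FilePath
    chartPath: $(Build.ArtifactStagingDirectory)/sampleapp-v0.2.0.tgz
    releaseName: azuredevopsdemo
    install: true
    failOnStderr: false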

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
IIS Web App Deploy task

Azure Pipelines
Use this task to deploy a website or web app using WebDeploy.

YAML snippet
# IIS web app deploy
# Deploy a website or web application using Web Deploy
- task: IISWebAppDeploymentOnMachineGroup@0
inputs:
webSiteName:
#virtualApplication: # Optional
#package: '$(System.DefaultWorkingDirectory)\**\*.zip'
#setParametersFile: # Optional
#removeAdditionalFilesFlag: false # Optional
#excludeFilesFromAppDataFlag: false # Optional
#takeAppOfflineFlag: false # Optional
#additionalArguments: # Optional
#xmlTransformation: # Optional
#xmlVariableSubstitution: # Optional
#jSONFiles: # Optional

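The commented snippet above lists all available inputs. As a concrete illustration, a minimal filled-in step might look like the following; the website name is a placeholder assumption, not a value from this article:

- task: IISWebAppDeploymentOnMachineGroup@0
  displayName: Deploy web app to IIS
  inputs:
    webSiteName: 'FabrikamFiber'                               # hypothetical existing IIS website
    package: '$(System.DefaultWorkingDirectory)\**\*.zip'
    removeAdditionalFilesFlag: true
    takeAppOfflineFlag: true
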
Arguments
ARGUMENT    DESCRIPTION

Website Name: (Required) Provide the name of an existing website on the deployment group machines.

Virtual Application: (Optional) Specify the name of an already existing virtual application on the target machines.

Package or Folder: (Required) File path to the package, a folder generated by MSBuild, or a compressed archive file.
Variables (Build | Release) and wildcards are supported. For example, $(System.DefaultWorkingDirectory)\**\*.zip.

SetParameters File: (Optional) Location of the SetParameters.xml file to use.

Remove Additional Files at Destination: (Optional) Select the option to delete files on the Web App that have no
matching files in the Web App zip package.

Exclude Files from the App_Data Folder: (Optional) Select the option to prevent files in the App_Data folder from
being deployed to the Web App.

Take App Offline: (Optional) Select the option to take the Web App offline by placing an app_offline.htm file in the
root directory of the Web App before the sync operation begins. The file will be removed after the sync operation
completes successfully.

Additional Arguments: (Optional) Additional Web Deploy arguments that will be applied when deploying the web app,
for example -disableLink:AppPoolExtension -disableLink:ContentExtension.

XML transformation: (Optional) The config transforms will be run for *.Release.config and *.<stageName>.config on
the *.config file. Config transforms will be run prior to the Variable Substitution. XML transformations are supported
only on the Windows platform.

XML variable substitution: (Optional) Variables defined in the Build or Release pipeline will be matched against the
'key' or 'name' entries in the appSettings, applicationSettings, and connectionStrings sections of any config file and
parameters.xml. Variable Substitution is run after config transforms.
Note: If the same variables are defined in the release pipeline and in the stage, the stage variables will supersede the
release pipeline variables.

JSON variable substitution: (Optional) Provide a newline-separated list of JSON files in which to substitute the
variable values. File names are to be provided relative to the root folder. To substitute JSON variables that are nested
or hierarchical, specify them using JSONPath expressions.
For example, to replace the value of 'ConnectionString' in the sample below, you need to define a variable as
'Data.DefaultConnection.ConnectionString' in the build/release pipeline (or release pipeline's stage).
{
  "Data": {
    "DefaultConnection": {
      "ConnectionString": "Server=(localdb)\SQLEXPRESS;Database=MyDB;Trusted_Connection=True"
    }
  }
}
Variable Substitution is run after configuration transforms.
Note: Build/release pipeline variables are excluded from substitution.

CONTROL OPTIONS

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
IIS Web App Manage task

Azure Pipelines
Use this task to create or update a Website, Web App, Virtual Directory, or Application Pool.

YAML snippet
# IIS web app manage
# Create or update websites, web apps, virtual directories, or application pools
- task: IISWebAppManagementOnMachineGroup@0
inputs:
#enableIIS: false # Optional
#iISDeploymentType: 'IISWebsite' # Options: iISWebsite, iISWebApplication, iISVirtualDirectory,
iISApplicationPool
#actionIISWebsite: 'CreateOrUpdateWebsite' # Required when iISDeploymentType == IISWebsite# Options:
createOrUpdateWebsite, startWebsite, stopWebsite
#actionIISApplicationPool: 'CreateOrUpdateAppPool' # Required when iISDeploymentType ==
IISApplicationPool# Options: createOrUpdateAppPool, startAppPool, stopAppPool, recycleAppPool
#startStopWebsiteName: # Required when actionIISWebsite == StartWebsite || ActionIISWebsite == StopWebsite
websiteName:
#websitePhysicalPath: '%SystemDrive%\inetpub\wwwroot'
#websitePhysicalPathAuth: 'WebsiteUserPassThrough' # Options: websiteUserPassThrough, websiteWindowsAuth
#websiteAuthUserName: # Required when websitePhysicalPathAuth == WebsiteWindowsAuth
#websiteAuthUserPassword: # Optional
#addBinding: false # Optional
#protocol: 'http' # Required when iISDeploymentType == RandomDeployment# Options: https, http
#iPAddress: 'All Unassigned' # Required when iISDeploymentType == RandomDeployment
#port: '80' # Required when iISDeploymentType == RandomDeployment
#serverNameIndication: false # Optional
#hostNameWithOutSNI: # Optional
#hostNameWithHttp: # Optional
#hostNameWithSNI: # Required when iISDeploymentType == RandomDeployment
#sSLCertThumbPrint: # Required when iISDeploymentType == RandomDeployment
bindings:
#createOrUpdateAppPoolForWebsite: false # Optional
#configureAuthenticationForWebsite: false # Optional
appPoolNameForWebsite:
#dotNetVersionForWebsite: 'v4.0' # Options: v4.0, v2.0, no Managed Code
#pipeLineModeForWebsite: 'Integrated' # Options: integrated, classic
#appPoolIdentityForWebsite: 'ApplicationPoolIdentity' # Options: applicationPoolIdentity, localService,
localSystem, networkService, specificUser
#appPoolUsernameForWebsite: # Required when appPoolIdentityForWebsite == SpecificUser
#appPoolPasswordForWebsite: # Optional
#anonymousAuthenticationForWebsite: false # Optional
#basicAuthenticationForWebsite: false # Optional
#windowsAuthenticationForWebsite: true # Optional
parentWebsiteNameForVD:
virtualPathForVD:
#physicalPathForVD: '%SystemDrive%\inetpub\wwwroot'
#vDPhysicalPathAuth: 'VDUserPassThrough' # Optional. Options: vDUserPassThrough, vDWindowsAuth
#vDAuthUserName: # Required when vDPhysicalPathAuth == VDWindowsAuth
#vDAuthUserPassword: # Optional
parentWebsiteNameForApplication:
virtualPathForApplication:
#physicalPathForApplication: '%SystemDrive%\inetpub\wwwroot'
#applicationPhysicalPathAuth: 'ApplicationUserPassThrough' # Optional. Options:
applicationUserPassThrough, applicationWindowsAuth
#applicationAuthUserName: # Required when applicationPhysicalPathAuth == ApplicationWindowsAuth
#applicationAuthUserPassword: # Optional
#createOrUpdateAppPoolForApplication: false # Optional
appPoolNameForApplication:
#dotNetVersionForApplication: 'v4.0' # Options: v4.0, v2.0, no Managed Code
#pipeLineModeForApplication: 'Integrated' # Options: integrated, classic
#appPoolIdentityForApplication: 'ApplicationPoolIdentity' # Options: applicationPoolIdentity,
localService, localSystem, networkService, specificUser
#appPoolUsernameForApplication: # Required when appPoolIdentityForApplication == SpecificUser
#appPoolPasswordForApplication: # Optional
appPoolName:
#dotNetVersion: 'v4.0' # Options: v4.0, v2.0, no Managed Code
#pipeLineMode: 'Integrated' # Options: integrated, classic
#appPoolIdentity: 'ApplicationPoolIdentity' # Options: applicationPoolIdentity, localService, localSystem,
networkService, specificUser
#appPoolUsername: # Required when appPoolIdentity == SpecificUser
#appPoolPassword: # Optional
#startStopRecycleAppPoolName: # Required when actionIISApplicationPool == StartAppPool ||
ActionIISApplicationPool == StopAppPool || ActionIISApplicationPool == RecycleAppPool
#appCmdCommands: # Optional

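As a concrete illustration, the following sketch creates (or updates) a website and an application pool on the target machines; the website name, physical path, and app pool name are assumptions for illustration only:

- task: IISWebAppManagementOnMachineGroup@0
  displayName: Create or update IIS website
  inputs:
    iISDeploymentType: 'IISWebsite'
    actionIISWebsite: 'CreateOrUpdateWebsite'
    websiteName: 'FabrikamFiber'                                 # hypothetical website name
    websitePhysicalPath: '%SystemDrive%\inetpub\wwwroot\FabrikamFiber'
    createOrUpdateAppPoolForWebsite: true
    appPoolNameForWebsite: 'FabrikamFiberPool'                   # hypothetical app pool name
    dotNetVersionForWebsite: 'v4.0'
    pipeLineModeForWebsite: 'Integrated'
    appPoolIdentityForWebsite: 'ApplicationPoolIdentity'
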
Arguments
ARGUMENT    DESCRIPTION

Enable IIS (Optional) Check this if you want to install IIS on the machine.

Configuration type (Required) You can create or update sites, applications, virtual
directories, and application pools.

Action (Required) Select the appropriate action that you want to


perform on an IIS website.
"Create Or Update" will create a website or update an
existing website.
Start, Stop will start or stop the website respectively.

Action (Required) Select the appropriate action that you want to


perform on an IIS Application Pool.
"Create Or Update" will create app-pool or update an
existing one.
Start, Stop, Recycle will start, stop or recycle the
application pool respectively.

Website name (Required) Provide the name of the IIS website.

Website name (Required) Provide the name of the IIS website to create or
update.

Physical path (Required) Provide the physical path where the website
content will be stored. The content can reside on the local
Computer, or in a remote directory, or on a network share, like
C:\Fabrikam or \ContentShare\Fabrikam.

Physical path authentication (Required) Select the authentication mechanism that will be
used to access the physical path of the website.

Username (Required) Provide the user name that will be used to access
the website's physical path.

Password (Optional) Provide the user's password that will be used to


access the website's physical path.
The best practice is to create a variable in the Build or Release
pipeline, and mark it as 'Secret' to secure it, and then use it
here, like '$(userCredentials)'.
Note: Special characters in password are interpreted as per
command-line arguments

Add binding (Optional) Select the option to add port binding for the
website.

Protocol (Required) Select HTTP for the website to have an HTTP


binding, or select HTTPS for the website to have a Secure
Sockets Layer (SSL) binding.

IP address (Required) Provide an IP address that end-users can use to


access this website.
If 'All Unassigned' is selected, then the website will respond to
requests for all IP addresses on the port and for the host
name, unless another website on the server has a binding on
the same port but with a specific IP address.

Port (Required) Provide the port, where the Hypertext Transfer


Protocol Stack (HTTP.sys) will listen to the website requests.

Server Name Indication required (Optional) Select the option to set the Server Name Indication
(SNI) for the website.
SNI extends the SSL and TLS protocols to indicate the host
name that the clients are attempting to connect to. It allows,
multiple secure websites with different certificates, to use the
same IP address.

Host name (Optional) Enter a host name (or domain name) for the
website.
If a host name is specified, then the clients must use the host
name instead of the IP address to access the website.

Host name (Optional) Enter a host name (or domain name) for the
website.
If a host name is specified, then the clients must use the host
name instead of the IP address to access the website.

Host name (Required) Enter a host name (or domain name) for the
website.
If a host name is specified, then the clients must use the host
name instead of the IP address to access the website.

SSL certificate thumbprint (Required) Provide the thumb-print of the Secure Socket Layer
certificate that the website is going to use for the HTTPS
communication as a 40 character long hexadecimal string. The
SSL certificate should be already installed on the Computer, at
Local Computer, Personal store.

Add bindings (Required) Click on the extension [...] button to add bindings
for the website.

Create or update app pool (Optional) Select the option to create or update an application
pool. If checked, the website will be created in the specified
app pool.

Configure authentication (Optional) Select the option to configure authentication for


website.

Name (Required) Provide the name of the IIS application pool to


create or update.

.NET version (Required) Select the version of the .NET Framework that is
loaded by the application pool.
If the applications assigned to this application pool do not
contain managed code, then select the 'No Managed Code'
option from the list.

Managed pipeline mode (Required) Select the managed pipeline mode that specifies
how IIS processes requests for managed content. Use classic
mode only when the applications in the application pool
cannot run in the Integrated mode.

Identity (Required) Configure the account under which an application


pool's worker process runs. Select one of the predefined
security accounts or configure a custom account.

Username (Required) Provide the username of the custom account that


you want to use.

Password (Optional) Provide the password for custom account.


The best practice is to create a variable in the Build or Release
pipeline, and mark it as 'Secret' to secure it, and then use it
here, like '$(userCredentials)'.
Note: Special characters in password are interpreted as per
command-line arguments

Anonymous authentication (Optional) Select the option to enable anonymous


authentication for website.

Basic authentication (Optional) Select the option to enable basic authentication for
website.

Windows authentication (Optional) Select the option to enable windows authentication


for website.

Parent website name (Required) Provide the name of the parent Website of the
virtual directory.

Virtual path (Required) Provide the virtual path of the virtual directory.
Example: To create a virtual directory Site/Application/VDir
enter /Application/Vdir. The parent website and application
should be already existing.

Physical path (Required) Provide the physical path where the virtual
directory's content will be stored. The content can reside on
the local Computer, or in a remote directory, or on a network
share, like C:\Fabrikam or \ContentShare\Fabrikam.

Physical path authentication (Optional) Select the authentication mechanism that will be
used to access the physical path of the virtual directory.

Username (Required) Provide the user name that will be used to access
the virtual directory's physical path.

Password (Optional) Provide the user's password that will be used to


access the virtual directory's physical path.
The best practice is to create a variable in the Build or Release
pipeline, and mark it as 'Secret' to secure it, and then use it
here, like '$(userCredentials)'.
Note: Special characters in password are interpreted as per
command-line arguments

Parent website name (Required) Provide the name of the parent Website under
which the application will be created or updated.

Virtual path (Required) Provide the virtual path of the application.


Example: To create an application Site/Application enter
/Application. The parent website should be already
existing.

Physical path (Required) Provide the physical path where the application's
content will be stored. The content can reside on the local
Computer, or in a remote directory, or on a network share, like
C:\Fabrikam or \ContentShare\Fabrikam.

Physical path authentication (Optional) Select the authentication mechanism that will be
used to access the physical path of the application.

Username (Required) Provide the user name that will be used to access
the application's physical path.

Password (Optional) Provide the user's password that will be used to


access the application's physical path.
The best practice is to create a variable in the Build or Release
pipeline, and mark it as 'Secret' to secure it, and then use it
here, like '$(userCredentials)'.
Note: Special characters in password are interpreted as per
command-line arguments

Create or update app pool (Optional) Select the option to create or update an application
pool. If checked, the application will be created in the specified
app pool.

Name (Required) Provide the name of the IIS application pool to


create or update.

.NET version (Required) Select the version of the .NET Framework that is
loaded by the application pool.
If the applications assigned to this application pool do not
contain managed code, then select the 'No Managed Code'
option from the list.

Managed pipeline mode (Required) Select the managed pipeline mode that specifies
how IIS processes requests for managed content. Use classic
mode only when the applications in the application pool
cannot run in the Integrated mode.

Identity (Required) Configure the account under which an application


pool's worker process runs. Select one of the predefined
security accounts or configure a custom account.

Username (Required) Provide the username of the custom account that


you want to use.

Password (Optional) Provide the password for custom account.


The best practice is to create a variable in the Build or Release
pipeline, and mark it as 'Secret' to secure it, and then use it
here, like '$(userCredentials)'.
Note: Special characters in password are interpreted as per
command-line arguments

Name (Required) Provide the name of the IIS application pool to


create or update.

.NET version (Required) Select the version of the .NET Framework that is
loaded by the application pool.
If the applications assigned to this application pool do not
contain managed code, then select the 'No Managed Code'
option from the list.

Managed pipeline mode (Required) Select the managed pipeline mode that specifies
how IIS processes requests for managed content. Use classic
mode only when the applications in the application pool
cannot run in the Integrated mode.

Identity (Required) Configure the account under which an application


pool's worker process runs. Select one of the predefined
security accounts or configure a custom account.

Username (Required) Provide the username of the custom account that


you want to use.

Password (Optional) Provide the password for custom account.


The best practice is to create a variable in the Build or Release
pipeline, and mark it as 'Secret' to secure it, and then use it
here, like '$(userCredentials)'.
Note: Special characters in password are interpreted as per
command-line arguments

Application pool name (Required) Provide the name of the IIS application pool.

Additional appcmd.exe commands (Optional) Enter additional AppCmd.exe commands. For more
than one command use a line separator, like
list apppools
list sites
recycle apppool /apppool.name:ExampleAppPoolName

CONTROL OPTIONS

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Kubectl task

Azure Pipelines
Use this task to deploy, configure, or update a Kubernetes cluster by running kubectl commands.

Service Connection
The task works with two service connection types: Azure Resource Manager and Kubernetes Service Connection,
described below.
Azure Resource Manager
PARAMETERS    DESCRIPTION

connectionType (Service connection type): (Required) Azure Resource Manager when using Azure Kubernetes Service, or Kubernetes Service Connection for any other cluster. Default value: Azure Resource Manager

azureSubscriptionEndpoint (Azure subscription): (Required) Name of the Azure service connection.

azureResourceGroup (Resource group): (Required) Name of the resource group within the subscription.

kubernetesCluster (Kubernetes cluster): (Required) Name of the AKS cluster.

useClusterAdmin (Use cluster admin credentials): (Optional) Use cluster administrator credentials instead of default cluster user credentials. This will ignore role-based access control.

namespace (Namespace): (Optional) The namespace on which the kubectl commands are to be run. If unspecified, the default namespace is used.

This YAML example shows how Azure Resource Manager is used to refer to the Kubernetes cluster. This is to be
used with one of the kubectl commands and the appropriate values required by the command.
variables:
azureSubscriptionEndpoint: Contoso
azureContainerRegistry: contoso.azurecr.io
azureResourceGroup: Contoso
kubernetesCluster: Contoso
useClusterAdmin: false

steps:
- task: Kubernetes@1
displayName: kubectl apply
inputs:
connectionType: Azure Resource Manager
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureResourceGroup: $(azureResourceGroup)
kubernetesCluster: $(kubernetesCluster)
useClusterAdmin: $(useClusterAdmin)

Kubernetes Service Connection


PARAMETERS    DESCRIPTION

kubernetesServiceEndpoint (Kubernetes service connection): (Required) Select a Kubernetes service connection.

namespace (Namespace): (Optional) The namespace on which the kubectl commands are to be run. If not specified, the default namespace is used.

This YAML example shows how a Kubernetes Service Connection is used to refer to the Kubernetes cluster. This is
to be used with one of the kubectl commands and the appropriate values required by the command.

- task: Kubernetes@1
displayName: kubectl apply
inputs:
connectionType: Kubernetes Service Connection
kubernetesServiceEndpoint: Contoso

Commands
The command input accepts one of the following kubectl commands:
apply , create , delete , exec , expose , get , login , logout , logs , run , set , or top .

PARAMETERS    DESCRIPTION

command (Command): (Required) The kubectl command to run. Default value: apply

useConfigurationFile (Use configuration files): (Optional) Use Kubernetes configuration files with the kubectl command. Enter the filename, directory, or URL of the Kubernetes configuration files. Default value: false

arguments (Arguments): (Optional) Arguments for the specified kubectl command.

This YAML example demonstrates the apply command:


- task: Kubernetes@1
displayName: kubectl apply using arguments
inputs:
connectionType: Azure Resource Manager
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureResourceGroup: $(azureResourceGroup)
kubernetesCluster: $(kubernetesCluster)
command: apply
arguments: -f mhc-aks.yaml

This YAML example demonstrates the use of a configuration file with the apply command:

- task: Kubernetes@1
displayName: kubectl apply using configFile
inputs:
connectionType: Azure Resource Manager
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureResourceGroup: $(azureResourceGroup)
kubernetesCluster: $(kubernetesCluster)
command: apply
useConfigurationFile: true
configuration: mhc-aks.yaml

Secrets
Kubernetes objects of type secret are intended to hold sensitive information such as passwords, OAuth tokens,
and ssh keys. Putting this information in a secret is safer and more flexible than putting it verbatim in a pod
definition or in a Docker image. Azure Pipelines simplifies the addition of ImagePullSecrets to a service account,
or setting up of any generic secret, as described below.
ImagePullSecret
PARAMETERS    DESCRIPTION

secretType (Type of secret): (Required) Create or update an ImagePullSecret or any other generic secret. Acceptable values: dockerRegistry for an ImagePullSecret, or generic for any other type of secret. Default value: dockerRegistry

containerRegistryType (Container registry type): (Required) Acceptable values: Azure Container Registry, or Container Registry for any other registry. Default value: Azure Container Registry

azureSubscriptionEndpointForSecrets (Azure subscription): (Required if secretType == dockerRegistry and containerRegistryType == Azure Container Registry) Azure Resource Manager service connection scoped to the subscription containing the Azure Container Registry for which the ImagePullSecret is to be set up.

azureContainerRegistry (Azure container registry): (Required if secretType == dockerRegistry and containerRegistryType == Azure Container Registry) The Azure Container Registry for which the ImagePullSecret is to be set up.

secretName (Secret name): (Optional) Name of the secret.

forceUpdate (Force update secret): (Optional) Delete the secret if it exists and create a new one with updated values. Default value: true

This YAML example demonstrates the setting up of ImagePullSecrets:

- task: Kubernetes@1
displayName: kubectl apply for secretType dockerRegistry
inputs:
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureResourceGroup: $(azureResourceGroup)
kubernetesCluster: $(kubernetesCluster)
command: apply
arguments: -f mhc-aks.yaml
secretType: dockerRegistry
containerRegistryType: Azure Container Registry
azureSubscriptionEndpointForSecrets: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
secretName: mysecretkey2
forceUpdate: true

Generic Secrets

PARAMETERS    DESCRIPTION

secretType (Type of secret): (Required) Create or update an ImagePullSecret or any other generic secret. Acceptable values: dockerRegistry for an ImagePullSecret, or generic for any other type of secret. Default value: dockerRegistry

secretArguments (Arguments): (Optional) Specify keys and literal values to insert in the secret. For example, --from-literal=key1=value1 --from-literal=key2="top secret"

secretName (Secret name): (Optional) Name of the secret.

This YAML example creates generic secrets from literal values specified for the secretArguments input:

- task: Kubernetes@1
displayName: secretType generic with literal values
inputs:
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureResourceGroup: $(azureResourceGroup)
kubernetesCluster: $(kubernetesCluster)
command: apply
arguments: -f mhc-aks.yaml
secretType: generic
secretArguments: --from-literal=contoso=5678
secretName: mysecretkey

Pipeline variables can be used to pass arguments for specifying literal values, as shown here:
- task: Kubernetes@1
displayName: secretType generic with pipeline variables
inputs:
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureResourceGroup: $(azureResourceGroup)
kubernetesCluster: $(kubernetesCluster)
command: apply
arguments: -f mhc-aks.yaml
secretType: generic
secretArguments: --from-literal=contoso=$(contosovalue)
secretName: mysecretkey

ConfigMap
ConfigMaps allow you to decouple configuration artifacts from image content to maintain portability for
containerized applications.

PARAMETERS    DESCRIPTION

configMapName (ConfigMapName): (Optional) Name of the ConfigMap.

forceUpdateConfigMap (Force update configmap): (Optional) Delete the ConfigMap if it exists and create a new one with updated values. Default value: false

useConfigMapFile (Use file): (Optional) Create a ConfigMap from an individual file, or from multiple files by specifying a directory. Default value: false

configMapFile (ConfigMap File): (Required if useConfigMapFile == true) Specify a file or directory that contains the configMaps. Note that this will use the --from-file argument.

configMapArguments (Arguments): (Optional) Specify keys and literal values to insert in the configMap. For example, --from-literal=key1=value1 --from-literal=key2="top secret"

This YAML example creates a ConfigMap by pointing to a ConfigMap file:

- task: Kubernetes@1
displayName: kubectl apply
inputs:
configMapName: myconfig
useConfigMapFile: true
configMapFile: src/configmap

This YAML example creates a ConfigMap by specifying the literal values directly as the configMapArguments
input, and setting forceUpdate to true:
- task: Kubernetes@1
displayName: configMap with literal values
inputs:
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureResourceGroup: $(azureResourceGroup)
kubernetesCluster: $(kubernetesCluster)
command: apply
arguments: -f mhc-aks.yaml
secretType: generic
secretArguments: --from-literal=contoso=$(contosovalue)
secretName: mysecretkey4
configMapName: myconfig
forceUpdateConfigMap: true
configMapArguments: --from-literal=myname=contoso

You can use pipeline variables to pass literal values when creating ConfigMap, as shown here:

- task: Kubernetes@1
displayName: configMap with pipeline variables
inputs:
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureResourceGroup: $(azureResourceGroup)
kubernetesCluster: $(kubernetesCluster)
command: apply
arguments: -f mhc-aks.yaml
secretType: generic
secretArguments: --from-literal=contoso=$(contosovalue)
secretName: mysecretkey4
configMapName: myconfig
forceUpdateConfigMap: true
configMapArguments: --from-literal=myname=$(contosovalue)

Advanced
PARAMETERS    DESCRIPTION

versionOrLocation (Version): (Optional) Explicitly choose a version of kubectl to be used, or specify the path (location) of the kubectl binary. Default value: version

versionSpec (Version spec): (Required if versionOrLocation == version) The version of kubectl to be used. Examples: 1.7.0, 1.x.0, 4.x.0, 6.10.0, >=6.10.0. Default value: 1.7.0

checkLatest (Check for latest version): (Optional) If true, a check for the latest version of kubectl is performed. Default value: false

specifyLocation (Specify location): (Required) Full path to the kubectl.exe file.

cwd (Working directory): (Optional) Working directory for the kubectl command. Default value: $(System.DefaultWorkingDirectory)

outputFormat (Output format): (Optional) Acceptable values: json or yaml. Default value: json
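
For example, a step that pins the kubectl version and returns YAML output might look like the following sketch; the version shown is an illustrative assumption:

- task: Kubernetes@1
  displayName: kubectl get pods with a pinned kubectl version
  inputs:
    connectionType: Azure Resource Manager
    azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
    azureResourceGroup: $(azureResourceGroup)
    kubernetesCluster: $(kubernetesCluster)
    command: get
    arguments: pods
    versionOrLocation: version
    versionSpec: 1.18.0                 # illustrative version
    outputFormat: yaml
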
Troubleshooting
My Kubernetes cluster is behind a firewall and I am using hosted agents. How can I deploy to this cluster?
You can grant hosted agents access through your firewall by allowing the IP addresses for the hosted agents. For
more details, see Agent IP ranges

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Kubernetes manifest task

Use a Kubernetes manifest task in a build or release pipeline to bake and deploy manifests to Kubernetes clusters.

Overview
The following list shows the key benefits of this task:
Artifact substitution: The deployment action takes as input a list of container images that you can
specify along with their tags and digests. The same input is substituted into the nontemplatized manifest
files before application to the cluster. This substitution ensures that the cluster nodes pull the right version
of the image.
Manifest stability : The rollout status of the deployed Kubernetes objects is checked. The stability checks
are incorporated to determine whether the task status is a success or a failure.
Traceability annotations : Annotations are added to the deployed Kubernetes objects to superimpose
traceability information. The following annotations are supported:
azure-pipelines/org
azure-pipelines/project
azure-pipelines/pipeline
azure-pipelines/pipelineId
azure-pipelines/execution
azure-pipelines/executionuri
azure-pipelines/jobName
Secret handling : The createSecret action lets Docker registry secrets be created using Docker registry
service connections. It also lets generic secrets be created using either plain-text variables or secret
variables. Before deployment to the cluster, you can use the secrets input along with the deploy action to
augment the input manifest files with the appropriate imagePullSecrets value.
Bake manifest : The bake action of the task allows for baking templates into Kubernetes manifest files.
The action uses tools such as Helm, Compose, and kustomize. With baking, these Kubernetes manifest files
are usable for deployments to the cluster.
Deployment strategy: Choosing the canary strategy with the deploy action leads to creation of
workloads having names suffixed with "-baseline" and "-canary". The task supports two methods of traffic
splitting:
Service Mesh Interface: Service Mesh Interface (SMI) abstraction allows configuration with
service mesh providers like Linkerd and Istio. The Kubernetes Manifest task maps SMI TrafficSplit
objects to the stable, baseline, and canary services during the life cycle of the deployment strategy.
Canary deployments that are based on a service mesh and use this task are more accurate. This
accuracy comes because service mesh providers enable the granular percentage-based split of
traffic. The service mesh uses the service registry and sidecar containers that are injected into pods.
This injection occurs alongside application containers to achieve the granular traffic split.
Kubernetes with no service mesh: In the absence of a service mesh, you might not get the exact
percentage split you want at the request level. But you can possibly do canary deployments by
using baseline and canary variants next to the stable variant.
The service sends requests to pods of all three workload variants as the selector-label constraints
are met. Kubernetes Manifest honors these requests when creating baseline and canary variants.
This routing behavior achieves the intended effect of routing only a portion of total requests to the
canary.
Compare the baseline and canary workloads by using either a Manual Intervention task in release
pipelines or a Delay task in YAML pipelines. Do the comparison before using the promote or reject action
of the task.

Deploy action
PARAMETER    DESCRIPTION

action (Required)
Action
Acceptable values are deploy , promote , reject , bake ,
createSecret , scale , patch , and delete .

kubernetesServiceConnection (Required unless the task is used in a Kubernetes environment)
Kubernetes service connection
The name of the Kubernetes service connection.

namespace (Required unless the task is used in a Kubernetes environment)
Namespace
The namespace within the cluster to deploy to.

manifests (Required)
Manifests
The path to the manifest files to be used for deployment. A
file-matching pattern is an acceptable value for this input.

containers (Optional)
Containers
The fully qualified URL of the image to be used for
substitutions on the manifest files. This input accepts the
specification of multiple artifact substitutions in newline-
separated form. Here's an example:

containers: |
contosodemo.azurecr.io/foo:test1
contosodemo.azurecr.io/bar:test2

In this example, all references to


contosodemo.azurecr.io/foo and
contosodemo.azurecr.io/bar are searched for in the image
field of the input manifest files. For each match found, the tag
test1 or test2 replaces the matched reference.

imagePullSecrets (Optional)
Image pull secrets
Multiline input where each line contains the name of a
Docker registry secret that has already been set up within
the cluster. Each secret name is added under
imagePullSecrets for the workloads that are found in the
input manifest files.

strategy (Optional)
Strategy
The deployment strategy used while manifest files are applied
on the cluster. Currently, canary is the only acceptable
deployment strategy.

trafficSplitMethod (Optional)
Traffic split method
Acceptable values are pod and smi. The default value is pod .

For the value smi, the percentage traffic split is done at the
request level by using a service mesh. A service mesh must
be set up by a cluster admin. This task handles orchestration
of SMI TrafficSplit objects.

For the value pod , the percentage split isn't possible at the
request level in the absence of a service mesh. Instead, the
percentage input is used to calculate the replicas for baseline
and canary. The calculation is a percentage of replicas that
are specified in the input manifests for the stable variant.

percentage (Required only if strategy is set to canary)


Percentage
The percentage that is used to compute the number of
baseline-variant and canary-variant replicas of the workloads
that are contained in manifest files.

For the specified percentage input, calculate:

(percentage × number of replicas) / 100

If the result isn't an integer, the mathematical floor of the


result is used when baseline and canary variants are created.

For example, assume the deployment hello-world is in the


input manifest file and that the following lines are in the task
input:

replicas: 4
strategy: canary
percentage: 25

In this case, the deployments hello-world-baseline and hello-


world-canary are created with one replica each. The baseline
variant is created with the same image and tag as the stable
version, which is the four-replica variant before deployment.
The canary variant is created with the image and tag
corresponding to the newly deployed changes.

baselineAndCanaryReplicas (Optional, and relevant only if trafficSplitMethod is set to smi)
Baseline and canary replicas

When you set trafficSplitMethod to smi, the percentage


traffic split is controlled in the service mesh plane. But you
can control the actual number of replicas for canary and
baseline variants independently of the traffic split.

For example, assume that the input deployment manifest


specifies 30 replicas for the stable variant. Also assume that
you specify the following input for the task:

strategy: canary
trafficSplitMethod: smi
percentage: 20
baselineAndCanaryReplicas: 1

In this case, the stable variant receives 80% of the traffic,


while the baseline and canary variants each receive half of the
specified 20%. But baseline and canary variants don't receive
three replicas each. They instead receive the specified number
of replicas, which means they each receive one replica.

The following YAML code is an example of deploying to a Kubernetes namespace by using manifest files:

steps:
- task: KubernetesManifest@0
displayName: Deploy
inputs:
kubernetesServiceConnection: someK8sSC1
namespace: default
manifests: manifests/deployment.yml|manifests/service.yml
containers: |
foo/demo:$(tagVariable1)
bar/demo:$(tagVariable2)
imagePullSecrets: |
some-secret
some-other-secret

In the above example, the task tries to find matches for the images foo/demo and bar/demo in the image fields of
manifest files. For each match found, the value of either tagVariable1 or tagVariable2 is appended as a tag to
the image name. You can also specify digests in the containers input for artifact substitution.

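When a canary rollout is wanted, the same deploy action can be given the strategy and percentage inputs. The following is a minimal sketch that reuses the service connection, namespace, and manifests from the example above; the percentage value is illustrative:

steps:
- task: KubernetesManifest@0
  displayName: Deploy canary
  inputs:
    action: deploy
    strategy: canary
    percentage: 25
    kubernetesServiceConnection: someK8sSC1
    namespace: default
    manifests: manifests/deployment.yml|manifests/service.yml
    containers: |
      foo/demo:$(tagVariable1)
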
NOTE
While you can author deploy, promote, and reject actions with YAML input related to deployment strategy, support for a
Manual Intervention task is currently unavailable for build pipelines.
For release pipelines, we advise you to use actions and input related to deployment strategy in the following sequence:
1. A deploy action specified with strategy: canary and percentage: $(someValue) .
2. A Manual Intervention task so that you can pause the pipeline and compare the baseline variant with the canary
variant.
3. A promote action that runs if a Manual Intervention task is resumed and a reject action that runs if a Manual
Intervention task is rejected.
Promote and reject actions
PARAMETER    DESCRIPTION

action (Required)
Action
Acceptable values are deploy , promote , reject , bake ,
createSecret , scale , patch , and delete .

kubernetesServiceConnection (Required)
Kubernetes service connection
The name of the Kubernetes service connection.

namespace (Required)
Namespace
The namespace within the cluster to deploy to.

manifests (Required)
Manifests
The path to the manifest files to be used for deployment. A
file-matching pattern is an acceptable value for this input.

containers (Optional)
Containers
The fully qualified resource URL of the image to be used for
substitutions on the manifest files. The URL
contosodemo.azurecr.io/helloworld:test is an example.

imagePullSecrets (Optional)
Image pull secrets
Multiline input where each line contains the name of a
Docker registry secret that is already set up within the cluster.
Each secret name is added under the imagePullSecrets field
for the workloads that are found in the input manifest files.

strategy (Optional)
Strategy
The deployment strategy used in the deploy action before a
promote action or reject action. Currently, canary is the only
acceptable deployment strategy.

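The promote action is typically run after the baseline and canary variants have been compared. The following is a minimal sketch of a promote step that reuses the connection, namespace, and manifests from the deploy example above; a matching reject step would only change the action input:

steps:
- task: KubernetesManifest@0
  displayName: Promote canary
  inputs:
    action: promote
    strategy: canary
    kubernetesServiceConnection: someK8sSC1
    namespace: default
    manifests: manifests/deployment.yml|manifests/service.yml
    containers: |
      foo/demo:$(tagVariable1)
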
Create secret action


PARAMETER    DESCRIPTION

action (Required)
Action
Acceptable values are deploy , promote , reject , bake ,
createSecret , scale , patch , and delete .

secretType (Required only if action is set to createSecret)
Secret type
Acceptable values are dockerRegistry and generic. The default value is dockerRegistry.
If you set secretType to dockerRegistry, the imagePullSecrets field is created or updated in a cluster to help pull
images from a private container registry.

secretName (Required)
Secret name
The name of the secret to be created or updated.

dockerRegistryEndpoint (Required only if action is set to createSecret and secretType is set to dockerRegistry)
Docker registry service connection
The credentials of the specified service connection are used to create a Docker registry secret within the cluster.
Manifest files under the imagePullSecrets field can then refer to this secret's name.

secretArguments (Required only if action is set to createSecret and secretType is set to generic)
Secret arguments
Multiline input that accepts keys and literal values to be used for creation and updating of secrets. Here's an example:
--from-literal=key1=value1 --from-literal=key2="top secret"

kubernetesServiceConnection (Required)
Kubernetes service connection
The name of the Kubernetes service connection.

namespace (Required)
Namespace
The cluster namespace within which to create a secret.

The following YAML code shows a sample creation of Docker registry secrets by using Docker Registry service
connection:

steps:
- task: KubernetesManifest@0
displayName: Create secret
inputs:
action: createSecret
secretType: dockerRegistry
secretName: foobar
dockerRegistryEndpoint: demoACR
kubernetesServiceConnection: someK8sSC
namespace: default

This YAML code shows a sample creation of generic secrets:

steps:
- task: KubernetesManifest@0
displayName: Create secret
inputs:
action: createSecret
secretType: generic
secretName: some-secret
secretArguments: --from-literal=key1=value1
kubernetesServiceConnection: someK8sSC
namespace: default

Bake action
PARAMETER    DESCRIPTION

action (Required)
Action
Acceptable values are deploy , promote , reject , bake ,
createSecret , scale , patch , and delete .

renderType (Required only if action is set to bake )


Render engine
The render type used to produce the manifest files.

Acceptable values are helm2 , kompose , and kustomize .


The default value is helm2 .

helmChart (Required only if action is set to bake and renderType is set to helm2)
Helm chart
The path to the Helm chart used for baking.

overrideFiles (Optional, and relevant only if action is set to bake and


Override files renderType is set to helm2 )

Multiline input that accepts the path to the override files. The
files are used when manifest files from Helm charts are
baked.

overrides (Optional, and relevant only if action is set to bake and


Override values renderType is set to helm2 )

Additional override values that are used via the command-


line switch --set when manifest files using Helm are baked.

Specify override values as key-value pairs in the format key:


value. If you use multiple overriding key-value pairs, specify
each key-value pair in a separate line. Use a newline character
as the delimiter between different key-value pairs.

releaseName (Optional, and relevant only if action is set to bake and


Release name renderType is set to helm2 )

The name of the release used when baking Helm charts.

kustomizationPath (Optional, and relevant only if action is set to bake and


Kustomization path renderType is set to kustomize )

The path to the directory containing the file


kustomization.yaml.

dockerComposeFile (Optional, and relevant only if action is set to bake and


Path to Docker compose file renderType is set to kompose )

The path to the Docker compose file.

The following YAML code is an example of baking manifest files from Helm charts. Note the usage of name input
in the first task. This name is later referenced from the deploy step for specifying the path to the manifests that
were produced by the bake step.
steps:
- task: KubernetesManifest@0
name: bake
displayName: Bake K8s manifests from Helm chart
inputs:
action: bake
helmChart: charts/sample
overrides: 'image.repository:nginx'

- task: KubernetesManifest@0
displayName: Deploy K8s manifests
inputs:
kubernetesServiceConnection: someK8sSC
namespace: default
manifests: $(bake.manifestsBundle)
containers: |
nginx: 1.7.9

Scale action
PARAMETER    DESCRIPTION

action (Required)
Action
Acceptable values are deploy , promote , reject , bake ,
createSecret , scale , patch , and delete .

kind (Required)
Kind
The kind of Kubernetes object to be scaled up or down.
Examples include ReplicaSet and StatefulSet.

name (Required)
Name
The name of the Kubernetes object to be scaled up or down.

replicas (Required)
Replica count
The number of replicas to scale to.

kubernetesServiceConnection (Required)
Kubernetes service connection
The name of the Kubernetes service connection.

namespace (Required)
Namespace
The namespace within the cluster to deploy to.

The following YAML code shows an example of scaling objects:


steps:
- task: KubernetesManifest@0
displayName: Scale
inputs:
action: scale
kind: deployment
name: bootcamp-demo
replicas: 5
kubernetesServiceConnection: someK8sSC
namespace: default

Patch action
PARAMETER    DESCRIPTION

action (Required)
Action
Acceptable values are deploy , promote , reject , bake ,
createSecret , scale , patch , and delete .

resourceToPatch (Required)
Resource to patch
Indicates one of the following patch methods:
A manifest file identifies the objects to be patched.
An individual object is identified by kind and name as
the patch target.
Acceptable values are file and name . The default value is
file .

resourceFiletoPatch (Required only if action is set to patch and


File path resourceToPatch is set to file )

The path to the file used for the patch.

kind (Required only if action is set to patch and


Kind resourceToPatch is set to name )

The kind of the Kubernetes object. Examples include


ReplicaSet and StatefulSet.

name (Required only if action is set to patch and


Name resourceToPatch is set to name )

The name of the Kubernetes object to be patched.

mergeStrategy (Required)
Merge strategy
The strategy to be used for applying the patch.

Acceptable values are json , merge , and strategic. The


default value is strategic.

patch (Required)
Patch
The contents of the patch.

kubernetesServiceConnection (Required)
Kubernetes service connection
The name of the Kubernetes service connection.

namespace (Required)
Namespace
The namespace within the cluster to deploy to.

The following YAML code shows an example of object patching:

steps:
- task: KubernetesManifest@0
displayName: Patch
inputs:
action: patch
kind: pod
name: demo-5fbc4d6cd9-pgxn4
mergeStrategy: strategic
patch: '{"spec":{"containers":[{"name":"demo","image":"foobar/demo:2239"}]}}'
kubernetesServiceConnection: someK8sSC
namespace: default

Delete action
PARAMETER    DESCRIPTION

action (Required)
Action
Acceptable values are deploy , promote , reject , bake ,
createSecret , scale , patch , and delete .

arguments (Required)
Arguments
Arguments to be passed on to kubectl for deleting the
necessary objects. An example is:
arguments: deployment hello-world foo-bar

kubernetesServiceConnection (Required)
Kubernetes service connection
The name of the Kubernetes service connection.

namespace (Required)
Namespace
The namespace within the cluster to deploy to.

This YAML code shows a sample object deletion:

steps:
- task: KubernetesManifest@0
displayName: Delete
inputs:
action: delete
arguments: deployment expressapp
kubernetesServiceConnection: someK8sSC
namespace: default

Troubleshooting
My Kubernetes cluster is behind a firewall and I am using hosted agents. How can I deploy to this cluster?
You can grant hosted agents access through your firewall by allowing the IP addresses for the hosted agents. For
more details, see Agent IP ranges

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
PowerShell on Target Machines task

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to execute PowerShell scripts on remote machine(s).
This task can run both PowerShell scripts and PowerShell-DSC scripts:
For PowerShell scripts, the computers must have PowerShell 2.0 or higher installed.
For PowerShell-DSC scripts, the computers must have the latest version of the Windows Management Framework
installed. This is installed by default on Windows 8.1, Windows Server 2012 R2, and later.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs are
called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Prerequisites
This task uses Windows Remote Management (WinRM) to access on-premises physical computers or virtual computers
that are domain-joined or workgroup-joined.
To set up WinRM for on-premises physical computers or virtual machines
Follow the steps described in domain-joined.
To set up WinRM for Microsoft Azure Virtual Machines
Azure Virtual Machines require WinRM to use the HTTPS protocol. You can use a self-signed Test Certificate. In this case,
the automation agent will not validate the authenticity of the certificate as being issued by a trusted certification authority.
Azure Classic Virtual Machines. When you create a classic virtual machine from the Azure portal, the virtual
machine is already set up for WinRM over HTTPS, with the default port 5986 already opened in the firewall and a
self-signed certificate installed on the machine. These virtual machines can be accessed with no further
configuration required. Existing Classic virtual machines can be also selected by using the Azure Resource Group
Deployment task.
Azure Resource Group. If you have an Azure Resource Group
already defined in the Azure portal, you must configure it to use the WinRM HTTPS protocol. You need to open port 5986
in the firewall, and install a self-signed certificate.
To dynamically deploy Azure Resource Groups that contain virtual machines, use the Azure Resource Group Deployment
task. This task has a checkbox named Enable Deployment Prerequisites . Select this to automatically set up the WinRM
HTTPS protocol on the virtual machines, open port 5986 in the firewall, and install a test certificate. The virtual machines
are then ready for use in the deployment task.

YAML snippet
# PowerShell on target machines
# Execute PowerShell scripts on remote machines using PSSession and Invoke-Command for remoting
- task: PowerShellOnTargetMachines@3
inputs:
machines:
#userName: # Optional
#userPassword: # Optional
#scriptType: 'Inline' # Optional. Options: filePath, inline
#scriptPath: # Required when scriptType == FilePath
#inlineScript: '# Write your powershell commands here.Write-Output Hello World' # Required when scriptType == Inline
#scriptArguments: # Optional
#initializationScript: # Optional
#sessionVariables: # Optional
#communicationProtocol: 'Https' # Optional. Options: http, https
#authenticationMechanism: 'Default' # Optional. Options: default, credssp
#newPsSessionOptionArguments: '-SkipCACheck -IdleTimeout 7200000 -OperationTimeout 0 -OutputBufferingMode Block' # Optional
#errorActionPreference: 'stop' # Optional. Options: stop, continue, silentlyContinue
#failOnStderr: false # Optional
#ignoreLASTEXITCODE: false # Optional
#workingDirectory: # Optional
#runPowershellInParallel: true # Optional

Arguments
ARGUMENT | DESCRIPTION

Machines A comma-separated list of machine FQDNs or IP addresses,


optionally including the port number. Can be:
- The name of an Azure Resource Group.
- A comma-delimited list of machine names. Example:
dbserver.fabrikam.com,dbserver_int.fabrikam.com:5986,192.168.34:5986
- An output variable from a previous task.
If you do not specify a port, the default WinRM port is used. This
depends on the protocol you have configured: for WinRM 2.0, the
default HTTP port is 5985 and the default HTTPS port is 5986.

Admin Login The username of either a domain or a local administrative account
on the target host(s).
- Formats such as username, domain\username, machine-name\username, and .\username are supported.
- UPN formats (for example, username@domain.com) and built-in system accounts such as NT Authority\System are not supported.

Password The password for the administrative account specified above.


Consider using a secret variable global to the build or release
pipeline to hide the password. Example: $(passwordVariable)

Protocol The protocol that will be used to connect to the target host, either
HTTP or HTTPS.

Test Certificate If you choose the HTTPS option, set this checkbox to skip
validating the authenticity of the machine's certificate by a trusted
certification authority.

Deployment - PowerShell Script The location of the PowerShell script on the target machine. Can
include environment variables such as $env:windir and
$env:systemroot Example:
C:\FabrikamFibre\Web\deploy.ps1

Deployment - Script Arguments The arguments required by the script, if any. Example:
-applicationPath $(applicationPath) -username
$(vmusername) -password $(vmpassword)

Deployment - Initialization Script The location on the target machine(s) of the data script used by
PowerShell-DSC. It is recommended to use arguments instead of
an initialization script.

Deployment - Session Variables Used to set up the session variables for the PowerShell scripts. A
comma-separated list such as $varx=valuex, $vary=valuey
Most commonly used for backward compatibility with earlier
versions of the release service. It is recommended to use
arguments instead of session variables.

Advanced - Run PowerShell in Parallel Set this option to execute the PowerShell scripts in parallel on all
the target machines

Advanced - Select Machines By Depending on how you want to specify the machines in the group
when using the Filter Criteria parameter, choose Machine
Names or Tags .

Advanced - Filter Criteria Optional. A list of machine names or tag names that identifies the
machines that the task will target. The filter criteria can be:
- The name of an Azure Resource Group.
- An output variable from a previous task.
- A comma-delimited list of tag names or machine names.
Format when using machine names is a comma-separated list of
the machine FQDNs or IP addresses.
Specify tag names for a filter as {TagName}: {Value} Example:
Role:DB;OS:Win8.1

Control options See Control options

Version 3.x of the task includes the Inline script setting where you can enter your PowerShell script code.
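
For reference, the following YAML sketch runs a short inline script on two remote machines over HTTPS. The machine names, the administrative account, and the $(adminPassword) secret variable are placeholders; substitute values that match your environment:

steps:
- task: PowerShellOnTargetMachines@3
  displayName: Run inline script on remote machines
  inputs:
    machines: 'web01.fabrikam.com:5986,web02.fabrikam.com:5986'   # placeholder FQDNs with the WinRM HTTPS port
    userName: 'fabrikam\deployadmin'                              # placeholder administrative account
    userPassword: '$(adminPassword)'                              # secret pipeline variable
    scriptType: 'Inline'
    inlineScript: |
      Write-Output "Deploying to $env:COMPUTERNAME"
    communicationProtocol: 'Https'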

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Service Fabric Application Deployment task

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use this task to deploy a Service Fabric application to a cluster. This task deploys an Azure Service Fabric application to a cluster according to
the settings defined in the publish profile.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs are called builds, service
connections are called service endpoints, stages are called environments, and jobs are called phases.

Prerequisites
Service Fabric
This task uses a Service Fabric installation to connect and deploy to a Service Fabric cluster.
Download and install Service Fabric on the build agent.

YAML snippet
# Service Fabric application deployment
# Deploy an Azure Service Fabric application to a cluster
- task: ServiceFabricDeploy@1
inputs:
applicationPackagePath:
serviceConnectionName:
#publishProfilePath: # Optional
#applicationParameterPath: # Optional
#overrideApplicationParameter: false # Optional
#compressPackage: false # Optional
#copyPackageTimeoutSec: # Optional
#registerPackageTimeoutSec: # Optional
#overwriteBehavior: 'SameAppTypeAndVersion' # Options: always, never, sameAppTypeAndVersion
#skipUpgradeSameTypeAndVersion: false # Optional
#skipPackageValidation: false # Optional
#useDiffPackage: false # Optional
#overridePublishProfileSettings: false # Optional
#isUpgrade: true # Optional
#unregisterUnusedVersions: true # Optional
#upgradeMode: 'Monitored' # Required when overridePublishProfileSettings == True && IsUpgrade == True # Options: monitored, unmonitoredAuto, unmonitoredManual
#failureAction: 'Rollback' # Required when overridePublishProfileSettings == True && IsUpgrade == True && UpgradeMode == Monitored # Options: rollback, manual
#upgradeReplicaSetCheckTimeoutSec: # Optional
#timeoutSec: # Optional
#forceRestart: false # Optional
#healthCheckRetryTimeoutSec: # Optional
#healthCheckWaitDurationSec: # Optional
#healthCheckStableDurationSec: # Optional
#upgradeDomainTimeoutSec: # Optional
#considerWarningAsError: false # Optional
#defaultServiceTypeHealthPolicy: # Optional
#maxPercentUnhealthyDeployedApplications: # Optional
#upgradeTimeoutSec: # Optional
#serviceTypeHealthPolicyMap: # Optional
#configureDockerSettings: false # Optional
#registryCredentials: 'AzureResourceManagerEndpoint' # Required when configureDockerSettings == True # Options: azureResourceManagerEndpoint, containerRegistryEndpoint, usernamePassword
#dockerRegistryConnection: # Required when configureDockerSettings == True && RegistryCredentials == ContainerRegistryEndpoint
#azureSubscription: # Required when configureDockerSettings == True && RegistryCredentials == AzureResourceManagerEndpoint
#registryUserName: # Optional
#registryPassword: # Optional
#passwordEncrypted: true # Optional

Task Inputs
PARAMETERS | DESCRIPTION

applicationPackagePath (Required)
Application Package
Path to the application package that is to be deployed. Variables and wildcards can be used in the path.

serviceConnectionName (Required)
Cluster Service Connection
Select an Azure Service Fabric service connection to be used to connect to the cluster. The settings defined in this referenced service connection will override those defined in the publish profile. Choose 'Manage' to register a new service connection.
To connect to the cluster, the Service Fabric task uses the machine cert store to store the information about the certificate. Using the same certificate, if two releases run together on one machine they will start properly. However, when one of the tasks completes, the certificate from the machine cert store is cleaned up, which affects the second release.

publishProfilePath (Optional)
Publish Profile
Path to the publish profile file that defines the settings to use. Variables and wildcards can be used in the path. Publish profiles can be created in Visual Studio as shown here.

applicationParameterPath (Optional)
Application Parameters
Path to the application parameters file. Variables and wildcards can be used in the path. If specified, this will override the value in the publish profile. An application parameters file can be created in Visual Studio as shown here.

overrideApplicationParameter (Optional)
Override Application Parameters
Variables defined in the build or release pipeline will be matched against the 'Parameter Name' entries in the application manifest file. An application parameters file can be created in Visual Studio as shown here.
Example: If your application has a parameter defined as below:

<Parameters>
  <Parameter Name="SampleApp_PartitionCount" Value="1"/>
  <Parameter Name="SampleApp_InstanceCount" DefaultValue="-1"/>
</Parameters>

and you want to change the partition count to 2, you can define a release pipeline or environment variable "SampleApp_PartitionCount" with its value set to "2".
Note: If the same variables are defined in the release pipeline and in the environment, the environment variables will supersede the release pipeline variables.
Default value: false

compressPackage (Optional)
Compress Package
Indicates whether the application package should be compressed before copying to the image store. If enabled, this will override the value in the publish profile. More information about package compression can be found here.
Default value: false

copyPackageTimeoutSec (Optional)
CopyPackageTimeoutSec
Timeout in seconds for copying the application package to the image store. If specified, this will override the value in the publish profile.

registerPackageTimeoutSec (Optional)
RegisterPackageTimeoutSec
Timeout in seconds for registering or un-registering the application package.

overwriteBehavior (Required)
Overwrite Behavior
Overwrite behavior: when upgrade is not configured and an application with the same name already exists in the cluster, the following actions are available => Never, Always, SameAppTypeAndVersion.
Never will not remove the existing application. This is the default behavior.
Always will remove the existing application even if its application type and version is different from the application being created.
SameAppTypeAndVersion will remove the existing application only if its application type and version is the same as the application being created.
Default value: SameAppTypeAndVersion

skipUpgradeSameTypeAndVersion (Optional)
Skip upgrade for same Type and Version
Indicates whether an upgrade will be skipped if the same application type and version already exists in the cluster; otherwise the upgrade fails during validation. If enabled, re-deployments are idempotent.
Default value: false

skipPackageValidation (Optional)
Skip package validation
Indicates whether the package should be validated or not before deployment. More information about package validation can be found here.
Default value: false

useDiffPackage (Optional)
Use Diff Package
The diff package is created by the task by comparing the package specified in the Application Package input against the package that is currently registered in the target cluster. If a service version in the cluster's current package is the same as in the new package, then that service package is removed from the new application package. See more details about diff packages here.
Default value: false

overridePublishProfileSettings (Optional)
Override All Publish Profile Upgrade Settings
This will override all upgrade settings with either the values specified below or the default value if not specified. More information about upgrade settings can be found here.
Default value: false

isUpgrade (Optional)
Upgrade the Application
If false, the application will be overwritten.
Default value: true

unregisterUnusedVersions (Optional)
Unregister Unused Versions
Indicates whether all unused versions of the application type will be removed after an upgrade.
Default value: true

upgradeMode (Required)
Upgrade Mode Default value: Monitored

FailureAction (Required)
FailureAction Default value: Rollback

UpgradeReplicaSetCheckTimeoutSec (Optional)
UpgradeReplicaSetCheckTimeoutSec

TimeoutSec (Optional)
TimeoutSec

ForceRestart (Optional)
ForceRestart Default value: false

HealthCheckRetryTimeoutSec (Optional)
HealthCheckRetryTimeoutSec

HealthCheckWaitDurationSec (Optional)
HealthCheckWaitDurationSec

HealthCheckStableDurationSec (Optional)
HealthCheckStableDurationSec

UpgradeDomainTimeoutSec (Optional)
UpgradeDomainTimeoutSec

ConsiderWarningAsError (Optional)
ConsiderWarningAsError Default value: false

DefaultServiceTypeHealthPolicy (Optional)
DefaultServiceTypeHealthPolicy

MaxPercentUnhealthyDeployedApplications (Optional)
MaxPercentUnhealthyDeployedApplications

UpgradeTimeoutSec (Optional)
UpgradeTimeoutSec

ServiceTypeHealthPolicyMap (Optional)
ServiceTypeHealthPolicyMap

configureDockerSettings
Configure Docker settings
Default value: false

registryCredentials (Required)
Registry Credentials Source
Choose how credentials for the Docker registry will be provided.
Default value: AzureResourceManagerEndpoint

dockerRegistryEndpoint (Required)
Docker Registry Service Connection
Select a Docker registry service connection. Required for commands that need to authenticate with a registry.
Note: the task will try to encrypt the registry secret before transmitting it to the Service Fabric cluster. However, it needs the cluster's server certificate to be installed on the agent machine in order to do so. If the certificate is not present, the secret will not be encrypted.

azureSubscriptionEndpoint (Required)
Azure subscription
Select an Azure subscription.
Note: the task will try to encrypt the registry secret before transmitting it to the Service Fabric cluster. However, it needs the cluster's server certificate to be installed on the agent machine in order to do so. If the certificate is not present, the secret will not be encrypted.

registryUserName (Optional)
Registry User Name
Username for the Docker registry.

registryPassword (Optional)
Registry Password
Password for the Docker registry. If the password is not encrypted, it is recommended that you use a custom release pipeline secret variable to store it.

passwordEncrypted (Optional)
Password Encrypted
It is recommended to encrypt your password using Invoke-ServiceFabricEncryptText. If you do not, and a certificate matching the Server Certificate Thumbprint in the Cluster Service Connection is installed on the build agent, it will be used to encrypt the password; otherwise an error will occur.
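
For reference, the following YAML sketch deploys a packaged application using an existing publish profile. The service connection name and the artifact paths are placeholders, not values defined by this task:

steps:
- task: ServiceFabricDeploy@1
  displayName: Deploy Service Fabric application
  inputs:
    applicationPackagePath: '$(System.DefaultWorkingDirectory)/drop/applicationpackage'                           # placeholder path
    serviceConnectionName: 'mySfClusterConnection'                                                                # placeholder service connection
    publishProfilePath: '$(System.DefaultWorkingDirectory)/drop/projectartifacts/PublishProfiles/Cloud.xml'       # placeholder path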

Also see: Update Service Fabric Manifests task

Arguments
ARGUMENT | DESCRIPTION

Publish Profile The location of the publish profile that specifies the settings to use for
deployment, including the location of the target Service Fabric cluster. Can
include wildcards and variables. Example:
$(system.defaultworkingdirectory)/**/drop/projectartifacts/**/PublishProfiles/Cloud

Application Package The location of the Service Fabric application package to be deployed to the
cluster. Can include wildcards and variables. Example:
$(system.defaultworkingdirectory)/**/drop/applicationpackage

Cluster Connection The name of the Azure Service Fabric service connection defined in the
TS/TFS project that describes the connection to the cluster.

Control options See Control options

Also see: Update Service Fabric App Versions task

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Service Fabric Compose Deploy task

Azure Pipelines
Use this task to deploy a Docker-compose application to a Service Fabric cluster. This task deploys an Azure Service Fabric
application to a cluster according to the settings defined in the compose file.

Prerequisites
NOTE: This task is currently in preview and requires a preview version of Service Fabric that supports compose deploy. See
Docker Compose deployment support in Azure Service Fabric.
Service Fabric
This task uses a Service Fabric installation to connect and deploy to a Service Fabric cluster.
Download and install Azure Service Fabric Core SDK on the build agent.

YAML snippet
# Service Fabric Compose deploy
# Deploy a Docker Compose application to an Azure Service Fabric cluster
- task: ServiceFabricComposeDeploy@0
inputs:
clusterConnection:
#composeFilePath: '**/docker-compose.yml'
#applicationName: 'fabric:/Application1'
#registryCredentials: 'AzureResourceManagerEndpoint' # Options: azureResourceManagerEndpoint, containerRegistryEndpoint, usernamePassword, none
#dockerRegistryConnection: # Optional
#azureSubscription: # Required when registryCredentials == AzureResourceManagerEndpoint
#registryUserName: # Optional
#registryPassword: # Optional
#passwordEncrypted: true # Optional
#upgrade: # Optional
#deployTimeoutSec: # Optional
#removeTimeoutSec: # Optional
#getStatusTimeoutSec: # Optional

Arguments
ARGUMENT | DESCRIPTION

Cluster Connection The Azure Service Fabric service connection to use to connect and
authenticate to the cluster.

Compose File Path Path to the compose file that is to be deployed. Can include
wildcards and variables. Example:
$(System.DefaultWorkingDirectory)/**/drop/projectartifacts/**/docker-compose.yml
Note: combining compose files is not supported as part of this task.

Application Name The Service Fabric Application Name of the application being
deployed. Use fabric:/ as a prefix. Application Names within a
Service Fabric cluster must be unique.

Registry Credentials Source Specifies how credentials for the Docker container registry will be
provided to the deployment task:
Azure Resource Manager Endpoint: An Azure Resource Manager service connection and Azure subscription to be used to obtain a service principal ID and key for an Azure Container Registry.
Container Registry Endpoint: A Docker registry service connection. If a certificate matching the Server Certificate Thumbprint in the Cluster Connection is installed on the build agent, it will be used to encrypt the password; otherwise the password will not be encrypted and is sent in clear text.
Username and Password: Username and password to be used. We recommend you encrypt your password using Invoke-ServiceFabricEncryptText (check Password Encrypted). If you do not, and a certificate matching the Server Certificate Thumbprint in the Cluster Connection is installed on the build agent, it will be used to encrypt the password; otherwise the password will not be encrypted and is sent in clear text.
None: No registry credentials are provided (used for accessing public container registries).

Deploy Timeout (s) Timeout in seconds for deploying the application.

Remove Timeout (s) Timeout in seconds for removing an existing application.

Get Status Timeout (s) Timeout in seconds for getting the status of an existing application.

Control options See Control options
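
For reference, the following YAML sketch deploys a compose file from a public registry. The cluster connection name, compose file path, and application name are placeholders:

steps:
- task: ServiceFabricComposeDeploy@0
  displayName: Deploy Docker Compose application
  inputs:
    clusterConnection: 'mySfClusterConnection'                                    # placeholder service connection
    composeFilePath: '$(System.DefaultWorkingDirectory)/drop/docker-compose.yml'  # placeholder path
    applicationName: 'fabric:/MyComposeApp'                                       # placeholder application name
    registryCredentials: 'None'                                                   # public registry, no credentials (value per the options list above)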

Also see: Service Fabric PowerShell Utility

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
SSH Deployment task

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use this task to run shell commands or a script on a remote machine using SSH. This task enables you to connect
to a remote machine using SSH and run commands or a script.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Prerequisites
The task supports use of an SSH key pair to connect to the remote machine(s).
The public key must be pre-installed or copied to the remote machine(s).

YAML snippet
# SSH
# Run shell commands or a script on a remote machine using SSH
- task: SSH@0
inputs:
sshEndpoint:
#runOptions: 'commands' # Options: commands, script, inline
#commands: # Required when runOptions == Commands
#scriptPath: # Required when runOptions == Script
#inline: # Required when runOptions == Inline
#interpreterCommand: # Used when runOptions == Inline
#args: # Optional
#failOnStdErr: true # Optional

Arguments
ARGUMENT | DESCRIPTION

SSH endpoint The name of an SSH service connection containing connection


details for the remote machine. The hostname or IP address
of the remote machine, the port number, and the user name
are required to create an SSH service connection.
- The private key and the passphrase must be specified for
authentication.
- A password can be used to authenticate to remote Linux
machines, but this is not supported for macOS or Windows
systems.

Run Choose to run either shell commands or a shell script on the


remote machine.

Commands The shell commands to run on the remote machine. This


parameter is available only when Commands is selected for
the Run option. Enter each command together with its
arguments on a new line of the multi-line textbox. To run
multiple commands together, enter them on the same line
separated by semicolons. Example:
cd /home/user/myFolder;build

NOTE : Each command runs in a separate process. If you want


to run a series of commands that are interdependent (for
example, changing the current folder before executing a
command) use the Inline Script option instead.

Shell script path Path to the shell script file to run on the remote machine. This
parameter is available only when Shell script is selected for
the Run option.

Interpreter command Path to the command interpreter used to execute the script.
Used when Run option = Inline. Adds a shebang line to the
beginning of the script. Relevant only for UNIX-like operating
systems. Please use empty string for Windows-based remote
hosts. See more about shebang (#!)

Arguments The arguments to pass to the shell script. This parameter is


available only when Shell script is selected for the Run
option.

Advanced - Fail on STDERR If this option is selected (the default), the build will fail if the
remote commands or script write to STDERR.

Control options See Control options
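
For reference, the following YAML sketch runs a couple of diagnostic commands over an existing SSH service connection; the connection name is a placeholder:

steps:
- task: SSH@0
  displayName: Run remote commands over SSH
  inputs:
    sshEndpoint: 'myRemoteMachine'   # placeholder SSH service connection
    runOptions: 'commands'
    commands: |
      uname -a
      df -h
    failOnStdErr: false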

Supported algorithms
Key pair algorithms
RSA
DSA
Encryption algorithms
aes256-cbc
aes192-cbc
aes128-cbc
blowfish-cbc
3des-cbc
arcfour256
arcfour128
cast128-cbc
arcfour
For OpenSSL v1.0.1 and higher (on agent):
aes256-ctr
aes192-ctr
aes128-ctr
For OpenSSL v1.0.1 and higher, NodeJS v0.11.12 and higher (on agent):
aes128-gcm
aes128-gcm@openssh.com
aes256-gcm
aes256-gcm@openssh.com

See also
Install SSH Key task
Copy Files Over SSH
Blog post SSH build task

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
What key formats are supported for the SSH tasks?
The Azure Pipelines SSH tasks use the Node.js ssh2 package for SSH connections. Ensure that you are using the
latest version of the SSH tasks. Older versions may not support the OpenSSH key format.
If you run into an "Unsupported key format" error, then you may need to add the -m PEM flag to your ssh-keygen
command so that the key is in a supported format.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Windows Machine File Copy task

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to copy application files and other artifacts such as PowerShell scripts and PowerShell-DSC modules
that are required to install the application on Windows Machines. It uses RoboCopy, the command-line utility built
for fast copying of data.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.

YAML snippet
# Windows machine file copy
# Copy files to remote Windows machines
- task: WindowsMachineFileCopy@2
inputs:
sourcePath:
#machineNames: # Optional
#adminUserName: # Optional
#adminPassword: # Optional
targetPath:
#cleanTargetBeforeCopy: false # Optional
#copyFilesInParallel: true # Optional
#additionalArguments: # Optional

Arguments
ARGUMENT | DESCRIPTION

Source The path to the files to copy. Can be a local physical path
such as c:\files or a UNC path such as
\\myserver\fileshare\files . You can use pre-defined
system variables such as $(Build.Repository.LocalPath)
(the working folder on the agent computer), which makes it
easy to specify the location of the build artifacts on the
computer that hosts the automation agent.

Machines A comma-separated list of machine FQDNs or IP addresses,


optionally including the port number. Can be:
- The name of an Azure Resource Group.
- A comma-delimited list of machine names. Example:
dbserver.fabrikam.com,
dbserver_int.fabrikam.com:5986,192.168.34:5986
- An output variable from a previous task.

Admin Login The username of either a domain or a local administrative
account on the target host(s).
- Formats such as domain\username, username, and machine-name\username are supported.
- UPN formats (for example, username@domain.com) and built-in system accounts such as NT Authority\System are not supported.

Password The password for the administrative account specified above.


Consider using a secret variable global to the build or release
pipeline to hide the password. Example:
$(passwordVariable)

Destination Folder The folder on the Windows machine(s) to which the files will
be copied. Example: C:\FabrikamFibre\Web

Advanced - Clean Target Set this option to delete all the files in the destination folder
before copying the new files to it.

Advanced - Copy Files in Parallel Set this option to copy files to all the target machines in
parallel, which can speed up the copying process.

Advanced - Additional Arguments Arguments to pass to the RoboCopy process. Example:


/min:33553332 /l

Select Machines By Depending on how you want to specify the machines in the
group when using the Filter Criteria parameter, choose
Machine Names or Tags .

Filter Criteria Optional. A list of machine names or tag names that identifies
the machines that the task will target. The filter criteria can
be:
- The name of an Azure Resource Group.
- An output variable from a previous task.
- A comma-delimited list of tag names or machine names.
Format when using machine names is a comma-separated list
of the machine FQDNs or IP addresses.
Specify tag names for a filter as {TagName}: {Value} Example:
Role:DB;OS:Win8.1

Control options See Control options
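
For reference, the following YAML sketch copies build output to two target machines. The machine names, account, secret variable, and destination folder are placeholders:

steps:
- task: WindowsMachineFileCopy@2
  displayName: Copy files to web servers
  inputs:
    sourcePath: '$(Build.Repository.LocalPath)\drop'              # placeholder source folder
    machineNames: 'web01.fabrikam.com,web02.fabrikam.com'         # placeholder machine names
    adminUserName: 'fabrikam\deployadmin'                         # placeholder administrative account
    adminPassword: '$(adminPassword)'                             # secret pipeline variable
    targetPath: 'C:\FabrikamFibre\Web'                            # placeholder destination folder
    cleanTargetBeforeCopy: true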

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
I get a system error 53 when using this task. Why?
Typically this occurs when the specified path cannot be located. This may be due to a firewall blocking the
necessary ports for file and printer sharing, or an invalid path specification. For more details, see Error 53 on
TechNet.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
WinRM SQL Server DB Deployment task

Azure Pipelines
Use this task to deploy to SQL Server Database using a DACPAC or SQL script.

YAML snippet
# SQL Server database deploy
# Deploy a SQL Server database using DACPAC or SQL scripts
- task: SqlDacpacDeploymentOnMachineGroup@0
inputs:
#taskType: 'dacpac' # Options: dacpac, sqlQuery, sqlInline
#dacpacFile: # Required when taskType == Dacpac
#sqlFile: # Required when taskType == SqlQuery
#executeInTransaction: false # Optional
#exclusiveLock: false # Optional
#appLockName: # Required when exclusiveLock == True
#inlineSql: # Required when taskType == SqlInline
#targetMethod: 'server' # Required when taskType == Dacpac # Options: server, connectionString, publishProfile
#serverName: 'localhost' # Required when targetMethod == Server || TaskType == SqlQuery || TaskType == SqlInline
#databaseName: # Required when targetMethod == Server || TaskType == SqlQuery || TaskType == SqlInline
#authScheme: 'windowsAuthentication' # Required when targetMethod == Server || TaskType == SqlQuery || TaskType == SqlInline # Options: windowsAuthentication, sqlServerAuthentication
#sqlUsername: # Required when authScheme == SqlServerAuthentication
#sqlPassword: # Required when authScheme == SqlServerAuthentication
#connectionString: # Required when targetMethod == ConnectionString
#publishProfile: # Optional
#additionalArguments: # Optional
#additionalArgumentsSql: # Optional

Arguments
ARGUMENT | DESCRIPTION

Deploy SQL Using (Required) Specify the way in which you want to deploy the
database: either by using a DACPAC or by using a SQL script.

DACPAC File (Required) Location of the DACPAC file on the target machines
or on a UNC path like,
\BudgetIT\Web\Deploy\FabrikamDB.dacpac. The UNC path
should be accessible to the machine's administrator account.
Environment variables are also supported, such as $env:windir,
$env:systemroot, $env:windir\FabrikamFibre\DB. Wildcards
can be used. For example, /*.dacpac for DACPAC file
present in all sub folders.

Sql File (Required) Location of the SQL file on the target. Provide
semi-colon separated list of SQL script files to execute multiple
files. The SQL scripts will be executed in the order given.
Location can also be a UNC path like,
\BudgetIT\Web\Deploy\FabrikamDB.sql. The UNC path should
be accessible to the machine's administrator account.
Environment variables are also supported, such as $env:windir,
$env:systemroot, $env:windir\FabrikamFibre\DB. Wildcards
can be used. For example, /*.sql for sql file present in all
sub folders.

Execute within a transaction (Optional) Executes SQL script(s) within a transaction

Acquire an exclusive app lock while executing script(s) (Optional) Acquires an exclusive app lock while executing
script(s)

App lock name (Required) App lock name

Inline Sql (Required) Sql Queries inline

Specify SQL Using (Required) Specify the option to connect to the target SQL
Server Database. The options are either to provide the SQL
Server Database details, or the SQL Server connection string,
or the Publish profile XML file.

Server Name (Required) Provide the SQL Server name like,


machinename\FabrikamSQL,1433 or localhost or
.\SQL2012R2. Specifying localhost will connect to the Default
SQL Server instance on the machine.

Database Name (Required) Provide the name of the SQL Server database.

Authentication (Required) Select the authentication mode for connecting to


the SQL Server. In Windows authentication mode, the
administrator's account, as specified in the Machines section, is
used to connect to the SQL Server. In SQL Server
Authentication mode, the SQL login and Password have to be
provided in the parameters below.

SQL User name (Required) Provide the SQL login to connect to the SQL Server.
The option is only available if SQL Server Authentication mode
has been selected.

SQL Password (Required) Provide the Password of the SQL login. The option
is only available if SQL Server Authentication mode has been
selected.

Connection String (Required) Specify the SQL Server connection string like
"Server=localhost;Database=Fabrikam;User
ID=sqluser;Password=placeholderpassword;"

Publish Profile (Optional) Publish profiles provide fine-grained control over
SQL Server database deployments. Specify the path to the
Publish profile XML file on the target machine or on a UNC
share that is accessible by the machine administrator's
credentials.

Additional Arguments (Optional) Additional SqlPackage.exe arguments that will be


applied when deploying the SQL Server database like,
/p:IgnoreAnsiNulls=True /p:IgnoreComments=True. These
arguments will override the settings in the Publish profile XML
file (if provided).

Additional Arguments (Optional) Additional Invoke-Sqlcmd arguments that will be


applied when deploying the SQL Server database.

Control options See Control options
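
For reference, a minimal YAML sketch that deploys a DACPAC using Windows authentication (assuming the task runs where it can reach the target SQL Server, such as in a deployment group job). The DACPAC path and database name are placeholders:

steps:
- task: SqlDacpacDeploymentOnMachineGroup@0
  displayName: Deploy DACPAC
  inputs:
    taskType: 'dacpac'
    dacpacFile: '$(System.DefaultWorkingDirectory)\drop\FabrikamDB.dacpac'   # placeholder path
    targetMethod: 'server'
    serverName: 'localhost'
    databaseName: 'Fabrikam'                                                 # placeholder database name
    authScheme: 'windowsAuthentication'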

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
MySql Database Deployment on Machine Group task

Use this task to run your scripts and make changes to your MySQL Database. There are two ways to deploy, either
using a script file or writing the script in our inline editor. Note that this is an early preview version. Since this task
is server based, it appears on Deployment group jobs.

Prerequisites
MySQL client on the agent
The task expects the MySQL client to be installed on the agent machine.
Windows agent: Use this script file to install the MySQL client.
Linux agent: Run the command 'apt-get install mysql-client' to install the MySQL client.

Task Inputs
PARAMETERS | DESCRIPTION

TaskNameSelector
Deploy MySql Using
Select one of the options between Script File & Inline Script.
Default value: SqlTaskFile

SqlFile (Required)
MySQL Script
Full path of the script file on the automation agent or on a UNC path accessible to the automation agent, such as \\BudgetIT\DeployBuilds\script.sql. Predefined system variables such as $(agent.releaseDirectory) can also be used here. A file containing SQL statements can be used here.

SqlInline (Required)
Inline MySQL Script
MySQL script to execute on the database.

ServerName (Required)
Host Name
Server name of the Database for MySQL. Example: localhost.
When you connect using MySQL Workbench, this is the same value that is used for 'Hostname' in 'Parameters'.
Default value: localhost

DatabaseName
Database Name
The name of the database, if you already have one, on which the script below needs to be run; otherwise the script itself can be used to create the database.

SqlUsername (Required)
Mysql User Name
When you connect using MySQL Workbench, this is the same value that is used for 'Username' in 'Parameters'.

SqlPassword (Required)
Password
Password for the MySQL database. It can be a variable defined in the pipeline. Example: $(password).
Mark the variable type as 'secret' to secure it.

SqlAdditionalArguments
Additional Arguments
Additional options supported by the MySQL simple SQL shell. These options will be applied when executing the given file on the Database for MySQL.
Example: You can change the default tab-separated output format to HTML or even XML format. Or, if you have problems due to insufficient memory for large result sets, use the --quick option.

Example
This example creates a sample db in MySQL.

steps:
- task: MysqlDeploymentOnMachineGroup@1
displayName: 'Deploy Using : InlineSqlTask'
inputs:
TaskNameSelector: InlineSqlTask
SqlInline: |
CREATE DATABASE IF NOT EXISTS alm;
use alm;
ServerName: localhost
SqlUsername: root
SqlPassword: P2ssw0rd

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Docker Installer task

Azure Pipelines
Use this task to install a specific version of the Docker CLI on the agent machine.

Task Inputs
PARAMETERS | DESCRIPTION

dockerVersion (Required)
Docker Version
Specify the version of the Docker CLI to install.
Default value: 17.09.0-ce

releaseType (Optional)
Release type
Select the release type to install. 'Nightly' is not supported on Windows.
Default value: stable

This YAML example installs the Docker CLI on the agent machine:

- task: DockerInstaller@0
displayName: Docker Installer
inputs:
dockerVersion: 17.09.0-ce
releaseType: stable

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Go Tool Installer task

Azure Pipelines
Use this task to find or download a specific version of the Go tool into the tools cache and add it to the PATH. Use
the task to change the version of Go Lang used in subsequent tasks.

YAML snippet
# Go tool installer
# Find in cache or download a specific version of Go and add it to the PATH
- task: GoTool@0
inputs:
#version: '1.10'
#goPath: # Optional
#goBin: # Optional

Arguments
ARGUMENT | DESCRIPTION

version (Required)
Version
Go tool version to download and install. Example: 1.9.3
Default value: 1.10

goPath (Optional)
GOPATH
Value for the GOPATH environment variable.

goBin (Optional)
GOBIN
Value for the GOBIN environment variable.
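
For example, the following YAML sketch pins the pipeline to a specific Go version (the version shown is only illustrative):

- task: GoTool@0
  displayName: Use Go 1.13.5
  inputs:
    version: '1.13.5'   # illustrative version; use the Go version your project targets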

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Helm installer task

This task can be used for installing a specific version of helm binary on agents.

YAML snippet
# Helm tool installer
# Install Helm on an agent machine
- task: HelmInstaller@1
inputs:
#helmVersionToInstall: 'latest' # Optional

Task inputs
PARAMETERS | DESCRIPTION

helmVersionToInstall (Optional)
Helm Version Spec
The version of Helm to be installed on the agent. Acceptable values are latest or any semantic version string like 2.14.1.
Default value: latest

The following YAML example showcases the installation of latest version of helm binary on the agent -

- task: HelmInstaller@1
displayName: Helm installer
inputs:
helmVersionToInstall: latest

The following YAML example demonstrates the use of an explicit version string rather than installing the latest
version available at the time of task execution -

- task: HelmInstaller@1
displayName: Helm installer
inputs:
helmVersionToInstall: 2.14.1

Troubleshooting
HelmInstaller task running on a private agent behind a proxy fails to download helm package.
The HelmInstaller task does not use the proxy settings to download the file https://get.helm.sh/helm-v3.1.0-linux-amd64.zip. You can work around this by pre-installing Helm on your private agents.

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Java Tool Installer task

Azure Pipelines
Use this task to acquire a specific version of Java from a user supplied Azure blob, from a location in the source or
on the agent, or from the tools cache. The task also sets the JAVA_HOME environment variable. Use this task to
change the version of Java used in Java tasks.

Demands
None

YAML snippet
# Java tool installer
# Acquire a specific version of Java from a user-supplied Azure blob or the tool cache and sets JAVA_HOME
- task: JavaToolInstaller@0
inputs:
#versionSpec: '8'
jdkArchitectureOption: # Options: x64, x86
jdkSourceOption: # Options: AzureStorage, LocalDirectory
#jdkFile: # Required when jdkSourceOption == LocalDirectory
#azureResourceManagerEndpoint: # Required when jdkSourceOption == AzureStorage
#azureStorageAccountName: # Required when jdkSourceOption == AzureStorage
#azureContainerName: # Required when jdkSourceOption == AzureStorage
#azureCommonVirtualFile: # Required when jdkSourceOption == AzureStorage
jdkDestinationDirectory:
#cleanDestinationDirectory: true

Arguments
ARGUMENT | DESCRIPTION

versionSpec (Required)
JDK Version
Specify which JDK version to download and use.
Default value: 8

jdkArchitectureOption
JDK Architecture
Specify the bit version of the JDK. Options: x64, x86

jdkSourceOption (Required)
JDK source
Specify the source for the compressed JDK: either Azure blob storage, a local directory on the agent or in the source repository, or the pre-installed version of Java (available for Microsoft-hosted agents). Please see the example below about how to use a pre-installed version of Java.

jdkFile (Required)
JDK file
Applicable when jdkSourceOption == LocalDirectory. Specify the path to the JDK archive file that contains the compressed JDK. The path could be in your source repository or a local path on the agent. The file should be an archive (.zip, .tar.gz, .7z) containing a bin folder either on the root level or inside a single directory. For macOS, there is support for .pkg and .dmg files containing only one .pkg file inside.

azureResourceManagerEndpoint (Required)
Azure Subscription
Applicable when jdkSourceOption == AzureStorage. Specify the Azure Resource Manager subscription for the JDK.

azureStorageAccountName (Required)
Storage Account Name
Applicable when jdkSourceOption == AzureStorage. Specify the storage account name in which the JDK is located. Azure Classic and Resource Manager storage accounts are listed.

azureContainerName (Required)
Container Name
Applicable when jdkSourceOption == AzureStorage. Specify the name of the container in the storage account in which the JDK is located.

azureCommonVirtualFile (Required)
Common Virtual Path
Applicable when jdkSourceOption == AzureStorage. Specify the path to the JDK inside the Azure storage container.

jdkDestinationDirectory (Required)
Destination directory
Specify the destination directory into which the JDK should be extracted.

cleanDestinationDirectory (Required)
Clean destination directory
Select this option to clean the destination directory before the JDK is extracted into it.
Default value: true

Examples
Here's an example of getting the archive file from a local directory on Linux. The file should be an archive (.zip, .gz)
of the JAVA_HOME directory so that it includes the bin , lib , include , jre , etc. directories.

- task: JavaToolInstaller@0
inputs:
versionSpec: "11"
jdkArchitectureOption: x64
jdkSourceOption: LocalDirectory
jdkFile: "/builds/openjdk-11.0.2_linux-x64_bin.tar.gz"
jdkDestinationDirectory: "/builds/binaries/externals"
cleanDestinationDirectory: true

Here's an example of downloading the archive file from Azure Storage. The file should be an archive (.zip, .gz) of the
JAVA_HOME directory so that it includes the bin , lib , include , jre , etc. directories.
- task: JavaToolInstaller@0
inputs:
versionSpec: '6'
jdkArchitectureOption: 'x64'
jdkSourceOption: AzureStorage
azureResourceManagerEndpoint: myARMServiceConnection
azureStorageAccountName: myAzureStorageAccountName
azureContainerName: myAzureStorageContainerName
azureCommonVirtualFile: 'jdk1.6.0_45.zip'
jdkDestinationDirectory: '$(agent.toolsDirectory)/jdk6'
cleanDestinationDirectory: false

Here's an example of using "pre-installed" feature. This feature allows you to use Java versions that are pre-
installed on the Microsoft-hosted agent. You can find available pre-installed versions of Java in Software section.

- task: JavaToolInstaller@0
inputs:
versionSpec: '8'
jdkArchitectureOption: 'x86'
jdkSourceOption: 'PreInstalled'

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Where can I learn more about tool installers?
For an explanation of tool installers and examples, see Tool installers.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
Kubectl installer task

This task can be used for installing a specific version of kubectl binary on agents.

YAML snippet
# Kubectl tool installer
# Install Kubectl on agent machine
- task: KubectlInstaller@0
inputs:
#kubectlVersion: 'latest' # Optional

Task inputs
PARAMETERS | DESCRIPTION

kubectlVersion (Optional)
Kubectl version spec
The version of kubectl to be installed on the agent. Acceptable values are latest or any semantic version string like 1.15.0.
Default value: latest

The following YAML example showcases the installation of latest version of kubectl binary on the agent -

- task: KubectlInstaller@0
displayName: Kubectl installer
inputs:
kubectlVersion: latest

The following YAML example demonstrates the use of an explicit version string rather than installing the latest
version available at the time of task execution -

- task: KubectlInstaller@0
displayName: Kubectl installer
inputs:
kubectlVersion: 1.15.0

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Node.js Tool Installer task

Azure Pipelines
Build
Use this task to find, download, and cache a specified version of Node.js and add it to the PATH.

Demands
None

YAML snippet
# Node.js tool installer
# Finds or downloads and caches the specified version spec of Node.js and adds it to the PATH
- task: NodeTool@0
inputs:
#versionSpec: '6.x'
#checkLatest: false # Optional

Arguments
ARGUMENT | DESCRIPTION

versionSpec (Required)
Version Spec
Specify which Node.js version you want to use, using semver's version range syntax.
Examples: 7.x, 6.x, 6.10.0, >=6.10.0
Default value: 6.x

checkLatest (Optional)
Check for Latest Version
Select if you want the agent to check for the latest available version that satisfies the version spec. For example, you select this option because you run this build on your self-hosted agent and you want to always use the latest 6.x version.

TIP
If you're using the Microsoft-hosted agents, you should leave this check box cleared. We update the Microsoft-hosted
agents on a regular basis, but they're often slightly behind the latest version. So selecting this box will result in your build
spending a lot of time updating to a newer minor version.
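
For example, the following YAML sketch selects the latest available 12.x release (the version spec is only illustrative):

- task: NodeTool@0
  displayName: Use Node.js 12.x
  inputs:
    versionSpec: '12.x'   # illustrative version spec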

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Where can I learn more about tool installers?
For an explanation of tool installers and examples, see Tool installers.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
NuGet Tool Installer task

Azure Pipelines
Build
Use this task to find, download, and cache a specified version of NuGet and add it to the PATH.

Demands
None

YAML snippet
# NuGet tool installer
# Acquires a specific version of NuGet from the internet or the tools cache and adds it to the PATH. Use this task to change the version of NuGet used in the NuGet tasks.
- task: NuGetToolInstaller@1
inputs:
#versionSpec: # Optional
#checkLatest: false # Optional

Arguments
ARGUMENT | DESCRIPTION

versionSpec
Version Spec
A version or version range that specifies the NuGet version to make available on the path. Use x as a wildcard. See the list of available NuGet versions. If you want to match a pre-release version, the specification must contain a major, minor, patch, and pre-release version from the list above. Examples: 5.x, 5.4.x, 5.3.1, >=5.0.0-0. If unspecified, a version will be chosen automatically.

checkLatest
Always check for new versions
Always check for and download the latest available version of NuGet.exe which satisfies the version spec. Enabling this option could cause unexpected build breaks when a new version of NuGet is released.

TIP
If you're using the Microsoft-hosted agents, you should leave this check box cleared. We update the Microsoft-hosted
agents on a regular basis, but they're often slightly behind the latest version. So selecting this box will result in your build
spending a lot of time updating to a newer minor version.
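
For example, the following YAML sketch makes the latest 5.x release of NuGet available on the PATH (the version spec is only illustrative):

- task: NuGetToolInstaller@1
  displayName: Use NuGet 5.x
  inputs:
    versionSpec: '5.x'   # illustrative version spec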

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Where can I learn more about tool installers?
For an explanation of tool installers and examples, see Tool installers.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
Use .NET Core task

Azure Pipelines
Use this task to acquire a specific version of .NET Core from the Internet or the tools cache and add it to the PATH.
You can also use this task to change the version of .NET Core used in subsequent tasks like .NET Core cli task.
One other reason to use tool installer is if you want to decouple your pipeline from our update cycles to help avoid
a pipeline run being broken due to a change we make to our agent software.
What's New
Support for installing multiple versions side by side.
Support for patterns in version to fetch latest in minor/major version. For example, you can now specify
2.2.x to get the latest patch.
Perform Multi-level lookup. This input is only applicable to Windows based agents. It configures the .NET
Core's host process behavior for looking for a suitable shared framework on the machine. For more
information, see Multi-level SharedFX Lookup.
Installs NuGet version 4.4.1 and sets up proxy configuration if present in NuGet config.

Task Inputs
PARAMETERS | DESCRIPTION

packageType
Package to install
Please select whether to install only the runtime or the SDK.
Default value: sdk

useGlobalJson
Use global json
Select this option to install all SDKs from global.json files. These files are searched from system.DefaultWorkingDirectory. You can change the search root path by setting the working directory input.

workingDirectory
Working Directory
Specify the path from where global.json files should be searched when using `Use global json`. If empty, system.DefaultWorkingDirectory will be considered as the root path.

version
Version
Specify the version of the .NET Core SDK or runtime to install. Versions can be given in the following formats:
2.x => Install latest in major version.
2.2.x => Install latest in major and minor version.
2.2.104 => Install exact version.

Find the value of version for installing the SDK/runtime from releases.json. The link to the releases.json of a given major.minor version can be found in the releases-index file. For example, the link to releases.json for version 2.2 is https://dotnetcli.blob.core.windows.net/dotnet/release-metadata/2.2/releases.json
includePreviewVersions
Include Preview Versions
Select if you want preview versions to be included while searching for latest versions, such as while searching 2.2.x. This setting is ignored if you specify an exact version, such as: 3.0.100-preview3-010431
Default value: false

installationPath
Path To Install .NET Core
Specify where the .NET Core SDK/runtime should be installed. Different paths can have the following impact on .NET's behavior.
$(Agent.ToolsDirectory): This causes the version to be cached on the agent, since this directory is not cleaned up across pipelines. All pipelines running on the agent would have access to the versions installed previously using the agent.
$(Agent.TempDirectory): This can ensure that a pipeline doesn't use any cached version of .NET Core, since this folder is cleaned up after each pipeline.
Any other path: You can configure any other path, given the agent process has access to the path. This will change the state of the machine and impact all processes running on it.
Note that you can also configure the Multi-Level Lookup setting, which controls the .NET host's probing for a suitable version.
Default value: $(Agent.ToolsDirectory)/dotnet

performMultiLevelLookup
Perform Multi Level Lookup
This input is only applicable to Windows based agents. This configures the behavior of the .NET host process for looking up a suitable shared framework.
false: (default) Only versions present in the folder specified in this task would be looked up by the host process.
true: The host will attempt to look in pre-defined global locations using multi-level lookup.
The default global locations are:
For Windows:
C:/Program Files/dotnet (64-bit processes)
C:/Program Files (x86)/dotnet (32-bit process)
You can read more about it HERE

This YAML example installs version 2.2.203 of .NET Core.

steps:
- task: UseDotNet@2
displayName: 'Use .NET Core sdk'
inputs:
packageType: sdk
version: 2.2.203
installationPath: $(Agent.ToolsDirectory)/dotnet

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Use Python Version task

Azure Pipelines
Use this task to select a version of Python to run on an agent, and optionally add it to PATH.

Demands
None

Prerequisites
A Microsoft-hosted agent with side-by-side versions of Python installed, or a self-hosted agent with
Agent.ToolsDirectory configured (see FAQ).
This task will fail if no Python versions are found in Agent.ToolsDirectory. Available Python versions on Microsoft-
hosted agents can be found here.

NOTE
x86 and x64 versions of Python are available on Microsoft-hosted Windows agents, but not on Linux or macOS agents.

YAML snippet
# Use Python version
# Use the specified version of Python from the tool cache, optionally adding it to the PATH
- task: UsePythonVersion@0
  inputs:
    #versionSpec: '3.x'
    #addToPath: true
    #architecture: 'x64' # Options: x86, x64 (this argument applies only on Windows agents)

Arguments
ARGUMENT                     DESCRIPTION

versionSpec                  (Required) Version range or exact version of a Python version to use.
Version spec                 Default value: 3.x

addToPath                    (Required) Whether to prepend the retrieved Python version to the PATH
Add to PATH                  environment variable to make it available in subsequent tasks or
                             scripts without using the output variable.
                             Default value: true

architecture                 (Required) The target architecture (x86, x64) of the Python
Architecture                 interpreter. x86 is supported only on Windows.
                             Default value: x64

As of version 0.150 of the task, the version spec will also accept pypy2 or pypy3.
If the task completes successfully, the task's output variable will contain the directory of the Python installation.

Remarks
After running this task with "Add to PATH," the python command in subsequent scripts will be for the highest
available version of the interpreter matching the version spec and architecture.
The versions of Python installed on the Microsoft-hosted Ubuntu and macOS images follow the symlinking
structure for Unix-like systems defined in PEP 394. For example, for Python 3.7, python3.7 is the actual
interpreter. python3 is symlinked to that interpreter, and python is a symlink to that symlink.
On the Microsoft-hosted Windows images, the interpreter is just python .
For Microsoft-hosted agents, x86 is supported only on Windows. This is because Windows can run executables
compiled for the x86 architecture with the WoW64 subsystem. Hosted Ubuntu and Hosted macOS run 64-bit
operating systems and run only 64-bit Python.
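
For example, a minimal sketch that pins a Python 3.7 interpreter and then calls it from a script step; the version and commands are illustrative assumptions:

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.7'
    addToPath: true
- script: |
    python --version
    python -m pip install --upgrade pip
  displayName: 'Run the selected Python interpreter'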

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Where can I learn more about tool installers?
For an explanation of tool installers and examples, see Tool installers.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
How can I configure a self-hosted agent to use this task?
The desired Python version will have to be added to the tool cache on the self-hosted agent in order for the task to
use it. Normally the tool cache is located under the _work/_tool directory of the agent or the path can be
overridden by the environment variable AGENT_TOOLSDIRECTORY . Under that directory, create the following
directory structure based off of your Python version:

$AGENT_TOOLSDIRECTORY/
    Python/
        {version number}/
            {platform}/
                {tool files}
            {platform}.complete

The version number should follow the format of 1.2.3 . The platform should either be x86 or x64 . The
tool files should be the unzipped Python version files. The {platform}.complete should be a 0 byte file that
looks like x86.complete or x64.complete and just signifies the tool has been installed in the cache properly.
As a complete, concrete example, here is how a completed download of Python 3.6.4 for x64 would look in the
tool cache:

$AGENT_TOOLSDIRECTORY/
    Python/
        3.6.4/
            x64/
                {tool files}
            x64.complete

For more details on the tool cache, look here.
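
As a rough sketch, the folder layout and marker file described above could be created with a script step like the following; the version, platform, and the way you obtain the tool files are assumptions, and the Python files themselves still have to be placed in the directory:

steps:
- bash: |
    TOOL_DIR="$AGENT_TOOLSDIRECTORY/Python/3.6.4/x64"
    mkdir -p "$TOOL_DIR"
    # Copy or extract the unzipped Python 3.6.4 files into $TOOL_DIR here.
    # The 0-byte marker file signals that the tool is fully installed in the cache.
    touch "$AGENT_TOOLSDIRECTORY/Python/3.6.4/x64.complete"
  displayName: 'Register Python 3.6.4 in the agent tool cache (sketch)'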


In order that your scripts may work as they would on Microsoft-hosted agents, we recommend following the
symlinking structure from PEP 394 on Unix-like systems.
Also note that the embeddable ZIP release of Python requires extra effort to configure for installed modules,
including pip . If possible, we recommend using the full installer to get a pip -compatible Python installation.
Use Ruby Version task
6/2/2020 • 2 minutes to read

Azure Pipelines
Use this task to select a version of Ruby to run on an agent, and optionally add it to PATH.

Demands
None

Prerequisites
A Microsoft-hosted agent with side-by-side versions of Ruby installed, or a self-hosted agent with
Agent.ToolsDirectory configured (see FAQ).
This task will fail if no Ruby versions are found in Agent.ToolsDirectory. Available Ruby versions on Microsoft-
hosted agents can be found here.

YAML snippet
# Use Ruby version
# Use the specified version of Ruby from the tool cache, optionally adding it to the PATH
- task: UseRubyVersion@0
  inputs:
    #versionSpec: '>= 2.4'
    #addToPath: true # Optional

Arguments
ARGUMENT                     DESCRIPTION

versionSpec                  (Required) Version range or exact version of a Ruby version to use.
Version spec                 Default value: >= 2.4

addToPath                    (Optional) Whether to prepend the retrieved Ruby version to the PATH
Add to PATH                  environment variable to make it available in subsequent tasks or
                             scripts without using the output variable.
                             Default value: true

If the task completes successfully, the task's output variable will contain the directory of the Ruby installation.
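
For example, a minimal sketch that selects a Ruby version and then runs it; the version constraint is an assumption:

steps:
- task: UseRubyVersion@0
  inputs:
    versionSpec: '>= 2.5'
    addToPath: true
- script: ruby --version
  displayName: 'Print the selected Ruby version'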

Open source
This task is open source on GitHub. Feedback and contributions are welcome.

FAQ
Where can I learn more about tool installers?
For an explanation of tool installers and examples, see Tool installers.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
How can I configure a self-hosted agent to use this task?
You can run this task on a self-hosted agent with your own Ruby versions. To run this task on a self-hosted agent,
set up Agent.ToolsDirectory by following the instructions here. The tool name to use is "Ruby."
Visual Studio Test Platform Installer task
11/2/2020 • 2 minutes to read

Azure DevOps Ser vices | TFS 2018 Update 2


Use this task to acquire the Microsoft test platform from nuget.org or a specified feed, and add it to the tools
cache. The installer task satisfies the 'vstest' demand and a subsequent Visual Studio Test task in a build or release
pipeline can run without needing a full Visual Studio install on the agent machine.

Demands
[none]

YAML snippet
# Visual Studio test platform installer
# Acquire the test platform from nuget.org or the tool cache. Satisfies the 'vstest' demand and can be used for running tests and collecting diagnostic data using the Visual Studio Test task.
- task: VisualStudioTestPlatformInstaller@1
  inputs:
    #packageFeedSelector: 'nugetOrg' # Options: nugetOrg, customFeed, netShare
    #versionSelector: 'latestPreRelease' # Required when packageFeedSelector == NugetOrg || PackageFeedSelector == CustomFeed # Options: latestPreRelease, latestStable, specificVersion
    #testPlatformVersion: # Required when versionSelector == SpecificVersion
    #customFeed: # Required when packageFeedSelector == CustomFeed
    #username: # Optional
    #password: # Optional
    #netShare: # Required when packageFeedSelector == NetShare

Arguments
ARGUMENT                     DESCRIPTION

packageFeedSelector          (Required) Can be:
Package Feed                 Official NuGet - Use this option to acquire the test platform package
                             from NuGet. This option requires internet connectivity on the agent
                             machine.
                             Custom feed - Use this option to acquire the test platform package
                             from a custom feed or a package management feed in Azure DevOps or TFS.
                             Network path - Use this option to install the test platform from a
                             network share. The desired version of the Microsoft.TestPlatform.nupkg
                             file must be downloaded from NuGet and placed on a network share that
                             the build/release agent can access.
                             Default value: nugetOrg

versionSelector              (Required) Pick whether to install the latest version or a specific
Version                      version of the Visual Studio Test Platform.
                             If you use the test platform installer to run Coded UI tests, ensure
                             that the version you choose matches the major version of Visual Studio
                             with which the test binaries were built. For example, if the Coded UI
                             test project was built using Visual Studio 2017 (version 15.x), you
                             must use test platform version 15.x.
                             Options: latestPreRelease, latestStable, specificVersion
                             Default value: latestPreRelease

testPlatformVersion          (Required) Specify the version of Visual Studio Test Platform to
Test Platform Version        install on the agent. Available versions can be viewed on NuGet.

customFeed                   (Required) Specify the URL of a custom feed or a package management
Package Source               feed in Azure DevOps or TFS that contains the test platform package.
                             Public as well as private feeds can be specified.

username                     (Optional) Specify the user name to authenticate with the feed
User Name                    specified in the Package Source argument. If using a personal access
                             token (PAT) in the password argument, this input is not required.

password                     (Optional) Specify the password or personal access token (PAT) to
Password                     authenticate with the feed specified in the Package Source argument.

netShare                     (Required) Specify the full UNC path to the Microsoft.TestPlatform.nupkg
UNC Path                     file. The desired version of Microsoft.TestPlatform.nupkg must be
                             downloaded from NuGet and placed on a network share that the
                             build/release agent can access.
NOTE
The Visual Studio Test Platform Installer task must appear before the Visual Studio Test task in the build or
release pipeline.

The Test platform version option in the Visual Studio Test task must be set to Installed by Tools Installer.

See Run automated tests from test plans
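
For example, a minimal sketch that honors this ordering; the test assembly pattern is an assumption and your Visual Studio Test task settings will differ:

steps:
- task: VisualStudioTestPlatformInstaller@1
  inputs:
    packageFeedSelector: nugetOrg
    versionSelector: latestStable
- task: VSTest@2
  inputs:
    # Use the platform acquired by the installer task above
    vsTestVersion: toolsInstaller
    testAssemblyVer2: |
      **\*Tests.dll
      !**\obj\**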

Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Troubleshoot pipeline runs
11/2/2020 • 19 minutes to read

Azure Pipelines | Azure DevOps Ser ver 2020 | Azure DevOps Ser ver 2019 | TFS
2018 - TFS 2015
This topic provides general troubleshooting guidance. For specific troubleshooting about
.NET Core, see .NET Core troubleshooting.

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release
pipelines are called definitions, runs are called builds, service connections are called service
endpoints, stages are called environments, and jobs are called phases.

You can use the following troubleshooting sections to help diagnose issues with your
pipeline. Most pipeline failures fall into one of these categories.
Pipeline won't trigger
Pipeline queues but never gets an agent
Pipeline fails to complete

Pipeline won't trigger


If a pipeline doesn't start at all, check the following common trigger related issues.
UI settings override YAML trigger setting
Pull request triggers not supported with Azure Repos
Branch filters misconfigured in CI and PR triggers
Scheduled trigger time zone conversions
UI settings override YAML scheduled triggers

NOTE
An additional reason that runs may not start is that your organization goes dormant five
minutes after the last user signs out of Azure DevOps. After that, each of your build pipelines
will run one more time. For example, while your organization is dormant:
A nightly build of code in your organization will run only one night until someone signs in
again.
CI builds of an Other Git repo will stop running until someone signs in again.

UI settings override YAML trigger setting


YAML pipelines can have their trigger and pr trigger settings overridden in the
pipeline settings UI. If your trigger or pr triggers don't seem to be firing, check that
setting. While editing your pipeline, choose ... and then Triggers .
Check the Override the YAML trigger from here setting for the types of trigger
(Continuous integration or Pull request validation ) available for your repo.

Pull request triggers not supported with Azure Repos


If your pr trigger isn't firing, and you are using Azure Repos, it is because pr triggers
aren't supported for Azure Repos. In Azure Repos Git, branch policies are used to
implement pull request build validation. For more information, see Branch policy for pull
request validation.
Branch filters misconfigured in CI and PR triggers
When you define a YAML PR or CI trigger, you can specify both include and exclude
clauses for branches and paths. Ensure that the include clause matches the details of
your commit and that the exclude clause doesn't exclude them.

IMPORTANT
When you define a YAML PR or CI trigger, only branches explicitly configured to be included will
trigger a run. Includes are processed first, and then excludes are removed from the list. If you
specify an exclude but don't specify any includes, nothing will trigger. For more information, see
Triggers.

When you define a YAML PR or CI trigger, you can specify both include and exclude
clauses for branches, tags, and paths. Ensure that the include clause matches the details
of your commit and that the exclude clause doesn't exclude them. For more information,
see Triggers.
NOTE
If you specify an exclude clause without an include clause, it is equivalent to specifying *
in the include clause.
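
For example, the following CI trigger sketch (branch names are assumptions) runs for main and any releases/* branch except those matching releases/old*:

trigger:
  branches:
    include:
    - main
    - releases/*
    exclude:
    - releases/old*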

Scheduled trigger time zone conversions


YAML scheduled triggers are set using UTC time zone. If your scheduled triggers don't
seem to be firing at the right time, confirm the conversions between UTC and your local
time zone, taking into account the day setting as well. For more information, see
Scheduled triggers.
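For example, cron schedules are specified in UTC, so a run intended for 6:00 AM in a UTC-5 time zone needs an 11:00 UTC schedule; the branch name is an assumption:

schedules:
- cron: "0 11 * * *"   # 11:00 UTC = 6:00 AM UTC-5
  displayName: Daily morning build
  branches:
    include:
    - main
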
UI settings override YAML scheduled triggers
If your YAML pipeline has both YAML scheduled triggers and UI defined scheduled
triggers, only the UI defined scheduled triggers are run. To run the YAML defined
scheduled triggers in your YAML pipeline, you must remove the scheduled triggers
defined in the pipeline settings UI. Once all UI scheduled triggers are removed, a push
must be made in order for the YAML scheduled triggers to start running. For more
information, see Scheduled triggers.

Pipeline queues but never gets an agent


If your pipeline queues but never gets an agent, check the following items.
Parallel job limits - no available agents or you have hit your free limits
You don't have enough concurrency
Your job may be waiting for approval
All available agents are in use
Demands that don't match the capabilities of an agent
Check Azure DevOps status for a service degradation

NOTE
The following scenarios won't consume a parallel job:
If you use release pipelines or multi-stage YAML pipelines, then a run consumes a parallel job
only when it's being actively deployed to a stage. While the release is waiting for an approval
or a manual intervention, it does not consume a parallel job.
When you run a server job or deploy to a deployment group using release pipelines, you
don't consume any parallel jobs.
Learn more: How a parallel job is consumed by a pipeline, Add Pre-deployment approvals,
Server jobs, Deployment groups

Parallel job limits - no available agents or you have hit your free limits
Demands that don't match the capabilities of an agent
TFS agent connection issues
Parallel job limits - no available agents or you have hit your free limits
If you are currently running other pipelines, you may not have any remaining parallel
jobs, or you may have hit your free limits.
To check your limits, navigate to Project settings , Parallel jobs .
After reviewing the limits, check concurrency to see how many jobs are currently running
and how many are available.
You don't have enough concurrency
To check how much concurrency you have:
1. To check your limits, navigate to Project settings , Parallel jobs .
You can also reach this page by navigating to
https://dev.azure.com/{org}/_settings/buildqueue?_a=concurrentJobs, or choosing
manage parallel jobs from the logs.

2. Determine which pool you want to check concurrency on (Microsoft hosted or self
hosted pools), and choose View in-progress jobs .
3. You'll see text that says Currently running X/X jobs . If both numbers are the
same then jobs will wait until currently running jobs complete.

You can view all jobs, including queued jobs, by selecting Agent pools from the
Project settings .
In this example, the concurrent job limit is one, with one job running and one
queued up. When all agents are busy running jobs, as in this example, the
following message is displayed when additional jobs are queued:
The agent request is not running because all potential agents are running other
requests. Current position in queue: 1
In this example the job is next in the queue, so its position is one.
Your job may be waiting for approval
Your pipeline may not move to the next stage because it is waiting on approval. For more
information, see Define approvals and checks.
All available agents are in use
Jobs may wait if all your agents are currently busy. To check your agents:
1. Navigate to https://dev.azure.com/{org}/_settings/agentpools

2. Select the agent pool to check, in this example FabrikamPool , and choose
Agents .

This page shows all the agents currently online/offline and in use. You can also add
additional agents to the pool from this page.
Demands that don't match the capabilities of an agent
If your pipeline has demands that don't meet the capabilities of any of your agents, your
pipeline won't start. If only some of your agents have the desired capabilities and they are
currently running other pipelines, your pipeline will be stalled until one of those agents
becomes available.
To check the capabilities and demands specified for your agents and pipelines, see
Capabilities.

NOTE
Capabilities and demands are typically used only with self-hosted agents. If your pipeline has
demands that don't match the system capabilities of the agents, your pipeline won't get an agent
unless you have explicitly labelled the agents with matching capabilities.

TFS agent connection issues


Config fails while testing agent connection (on-premises TFS only)
Agent lost communication
TFS Job Agent not started
Misconfigured notification URL (1.x agent version)
Config fails while testing agent connection (on-premises TFS only)

Testing agent connection.


VS30063: You are not authorized to access http://<SERVER>:8080/tfs

If the above error is received while configuring the agent, log on to your TFS machine.
Start the Internet Information Services (IIS) manager. Make sure Anonymous
Authentication is enabled.

Agent lost communication


This issue is characterized by the error message:

The job has been abandoned because agent did not renew the lock. Ensure agent is
running, not sleeping, and has not lost communication with the service.

This error may indicate the agent lost communication with the server for a span of
several minutes. Check the following to rule out network or other interruptions on the
agent machine:
Verify automatic updates are turned off. A machine reboot from an update will cause a
build or release to fail with the above error. Apply updates in a controlled fashion to
avoid this type of interruption. Before rebooting the agent machine, first mark the agent as
disabled in the pool administration page and allow any running build to finish.
Verify the sleep settings are turned off.
If the agent is running on a virtual machine, avoid any live migration or other VM
maintenance operation that may severely impact the health of the machine for
multiple minutes.
If the agent is running on a virtual machine, the same operating-system-update
recommendations and sleep-setting recommendations apply to the host machine, as do any
other maintenance operations that severely impact the host machine.
Performance monitor logging or other health metric logging can help to correlate this
type of error to constrained resource availability on the agent machine (disk, memory,
page file, processor, network).
Another way to correlate the error with network problems is to ping a server
indefinitely and dump the output to a file, along with timestamps. Use a healthy
interval, for example 20 or 30 seconds. If you are using Azure Pipelines, then you
would want to ping an internet domain, for example bing.com. If you are using an on-
premises TFS server, then you would want to ping a server on the same network.
Verify the network throughput of the machine is adequate. You can perform an online
speed test to check the throughput.
If you use a proxy, verify the agent is configured to use your proxy. Refer to the agent
deployment topic.
TFS Job Agent not started
This may be characterized by a message in the web console "Waiting for an agent to be
requested". Verify the TFSJobAgent (display name: Visual Studio Team Foundation
Background Job Agent) Windows service is started.
Misconfigured notification URL (1.x agent version)
This may be characterized by a message in the web console "Waiting for console output
from an agent", and the process eventually times out.
A mismatching notification URL may cause the worker to process to fail to connect to the
server. See Team Foundation Administration Console, Application Tier. The 1.x agent
listens to the message queue using the URL that it was configured with. However, when a
job message is pulled from the queue, the worker process uses the notification URL to
communicate back to the server.
Check Azure DevOps status for a service degradation
Check the Azure DevOps Service Status Portal for any issues that may cause a service
degradation, such as increased queue time for agents. For more information, see Azure
DevOps Service Status.

Pipeline fails to complete


If your pipeline gets an agent but fails to complete, check the following common issues. If
your issue doesn't seem to match one of these, see Get logs to diagnose problems.
Job time-out
Issues downloading code
My pipeline is failing on a command-line step such as MSBUILD
File or folder in use errors
Intermittent or inconsistent MSBuild failures
Process stops responding
Line endings for multiple platforms
Variables having ' (single quote) appended
Service Connection related issues
Job time-out
A pipeline may run for a long time and then fail due to job time-out. Job timeout closely
depends on the agent being used. Free Microsoft hosted agents have a max timeout of 60
minutes per job for a private repository and 360 minutes for a public repository. To
increase the max timeout for a job, you can opt for any of the following.
Buy a Microsoft hosted agent which will give you 360 minutes for all jobs, irrespective
of the repository used
Use a self-hosted agent to rule out any timeout issues due to the agent
Learn more about job timeout.

NOTE
If your Microsoft-hosted agent jobs are timing out, ensure that you haven't specified a pipeline
timeout that is less than the max timeout for a job. To check, see Timeouts.
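
A minimal sketch of setting an explicit job timeout in YAML; the job name and value are assumptions, and a value of 0 requests the maximum the agent allows:

jobs:
- job: Build
  timeoutInMinutes: 120   # or 0 to request the maximum allowed for the agent
  steps:
  - script: echo Building...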

Issues downloading code


My pipeline is failing on a checkout step
Team Foundation Version Control (TFVC) issues
My pipeline is failing on a checkout step
If you are using a checkout step on an Azure Repos Git repository in your organization
that is in a different project than your pipeline, ensure that the Limit job authorization
scope to current project setting is disabled, or follow the steps in Scoped build
identities to ensure that your pipeline has access to the repository.
When your pipeline can't access the repository due to limited job authorization scope,
you will receive the error Git fetch failed with exit code 128 and your logs will contain
an entry similar to
Remote: TF401019: The Git repository with name or identifier <your repo name> does not
exist or you do not have permissions for the operation you are attempting.

If your pipeline is failing immediately with


Could not find a project that corresponds with the repository , ensure that your project
and repository name are correct in the checkout step or the repository resource
declaration.
Team Foundation Version Control (TFVC) issues
Get sources not downloading some files
Get sources through Team Foundation Proxy
Get sources not downloading some files

This may be characterized by a message in the log "All files up to date" from the tf get
command. Verify the built-in service identity has permission to download the sources.
Either the identity Project Collection Build Service or Project Build Service will need
permission to download the sources, depending on the selected authorization scope on
General tab of the build pipeline. In the version control web UI, you can browse the
project files at any level of the folder hierarchy and check the security settings.
Get sources through Team Foundation Proxy

The easiest way to configure the agent to get sources through a Team Foundation Proxy is to
set the environment variable TFSPROXY so that it points to the TFVC proxy server for the
agent's run-as user.
Windows:

set TFSPROXY=http://tfvcproxy:8081
setx TFSPROXY http://tfvcproxy:8081   // Use setx if the agent service runs as NETWORKSERVICE or
                                      // another service account for which you can't easily set a
                                      // user-level environment variable

macOS/Linux:

export TFSPROXY=http://tfvcproxy:8081

My pipeline is failing on a command-line step such as MSBUILD


It is helpful to narrow whether a build or release failure is the result of an Azure
Pipelines/TFS product issue (agent or tasks). Build and release failures may also result
from external commands.
Check the logs for the exact command-line executed by the failing task. Attempting to run
the command locally from the command line may reproduce the issue. It can be helpful
to run the command locally from your own machine, and/or log-in to the machine and
run the command as the service account.
For example, is the problem happening during the MSBuild part of your build pipeline
(for example, are you using either the MSBuild or Visual Studio Build task)? If so, then try
running the same MSBuild command on a local machine using the same arguments. If
you can reproduce the problem on a local machine, then your next steps are to
investigate the MSBuild problem.
File layout
The location of tools, libraries, headers, and other things needed for a build may be
different on the hosted agent than from your local machine. If a build fails because it can't
find one of these files, you can use the below scripts to inspect the layout on the agent.
This may help you track down the missing file.
Create a new YAML pipeline in a temporary location (e.g. a new repo created for the
purpose of troubleshooting). As written, the script searches directories on your path. You
may optionally edit the SEARCH_PATH= line to search other places.

# Script for Linux and macOS
pool: { vmImage: ubuntu-latest } # or whatever pool you use
steps:
- checkout: none
- bash: |
    SEARCH_PATH=$PATH  # or any colon-delimited list of paths
    IFS=':' read -r -a PathDirs <<< "$SEARCH_PATH"
    echo "##[debug] Found directories"
    for element in "${PathDirs[@]}"; do
        echo "$element"
    done;
    echo;
    echo;
    echo "##[debug] Found files"
    for element in "${PathDirs[@]}"; do
        find "$element" -type f
    done

# Script for Windows
pool: { vmImage: windows-2019 } # or whatever pool you use
steps:
- checkout: none
- powershell: |
    $SEARCH_PATH=$Env:Path
    Write-Host "##[debug] Found directories"
    ForEach ($Dir in $SEARCH_PATH -split ";") {
        Write-Host "$Dir"
    }
    Write-Host ""
    Write-Host ""
    Write-Host "##[debug] Found files"
    ForEach ($Dir in $SEARCH_PATH -split ";") {
        Get-ChildItem $Dir -File -ErrorAction Continue | ForEach-Object -Process {
            Write-Host $_.FullName
        }
    }

Differences between local command prompt and agent


Keep in mind, some differences are in effect when executing a command on a local
machine and when a build or release is running on an agent. If the agent is configured to
run as a service on Linux, macOS, or Windows, then it is not running within an interactive
logged-on session. Without an interactive logged-on session, UI interaction and other
limitations exist.
File or folder in use errors
File or folder in use errors are often indicated by error messages such as:
Access to the path [...] is denied.
The process cannot access the file [...] because it is being used by another
process.
Access is denied.
Can't move [...] to [...]

Troubleshooting steps:
Detect files and folders in use
Anti-virus exclusion
MSBuild and /nodeReuse:false
MSBuild and /maxcpucount:[n]
Detect files and folders in use
On Windows, tools like Process Monitor can be used to capture a trace of file events under a
specific directory. Or, for a snapshot in time, tools like Process Explorer or Handle can be
used.
Anti-virus exclusion
Anti-virus software scanning your files can cause file or folder in use errors during a build
or release. Adding an anti-virus exclusion for your agent directory and configured "work
folder" may help to identify anti-virus software as the interfering process.
MSBuild and /nodeReuse:false
If you invoke MSBuild during your build, make sure to pass the argument
/nodeReuse:false (short form /nr:false ). Otherwise MSBuild process(es) will remain
running after the build completes. The process(es) remain for some time in anticipation of
a potential subsequent build.
This feature of MSBuild can interfere with attempts to delete or move a directory - due to
a conflict with the working directory of the MSBuild process(es).
The MSBuild and Visual Studio Build tasks already add /nr:false to the arguments
passed to MSBuild. However, if you invoke MSBuild from your own script, then you would
need to specify the argument.
MSBuild and /maxcpucount:[n]
By default the build tasks such as MSBuild and Visual Studio Build run MSBuild with the
/m switch. In some cases this can cause problems such as multiple process file access
issues.
Try adding the /m:1 argument to your build tasks to force MSBuild to run only one
process at a time.
File-in-use issues may result when leveraging the concurrent-process feature of MSBuild.
Not specifying the argument /maxcpucount:[n] (short form /m:[n] ) instructs MSBuild to
use a single process only. If you are using the MSBuild or Visual Studio Build tasks, you
may need to specify "/m:1" to override the "/m" argument that is added by default.
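For example, a minimal sketch of passing both arguments when you invoke MSBuild from your own script step; the solution name is an assumption:

steps:
- script: msbuild MySolution.sln /nr:false /m:1
  displayName: 'Build with node reuse disabled and a single MSBuild process'
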
Intermittent or inconsistent MSBuild failures
If you are experiencing intermittent or inconsistent MSBuild failures, try instructing
MSBuild to use a single-process only. Intermittent or inconsistent errors may indicate that
your target configuration is incompatible with the concurrent-process feature of MSBuild.
See MSBuild and /maxcpucount:[n].
Process stops responding
Process stops responding causes and troubleshooting steps:
Waiting for Input
Process dump
WiX project
Waiting for Input
A process that stops responding may indicate that a process is waiting for input.
Running the agent from the command line of an interactive logged on session may help
to identify whether a process is prompting with a dialog for input.
Running the agent as a service may help to eliminate programs from prompting for
input. For example in .NET, programs may rely on the
System.Environment.UserInteractive Boolean to determine whether to prompt. When
running as a Windows service, the value is false.
Process dump
Analyzing a dump of the process can help to identify what a deadlocked process is
waiting on.
WiX project
Building a WiX project when custom MSBuild loggers are enabled can cause WiX to
deadlock waiting on the output stream. Adding the additional MSBuild argument
/p:RunWixToolsOutOfProc=true will work around the issue.

Line endings for multiple platforms


When you run pipelines on multiple platforms, you can sometimes encounter problems
with different line endings. Historically, Linux and macOS used linefeed (LF) characters
while Windows used a carriage return plus a linefeed (CRLF). Git tries to compensate for
the difference by automatically making lines end in LF in the repo but CRLF in the
working directory on Windows.
Most Windows tools are fine with LF-only endings, and this automatic behavior can cause
more problems than it solves. If you encounter issues based on line endings, we
recommend you configure Git to prefer LF everywhere. To do this, add a .gitattributes
file to the root of your repository. In that file, add the following line:

* text eol=lf

Variables having ' (single quote) appended


If your pipeline includes a Bash script that sets variables using the ##vso command, you
may see an additional ' appended to the value of the variable you set. This occurs
because of an interaction with set -x . The solution is to disable set -x temporarily
before setting a variable. The Bash syntax for doing that is set +x .

set +x
echo ##vso[task.setvariable variable=MY_VAR]my_value
set -x

Why does this happen?


Many Bash scripts include the set -x command to assist with debugging. Bash will trace
exactly what command was executed and echo it to stdout. This will cause the agent to
see the ##vso command twice, and the second time, Bash will have added the '
character to the end.
For instance, consider this pipeline:

steps:
- bash: |
    set -x
    echo ##vso[task.setvariable variable=MY_VAR]my_value

On stdout, the agent will see two lines:

##vso[task.setvariable variable=MY_VAR]my_value
+ echo '##vso[task.setvariable variable=MY_VAR]my_value'

When the agent sees the first line, MY_VAR will be set to the correct value, "my_value".
However, when it sees the second line, the agent will process everything to the end of the
line. MY_VAR will be set to "my_value'".
Service Connection related issues
To troubleshoot issues related to service connections, see Service connection
troubleshooting.

Get logs to diagnose problems


If none of the previous suggestions match your problem, you can use the information in
the logs to diagnose your failing pipeline.
Start by looking at the logs in your completed build or release. You can view logs by
navigating to the pipeline run summary and selecting the job and task. If a certain task is
failing, check the logs for that task.
In addition to viewing logs in the pipeline build summary, you can download complete
logs which include additional diagnostic information, and you can configure more
verbose logs to assist with your troubleshooting.
For detailed instructions for configuring and using logs, see Review logs to diagnose
pipeline issues.

I need more help. I found a bug. I've got a suggestion.


Where do I go?
Get subscription, billing, and technical support
Report any problems or submit feedback at Developer Community.
We welcome your suggestions:
Review logs to diagnose pipeline issues
11/2/2020 • 3 minutes to read

Pipeline logs provide a powerful tool for determining the cause of pipeline failures.
A typical starting point is to review the logs in your completed build or release. You can view logs by navigating to
the pipeline run summary and selecting the job and task. If a certain task is failing, check the logs for that task.
In addition to viewing logs in the pipeline build summary, you can download complete logs which include
additional diagnostic information, and you can configure more verbose logs to assist with your troubleshooting.

Configure verbose logs


To assist with troubleshooting, you can configure your logs to be more verbose.
To configure verbose logs for a single run, you can start a new build by choosing Run pipeline and
selecting Enable system diagnostics , Run .

To configure verbose logs for all runs, you can add a variable named system.debug and set its value to
true .

To configure verbose logs for a single run, you can start a new build by choosing Queue build , and setting
the value for the system.debug variable to true .
To configure verbose logs for all runs, edit the build, navigate to the Variables tab, and add a variable
named system.debug , set its value to true , and select to Allow at Queue Time .
To configure verbose logs for a YAML pipeline, add the system.debug variable in the variables section:
variables:
  system.debug: true

View and download logs


To view individual logs for each step, navigate to the build results for the run, and select the job and step.

To download all logs, navigate to the build results for the run, select ..., and choose Download logs .

To download all logs, navigate to the build results for the run, choose Download all logs as zip .
In addition to the pipeline diagnostic logs, the following specialized log types are available, and may contain
information to help you troubleshoot.
Worker diagnostic logs
Agent diagnostic logs
Other logs

Worker diagnostic logs


You can get the diagnostic log of the completed build that was generated by the worker process on the build agent.
Look for the worker log file that has the date and time stamp of your completed build. For example,
worker_20160623-192022-utc_6172.log .

Agent diagnostic logs


Agent diagnostic logs provide a record of how the agent was configured and what happened when it ran. Look for
the agent log files. For example, agent_20160624-144630-utc.log . There are two kinds of agent log files:
The log file generated when you ran config.cmd . This log:
Includes this line near the top: Adding Command: configure

Shows the configuration choices made.


The log file generated when you ran run.cmd . This log:
Cannot be opened until the process is terminated.
Attempts to connect to your Azure DevOps organization or Team Foundation Server.
Shows when each job was run, and how it completed
Both logs show how the agent capabilities were detected and set.

Other logs
Inside the diagnostic logs you will find environment.txt and capabilities.txt .
The environment.txt file has various information about the environment within which your build ran. This includes
information like what tasks are run, whether or not the firewall is enabled, PowerShell version info, and some other
items. We continually add to this data to make it more useful.
The capabilities.txt file provides a clean way to see all capabilities installed on the build machine that ran your
build.

HTTP trace logs


Use built-in HTTP tracing
Use full HTTP tracing - Windows
Use full HTTP tracing - macOS and Linux

IMPORTANT
HTTP traces and trace files can contain passwords and other secrets. Do not post them on public sites.

Use built-in HTTP tracing


If your agent is version 2.114.0 or newer, you can trace the HTTP traffic headers and write them into the diagnostic
log. Set the VSTS_AGENT_HTTPTRACE environment variable before you launch the agent.listener.

Windows:
set VSTS_AGENT_HTTPTRACE=true

macOS/Linux:
export VSTS_AGENT_HTTPTRACE=true

Use full HTTP tracing - Windows


1. Start Fiddler.
2. We recommend you listen only to agent traffic. File > Capture Traffic off (F12)
3. Enable decrypting HTTPS traffic. Tools > Fiddler Options > HTTPS tab. Decrypt HTTPS traffic
4. Let the agent know to use the proxy:

set VSTS_HTTP_PROXY=http://127.0.0.1:8888

5. Run the agent interactively. If you're running as a service, you can set the variable as an
environment variable in Control Panel for the account the service runs as.
6. Restart the agent.
Use full HTTP tracing - macOS and Linux
Use Charles Proxy (similar to Fiddler on Windows) to capture the HTTP trace of the agent.
1. Start Charles Proxy.
2. Charles: Proxy > Proxy Settings > SSL Tab. Enable. Add URL.
3. Charles: Proxy > Mac OSX Proxy. Recommend disabling to only see agent traffic.

export VSTS_HTTP_PROXY=http://127.0.0.1:8888

4. Run the agent interactively. If it's running as a service, you can set in the .env file. See nix service
5. Restart the agent.
Classic release and artifacts variables
11/2/2020 • 14 minutes to read

Azure Pipelines | Azure DevOps Ser ver 2020 | Azure DevOps Ser ver 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called
definitions, runs are called builds, service connections are called service endpoints, stages are called environments, and
jobs are called phases.

Classic release and artifacts variables are a convenient way to exchange and transport data throughout your
pipeline. Each variable is stored as a string and its value can change between runs of your pipeline.
Variables are different from Runtime parameters which are only available at template parsing time.

NOTE
This is a reference article that covers the classic release and artifacts variables. To understand variables in YAML
pipelines, see user-defined variables.

As you compose the tasks for deploying your application into each stage in your DevOps CI/CD processes,
variables will help you to:
Define a more generic deployment pipeline once, and then customize it easily for each stage. For
example, a variable can be used to represent the connection string for web deployment, and the value
of this variable can be changed from one stage to another. These are custom variables .
Use information about the context of the particular release, stage, artifacts, or agent in which the
deployment pipeline is being run. For example, your script may need access to the location of the build
to download it, or to the working directory on the agent to create temporary files. These are default
variables .

TIP
You can view the current values of all variables for a release, and use a default variable to run a release in debug mode.

Default variables
Information about the execution context is made available to running tasks through default variables. Your
tasks and scripts can use these variables to find information about the system, release, stage, or agent they
are running in. With the exception of System.Debug , these variables are read-only and their values are
automatically set by the system. Some of the most significant variables are described in the following tables.
To view the full list, see View the current values of all variables.

Default variables - System


VARIABLE NAME                    DESCRIPTION

System.TeamFoundationServerUri The URL of the service connection in TFS or Azure


Pipelines. Use this from your scripts or tasks to call Azure
Pipelines REST APIs.

Example: https://fabrikam.vsrm.visualstudio.com/

System.TeamFoundationCollectionUri The URL of the Team Foundation collection or Azure


Pipelines. Use this from your scripts or tasks to call REST
APIs on other services such as Build and Version control.

Example: https://dev.azure.com/fabrikam/

System.CollectionId The ID of the collection to which this build or release


belongs. Not available in TFS 2015.

Example: 6c6f3423-1c84-4625-995a-f7f143a1e43d

System.DefinitionId The ID of the release pipeline to which the current release


belongs. Not available in TFS 2015.

Example: 1

System.TeamProject The name of the project to which this build or release


belongs.

Example: Fabrikam

System.TeamProjectId The ID of the project to which this build or release belongs.


Not available in TFS 2015.

Example: 79f5c12e-3337-4151-be41-a268d2c73344

System.ArtifactsDirectory The directory to which artifacts are downloaded during


deployment of a release. The directory is cleared before
every deployment if it requires artifacts to be downloaded
to the agent. Same as Agent.ReleaseDirectory and
System.DefaultWorkingDirectory.

Example: C:\agent\_work\r1\a

System.DefaultWorkingDirectory The directory to which artifacts are downloaded during


deployment of a release. The directory is cleared before
every deployment if it requires artifacts to be downloaded
to the agent. Same as Agent.ReleaseDirectory and
System.ArtifactsDirectory.

Example: C:\agent\_work\r1\a

System.WorkFolder The working directory for this agent, where subfolders are
created for every build or release. Same as
Agent.RootDirectory and Agent.WorkFolder.

Example: C:\agent\_work

System.Debug This is the only system variable that can be set by the
users. Set this to true to run the release in debug mode to
assist in fault-finding.

Example: true

Default variables - Release


VARIABLE NAME                    DESCRIPTION

Release.AttemptNumber The number of times this release is deployed in this stage.


Not available in TFS 2015.

Example: 1

Release.DefinitionEnvironmentId The ID of the stage in the corresponding release pipeline.


Not available in TFS 2015.

Example: 1

Release.DefinitionId The ID of the release pipeline to which the current release


belongs. Not available in TFS 2015.

Example: 1

Release.DefinitionName The name of the release pipeline to which the current


release belongs.

Example: fabrikam-cd

Release.Deployment.RequestedFor The display name of the identity that triggered (started)


the deployment currently in progress. Not available in TFS
2015.

Example: Mateo Escobedo

Release.Deployment.RequestedForId The ID of the identity that triggered (started) the


deployment currently in progress. Not available in TFS
2015.

Example: 2f435d07-769f-4e46-849d-10d1ab9ba6ab

Release.DeploymentID The ID of the deployment. Unique per job.

Example: 254

Release.DeployPhaseID The ID of the phase where deployment is running.

Example: 127

Release.EnvironmentId The ID of the stage instance in a release to which the


deployment is currently in progress.

Example: 276

Release.EnvironmentName The name of stage to which deployment is currently in


progress.

Example: Dev

Release.EnvironmentUri The URI of the stage instance in a release to which


deployment is currently in progress.

Example: vstfs://ReleaseManagement/Environment/276

Release.Environments.{stage-name}.status The deployment status of the stage.

Example: InProgress

Release.PrimaryArtifactSourceAlias The alias of the primary artifact source

Example: fabrikam\_web

Release.Reason The reason for the deployment. Supported values are:


ContinuousIntegration - the release started in
Continuous Deployment after a build completed.
Manual - the release started manually.
None - the deployment reason has not been specified.
Scheduled - the release started from a schedule.

Release.ReleaseDescription The text description provided at the time of the release.

Example: Critical security patch

Release.ReleaseId The identifier of the current release record.

Example: 118

Release.ReleaseName The name of the current release.

Example: Release-47

Release.ReleaseUri The URI of current release.

Example: vstfs://ReleaseManagement/Release/118

Release.ReleaseWebURL The URL for this release.

Example:
https://dev.azure.com/fabrikam/f3325c6c/_release?releaseId=392&_a=release-summary

Release.RequestedFor The display name of identity that triggered the release.

Example: Mateo Escobedo

Release.RequestedForEmail The email address of identity that triggered the release.

Example: [email protected]

Release.RequestedForId The ID of identity that triggered the release.

Example: 2f435d07-769f-4e46-849d-10d1ab9ba6ab

Release.SkipArtifactDownload Boolean value that specifies whether or not to skip


downloading of artifacts to the agent.

Example: FALSE

Release.TriggeringArtifact.Alias The alias of the artifact which triggered the release. This is
empty when the release was scheduled or triggered
manually.

Example: fabrikam\_app

Default variables - Release stage


VARIABLE NAME                    DESCRIPTION

Release.Environments.{stage name}.Status The status of deployment of this release within a specified


stage. Not available in TFS 2015.

Example: NotStarted

Default variables - Agent


VARIABLE NAME                    DESCRIPTION

Agent.Name The name of the agent as registered with the agent pool.
This is likely to be different from the computer name.

Example: fabrikam-agent

Agent.MachineName The name of the computer on which the agent is


configured.

Example: fabrikam-agent

Agent.Version The version of the agent software.

Example: 2.109.1

Agent.JobName The name of the job that is running, such as Release or


Build.

Example: Release

Agent.HomeDirectory The folder where the agent is installed. This folder contains
the code and resources for the agent.

Example: C:\agent

Agent.ReleaseDirectory The directory to which artifacts are downloaded during


deployment of a release. The directory is cleared before
every deployment if it requires artifacts to be downloaded
to the agent. Same as System.ArtifactsDirectory and
System.DefaultWorkingDirectory.

Example: C:\agent\_work\r1\a

Agent.RootDirectory The working directory for this agent, where subfolders are
created for every build or release. Same as
Agent.WorkFolder and System.WorkFolder.

Example: C:\agent\_work

Agent.WorkFolder The working directory for this agent, where subfolders are
created for every build or release. Same as
Agent.RootDirectory and System.WorkFolder.

Example: C:\agent\_work

Agent.DeploymentGroupId The ID of the deployment group the agent is registered


with. This is available only in deployment group jobs. Not
available in TFS 2018 Update 1.

Example: 1

Default variables - General Artifact


For each artifact that is referenced in a release, you can use the following artifact variables. Not all variables
are meaningful for each artifact type. The table below lists the default artifact variables and provides
examples of the values that they have depending on the artifact type. If an example is empty, it implies that
the variable is not populated for that artifact type.
Replace the {alias} placeholder with the value you specified for the artifact alias or with the default value
generated for the release pipeline.

VARIABLE NAME                    DESCRIPTION

Release.Artifacts.{alias}.DefinitionId The identifier of the build pipeline or repository.

Azure Pipelines example: 1


GitHub example: fabrikam/asp

Release.Artifacts.{alias}.DefinitionName The name of the build pipeline or repository.

Azure Pipelines example: fabrikam-ci


TFVC example: $/fabrikam
Git example: fabrikam
GitHub example: fabrikam/asp (main)

Release.Artifacts.{alias}.BuildNumber The build number or the commit identifier.

Azure Pipelines example: 20170112.1


Jenkins/TeamCity example: 20170112.1
TFVC example: Changeset 3
Git example: 38629c964
GitHub example: 38629c964

Release.Artifacts.{alias}.BuildId The build identifier.

Azure Pipelines example: 130


Jenkins/TeamCity example: 130
GitHub example:
38629c964d21fe405ef830b7d0220966b82c9e11

Release.Artifacts.{alias}.BuildURI The URL for the build.

Azure Pipelines example:


vstfs://build-release/Build/130
GitHub example: https://github.com/fabrikam/asp

Release.Artifacts.{alias}.SourceBranch The full path and name of the branch from which the
source was built.

Azure Pipelines example: refs/heads/main

Release.Artifacts.{alias}.SourceBranchName The name only of the branch from which the source was
built.

Azure Pipelines example: main

Release.Artifacts.{alias}.SourceVersion The commit that was built.

Azure Pipelines example:


bc0044458ba1d9298cdc649cb5dcf013180706f7

Release.Artifacts.{alias}.Repository.Provider The type of repository from which the source was built.

Azure Pipelines example: Git

Release.Artifacts.{alias}.RequestedForID The identifier of the account that triggered the build.

Azure Pipelines example:


2f435d07-769f-4e46-849d-10d1ab9ba6ab

Release.Artifacts.{alias}.RequestedFor The name of the account that requested the build.

Azure Pipelines example: Mateo Escobedo



Release.Artifacts.{alias}.Type The type of artifact source, such as Build.

Azure Pipelines example: Build


Jenkins example: Jenkins
TeamCity example: TeamCity
TFVC example: TFVC
Git example: Git
GitHub example: GitHub

Release.Artifacts.{alias}.PullRequest.TargetBranch The full path and name of the branch that is the target of a
pull request. This variable is initialized only if the release is
triggered by a pull request flow.

Azure Pipelines example: refs/heads/main

Release.Artifacts.{alias}.PullRequest.TargetBranchName The name only of the branch that is the target of a pull
request. This variable is initialized only if the release is
triggered by a pull request flow.

Azure Pipelines example: main

See also Artifact source alias

Default variables - Primary Artifact


You designate one of the artifacts as a primary artifact in a release pipeline. For the designated primary
artifact, Azure Pipelines populates the following variables.

VARIABLE NAME                    SAME AS

Build.DefinitionId Release.Artifacts.{Primary artifact alias}.DefinitionId

Build.DefinitionName Release.Artifacts.{Primary artifact alias}.DefinitionName

Build.BuildNumber Release.Artifacts.{Primary artifact alias}.BuildNumber

Build.BuildId Release.Artifacts.{Primary artifact alias}.BuildId

Build.BuildURI Release.Artifacts.{Primary artifact alias}.BuildURI

Build.SourceBranch Release.Artifacts.{Primary artifact alias}.SourceBranch

Build.SourceBranchName Release.Artifacts.{Primary artifact alias}.SourceBranchName

Build.SourceVersion Release.Artifacts.{Primary artifact alias}.SourceVersion

Build.Repository.Provider Release.Artifacts.{Primary artifact alias}.Repository.Provider

Build.RequestedForID Release.Artifacts.{Primary artifact alias}.RequestedForID

Build.RequestedFor Release.Artifacts.{Primary artifact alias}.RequestedFor



Build.Type Release.Artifacts.{Primary artifact alias}.Type

Build.PullRequest.TargetBranch Release.Artifacts.{Primary artifact


alias}.PullRequest.TargetBranch

Build.PullRequest.TargetBranchName Release.Artifacts.{Primary artifact


alias}.PullRequest.TargetBranchName

Using default variables


You can use the default variables in two ways - as parameters to tasks in a release pipeline or in your scripts.
You can directly use a default variable as an input to a task. For example, to pass
Release.Artifacts.{Artifact alias}.DefinitionName for the artifact source whose alias is ASPNET4.CI to a
task, you would use $(Release.Artifacts.ASPNET4.CI.DefinitionName) .

To use a default variable in your script, you must first replace the . in the default variable names with _ .
For example, to print the value of artifact variable Release.Artifacts.{Artifact alias}.DefinitionName for the
artifact source whose alias is ASPNET4.CI in a PowerShell script, you would use
$env:RELEASE_ARTIFACTS_ASPNET4_CI_DEFINITIONNAME .

Note that the original name of the artifact source alias, ASPNET4.CI , is replaced by ASPNET4_CI .
View the current values of all variables
1. Open the pipelines view of the summary for the release, and choose the stage you are interested in. In
the list of steps, choose Initialize job .
2. This opens the log for this step. Scroll down to see the values used by the agent for this job.

Run a release in debug mode


Show additional information as a release executes and in the log files by running the entire release, or just
the tasks in an individual release stage, in debug mode. This can help you resolve issues and failures.
To initiate debug mode for an entire release, add a variable named System.Debug with the value true
to the Variables tab of a release pipeline.
To initiate debug mode for a single stage, open the Configure stage dialog from the shortcut menu
of the stage and add a variable named System.Debug with the value true to the Variables tab.
Alternatively, create a variable group containing a variable named System.Debug with the value true
and link this variable group to a release pipeline.

TIP
If you get an error related to an Azure RM service connection, see How to: Troubleshoot Azure Resource Manager
service connections.

Custom variables
Custom variables can be defined at various scopes.
Share values across all of the definitions in a project by using variable groups. Choose a variable
group when you need to use the same values across all the definitions, stages, and tasks in a project,
and you want to be able to change the values in a single place. You define and manage variable groups
in the Library tab.
Share values across all of the stages by using release pipeline variables. Choose a release pipeline
variable when you need to use the same value across all the stages and tasks in the release pipeline,
and you want to be able to change the value in a single place. You define and manage these variables
in the Variables tab of a release pipeline. In the Pipeline Variables page, open the Scope drop-down
list and select "Release". By default, when you add a variable, it is set to Release scope.
Share values across all of the tasks within one specific stage by using stage variables. Use a stage-
level variable for values that vary from stage to stage (and are the same for all the tasks in a stage).
You define and manage these variables in the Variables tab of a release pipeline. In the Pipeline
Variables page, open the Scope drop-down list and select the required stage. When you add a variable,
set its Scope to the appropriate stage.
Using custom variables at project, release pipeline, and stage scope helps you to:
Avoid duplication of values, making it easier to update all occurrences as one operation.
Store sensitive values in a way that they cannot be seen or changed by users of the release pipelines.
Designate a configuration property to be a secure (secret) variable by selecting the (padlock) icon
next to the variable.

IMPORTANT
The values of the hidden (secret) variables are securely stored on the server and cannot be viewed by users
after they are saved. During a deployment, the Azure Pipelines release service decrypts these values when
referenced by the tasks and passes them to the agent over a secure HTTPS channel.

NOTE
Creating custom variables can overwrite standard variables. For example, the PowerShell Path environment variable. If
you create a custom Path variable on a Windows agent, it will overwrite the $env:Path variable and PowerShell
won't be able to run.

Using custom variables


To use custom variables in your build and release tasks, simply enclose the variable name in parentheses and
precede it with a $ character. For example, if you have a variable named adminUserName , you can insert
the current value of that variable into a parameter of a task as $(adminUserName) .

NOTE
At present, variables in different groups that are linked to a pipeline in the same scope (e.g., job or stage) will collide
and the result may be unpredictable. Ensure that you use different names for variables across all your variable groups.

You can use custom variables to prompt for values during the execution of a release. For more information,
see Approvals.
Define and modify your variables in a script
To define or modify a variable from a script, use the task.setvariable logging command. Note that the
updated variable value is scoped to the job being executed, and does not flow across jobs or stages. Variable
names are transformed to uppercase, and the characters "." and " " are replaced by "_".
For example, Agent.WorkFolder becomes AGENT_WORKFOLDER . On Windows, you access this as
%AGENT_WORKFOLDER% or $env:AGENT_WORKFOLDER . On Linux and macOS, you use $AGENT_WORKFOLDER .

TIP
You can run a script on a:
Windows agent using either a Batch script task or PowerShell script task.
macOS or Linux agent using a Shell script task.

Batch
PowerShell
Shell
Batch script

Set the sauce and secret.Sauce variables

@echo ##vso[task.setvariable variable=sauce]crushed tomatoes


@echo ##vso[task.setvariable variable=secret.Sauce;issecret=true]crushed tomatoes with garlic

Read the variables


Arguments

"$(sauce)" "$(secret.Sauce)"

Script

@echo off
set sauceArgument=%~1
set secretSauceArgument=%~2
@echo No problem reading %sauceArgument% or %SAUCE%
@echo But I cannot read %SECRET_SAUCE%
@echo But I can read %secretSauceArgument% (but the log is redacted so I do not spoil the secret)

Console output from reading the variables:

No problem reading crushed tomatoes or crushed tomatoes


But I cannot read
But I can read ******** (but the log is redacted so I do not spoil the secret)

Help and support


See our troubleshooting page.
Get advice on Stack Overflow, and feel free to post your questions, search for answers, or suggest a feature on our Azure DevOps Developer Community.
See our Support page.
Troubleshoot Azure Resource Manager service connections

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are
called phases.

This topic will help you resolve issues you may encounter when creating a connection to Microsoft Azure using
an Azure Resource Manager (ARM) service connection for your Azure DevOps CI/CD processes.

What happens when you create a Resource Manager service connection?
1. In Azure DevOps, open the Service connections page from the project settings page. In TFS, open the
Services page from the "settings" icon in the top menu bar.
2. Choose + New service connection and select the type of service connection you need.
3. In the Add Azure Resource Manager service connection dialog, provide a connection name, and
select a subscription from the drop-down list of your subscriptions.
When you select OK, the system:
1. Connects to the Azure Active Directory (Azure AD) tenant for the selected subscription.
2. Creates an application in Azure AD on behalf of the user.
3. After the application has been successfully created, assigns the application as a contributor to the selected
subscription.
4. Creates an Azure Resource Manager service connection using this application's details.

How to troubleshoot errors that may occur while creating a connection?
Errors that may occur when the system attempts to create the service connection include:
Insufficient privileges to complete the operation
Failed to obtain an access token
A valid refresh token was not found
Failed to assign contributor role
Some subscriptions are missing from the subscription drop down menu
Insufficient privileges to complete the operation
This typically occurs when the system attempts to create an application in Azure AD on your behalf.
This is a permission issue that may be due to the following causes:
The user has only guest permission in the directory
The user is not authorized to add applications in the directory
The user has only guest permission in the directory
The best approach to resolve this issue, while granting only the minimum additional permissions to the user, is
to increase the Guest user permissions as follows.
1. Sign in to the Azure portal using an administrator account. The account should be an owner, global
administrator, or user account administrator.
2. Select Azure Active Directory in the left navigation bar.
3. Ensure you are editing the appropriate directory corresponding to the user subscription. If not, select
Switch directory and log in using the appropriate credentials if required.
4. In the MANAGE section, select Users.
5. Select User settings.
6. In the External users section, select Manage external collaboration settings.
7. The External collaboration settings blade opens.
8. Change Guest user permissions are limited to No.
Alternatively, if you are prepared to give the user additional permissions (administrator-level), you can make
the user a member of the Global administrator role. To do so, follow these steps:

WARNING
Users who are assigned to the Global administrator role can read and modify every administrative setting in your Azure
AD organization. As a best practice, we recommend that you assign this role to fewer than five people in your
organization.
1. Sign in to the Azure portal using an administrator account. The account should be an owner, global
administrator, or user account administrator.
2. Select Azure Active Directory in the left navigation bar.
3. Ensure you are editing the appropriate directory corresponding to the user subscription. If not, select
Switch directory and log in using the appropriate credentials if required.
4. In the MANAGE section, select Users.
5. Use the search box to filter the list and then select the user you want to manage.
6. In the MANAGE section, select Directory role and change the role to Global administrator.
7. Save the change.
It typically takes 15 to 20 minutes to apply the changes globally. After this period has elapsed, the user can
retry creating the service connection.
The user is not authorized to add applications in the directory
You must have permissions to add integrated applications in the directory. The directory administrator has
permissions to change this setting.
1. Select Azure Active Directory in the left navigation bar.
2. Ensure you are editing the appropriate directory corresponding to the user subscription. If not, select
Switch directory and log in using the appropriate credentials if required.
3. In the MANAGE section select Users .
4. Select User settings .
5. In the App registrations section, change Users can register applications to Yes .
Create the service principal manually with the user already having required permissions in Azure Active Directory
You can also create the service principal with an existing user who already has the required permissions in
Azure Active Directory. For more information, see Create an Azure Resource Manager service connection with
an existing service principal.
Failed to obtain an access token or a valid refresh token was not found
These errors typically occur when your session has expired.
To resolve these issues:
1. Sign out of Azure Pipelines or TFS.
2. Open an InPrivate or incognito browser window and navigate to https://visualstudio.microsoft.com/team-
services/.
3. If you are prompted to sign out, do so.
4. Sign in using the appropriate credentials.
5. Choose the organization you want to use from the list.
6. Select the project you want to add the service connection to.
7. Create the service connection you need by opening the Settings page. Then, select Services > New
service connection > Azure Resource Manager.
Failed to assign Contributor role
This error typically occurs when you do not have Write permission for the selected Azure subscription when
the system attempts to assign the Contributor role.
To resolve this issue, ask the subscription administrator to assign you the appropriate role.
Some subscriptions are missing from the list of subscriptions
To fix this issue you will need to modify the supported account types and who can use your application. To do
so, follow the steps below:
1. Sign in to the Azure portal.
2. If you have access to multiple tenants, use the Directory + subscription filter in the top menu to
select the tenant in which you want to register an application.
3. Search for and select Azure Active Directory.
4. Under Manage, select App registrations.
5. Select your application from the list of registered applications.
6. Under Essentials, select Supported account types.
7. Under Supported account types, Who can use this application or access this API?, select Accounts in
any organizational directory.
8. Select Save.

What authentication mechanisms are supported? How do Managed Identities work?
An Azure Resource Manager service connection can connect to a Microsoft Azure subscription by using Service
Principal Authentication (SPA) or Managed Identity Authentication. Managed identities for Azure resources
provide Azure services with an automatically managed identity in Azure Active Directory. You can use this
identity to authenticate to any service that supports Azure AD authentication, without persisting credentials in
code or in the service connection. See Assigning roles to learn about managed identities for virtual machines.

NOTE
Managed identities are not supported in Microsoft-hosted agents. You will have to set up a self-hosted agent on an
Azure VM and configure a managed identity for that virtual machine.

Help and support


See our troubleshooting page.
Get advice on Stack Overflow, and feel free to post your questions, search for answers, or suggest a feature on our Azure DevOps Developer Community.
See our Support page.
YAML schema reference

Azure Pipelines
This article is a detailed reference guide to Azure Pipelines YAML pipelines. It includes a catalog of all
supported YAML capabilities and the available options.

The best way to get started with YAML pipelines is to read the quickstart guide. After that, to learn how to
configure your YAML pipeline for your needs, see conceptual topics like Build variables and Jobs.


Pipeline structure
A pipeline is one or more stages that describe a CI/CD process. Stages are the major divisions in a pipeline.
The stages "Build this app," "Run these tests," and "Deploy to preproduction" are good examples.
A stage is one or more jobs, which are units of work assignable to the same machine. You can arrange both
stages and jobs into dependency graphs. Examples include "Run this stage before that one" and "This job
depends on the output of that job."
A job is a linear series of steps. Steps can be tasks, scripts, or references to external templates.
This hierarchy is reflected in the structure of a YAML file like:
Pipeline
Stage A
Job 1
Step 1.1
Step 1.2
...
Job 2
Step 2.1
Step 2.2
...
Stage B
...
Simple pipelines don't require all of these levels. For example, in a single-job build you can omit the
containers for stages and jobs because there are only steps. And because many options shown in this article
aren't required and have good defaults, your YAML definitions are unlikely to include all of them.
A pipeline is one or more jobs that describe a CI/CD process. A job is a unit of work assignable to the same
machine. You can arrange jobs into dependency graphs like "This job depends on the output of that job."
A job is a linear series of steps. Steps can be tasks, scripts, or references to external templates.
This hierarchy is reflected in the structure of a YAML file like:
Pipeline
Job 1
Step 1.1
Step 1.2
...
Job 2
Step 2.1
Step 2.2
...
For single-job pipelines, you can omit the jobs container because there are only steps. And because many
options shown in this article aren't required and have good defaults, your YAML definitions are unlikely to
include all of them.
Conventions
Here are the syntax conventions used in this article:
To the left of : is a literal keyword used in pipeline definitions.
To the right of : is a data type. The data type can be a primitive type like string or a reference to a rich
structure defined elsewhere in this article.
The notation [ datatype ] indicates an array of the mentioned data type. For instance, [ string ] is an
array of strings.
The notation { datatype : datatype } indicates a mapping of one data type to another. For instance,
{ string: string } is a mapping of strings to strings.
The symbol | indicates there are multiple data types available for the keyword. For instance,
job | templateReference means either a job definition or a template reference is allowed.

YAML basics
This document covers the schema of an Azure Pipelines YAML file. To learn the basics of YAML, see Learn
YAML in Y Minutes. Azure Pipelines doesn't support all YAML features. Unsupported features include anchors,
complex keys, and sets. Also, unlike standard YAML, Azure Pipelines depends on seeing stage , job , task ,
or a task shortcut like script as the first key in a mapping.

Pipeline
Schema
Example

name: string # build numbering format


resources:
pipelines: [ pipelineResource ]
containers: [ containerResource ]
repositories: [ repositoryResource ]
variables: # several syntaxes, see specific section
trigger: trigger
pr: pr
stages: [ stage | templateReference ]

If you have a single stage, you can omit the stages keyword and directly specify the jobs keyword:

# ... other pipeline-level keywords


jobs: [ job | templateReference ]
If you have a single stage and a single job, you can omit the stages and jobs keywords and directly specify
the steps keyword:

# ... other pipeline-level keywords


steps: [ script | bash | pwsh | powershell | checkout | task | templateReference ]

name: string # build numbering format


resources:
containers: [ containerResource ]
repositories: [ repositoryResource ]
variables: # several syntaxes, see specific section
trigger: trigger
pr: pr
jobs: [ job | templateReference ]

If you have a single job, you can omit the jobs keyword and directly specify the steps keyword:

# ... other pipeline-level keywords


steps: [ script | bash | pwsh | powershell | checkout | task | templateReference ]

Learn more about:


Pipelines with multiple jobs
Containers and repositories in pipelines
Triggers
Variables
Build number formats

Stage
A stage is a collection of related jobs. By default, stages run sequentially. Each stage starts only after the
preceding stage is complete.
Use approval checks to manually control when a stage should run. These checks are commonly used to
control deployments to production environments.
Checks are a mechanism available to the resource owner. They control when a stage in a pipeline consumes a
resource. As an owner of a resource like an environment, you can define checks that are required before a
stage that consumes the resource can start.
Currently, manual approval checks are supported on environments. For more information, see Approvals.
Schema
Example

stages:
- stage: string # name of the stage (A-Z, a-z, 0-9, and underscore)
displayName: string # friendly name to display in the UI
dependsOn: string | [ string ]
condition: string
variables: # several syntaxes, see specific section
jobs: [ job | templateReference]

Learn more about stages, conditions, and variables.
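For example, here is a minimal two-stage sketch (the stage and job names are illustrative, not required values):

stages:
- stage: Build
  displayName: Build the app
  jobs:
  - job: BuildJob
    steps:
    - script: echo Building the app
- stage: Test
  displayName: Run tests
  dependsOn: Build
  condition: succeeded()
  jobs:
  - job: TestJob
    steps:
    - script: echo Running tests

Because Test declares dependsOn: Build and condition: succeeded(), it runs only after Build completes successfully.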


Job
A job is a collection of steps run by an agent or on a server. Jobs can run conditionally and might depend on
earlier jobs.
Schema
Example

jobs:
- job: string # name of the job (A-Z, a-z, 0-9, and underscore)
  displayName: string # friendly name to display in the UI
  dependsOn: string | [ string ]
  condition: string
  strategy:
    parallel: # parallel strategy; see the following "Parallel" topic
    matrix: # matrix strategy; see the following "Matrix" topic
    maxParallel: number # maximum number of matrix jobs to run simultaneously
  continueOnError: boolean # 'true' if future jobs should run even if this job fails; defaults to 'false'
  pool: pool # see the following "Pool" schema
  workspace:
    clean: outputs | resources | all # what to clean up before the job runs
  container: containerReference # container to run this job inside of
  timeoutInMinutes: number # how long to run the job before automatically cancelling
  cancelTimeoutInMinutes: number # how much time to give 'run always even if cancelled tasks' before killing them
  variables: # several syntaxes, see specific section
  steps: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
  services: { string: string | container } # container resources to run as a service container

For more information about workspaces, including clean options, see the workspace topic in Jobs.
Learn more about variables, steps, pools, and server jobs.
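As an illustration, here is a minimal single-job sketch (the job name, timeout, and workspace option were chosen only for this example):

jobs:
- job: Build
  displayName: Build the solution
  timeoutInMinutes: 30
  workspace:
    clean: outputs
  steps:
  - script: echo Building the solution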

NOTE
If you have only one stage and one job, you can use single-job syntax as a shorter way to describe the steps to run.

Container reference
A container is supported by jobs.
Schema
Example

container: string # Docker Hub image reference or resource alias

container:
image: string # container image name
options: string # arguments to pass to container at startup
endpoint: string # endpoint for a private container registry
env: { string: string } # list of environment variables to add
# you can also use any of the other supported container attributes

Strategies
The matrix and parallel keywords specify mutually exclusive strategies for duplicating a job.
Matrix
Use of a matrix generates copies of a job, each with different input. These copies are useful for testing against
different configurations or platform versions.
Schema
Example

strategy:
matrix: { string1: { string2: string3 } }
maxParallel: number

For each occurrence of string1 in the matrix, a copy of the job is generated. The name string1 is the copy's
name and is appended to the name of the job. For each occurrence of string2, a variable called string2 with
the value string3 is available to the job.

NOTE
Matrix configuration names must contain only basic Latin alphabet letters (A-Z and a-z), digits (0-9), and underscores (
_ ). They must start with a letter. Also, their length must be 100 characters or fewer.

The optional maxParallel keyword specifies the maximum number of simultaneous matrix legs to run at
once.
If maxParallel is unspecified or set to 0, no limit is applied.
If maxParallel is unspecified, no limit is applied.

NOTE
The matrix syntax doesn't support automatic job scaling but you can implement similar functionality using the
each keyword. For an example, see nedrebo/parameterized-azure-jobs.
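A small matrix sketch, assuming a variable named pythonVersion that the steps consume (the leg names and values are illustrative):

jobs:
- job: Test
  strategy:
    matrix:
      Python38:
        pythonVersion: '3.8'
      Python39:
        pythonVersion: '3.9'
    maxParallel: 2
  steps:
  - script: echo Testing against Python $(pythonVersion)

This generates two copies of the Test job, each with its own value of $(pythonVersion), and runs at most two of them at once.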

Parallel
This strategy specifies how many duplicates of a job should run. It's useful for slicing up a large test matrix.
The Visual Studio Test task understands how to divide the test load across the number of scheduled jobs.
Schema
Example

strategy:
parallel: number

Deployment job
A deployment job is a special type of job. It's a collection of steps to run sequentially against the environment.
In YAML pipelines, we recommend that you put your deployment steps in a deployment job.
Schema
Example
jobs:
- deployment: string # name of the deployment job (A-Z, a-z, 0-9, and underscore); the word "deploy" is a keyword and is unsupported as the deployment name
  displayName: string # friendly name to display in the UI
  pool: # see pool schema
    name: string # use only global-level variables for defining a pool name; stage/job-level variables are not supported to define pool name
    demands: string | [ string ]
  workspace:
    clean: outputs | resources | all # what to clean up before the job runs
  dependsOn: string
  condition: string
  continueOnError: boolean # 'true' if future jobs should run even if this job fails; defaults to 'false'
  container: containerReference # container to run this job inside
  services: { string: string | container } # container resources to run as a service container
  timeoutInMinutes: nonEmptyString # how long to run the job before automatically cancelling
  cancelTimeoutInMinutes: nonEmptyString # how much time to give 'run always even if cancelled tasks' before killing them
  variables: # several syntaxes, see specific section
  environment: string # target environment name and optionally a resource name to record the deployment history; format: <environment-name>.<resource-name>
  strategy:
    runOnce: # rolling and canary are the other supported strategies
      deploy:
        steps:
        - script: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
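
For instance, a minimal runOnce sketch (the deployment name and environment name are hypothetical):

jobs:
- deployment: DeployWeb
  displayName: Deploy the web app
  environment: smarthotel-dev
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo Deploying the web app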

Steps
A step is a linear sequence of operations that make up a job. Each step runs in its own process on an agent
and has access to the pipeline workspace on a local hard drive. This behavior means environment variables
aren't preserved between steps but file system changes are.
Schema
Example

steps: [ script | bash | pwsh | powershell | checkout | task | templateReference ]

For more information about steps, see the schema references for:
Script
Bash
pwsh
PowerShell
Checkout
Task
Step templates
All steps, regardless of whether they're documented in this article, support the following properties:
displayName
name
condition
continueOnError
enabled
env
timeoutInMinutes

Variables
You can add hard-coded values directly or reference variable groups. Specify variables at the pipeline, stage,
or job level.
Schema
Example
For a simple set of hard-coded variables, use this mapping syntax:

variables: { string: string }

To include variable groups, switch to this sequence syntax:

variables:
- name: string # name of a variable
value: string # value of the variable
- group: string # name of a variable group

You can repeat name / value pairs and group .
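For example, a short sketch that mixes individual variables with a variable group (the group name is hypothetical and must already exist in the Library):

variables:
- name: configuration
  value: Release
- group: my-variable-group
- name: platform
  value: x64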


Variables can also be set as read only to enhance security.

variables:
- name: myReadOnlyVar
value: myValue
readonly: true

You can also include variables from templates.

Template references
NOTE
Be sure to see the full template expression syntax, which is all forms of ${{ }} .

You can export reusable sections of your pipeline to a separate file. These separate files are known as
templates. Azure Pipelines supports four kinds of templates:
Stage
Job
Step
Variable
You can also use templates to control what is allowed in a pipeline and to define how parameters can be used.
Parameter
You can export reusable sections of your pipeline to separate files. These separate files are known as
templates. Azure DevOps Server 2019 supports these two kinds of templates:
Job
Step
Templates themselves can include other templates. Azure Pipelines supports a maximum of 50 unique
template files in a single pipeline.
Stage templates
You can define a set of stages in one file and use it multiple times in other files.
Schema
Example
In the main pipeline:

- template: string # name of template to include


parameters: { string: any } # provided parameters

In the included template:

parameters: { string: any } # expected parameters


stages: [ stage ]

Job templates
You can define a set of jobs in one file and use it multiple times in other files.
Schema
Example
In the main pipeline:

- template: string # name of template to include


parameters: { string: any } # provided parameters

In the included template:

parameters: { string: any } # expected parameters


jobs: [ job ]

See templates for more about working with job templates.


Step templates
You can define a set of steps in one file and use it multiple times in another file.
Schema
Example
In the main pipeline:

steps:
- template: string # reference to template
parameters: { string: any } # provided parameters

In the included template:


parameters: { string: any } # expected parameters
steps: [ script | bash | pwsh | powershell | checkout | task | templateReference ]

See templates for more about working with templates.
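As a sketch, assuming a template file at templates/build-steps.yml (the file name and parameter are hypothetical):

In the main pipeline:

steps:
- template: templates/build-steps.yml
  parameters:
    buildConfiguration: Release

In the included template:

parameters:
- name: buildConfiguration
  type: string
  default: Debug

steps:
- script: echo Building in the ${{ parameters.buildConfiguration }} configuration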


Variable templates
You can define a set of variables in one file and use it multiple times in other files.
Schema
Example
In the main pipeline:

- template: string # name of template file to include


parameters: { string: any } # provided parameters

In the included template:

parameters: { string: any } # expected parameters


variables: [ variable ]

NOTE
The variables keyword uses two syntax forms: sequence and mapping. In mapping syntax, all keys are variable
names and their values are variable values. To use variable templates, you must use sequence syntax. Sequence syntax
requires you to specify whether you're mentioning a variable ( name ), a variable group ( group ), or a template (
template ). See the variables topic for more.

Parameters
You can use parameters in templates and pipelines.
Schema
YAML Example
Template Example
The type and name fields are required when defining parameters. See all parameter data types.

parameters:
- name: string # name of the parameter; required
type: enum # data types, see below
default: any # default value; if no default, then the parameter MUST be given by the user at
runtime
values: [ string ] # allowed list of values (for some data types)

Types

DATA TYPE        NOTES

string           string
number           may be restricted to values:, otherwise any number-like string is accepted
boolean          true or false
object           any YAML structure
step             a single step
stepList         sequence of steps
job              a single job
jobList          sequence of jobs
deployment       a single deployment job
deploymentList   sequence of deployment jobs
stage            a single stage
stageList        sequence of stages

The step, stepList, job, jobList, deployment, deploymentList, stage, and stageList data types all use standard
YAML schema format. This example includes string, number, boolean, object, step, and stepList.
parameters:
- name: myString
type: string
default: a string
- name: myMultiString
type: string
default: default
values:
- default
- ubuntu
- name: myNumber
type: number
default: 2
values:
- 1
- 2
- 4
- 8
- 16
- name: myBoolean
type: boolean
default: true
- name: myObject
type: object
default:
foo: FOO
bar: BAR
things:
- one
- two
- three
nested:
one: apple
two: pear
count: 3
- name: myStep
type: step
default:
script: echo my step
- name: mySteplist
type: stepList
default:
- script: echo step one
- script: echo step two

trigger: none

jobs:
- job: stepList
steps: ${{ parameters.mySteplist }}
- job: myStep
steps:
- ${{ parameters.myStep }}

Resources
A resource is any external service that is consumed as part of your pipeline. An example of a resource is
another CI/CD pipeline that produces:
Artifacts like Azure Pipelines or Jenkins.
Code repositories like GitHub, Azure Repos, or Git.
Container-image registries like Azure Container Registry or Docker hub.
Resources in YAML represent sources of pipelines, containers, repositories, and types. For more information
on Resources, see here.
General schema

resources:
pipelines: [ pipeline ]
repositories: [ repository ]
containers: [ container ]

Pipeline resource
If you have an Azure pipeline that produces artifacts, your pipeline can consume the artifacts by using the
pipeline keyword to define a pipeline resource. You can also enable pipeline-completion triggers.

Schema
Example

resources:
  pipelines:
  - pipeline: string # identifier for the pipeline resource
    project: string # project for the build pipeline; optional input for current project
    source: string # source pipeline definition name
    branch: string # branch to pick the artifact; optional; defaults to all branches
    version: string # pipeline run number to pick the artifact; optional; defaults to the last successfully completed run
    trigger: # optional; triggers are not enabled by default
      branches:
        include: [ string ] # branches to consider for trigger events; optional; defaults to all branches
        exclude: [ string ] # branches to discard for trigger events; optional; defaults to none

IMPORTANT
When you define a resource trigger, if its pipeline resource is from the same repo as the current pipeline, triggering
follows the same branch and commit on which the event is raised. But if the pipeline resource is from a different repo,
the current pipeline is triggered on the branch specified by the Default branch for manual and scheduled builds
setting. For more information, see Branch considerations for pipeline completion triggers.

The pipeline resource metadata as predefined variables


In each run, the metadata for a pipeline resource is available to all jobs as these predefined variables:

resources.pipeline.<Alias>.projectName
resources.pipeline.<Alias>.projectID
resources.pipeline.<Alias>.pipelineName
resources.pipeline.<Alias>.pipelineID
resources.pipeline.<Alias>.runName
resources.pipeline.<Alias>.runID
resources.pipeline.<Alias>.runURI
resources.pipeline.<Alias>.sourceBranch
resources.pipeline.<Alias>.sourceCommit
resources.pipeline.<Alias>.sourceProvider
resources.pipeline.<Alias>.requestedFor
resources.pipeline.<Alias>.requestedForID

You can consume artifacts from a pipeline resource by using a download task. See the download keyword.
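For example, a sketch that declares a pipeline resource, triggers on its completion, and downloads one of its artifacts (the alias, source pipeline name, and artifact name are hypothetical):

resources:
  pipelines:
  - pipeline: upstream # alias used elsewhere in this pipeline
    source: Upstream-CI # name of the pipeline that produces the artifacts
    trigger:
      branches:
        include:
        - main

steps:
- download: upstream # downloads to $(Pipeline.Workspace)/upstream/
  artifact: drop
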
Container resource
Container jobs let you isolate your tools and dependencies inside a container. The agent launches an instance
of your specified container then runs steps inside it. The container keyword lets you specify your container
images.
Service containers run alongside a job to provide various dependencies like databases.
Schema
Example

resources:
  containers:
  - container: string # identifier (A-Z, a-z, 0-9, and underscore)
    image: string # container image name
    options: string # arguments to pass to container at startup
    endpoint: string # reference to a service connection for the private registry
    env: { string: string } # list of environment variables to add
    ports: [ string ] # ports to expose on the container
    volumes: [ string ] # volumes to mount on the container
    mapDockerSocket: bool # whether to map in the Docker daemon socket; defaults to true
    mountReadOnly: # volumes to mount read-only; all default to false
      externals: boolean # components required to talk to the agent
      tasks: boolean # tasks required by the job
      tools: boolean # installable tools like Python and Ruby
      work: boolean # the work directory

resources:
containers:
- container: string # identifier (A-Z, a-z, 0-9, and underscore)
image: string # container image name
options: string # arguments to pass to container at startup
endpoint: string # reference to a service connection for the private registry
env: { string: string } # list of environment variables to add
ports: [ string ] # ports to expose on the container
volumes: [ string ] # volumes to mount on the container
mapDockerSocket: bool # whether to map in the Docker daemon socket; defaults to true

resources:
containers:
- container: string # identifier (A-Z, a-z, 0-9, and underscore)
image: string # container image name
options: string # arguments to pass to container at startup
endpoint: string # reference to a service connection for the private registry
env: { string: string } # list of environment variables to add
ports: [ string ] # ports to expose on the container
volumes: [ string ] # volumes to mount on the container
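
A small sketch that declares a container resource and runs a job inside it (the alias and image are illustrative; the image only needs to be one the agent can pull):

resources:
  containers:
  - container: builder
    image: node:16
    env:
      CI: 'true'

jobs:
- job: Lint
  container: builder # run all steps of this job inside the container
  steps:
  - script: node --version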

Repository resource
If your pipeline has templates in another repository, you must let the system know about that repository. The
repository keyword lets you specify an external repository.

If your pipeline has templates in another repository, or if you want to use multi-repo checkout with a
repository that requires a service connection, you must let the system know about that repository. The
repository keyword lets you specify an external repository.

Schema
Example
resources:
repositories:
- repository: string # identifier (A-Z, a-z, 0-9, and underscore)
type: enum # see the following "Type" topic
name: string # repository name (format depends on `type`)
ref: string # ref name to use; defaults to 'refs/heads/master'
endpoint: string # name of the service connection to use (for types that aren't Azure Repos)
trigger: # CI trigger for this repository, no CI trigger if skipped (only works for Azure Repos)
branches:
include: [ string ] # branch names which will trigger a build
exclude: [ string ] # branch names which will not
tags:
include: [ string ] # tag names which will trigger a build
exclude: [ string ] # tag names which will not
paths:
include: [ string ] # file paths which must match to trigger a build
exclude: [ string ] # file paths which will not trigger a build

Type
Pipelines support the following values for the repository type: git , github , and bitbucket . The git type
refers to Azure Repos Git repos.
If you specify type: git , the name value refers to another repository in the same project. An example
is name: otherRepo . To refer to a repo in another project within the same organization, prefix the name
with that project's name. An example is name: OtherProject/otherRepo .
If you specify type: github , the name value is the full name of the GitHub repo and includes the user
or organization. An example is name: Microsoft/vscode . GitHub repos require a GitHub service
connection for authorization.
If you specify type: bitbucket , the name value is the full name of the Bitbucket Cloud repo and
includes the user or organization. An example is name: MyBitbucket/vscode . Bitbucket Cloud repos
require a Bitbucket Cloud service connection for authorization.
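For example, a sketch that declares a GitHub repository resource and consumes a template from it (the repository, service connection, and template path are hypothetical):

resources:
  repositories:
  - repository: templates # alias used with the @ syntax below
    type: github
    name: Contoso/pipeline-templates
    ref: refs/heads/main
    endpoint: my-github-connection

steps:
- template: steps/build.yml@templates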

Triggers
Push trigger
Pull request trigger
Scheduled trigger
Pipeline trigger

NOTE
Trigger blocks can't contain variables or template expressions.

Push trigger
A push trigger specifies which branches cause a continuous integration build to run. If you specify no push
trigger, pushes to any branch trigger a build. Learn more about triggers and how to specify them.
Schema
Example
There are three distinct syntax options for the trigger keyword: a list of branches to include, a way to disable
CI triggers, and the full syntax for complete control.
List syntax:
trigger: [ string ] # list of branch names

Disablement syntax:

trigger: none # will disable CI builds entirely

Full syntax:

trigger:
batch: boolean # batch changes if true; start a new build for every push if false (default)
branches:
include: [ string ] # branch names which will trigger a build
exclude: [ string ] # branch names which will not
tags:
include: [ string ] # tag names which will trigger a build
exclude: [ string ] # tag names which will not
paths:
include: [ string ] # file paths which must match to trigger a build
exclude: [ string ] # file paths which will not trigger a build

If you specify an exclude clause without an include clause for branches , tags , or paths , it is equivalent to
specifying * in the include clause.

trigger:
batch: boolean # batch changes if true; start a new build for every push if false (default)
branches:
include: [ string ] # branch names which will trigger a build
exclude: [ string ] # branch names which will not
paths:
include: [ string ] # file paths which must match to trigger a build
exclude: [ string ] # file paths which will not trigger a build

IMPORTANT
When you specify a trigger, only branches that you explicitly configure for inclusion trigger a pipeline. Inclusions are
processed first, and then exclusions are removed from that list. If you specify an exclusion but no inclusions, nothing
triggers.
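For example, a full-syntax sketch (the branch and path names are illustrative):

trigger:
  batch: true
  branches:
    include:
    - main
    - releases/*
    exclude:
    - releases/old*
  paths:
    exclude:
    - docs

Pushes to main or any releases/* branch (except releases/old*) start a run, while pushes that only touch files under docs do not.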

PR trigger
A pull request trigger specifies which branches cause a pull request build to run. If you specify no pull request
trigger, pull requests to any branch trigger a build. Learn more about pull request triggers and how to specify
them.

IMPORTANT
YAML PR triggers are supported only in GitHub and Bitbucket Cloud. If you use Azure Repos Git, you can configure a
branch policy for build validation to trigger your build pipeline for validation.

IMPORTANT
YAML PR triggers are supported only in GitHub. If you use Azure Repos Git, you can configure a branch policy for build
validation to trigger your build pipeline for validation.
Schema
Example
There are three distinct syntax options for the pr keyword: a list of branches to include, a way to disable PR
triggers, and the full syntax for complete control.
List syntax:

pr: [ string ] # list of branch names

Disablement syntax:

pr: none # will disable PR builds entirely; will not disable CI triggers

Full syntax:

pr:
  autoCancel: boolean # indicates whether additional pushes to a PR should cancel in-progress runs for the same PR; defaults to true
  branches:
    include: [ string ] # branch names which will trigger a build
    exclude: [ string ] # branch names which will not
  paths:
    include: [ string ] # file paths which must match to trigger a build
    exclude: [ string ] # file paths which will not trigger a build

pr:
  autoCancel: boolean # indicates whether additional pushes to a PR should cancel in-progress runs for the same PR; defaults to true
  branches:
    include: [ string ] # branch names which will trigger a build
    exclude: [ string ] # branch names which will not
  paths:
    include: [ string ] # file paths which must match to trigger a build
    exclude: [ string ] # file paths which will not trigger a build
  drafts: boolean # for GitHub only, whether to build draft PRs; defaults to true

If you specify an exclude clause without an include clause for branches or paths , it is equivalent to
specifying * in the include clause.

IMPORTANT
When you specify a pull request trigger, only branches that you explicitly configure for inclusion trigger a pipeline.
Inclusions are processed first, and then exclusions are removed from that list. If you specify an exclusion but no
inclusions, nothing triggers.

Scheduled trigger
YAML scheduled triggers are unavailable in either this version of Azure DevOps Server or Visual Studio Team
Foundation Server. You can use scheduled triggers in the classic editor.
A scheduled trigger specifies a schedule on which branches are built. If you specify no scheduled trigger, no
scheduled builds occur. Learn more about scheduled triggers and how to specify them.
Schema
Example
schedules:
- cron: string # cron syntax defining a schedule in UTC time
  displayName: string # friendly name given to a specific schedule
  branches:
    include: [ string ] # which branches the schedule applies to
    exclude: [ string ] # which branches to exclude from the schedule
  always: boolean # whether to always run the pipeline or only if there have been source code changes since the last successful scheduled run; the default is false

NOTE
If you specify an exclude clause without an include clause for branches , it is equivalent to specifying * in the
include clause.
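For example, a sketch of a weekday nightly schedule (the cron expression and branch are illustrative):

schedules:
- cron: '0 3 * * 1-5'
  displayName: Nightly build at 3:00 AM UTC, Monday through Friday
  branches:
    include:
    - main
  always: false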

Pipeline trigger
Pipeline completion triggers are configured using a pipeline resource. For more information, see Pipeline
completion triggers.

Pool
The pool keyword specifies which pool to use for a job of the pipeline. A pool specification also holds
information about the job's strategy for running.
In Azure DevOps Server 2019 you can specify a pool at the job level in YAML, and at the pipeline level in the
pipeline settings UI. In Azure DevOps Server 2019.1 you can also specify a pool at the pipeline level in YAML if
you have a single implicit job.
You can specify a pool at the pipeline, stage, or job level.
The pool specified at the lowest level of the hierarchy is used to run the job.
Schema
Example
The full syntax is:

pool:
name: string # name of the pool to run this job in
demands: string | [ string ] # see the following "Demands" topic
vmImage: string # name of the VM image you want to use; valid only in the Microsoft-hosted pool

If you use a Microsoft-hosted pool, choose an available virtual machine image.


If you use a private pool and don't need to specify demands, you can shorten the syntax to:

pool: string # name of the private pool to run this job in

Learn more about conditions and timeouts.
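For example, to request a Microsoft-hosted agent (any available image name works in place of ubuntu-latest):

pool:
  vmImage: ubuntu-latest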


Demands
The demands keyword is supported by private pools. You can check for the existence of a capability or a
specific string.
Schema
Example
pool:
demands: [ string ]
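
A brief sketch, assuming a private pool named MyPrivatePool and a capability named myCustomCapability (both hypothetical):

pool:
  name: MyPrivatePool
  demands:
  - myCustomCapability # checks that the capability exists
  - Agent.OS -equals Linux # checks the capability for a specific value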

Environment
The environment keyword specifies the environment or its resource that is targeted by a deployment job of
the pipeline. An environment also holds information about the deployment strategy for running the steps
defined inside the job.
Schema
Example
The full syntax is:

environment: # create environment and/or record deployments
  name: string # name of the environment to run this job on
  resourceName: string # name of the resource in the environment to record the deployments against
  resourceId: number # resource identifier
  resourceType: string # type of the resource you want to target; supported types - virtualMachine, Kubernetes
  tags: string | [ string ] # tag names to filter the resources in the environment
strategy: # deployment strategy
  runOnce: # default strategy
    deploy:
      steps:
      - script: echo Hello world

If you specify an environment or one of its resources but don't need to specify other properties, you can
shorten the syntax to:

environment: environmentName.resourceName
strategy: # deployment strategy
runOnce: # default strategy
deploy:
steps:
- script: echo Hello world

Server
The server value specifies a server job. Only server tasks like invoking an Azure function app can be run in a
server job.
Schema
Example
When you use server , a job runs as a server job rather than an agent job.

pool: server

Script
The script keyword is a shortcut for the command-line task. The task runs a script using cmd.exe on
Windows and Bash on other platforms.
Schema
Example

steps:
- script: string # contents of the script to run
  displayName: string # friendly name displayed in the UI
  name: string # identifier for this step (A-Z, a-z, 0-9, and underscore)
  workingDirectory: string # initial working directory for the step
  failOnStderr: boolean # if the script writes to stderr, should that be treated as the step failing?
  condition: string
  continueOnError: boolean # 'true' if future steps should run even if this step fails; defaults to 'false'
  enabled: boolean # whether to run this step; defaults to 'true'
  target:
    container: string # where this step will run; values are the container name or the word 'host'
    commands: enum # whether to process all logging commands from this step; values are `any` (default) or `restricted`
  timeoutInMinutes: number
  env: { string: string } # list of environment variables to add

If you don't specify a command mode, you can shorten the target structure to:

- script:
target: string # container name or the word 'host'

Learn more about conditions, timeouts, and step targets.

Bash
The bash keyword is a shortcut for the shell script task. The task runs a script in Bash on Windows, macOS,
and Linux.
Schema
Example

steps:
- bash: string # contents of the script to run
  displayName: string # friendly name displayed in the UI
  name: string # identifier for this step (A-Z, a-z, 0-9, and underscore)
  workingDirectory: string # initial working directory for the step
  failOnStderr: boolean # if the script writes to stderr, should that be treated as the step failing?
  condition: string
  continueOnError: boolean # 'true' if future steps should run even if this step fails; defaults to 'false'
  enabled: boolean # whether to run this step; defaults to 'true'
  target:
    container: string # where this step will run; values are the container name or the word 'host'
    commands: enum # whether to process all logging commands from this step; values are `any` (default) or `restricted`
  timeoutInMinutes: number
  env: { string: string } # list of environment variables to add

If you don't specify a command mode, you can shorten the target structure to:

- bash:
target: string # container name or the word 'host'

Learn more about conditions, timeouts, and step targets.

pwsh
The pwsh keyword is a shortcut for the PowerShell task when that task's pwsh value is set to true . The task
runs a script in PowerShell Core on Windows, macOS, and Linux.
Schema
Example

steps:
- pwsh: string # contents of the script to run
  displayName: string # friendly name displayed in the UI
  name: string # identifier for this step (A-Z, a-z, 0-9, and underscore)
  errorActionPreference: enum # see the following "Error action preference" topic
  ignoreLASTEXITCODE: boolean # see the following "Ignore last exit code" topic
  failOnStderr: boolean # if the script writes to stderr, should that be treated as the step failing?
  workingDirectory: string # initial working directory for the step
  condition: string
  continueOnError: boolean # 'true' if future steps should run even if this step fails; defaults to 'false'
  enabled: boolean # whether to run this step; defaults to 'true'
  timeoutInMinutes: number
  env: { string: string } # list of environment variables to add

NOTE
Each PowerShell session lasts only for the duration of the job in which it runs. Tasks that depend on what has been
bootstrapped must be in the same job as the bootstrap.

Learn more about conditions and timeouts.

PowerShell
The powershell keyword is a shortcut for the PowerShell task. The task runs a script in Windows PowerShell.
Schema
Example

steps:
- powershell: string # contents of the script to run
  displayName: string # friendly name displayed in the UI
  name: string # identifier for this step (A-Z, a-z, 0-9, and underscore)
  errorActionPreference: enum # see the following "Error action preference" topic
  ignoreLASTEXITCODE: boolean # see the following "Ignore last exit code" topic
  failOnStderr: boolean # if the script writes to stderr, should that be treated as the step failing?
  workingDirectory: string # initial working directory for the step
  condition: string
  continueOnError: boolean # 'true' if future steps should run even if this step fails; defaults to 'false'
  enabled: boolean # whether to run this step; defaults to 'true'
  timeoutInMinutes: number
  env: { string: string } # list of environment variables to add

NOTE
Each PowerShell session lasts only for the duration of the job in which it runs. Tasks that depend on what has been
bootstrapped must be in the same job as the bootstrap.
Learn more about conditions and timeouts.
Error action preference
Unless otherwise specified, the error action preference defaults to the value stop , and the line
$ErrorActionPreference = 'stop' is prepended to the top of your script.

When the error action preference is set to stop, errors cause PowerShell to terminate the task and return a
nonzero exit code. The task is also marked as Failed.
Schema
Example

errorActionPreference: stop | continue | silentlyContinue

Ignore last exit code


The last exit code returned from your script is checked by default. A nonzero code indicates a step failure, in
which case the system appends your script with:
if ((Test-Path -LiteralPath variable:\LASTEXITCODE)) { exit $LASTEXITCODE }

If you don't want this behavior, specify ignoreLASTEXITCODE: true .


Schema
Example

ignoreLASTEXITCODE: boolean

Learn more about conditions and timeouts.

Publish
The publish keyword is a shortcut for the Publish Pipeline Artifact task. The task publishes (uploads) a file or
folder as a pipeline artifact that other jobs and pipelines can consume.
Schema
Example

steps:
- publish: string # path to a file or folder
artifact: string # artifact name
displayName: string # friendly name to display in the UI

Learn more about publishing artifacts.
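For example, a sketch that publishes a folder produced earlier in the job (the path and artifact name are illustrative):

steps:
- publish: $(Build.ArtifactStagingDirectory)/drop
  artifact: WebApp
  displayName: Publish the WebApp artifact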

Download
The download keyword is a shortcut for the Download Pipeline Artifact task. The task downloads artifacts
associated with the current run or from another Azure pipeline that is associated as a pipeline resource.
Schema
Example
steps:
- download: [ current | pipeline resource identifier | none ] # disable automatic download if "none"
  artifact: string # artifact name; optional; downloads all the available artifacts if not specified
  patterns: string # patterns representing files to include; optional
  displayName: string # friendly name to display in the UI

Artifact download location


Artifacts from the current pipeline are downloaded to $(Pipeline.Workspace)/.
Artifacts from the associated pipeline resource are downloaded to $(Pipeline.Workspace)/<pipeline resource identifier>/.
Automatic download in deployment jobs
All available artifacts from the current pipeline and from the associated pipeline resources are automatically
downloaded in deployment jobs and made available for your deployment. To prevent downloads, specify
download: none .

Learn more about downloading artifacts.

Checkout
Nondeployment jobs automatically check out source code. Use the checkout keyword to configure or
suppress this behavior.
Schema
Example

steps:
- checkout: self # self represents the repo where the initial Pipelines YAML file was found
  clean: boolean # if true, run `git clean -ffdx && git reset --hard HEAD` before fetching
  fetchDepth: number # the depth of commits to ask Git to fetch; defaults to no limit
  lfs: boolean # whether to download Git-LFS files; defaults to false
  submodules: true | recursive # set to 'true' for a single level of submodules or 'recursive' to get submodules of submodules; defaults to not checking out submodules
  path: string # path to check out source code, relative to the agent's build directory (e.g. \_work\1); defaults to a directory called `s`
  persistCredentials: boolean # if 'true', leave the OAuth token in the Git config after the initial fetch; defaults to false

steps:
- checkout: self | none | repository name # self represents the repo where the initial Pipelines YAML file was found
  clean: boolean # if true, run `git clean -ffdx && git reset --hard HEAD` before fetching
  fetchDepth: number # the depth of commits to ask Git to fetch; defaults to no limit
  lfs: boolean # whether to download Git-LFS files; defaults to false
  submodules: true | recursive # set to 'true' for a single level of submodules or 'recursive' to get submodules of submodules; defaults to not checking out submodules
  path: string # path to check out source code, relative to the agent's build directory (e.g. \_work\1); defaults to a directory called `s`
  persistCredentials: boolean # if 'true', leave the OAuth token in the Git config after the initial fetch; defaults to false

NOTE
In addition to the cleaning option available using checkout , you can also configure cleaning in a workspace. For
more information about workspaces, including clean options, see the workspace topic in Jobs.
To avoid syncing sources at all:

steps:
- checkout: none

NOTE
If you're running the agent in the Local Service account and want to modify the current repository by using git
operations or loading git submodules, give the proper permissions to the Project Collection Build Service Accounts
user.

- checkout: self
submodules: true
persistCredentials: true

To check out multiple repositories in your pipeline, use multiple checkout steps:

- checkout: self
- checkout: git://MyProject/MyRepo
- checkout: MyGitHubRepo # Repo declared in a repository resource

For more information, see Check out multiple repositories in your pipeline.

Task
Tasks are the building blocks of a pipeline. There's a catalog of tasks available to choose from.
Schema
Example

steps:
- task: string # reference to a task and version, e.g. "VSBuild@1"
  displayName: string # friendly name displayed in the UI
  name: string # identifier for this step (A-Z, a-z, 0-9, and underscore)
  condition: string
  continueOnError: boolean # 'true' if future steps should run even if this step fails; defaults to 'false'
  enabled: boolean # whether to run this step; defaults to 'true'
  target:
    container: string # where this step will run; values are the container name or the word 'host'
    commands: enum # whether to process all logging commands from this step; values are `any` (default) or `restricted`
  timeoutInMinutes: number
  inputs: { string: string } # task-specific inputs
  env: { string: string } # list of environment variables to add

If you don't specify a command mode, you can shorten the target structure to:

- task:
target: string # container name or the word 'host'

Learn more about conditions, timeouts, and step targets.

Syntax highlighting
Syntax highlighting is available for the pipeline schema via a Visual Studio Code extension. You can download
Visual Studio Code, install the extension, and check out the project on GitHub. The extension includes a JSON
schema for validation.
You also can obtain a schema that's specific to your organization (that is, it contains installed custom tasks)
from the Azure DevOps REST API yamlschema endpoint.
Expressions

Azure Pipelines | TFS 2018 | TFS 2017.3

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called
definitions, runs are called builds, service connections are called service endpoints, stages are called environments,
and jobs are called phases.

Expressions can be used in many places where you need to specify a string, boolean, or number value when
authoring a pipeline. The most common use of expressions is in conditions to determine whether a job or
step should run.

# Expressions are used to define conditions for a step, job, or stage


steps:
- task: ...
condition: <expression>

Another common use of expressions is in defining variables. Expressions can be evaluated at compile time
or at run time. Compile time expressions can be used anywhere; runtime expressions can be used in
variables and conditions.

# Two examples of expressions used to define variables


# The first one, a, is evaluated when the YAML file is compiled into a plan.
# The second one, b, is evaluated at runtime.
# Note the syntax ${{}} for compile time and $[] for runtime expressions.
variables:
a: ${{ <expression> }}
b: $[ <expression> ]

The difference between runtime and compile time expression syntaxes is primarily what context is available.
In a compile-time expression ( ${{ <expression> }} ), you have access to parameters and statically defined
variables . In a runtime expression ( $[ <expression> ] ), you have access to more variables but no
parameters.
In this example, a runtime expression sets the value of $(isMain) . A static variable in a compile expression
sets the value of $(compileVar) .

variables:
staticVar: 'my value' # static variable
compileVar: ${{ variables.staticVar }} # compile time expression
isMain: $[eq(variables['Build.SourceBranch'], 'refs/heads/master')] # runtime expression

steps:
- script: |
echo ${{variables.staticVar}} # outputs my value
echo $(compileVar) # outputs my value
echo $(isMain) # outputs True

An expression can be a literal, a reference to a variable, a reference to a dependency, a function, or a valid nested combination of these.

Literals
As part of an expression, you can use boolean, null, number, string, or version literals.

# Examples
variables:
  someBoolean: ${{ true }} # case insensitive, so True or TRUE also works
  someNumber: ${{ -1.2 }}
  someString: ${{ 'a b c' }}
  someVersion: ${{ 1.2.3 }}

Boolean
True and False are boolean literal expressions.
Null
Null is a special literal expression that's returned from a dictionary miss, e.g. ( variables['noSuch'] ). Null can
be the output of an expression but cannot be called directly within an expression.
Number
Starts with '-', '.', or '0' through '9'.
String
Must be single-quoted. For example: 'this is a string' .
To express a literal single-quote, escape it with a single quote. For example:
'It''s OK if they''re using contractions.' .

You can use a pipe character ( | ) for multiline strings.

myKey: |
  one
  two
  three

Version
A version number with up to four segments. Must start with a number and contain two or three period ( . )
characters. For example: 1.2.3.4 .

Variables
As part of an expression, you may access variables using one of two syntaxes:
Index syntax: variables['MyVar']
Property dereference syntax: variables.MyVar

In order to use property dereference syntax, the property name must:


Start with a-Z or _
Be followed by a-Z 0-9 or _

Depending on the execution context, different variables are available.


If you create pipelines using YAML, then pipeline variables are available.
If you create build pipelines using classic editor, then build variables are available.
If you create release pipelines using classic editor, then release variables are available.
Variables are always strings. If you want to use typed values, then you should use parameters instead.
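
As a minimal sketch (the variable name configuration is an assumption), both syntaxes refer to the same variable and can be used interchangeably in a condition:

variables:
  configuration: 'release'

steps:
- script: echo "publishing symbols"
  condition: eq(variables['configuration'], 'release')   # index syntax
- script: echo "publishing symbols"
  condition: eq(variables.configuration, 'release')      # property dereference syntax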

Functions
The following built-in functions can be used in expressions.
and
Evaluates to True if all parameters are True
Min parameters: 2. Max parameters: N
Casts parameters to Boolean for evaluation
Short-circuits after first False
Example: and(eq(variables.letters, 'ABC'), eq(variables.numbers, 123))

coalesce
Evaluates the parameters in order, and returns the first value that does not equal null or empty-string.
Min parameters: 2. Max parameters: N
Example: coalesce(variables.couldBeNull, variables.couldAlsoBeNull, 'literal so it always works')
contains
Evaluates True if left parameter String contains right parameter
Min parameters: 2. Max parameters: 2
Casts parameters to String for evaluation
Performs ordinal ignore-case comparison
Example: contains('ABCDE', 'BCD') (returns True)
containsValue
Evaluates True if the left parameter is an array, and any item equals the right parameter. Also evaluates
True if the left parameter is an object, and the value of any property equals the right parameter.
Min parameters: 2. Max parameters: 2
If the left parameter is an array, convert each item to match the type of the right parameter. If the left
parameter is an object, convert the value of each property to match the type of the right parameter. The
equality comparison for each specific item evaluates False if the conversion fails.
Ordinal ignore-case comparison for Strings
Short-circuits after the first match

NOTE
There is no literal syntax in a YAML pipeline for specifying an array. This function is of limited use in general pipelines.
It's intended for use in the pipeline decorator context with system-provided arrays such as the list of steps.

counter
This function can only be used in an expression that defines a variable. It cannot be used as part of a
condition for a step, job, or stage.
Evaluates a number that is incremented with each run of a pipeline.
Parameters: 2. prefix and seed .
Prefix is a string expression. A separate value of counter is tracked for each unique value of prefix
Seed is the starting value of the counter
You can create a counter that is automatically incremented by one in each execution of your pipeline. When
you define a counter, you provide a prefix and a seed . Here is an example that demonstrates this.

variables:
  major: 1
  # define minor as a counter with the prefix as variable major, and seed as 100.
  minor: $[counter(variables['major'], 100)]

steps:
- bash: echo $(minor)

The value of minor in the above example in the first run of the pipeline will be 100. In the second run it will
be 101, provided the value of major is still 1.
If you edit the YAML file, and update the value of the variable major to be 2, then in the next run of the
pipeline, the value of minor will be 100. Subsequent runs will increment the counter to 101, 102, 103, ...
Later, if you edit the YAML file, and set the value of major back to 1, then the value of the counter resumes
where it left off for that prefix. In this example, it resumes at 102.
Here is another example of setting a variable to act as a counter that starts at 100, gets incremented by 1 for
every run, and gets reset to 100 every day.

NOTE
pipeline.startTime is not available outside of expressions. pipeline.startTime formats system.pipelineStartTime into a date and time object so that it is available to work with in expressions. The default
time zone for pipeline.startTime is UTC. You can change the time zone for your organization.

jobs:
- job:
  variables:
    a: $[counter(format('{0:yyyyMMdd}', pipeline.startTime), 100)]
  steps:
  - bash: echo $(a)

Here is an example of having a counter that maintains a separate value for PRs and CI runs.

variables:
  patch: $[counter(variables['build.reason'], 0)]

Counters are scoped to a pipeline. In other words, a counter's value is incremented for each run of that pipeline.
There are no project-scoped counters.
endsWith
Evaluates True if left parameter String ends with right parameter
Min parameters: 2. Max parameters: 2
Casts parameters to String for evaluation
Performs ordinal ignore-case comparison
Example: endsWith('ABCDE', 'DE') (returns True)
eq
Evaluates True if parameters are equal
Min parameters: 2. Max parameters: 2
Converts right parameter to match type of left parameter. Returns False if conversion fails.
Ordinal ignore-case comparison for Strings
Example: eq(variables.letters, 'ABC')
format
Evaluates the trailing parameters and inserts them into the leading parameter string
Min parameters: 1. Max parameters: N
Example: format('Hello {0} {1}', 'John', 'Doe')
Uses .NET custom date and time format specifiers for date formatting ( yyyy , yy , MM , M , dd , d , HH ,
H , m , mm , ss , s , f , ff , ffff , K )
Example: format('{0:yyyyMMdd}', pipeline.startTime) . In this case pipeline.startTime is a special date
time object variable.
Escape by doubling braces. For example: format('literal left brace {{ and literal right brace }}')
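
A small sketch of format in variable definitions (the variable names greeting and reasonLabel are assumptions): the first is evaluated at compile time, the second at runtime.

variables:
  greeting: ${{ format('Hello {0} {1}', 'John', 'Doe') }}          # compile time expression
  reasonLabel: $[ format('{0} run', variables['Build.Reason']) ]   # runtime expression

steps:
- script: echo "$(greeting) - $(reasonLabel)"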
ge
Evaluates True if left parameter is greater than or equal to the right parameter
Min parameters: 2. Max parameters: 2
Converts right parameter to match type of left parameter. Errors if conversion fails.
Ordinal ignore-case comparison for Strings
Example: ge(5, 5) (returns True)
gt
Evaluates True if left parameter is greater than the right parameter
Min parameters: 2. Max parameters: 2
Converts right parameter to match type of left parameter. Errors if conversion fails.
Ordinal ignore-case comparison for Strings
Example: gt(5, 2) (returns True)
in
Evaluates True if left parameter is equal to any right parameter
Min parameters: 1. Max parameters: N
Converts right parameters to match type of left parameter. Equality comparison evaluates False if
conversion fails.
Ordinal ignore-case comparison for Strings
Short-circuits after first match
Example: in('B', 'A', 'B', 'C') (returns True)
join
Concatenates all elements in the right parameter array, separated by the left parameter string.
Min parameters: 2. Max parameters: 2
Each element in the array is converted to a string. Complex objects are converted to empty string.
If the right parameter is not an array, the result is the right parameter converted to a string.
In this example, a semicolon gets added between each item in the array. The parameter type is an object.
parameters:
- name: myArray
  type: object
  default:
  - FOO
  - BAR
  - ZOO

variables:
  A: ${{ join(';', parameters.myArray) }}

steps:
- script: echo $A # outputs FOO;BAR;ZOO

le
Evaluates True if left parameter is less than or equal to the right parameter
Min parameters: 2. Max parameters: 2
Converts right parameter to match type of left parameter. Errors if conversion fails.
Ordinal ignore-case comparison for Strings
Example: le(2, 2) (returns True)
length
Returns the length of a string or an array, either one that comes from the system or that comes from a
parameter
Min parameters: 1. Max parameters 1
Example: length('fabrikam') returns 8
lower
Converts a string or variable value to all lowercase characters
Min parameters: 1. Max parameters 1
Returns the lowercase equivalent of a string
Example: lower('FOO') returns foo
lt
Evaluates True if left parameter is less than the right parameter
Min parameters: 2. Max parameters: 2
Converts right parameter to match type of left parameter. Errors if conversion fails.
Ordinal ignore-case comparison for Strings
Example: lt(2, 5) (returns True)
ne
Evaluates True if parameters are not equal
Min parameters: 2. Max parameters: 2
Converts right parameter to match type of left parameter. Returns True if conversion fails.
Ordinal ignore-case comparison for Strings
Example: ne(1, 2) (returns True)
not
Evaluates True if parameter is False
Min parameters: 1. Max parameters: 1
Converts value to Boolean for evaluation
Example: not(eq(1, 2)) (returns True)
notIn
Evaluates True if left parameter is not equal to any right parameter
Min parameters: 1. Max parameters: N
Converts right parameters to match type of left parameter. Equality comparison evaluates False if
conversion fails.
Ordinal ignore-case comparison for Strings
Short-circuits after first match
Example: notIn('D', 'A', 'B', 'C') (returns True)
or
Evaluates True if any parameter is true
Min parameters: 2. Max parameters: N
Casts parameters to Boolean for evaluation
Short-circuits after first True
Example: or(eq(1, 1), eq(2, 3)) (returns True, short-circuits)
replace
Returns a new string in which all instances of a string in the current instance are replaced with another
string
Min parameters: 3. Max parameters: 3
replace(a, b, c) : returns a, with all instances of b replaced by c
Example:
replace('https://www.tinfoilsecurity.com/saml/consume', 'https://www.tinfoilsecurity.com', 'http://server')
(returns http://server/saml/consume )
startsWith
Evaluates true if left parameter string starts with right parameter
Min parameters: 2. Max parameters: 2
Casts parameters to String for evaluation
Performs ordinal ignore-case comparison
Example: startsWith('ABCDE', 'AB') (returns True)
upper
Converts a string or variable value to all uppercase characters
Min parameters: 1. Max parameters 1
Returns the uppercase equivalent of a string
Example: upper('bah') returns BAH
xor
Evaluates True if exactly one parameter is True
Min parameters: 2. Max parameters: 2
Casts parameters to Boolean for evaluation
Example: xor(True, False) (returns True)

Job status check functions


You can use the following status check functions as expressions in conditions, but not in variable definitions.
always
Always evaluates to True (even when canceled). Note: A critical failure may still prevent a task from
running. For example, if getting sources failed.
canceled
Evaluates to True if the pipeline was canceled.
failed
For a step, equivalent to eq(variables['Agent.JobStatus'], 'Failed') .
For a job:
With no arguments, evaluates to True only if any previous job in the dependency graph failed.
With job names as arguments, evaluates to True only if any of those jobs failed.
succeeded
For a step, equivalent to in(variables['Agent.JobStatus'], 'Succeeded', 'SucceededWithIssues')
For a job:
With no arguments, evaluates to True only if all previous jobs in the dependency graph
succeeded or partially succeeded.
If the previous job succeeded but a dependency further upstream failed,
succeeded('previousJobName') will return true. When you just use dependsOn: previousJobName , it
will fail because all of the upstream dependencies were not successful. To only evaluate the
previous job, use succeeded('previousJobName') in a condition.
With job names as arguments, evaluates to True if all of those jobs succeeded or partially
succeeded.
Evaluates to False if the pipeline is canceled.
succeededOrFailed
For a step, equivalent to
in(variables['Agent.JobStatus'], 'Succeeded', 'SucceededWithIssues', 'Failed')

For a job:
With no arguments, evaluates to True regardless of whether any jobs in the dependency graph
succeeded or failed.
With job names as arguments, evaluates to True whether any of those jobs succeeded or failed.

This is like always() , except it will evaluate False when the pipeline is canceled.
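
A typical use of these functions is a step that must run even when earlier steps fail, such as publishing test results or sending a notification. A minimal sketch (the script names are placeholders):

steps:
- script: ./build.sh
- script: ./publish-test-results.sh
  condition: succeededOrFailed()
- script: ./notify-oncall.sh
  condition: failed()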

Conditional insertion
You can use an if clause to conditionally assign the value of a variable or set inputs for tasks. Conditionals
only work when using template syntax.
For templates, you can use conditional insertion when adding a sequence or mapping. Learn more about
conditional insertion in templates.
Conditionally assign a variable
variables:
  ${{ if eq(variables['Build.SourceBranchName'], 'master') }}: # only works if you have a master branch
    stageName: prod

pool:
  vmImage: 'ubuntu-latest'

steps:
- script: echo ${{ variables.stageName }}

Conditionally set a task input

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: '$(Pipeline.Workspace)'
    ${{ if eq(variables['Build.SourceBranchName'], 'master') }}:
      artifact: 'prod'
    ${{ if ne(variables['Build.SourceBranchName'], 'master') }}:
      artifact: 'dev'
    publishLocation: 'pipeline'

Dependencies
Expressions can use the dependencies context to reference previous jobs or stages. You can use
dependencies to:
Reference the job status of a previous job
Reference the stage status of a previous stage
Reference output variables in the previous job in the same stage
Reference output variables in the previous stage in a stage
Reference output variables in a job in a previous stage in the following stage
The context is called dependencies for jobs and stages and works much like variables. Inside a job, if you
refer to an output variable from a job in another stage, the context is called stageDependencies .
If you experience issues with output variables having quote characters ( ' or " ) in them, see this
troubleshooting guide.
Stage to stage dependencies
Structurally, the dependencies object is a map of job and stage names to results and outputs . Expressed
as JSON, it would look like:

"dependencies": {
"<STAGE_NAME>" : {
"result": "Succeeded|SucceededWithIssues|Skipped|Failed|Canceled",
"outputs": {
"jobName.stepName.variableName": "value"
}
},
"...": {
// another stage
}
}
Use this form of dependencies to map in variables or check conditions at a stage level. In this example, Stage
B runs whether Stage A is successful or skipped.

stages:
- stage: A
  condition: false
  jobs:
  - job: A1
    steps:
    - script: echo Job A1
- stage: B
  condition: in(dependencies.A.result, 'Succeeded', 'SucceededWithIssues', 'Skipped')
  jobs:
  - job: B1
    steps:
    - script: echo Job B1

Stages can also use output variables from another stage. In this example, Stage B depends on a variable in
Stage A.

stages:
- stage: A
  jobs:
  - job: A1
    steps:
    - bash: echo "##vso[task.setvariable variable=shouldrun;isOutput=true]true"
      # or on Windows:
      # - script: echo ##vso[task.setvariable variable=shouldrun;isOutput=true]true
      name: printvar

- stage: B
  condition: and(succeeded(), eq(dependencies.A.outputs['A1.printvar.shouldrun'], 'true'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo hello from Stage B

NOTE
By default, each stage in a pipeline depends on the one just before it in the YAML file. If you need to refer to a stage
that isn't immediately prior to the current one, you can override this automatic default by adding a dependsOn
section to the stage.

Job to job dependencies within one stage


At the job level within a single stage, the dependencies data doesn't contain stage-level information.

"dependencies": {
"<JOB_NAME>": {
"result": "Succeeded|SucceededWithIssues|Skipped|Failed|Canceled",
"outputs": {
"stepName.variableName": "value1"
}
},
"...": {
// another job
}
}
In this example, Job A will always be skipped and Job B will run. Job C will run, since all of its dependencies
either succeed or are skipped.

jobs:
- job: a
  condition: false
  steps:
  - script: echo Job A
- job: b
  steps:
  - script: echo Job B
- job: c
  dependsOn:
  - a
  - b
  condition: |
    and
    (
      in(dependencies.a.result, 'Succeeded', 'SucceededWithIssues', 'Skipped'),
      in(dependencies.b.result, 'Succeeded', 'SucceededWithIssues', 'Skipped')
    )
  steps:
  - script: echo Job C

In this example, Job B depends on an output variable from Job A.

jobs:
- job: A
  steps:
  - bash: echo "##vso[task.setvariable variable=shouldrun;isOutput=true]true"
    # or on Windows:
    # - script: echo ##vso[task.setvariable variable=shouldrun;isOutput=true]true
    name: printvar

- job: B
  condition: and(succeeded(), eq(dependencies.A.outputs['printvar.shouldrun'], 'true'))
  dependsOn: A
  steps:
  - script: echo hello from B

Job to job dependencies across stages


At the job level, you can also reference outputs from a job in a previous stage. This requires using the
stageDependencies context.

"stageDependencies": {
"<STAGE_NAME>" : {
"<JOB_NAME>": {
"result": "Succeeded|SucceededWithIssues|Skipped|Failed|Canceled",
"outputs": {
"stepName.variableName": "value"
}
},
"...": {
// another job
}
},
"...": {
// another stage
}
}
In this example, job B1 will run whether job A1 is successful or skipped. Job B2 will check the value of the
output variable from job A1 to determine whether it should run.

trigger: none

pool:
  vmImage: 'ubuntu-latest'

stages:
- stage: A
  jobs:
  - job: A1
    steps:
    - bash: echo "##vso[task.setvariable variable=shouldrun;isOutput=true]true"
      # or on Windows:
      # - script: echo ##vso[task.setvariable variable=shouldrun;isOutput=true]true
      name: printvar

- stage: B
  dependsOn: A
  jobs:
  - job: B1
    condition: in(stageDependencies.A.A1.result, 'Succeeded', 'SucceededWithIssues', 'Skipped')
    steps:
    - script: echo hello from Job B1
  - job: B2
    condition: eq(stageDependencies.A.A1.outputs['printvar.shouldrun'], 'true')
    steps:
    - script: echo hello from Job B2

Filtered arrays
When operating on a collection of items, you can use the * syntax to apply a filtered array. A filtered array
returns all objects/elements regardless of their names.
As an example, consider an array of objects named foo . We want to get an array of the values of the id
property in each object in our array.

[
  { "id": 1, "a": "avalue1" },
  { "id": 2, "a": "avalue2" },
  { "id": 3, "a": "avalue3" }
]

We could do the following:


foo.*.id

This tells the system to operate on foo as a filtered array and then select the id property.
This would return:

[ 1, 2, 3 ]
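
Filtered arrays can be combined with other expression functions. The following is only a sketch, and it assumes that a filtered array over an object parameter can be passed to join inside a template expression; the parameter name foo and its values are placeholders:

parameters:
- name: foo
  type: object
  default:
  - id: 1
    a: avalue1
  - id: 2
    a: avalue2

variables:
  ids: ${{ join(',', parameters.foo.*.id) }}

steps:
- script: echo $(ids)  # expected to output 1,2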

Type casting
Values in an expression may be converted from one type to another as the expression gets evaluated. When
an expression is evaluated, the parameters are coalesced to the relevant data type and then turned back into
strings.
For example, in this YAML, the values true and false are converted to 1 and 0 when the expression is
evaluated. The function lt() returns True when the left parameter is less than the right parameter.

variables:
  firstEval: $[lt(false, true)] # 0 vs. 1, True
  secondEval: $[lt(true, false)] # 1 vs. 0, False

steps:
- script: echo $(firstEval)
- script: echo $(secondEval)

In this example, the values variables.emptyString and the empty string both evaluate as empty strings. The
function coalesce() evaluates the parameters in order, and returns the first value that does not equal null or
empty-string.

variables:
  coalesceLiteral: $[coalesce(variables.emptyString, '', 'literal value')]

steps:
- script: echo $(coalesceLiteral) # outputs literal value

Detailed conversion rules are listed further below.

FROM / TO    BOOLEAN    NULL       NUMBER     STRING    VERSION

Boolean      -          -          Yes        Yes       -
Null         Yes        -          Yes        Yes       -
Number       Yes        -          -          Yes       Partial
String       Yes        Partial    Partial    -         Partial
Version      Yes        -          -          Yes       -

Boolean
To number:
False → 0
True → 1
To string:
False → 'false'
True → 'true'
Null
To Boolean: False
To number: 0
To string: '' (the empty string)
Number
To Boolean: 0 → False , any other number → True
To version: Must be greater than zero and must contain a non-zero decimal. Must be less than
Int32.MaxValue (decimal component also).
To string: Converts the number to a string with no thousands separator and no decimal separator.
String
To Boolean: '' (the empty string) → False , any other string → True
To null: '' (the empty string) → Null , any other string not convertible
To number: '' (the empty string) → 0, otherwise, runs C#'s Int32.TryParse using InvariantCulture and
the following rules: AllowDecimalPoint | AllowLeadingSign | AllowLeadingWhite | AllowThousands |
AllowTrailingWhite. If TryParse fails, then it's not convertible.
To version: runs C#'s Version.TryParse . Must contain Major and Minor component at minimum. If
TryParse fails, then it's not convertible.

Version
To Boolean: True
To string: Major.Minor or Major.Minor.Build or Major.Minor.Build.Revision.

FAQ
I want to do something that is not supported by expressions. What options do I have for extending
Pipelines functionality?
You can customize your Pipeline with a script that includes an expression. For example, this snippet takes the
BUILD_BUILDNUMBER variable and splits it with Bash. This script outputs two new variables, $MAJOR_RUN and
$MINOR_RUN , for the major and minor run numbers. The two variables are then used to create two pipeline
variables, $major and $minor with task.setvariable. These variables are available to downstream steps. To
share variables across pipelines see Variable groups.

steps:
- bash: |
    MAJOR_RUN=$(echo $BUILD_BUILDNUMBER | cut -d '.' -f1)
    echo "This is the major run number: $MAJOR_RUN"
    echo "##vso[task.setvariable variable=major]$MAJOR_RUN"

    MINOR_RUN=$(echo $BUILD_BUILDNUMBER | cut -d '.' -f2)
    echo "This is the minor run number: $MINOR_RUN"
    echo "##vso[task.setvariable variable=minor]$MINOR_RUN"

- bash: echo "My pipeline variable for major run is $(major)"

- bash: echo "My pipeline variable for minor run is $(minor)"
File matching patterns reference

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015

Pattern syntax
A pattern is a string or list of newline-delimited strings. File and directory names are compared to patterns to
include (or sometimes exclude) them in a task. You can build up complex behavior by stacking multiple
patterns. See fnmatch for a full syntax guide.
Match characters
Most characters are used as exact matches. What counts as an "exact" match is platform-dependent: the
Windows filesystem is case-insensitive, so the pattern "ABC" would match a file called "abc". On case-sensitive
filesystems, that pattern and name would not match.
The following characters have special behavior.
* matches zero or more characters within a file or directory name. See examples.
? matches any single character within a file or directory name. See examples.
[] matches a set or range of characters within a file or directory name. See examples.
** recursive wildcard. For example, /hello/**/* matches all descendants of /hello .

Extended globbing
?(hello|world) - matches hello or world zero or one times
*(hello|world) - zero or more occurrences
+(hello|world) - one or more occurrences
@(hello|world) - exactly once
!(hello|world) - not hello or world

Note, extended globs cannot span directory separators. For example, +(hello/world|other) is not valid.
Comments
Patterns that begin with # are treated as comments.
Exclude patterns
Leading ! changes the meaning of an include pattern to exclude. You can include a pattern, exclude a subset
of it, and then re-include a subset of that: this is known as an "interleaved" pattern.
Multiple ! flips the meaning. See examples.
You must define an include pattern before an exclude one. See examples.
Escaping
Wrapping special characters in [] can be used to escape literal glob characters in a file name. For example the
literal file name hello[a-z] can be escaped as hello[[]a-z] .
Slash
/ is used as the path separator on Linux and macOS. Most of the time, Windows agents accept / . Occasions
where the Windows separator ( \ ) must be used are documented.
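
The same pattern syntax is used by task inputs that accept file matching patterns. As a minimal sketch (the folder layout is an assumption), the Copy Files task combines an include pattern with an exclude pattern:

steps:
- task: CopyFiles@2
  inputs:
    SourceFolder: '$(Build.SourcesDirectory)'
    Contents: |
      **/*.dll
      !**/obj/**
    TargetFolder: '$(Build.ArtifactStagingDirectory)'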
Examples
Basic pattern examples
Asterisk examples
Example 1: Given the pattern *Website.sln and files:

ConsoleHost.sln
ContosoWebsite.sln
FabrikamWebsite.sln
Website.sln

The pattern would match:

ContosoWebsite.sln
FabrikamWebsite.sln
Website.sln

Example 2: Given the pattern *Website/*.proj and paths:

ContosoWebsite/index.html
ContosoWebsite/ContosoWebsite.proj
FabrikamWebsite/index.html
FabrikamWebsite/FabrikamWebsite.proj

The pattern would match:

ContosoWebsite/ContosoWebsite.proj
FabrikamWebsite/FabrikamWebsite.proj

Question mark examples


Example 1: Given the pattern log?.log and files:

log1.log
log2.log
log3.log
script.sh

The pattern would match:

log1.log
log2.log
log3.log

Example 2: Given the pattern image.??? and files:

image.tiff
image.png
image.ico

The pattern would match:


image.png
image.ico

Character set examples


Example 1: Given the pattern Sample[AC].dat and files:

SampleA.dat
SampleB.dat
SampleC.dat
SampleD.dat

The pattern would match:

SampleA.dat
SampleC.dat

Example 2: Given the pattern Sample[A-C].dat and files:

SampleA.dat
SampleB.dat
SampleC.dat
SampleD.dat

The pattern would match:

SampleA.dat
SampleB.dat
SampleC.dat

Example 3: Given the pattern Sample[A-CEG].dat and files:

SampleA.dat
SampleB.dat
SampleC.dat
SampleD.dat
SampleE.dat
SampleF.dat
SampleG.dat
SampleH.dat

The pattern would match:

SampleA.dat
SampleB.dat
SampleC.dat
SampleE.dat
SampleG.dat

Recursive wildcard examples


Given the pattern **/*.ext and files:
sample1/A.ext
sample1/B.ext
sample2/C.ext
sample2/D.not

The pattern would match:

sample1/A.ext
sample1/B.ext
sample2/C.ext

Exclude pattern examples


Given the pattern:

*
!*.xml

and files:

ConsoleHost.exe
ConsoleHost.pdb
ConsoleHost.xml
Fabrikam.dll
Fabrikam.pdb
Fabrikam.xml

The pattern would match:

ConsoleHost.exe
ConsoleHost.pdb
Fabrikam.dll
Fabrikam.pdb

Double exclude
Given the pattern:

*
!*.xml
!!Fabrikam.xml

and files:

ConsoleHost.exe
ConsoleHost.pdb
ConsoleHost.xml
Fabrikam.dll
Fabrikam.pdb
Fabrikam.xml

The pattern would match:


ConsoleHost.exe
ConsoleHost.pdb
Fabrikam.dll
Fabrikam.pdb
Fabrikam.xml

Folder exclude
Given the pattern:

**
!sample/**

and files:

ConsoleHost.exe
ConsoleHost.pdb
ConsoleHost.xml
sample/Fabrikam.dll
sample/Fabrikam.pdb
sample/Fabrikam.xml

The pattern would match:

ConsoleHost.exe
ConsoleHost.pdb
ConsoleHost.xml
File transforms and variable substitution reference

Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017

NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.

Some tasks, such as the Azure App Service Deploy task version 3 and later and the IIS Web App Deploy task, allow
users to configure the package based on the environment specified. These tasks use msdeploy.exe , which
supports the overriding of values in the web.config file with values from the parameters.xml file. However, file
transforms and variable substitution are not confined to web app files . You can use these techniques with any
XML or JSON files.

NOTE
File transforms and variable substitution are also supported by the separate File Transform task for use in Azure Pipelines.
You can use the File Transform task to apply file transformations and variable substitutions on any configuration and
parameters files.
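
For illustration, a minimal YAML sketch of the File Transform task might look like the following. The package path is an assumption, and the exact input names should be checked against the task reference:

steps:
- task: FileTransform@1
  inputs:
    folderPath: '$(Pipeline.Workspace)/drop/WebPackage.zip'
    fileType: 'json'
    targetFiles: '**/appsettings.json'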

Configuration substitution is specified in the File Transform and Variable Substitution Options section of the
settings for the tasks. The transformation and substitution options are:
XML transformation
XML variable substitution
JSON variable substitution
When the task runs, it first performs XML transformation, XML variable substitution, and JSON variable
substitution on configuration and parameters files. Next, it invokes msdeploy.exe , which uses the
parameters.xml file to substitute values in the web.config file.

XML Transformation
XML transformation supports transforming the configuration files ( *.config files) by following Web.config
Transformation Syntax and is based on the environment to which the web package will be deployed. This option is
useful when you want to add, remove or modify configurations for different environments. Transformation will be
applied for other configuration files including Console or Windows service application configuration files (for
example, FabrikamService.exe.config ).
Configuration transform file naming conventions
XML transformation will be run on the *.config file for transformation configuration files named
*.Release.config or *.<stage>.config and will be executed in the following order:

1. *.Release.config (for example, fabrikam.Release.config )


2. *.<stage>.config (for example, fabrikam.Production.config )
For example, if your package contains the following files:
Web.config
Web.Debug.config
Web.Release.config
Web.Production.config
and your stage name is Production , the transformation is applied for Web.config with Web.Release.config
followed by Web.Production.config .
XML transformation example
1. Create a Web Application package with the necessary configuration and transform files. For example, use
the following configuration files:
Configuration file

<?xml version="1.0" encoding="utf-8"?>

<configuration>
  <connectionStrings>
    <add name="DefaultConnection"
         connectionString="Data Source=(LocalDb)\\MSDB;DbFilename=aspcore-local.mdf;" />
  </connectionStrings>
  <appSettings>
    <add key="webpages:Version" value="3.0.0.0" />
    <add key="webpages:Enabled" value="false" />
  </appSettings>
  <system.web>
    <authentication mode="None" />
    <compilation targetFramework="4.5" debug="true" />
  </system.web>
</configuration>

Transform file

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="MyDB"
         connectionString="Data Source=ReleaseSQLServer;Initial Catalog=MyReleaseDB;Integrated Security=True"
         xdt:Transform="Insert" />
  </connectionStrings>
  <appSettings>
    <add xdt:Transform="Replace" xdt:Locator="Match(key)" key="webpages:Enabled" value="true" />
  </appSettings>
  <system.web>
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>

This example transform configuration file does three things:


It adds a new database connection string inside the ConnectionStrings element.
It modifies the value of webpages:Enabled inside the appSettings element.
It removes the debug attribute from the compilation element inside the System.Web element.

For more information, see Web.config Transformation Syntax for Web Project Deployment Using Visual
Studio

2. Create a release pipeline with a stage named Release .


3. Add an Azure App Service Deploy task and set (tick) the XML transformation option.

4. Save the release pipeline and start a new release.


5. Open the Web.config file to see the transformations from Web.Release.config .

<?xml version="1.0" encoding="utf-8"?>

<configuration>
  <connectionStrings>
    <add name="DefaultConnection"
         connectionString="Data Source=(LocalDb)\\MSDB;DbFilename=aspcore-local.mdf;" />
    <add name="MyDB"
         connectionString="Data Source=ReleaseSQLServer;Initial Catalog=MyReleaseDB;Integrated Security=True" />
  </connectionStrings>
  <appSettings>
    <add key="webpages:Version" value="3.0.0.0" />
    <add key="webpages:Enabled" value="true" />
  </appSettings>
  <system.web>
    <authentication mode="None" />
    <compilation targetFramework="4.5" />
  </system.web>
</configuration>

XML transformation notes


You can use this technique to create a default package and deploy it to multiple stages.
XML transformation takes effect only when the configuration file and transform file are in the same folder
within the specified package.
By default, MSBuild applies the transformation as it generates the web package if the <DependentUpon>
element is already present in the transform file in the *.csproj file. In such cases, the Azure App Ser vice
Deploy task will fail because there is no further transformation applied on the Web.config file. Therefore, it
is recommended that the <DependentUpon> element is removed from all the transform files to disable any
build-time configuration when using XML transformation.
Set the Build Action property for each of the transformation files ( Web.config ) to Content so that the files
are copied to the root folder.
...
<Content Include="Web.Debug.config">
  <DependentUpon>Web.config</DependentUpon>
</Content>
<Content Include="Web.Release.config">
  <DependentUpon>Web.config</DependentUpon>
</Content>
...

XML variable substitution


This feature enables you to modify configuration settings in configuration files ( *.config files) inside web
packages and XML parameters files ( parameters.xml ). In this way, the same package can be configured based on
the environment to which it will be deployed.
Variable substitution takes effect only on the applicationSettings , appSettings , connectionStrings , and
configSections elements of configuration files. If you are looking to substitute values outside of these elements,
you can use a parameters.xml file; however, you will need to use a third-party pipeline task to handle the variable
substitution.
XML variable substitution example
As an example, consider the task of changing the following values in Web.config :

<?xml version="1.0" encoding="utf-8"?>

<configuration>
  <configSection>
    <section name="entityFramework" />
  </configSection>
  <connectionStrings>
    <!-- Change connectionString in this line: -->
    <add name="DefaultConnection"
         connectionString="Data Source=(LocalDB)\LocalDB;FileName=Local.mdf" />
  </connectionStrings>
  <appSettings>
    <add key="ClientValidationEnabled" value="true" />
    <add key="UnobstructiveJavascriptEnabled" value="true" />
    <!-- Change AdminUserName in this line: -->
    <add key="AdminUserName" value="__AdminUserName__" />
    <!-- Change AdminPassword in this line: -->
    <add key="AdminPassword" value="__AdminPassword__" />
  </appSettings>
  <entityFramework>
    <defaultConnectionFactory type="System.Data.Entity.LocalDbConnectionFactory">
      <parameters></parameters>
    </defaultConnectionFactory>
    <providers>
      <!-- Change invariantName in this line: -->
      <provider invariantName="System.Data.SqlClient" type="System.Data.Entity.SqlServer" />
    </providers>
  </entityFramework>
</configuration>

1. Create a release pipeline with a stage named Release .


2. Add an Azure App Service Deploy task and set (tick) the XML variable substitution option.
3. Define the required values in release pipeline variables:

NAME                VALUE                                                        SECURE    SCOPE

DefaultConnection   Data Source=(ProdDB)\MSSQLProdDB;AttachFileName=Local.mdf   No        Release
AdminUserName       ProdAdminName                                                No        Release
AdminPassword       [your-password]                                              Yes       Release
invariantName       System.Data.SqlClientExtension                               No        Release

4. Save the release pipeline and start a new release.


5. Open the Web.config file to see the variable substitutions.

<?xml version="1.0" encoding="utf-8"?>

<configuration>
  <configSection>
    <section name="entityFramework" />
  </configSection>
  <connectionStrings>
    <add name="DefaultConnection"
         connectionString="Data Source=(ProdDB)\MSSQLProdDB;AttachFileName=Local.mdf" />
  </connectionStrings>
  <appSettings>
    <add key="ClientValidationEnabled" value="true" />
    <add key="UnobstructiveJavascriptEnabled" value="true" />
    <add key="AdminUserName" value="ProdAdminName" />
    <add key="AdminPassword" value="*password_masked_for_display*" />
  </appSettings>
  <entityFramework>
    <defaultConnectionFactory type="System.Data.Entity.LocalDbConnectionFactory">
      <parameters></parameters>
    </defaultConnectionFactory>
    <providers>
      <provider invariantName="System.Data.SqlClientExtension"
                type="System.Data.Entity.SqlServer" />
    </providers>
  </entityFramework>
</configuration>

XML variable substitution notes


By default, ASP.NET applications have a default parameterized connection attribute. These values are
overridden only in the parameters.xml file inside the web package.
Because substitution occurs before deployment, the user can override the values in Web.config using
parameters.xml (inside the web package) or a setparameters file.

JSON variable substitution


This feature substitutes values in JSON configuration files. It overrides the values in the specified JSON
configuration files (for example, appsettings.json ) with the values matching the names of release pipeline and stage
variables.
To substitute variables in specific JSON files, provide a newline-separated list of JSON files. File names must be
specified relative to the root folder. For example, if your package has this structure:

/WebPackage(.zip)
/---- content
/----- website
/---- appsettings.json
/---- web.config
/---- [other folders]
/--- archive.xml
/--- systeminfo.xml

and you want to substitute values in appsettings.json , enter the relative path from the root folder; for example
content/website/appsettings.json . Alternatively, use wildcard patterns to search for specific JSON files. For
example, **/appsettings.json returns the relative path and name of files named appsettings.json .
JSON variable substitution example
As an example, consider the task of overriding values in this JSON file:

{
  "Data": {
    "DefaultConnection": {
      "ConnectionString": "Data Source=(LocalDb)\\MSDB;AttachDbFilename=aspcore-local.mdf;"
    },
    "DebugMode": "enabled",
    "DBAccess": {
      "Administrators": ["Admin-1", "Admin-2"],
      "Users": ["Vendor-1", "vendor-3"]
    },
    "FeatureFlags": {
      "Preview": [
        {
          "newUI": "AllAccounts"
        },
        {
          "NewWelcomeMessage": "Newusers"
        }
      ]
    }
  }
}

The task is to override the values of ConnectionString , DebugMode , the first of the Users values, and
NewWelcomeMessage at the respective places within the JSON file hierarchy.
Classic
YAML
1. Create a release pipeline with a stage named Release .
2. Add an Azure App Service Deploy task and enter a newline-separated list of JSON files to substitute the
variable values in the JSON variable substitution textbox. File names must be relative to the root folder.
You can use wildcards to search for JSON files. For example: **/*.json means substitute values in all the
JSON files within the package.

3. Define the required substitution values in release pipeline or stage variables.

NAME                                            VALUE                                                 SECURE    SCOPE

Data.DebugMode                                  disabled                                              No        Release
Data.DefaultConnection.ConnectionString         Data Source=(prodDB)\MSDB;AttachDbFilename=prod.mdf;  No        Release
Data.DBAccess.Users.0                           Admin-3                                               Yes       Release
Data.FeatureFlags.Preview.1.NewWelcomeMessage   AllAccounts                                           No        Release

4. Save the release pipeline and start a new release.


5. After the transformation, the JSON will contain the following:
{
  "Data": {
    "DefaultConnection": {
      "ConnectionString": "Data Source=(prodDB)\MSDB;AttachDbFilename=prod.mdf;"
    },
    "DebugMode": "disabled",
    "DBAccess": {
      "Administrators": ["Admin-1", "Admin-2"],
      "Users": ["Admin-3", "vendor-3"]
    },
    "FeatureFlags": {
      "Preview": [
        {
          "newUI": "AllAccounts"
        },
        {
          "NewWelcomeMessage": "AllAccounts"
        }
      ]
    }
  }
}

JSON variable substitution notes


To substitute values in nested levels of the file, concatenate the names with a period ( . ) in hierarchical
order.
A JSON object may contain an array whose values can be referenced by their index. For example, to
substitute the first value in the Users array shown above, use the variable name DBAccess.Users.0 . To
update the value in NewWelcomeMessage , use the variable name
FeatureFlags.Preview.1.NewWelcomeMessage . However, the file transform task has the ability to transform
entire arrays in JSON files. You can also use DBAccess.Users = ["NewUser1","NewUser2","NewUser3"] .
Only String substitution is supported for JSON variable substitution.
Substitution is supported for only UTF-8 and UTF-16 LE encoded files.
If the file specification you enter does not match any file, the task will fail.
Variable name matching is case-sensitive.
Variable substitution is applied for only the JSON keys predefined in the object hierarchy. It does not create
new keys.
If a variable name includes periods ("."), the transformation will attempt to locate the item within the
hierarchy. For example, if the variable name is first.second.third , the transformation process will search
for:

"first" : {
"second": {
"third" : "value"
}
}

as well as "first.second.third" : "value" .


Logging commands

NOTE
Use UTF-8 formatting for logging commands.

Overview
Logging commands are how tasks and scripts communicate with the agent. They cover actions like creating new
variables, marking a step as failed, and uploading artifacts.
The general format for a logging command is:

##vso[area.action property1=value;property2=value;...]message

There are also a few formatting commands with a slightly different syntax:

##[command]message

To invoke a logging command, echo the command via standard output.


Bash
PowerShell

#!/bin/bash
echo "##vso[task.setvariable variable=testvar;]testvalue"

File paths should be given as absolute paths: rooted to a drive on Windows, or beginning with / on Linux and
macOS.

Formatting commands
These commands are messages to the log formatter in Azure Pipelines. They mark specific log lines as errors,
warnings, collapsible sections, and so on.
The formatting commands are:

##[group]Beginning of a group
##[warning]Warning message
##[error]Error message
##[debug]Debug text
##[command]Command-line being run
##[endgroup]

In the logs, those commands render as highlighted lines (warnings, errors, debug text, and commands each get their own styling), and the group/endgroup pair renders as a collapsible block.
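
As an illustrative sketch (the messages are placeholders), a script step can emit these formatting commands directly:

steps:
- bash: |
    echo "##[group]Restore dependencies"
    echo "##[command]npm install"
    echo "##[warning]Using a cached package feed"
    echo "##[endgroup]"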

Task commands
LogIssue: Log an error or warning
##vso[task.logissue]error/warning message

Usage
Log an error or warning message in the timeline record of the current task.
Properties
type = error or warning (Required)
sourcepath = source file location
linenumber = line number
columnnumber = column number
code = error or warning code

Example: Log an error


Bash
PowerShell

#!/bin/bash
echo "##vso[task.logissue type=error]Something went very wrong."
exit 1

TIP
exit 1 is optional, but is often a command you'll issue soon after an error is logged. If you select Control Options:
Continue on error , then the exit 1 will result in a partially successful build instead of a failed build.

Example: Log a warning about a specific place in a file


Bash
PowerShell

#!/bin/bash
echo "##vso[task.logissue type=warning;sourcepath=consoleapp/main.cs;linenumber=1;columnnumber=1;code=100;]Found something that could be a problem."

SetProgress: Show percentage completed


##vso[task.setprogress]current operation

Usage
Set progress and current operation for the current task.
Properties
value = percentage of completion
Example
Bash
PowerShell

echo "Begin a lengthy process..."


for i in {0..100..10}
do
sleep 1
echo "##vso[task.setprogress value=$i;]Sample Progress Indicator"
do
echo "Lengthy process is complete."

To see how it looks, save and queue the build, and then watch the build run. Observe that the progress indicator
changes when the task runs this script.
Complete: Finish timeline
##vso[task.complete]current operation

Usage
Finish the timeline record for the current task, and set the task result and current operation. When no result is
provided, the result is set to succeeded.
Properties
result =
Succeeded The task succeeded.
SucceededWithIssues The task ran into problems. The build will be completed as partially succeeded at
best.
Failed The build will be completed as failed. (If the Control Options: Continue on error option is
selected, the build will be completed as partially succeeded at best.)
Example

##vso[task.complete result=Succeeded;]DONE

LogDetail: Create or update a timeline record for a task


##vso[task.logdetail]current operation

Usage
Creates and updates timeline records. This is primarily used internally by Azure Pipelines to report about steps,
jobs, and stages. While customers can add entries to the timeline, they won't typically be shown in the UI.
The first time we see ##vso[task.logdetail] during a step, we create a "detail timeline" record for the step. We can
create and update nested timeline records based on id and parentid .
Task authors must remember which GUID they used for each timeline record. The logging system will keep track of
the GUID for each timeline record, so any new GUID will result in a new timeline record.
Properties
id = Timeline record GUID (Required)
parentid = Parent timeline record GUID
type = Record type (Required for first time, can't overwrite)
name = Record name (Required for first time, can't overwrite)
order = order of timeline record (Required for first time, can't overwrite)
starttime = Datetime
finishtime = Datetime
progress = percentage of completion
state = Unknown | Initialized | InProgress | Completed
result = Succeeded | SucceededWithIssues | Failed

Examples
Create new root timeline record:

##vso[task.logdetail id=new guid;name=project1;type=build;order=1]create new timeline record

Create new nested timeline record:

##vso[task.logdetail id=new guid;parentid=existing timeline record guid;name=project1;type=build;order=1]create new nested timeline record

Update an existing timeline record:

##vso[task.logdetail id=existing timeline record guid;progress=15;state=InProgress;]update timeline record

SetVariable: Initialize or modify the value of a variable


##vso[task.setvariable]value

Usage
Sets a variable in the variable service of the task context. The first task can set a variable, and the following tasks are able to
use the variable. The variable is exposed to the following tasks as an environment variable.
When issecret is set to true , the value of the variable will be saved as a secret and masked out of the log. Secret
variables are not passed into tasks as environment variables and must instead be passed as inputs.
Properties
variable = variable name (Required)
issecret = boolean (Optional, defaults to false)
isoutput = boolean (Optional, defaults to false)
isreadonly = boolean (Optional, defaults to false)

Examples
Bash
PowerShell
Set the variables:

echo "##vso[task.setvariable variable=sauce;]crushed tomatoes"


echo "##vso[task.setvariable variable=secretSauce;issecret=true]crushed tomatoes with garlic"
echo "##vso[task.setvariable variable=outputSauce;isoutput=true]canned goods"

Read the variables:

echo "Non-secrets automatically mapped in, sauce is $SAUCE"


echo "Secrets are not automatically mapped in, secretSauce is $SECRETSAUCE"
echo "You can use macro replacement to get secrets, and they'll be masked in the log: $(secretSauce)"
echo "Future jobs can also see $OUTPUTSAUCE"
Console output:

Non-secrets automatically mapped in, sauce is crushed tomatoes


Secrets are not automatically mapped in, secretSauce is
You can use macro replacement to get secrets, and they'll be masked in the log: ***
Future jobs can also see canned goods

SetEndpoint: Modify a service connection field


##vso[task.setendpoint]value

Usage
Set a service connection field with the given value. The updated value will be retained in the endpoint for the subsequent
tasks that execute within the same job.
Properties
id = service connection ID (Required)
field = field type, one of authParameter , dataParameter , or url (Required)
key = key (Required, unless field = url )

Examples

##vso[task.setendpoint id=000-0000-0000;field=authParameter;key=AccessToken]testvalue
##vso[task.setendpoint id=000-0000-0000;field=dataParameter;key=userVariable]testvalue
##vso[task.setendpoint id=000-0000-0000;field=url]https://example.com/service

AddAttachment: Attach a file to the build


##vso[task.addattachment]value

Usage
Upload a file and attach it to the current timeline record. These files are not available for download with the logs.
They can only be referred to by extensions using the type or name values.
Properties
type = attachment type (Required)
name = attachment name (Required)
Example

##vso[task.addattachment type=myattachmenttype;name=myattachmentname;]c:\myattachment.txt

UploadSummary: Add some Markdown content to the build summary


##vso[task.uploadsummary]local file path

Usage
Upload and attach summary Markdown to the current timeline record. This summary is added to the
build/release summary and is not available for download with the logs. The summary should be in UTF-8 or ASCII
format.
Examples

##vso[task.uploadsummary]c:\testsummary.md

It is a shorthand form of the command


##vso[task.addattachment type=Distributedtask.Core.Summary;name=testsummaryname;]c:\testsummary.md

UploadFile: Upload a file that can be downloaded with task logs


##vso[task.uploadfile]local file path

Usage
Upload a file of interest as additional log information for the current timeline record. The file is available
for download along with the task logs.
Example

##vso[task.uploadfile]c:\additionalfile.log

PrependPath: Prepend a path to the PATH environment variable


##vso[task.prependpath]local file path

Usage
Update the PATH environment variable by prepending to the PATH. The updated environment variable will be
reflected in subsequent tasks.
Example

##vso[task.prependpath]c:\my\directory\path

Artifact commands
Associate: Initialize an artifact
##vso[artifact.associate]artifact location

Usage
Create an artifact link. Artifact location must be a file container path, VC path or UNC share path.
Properties
artifactname = artifact name (Required)
type = container | filepath | versioncontrol | gitref | tfvclabel , artifact type (Required)
Examples

##vso[artifact.associate type=container;artifactname=MyServerDrop]#/1/build

##vso[artifact.associate type=filepath;artifactname=MyFileShareDrop]\\MyShare\MyDropLocation

##vso[artifact.associate type=versioncontrol;artifactname=MyTfvcPath]$/MyTeamProj/MyFolder

##vso[artifact.associate type=gitref;artifactname=MyTag]refs/tags/MyGitTag

##vso[artifact.associate type=tfvclabel;artifactname=MyTag]MyTfvcLabel

Upload: Upload an artifact


##vso[artifact.upload]local file path

Usage
Upload a local file into a file container folder, and optionally publish an artifact as artifactname .
Properties
containerfolder = folder that the file will upload to, folder will be created if needed. (Required)
artifactname = artifact name
Example

##vso[artifact.upload containerfolder=testresult;artifactname=uploadedresult;]c:\testresult.trx

Build commands
UploadLog: Upload a log
##vso[build.uploadlog]local file path

Usage
Upload a log of interest to the build's container in the " logs\tool " folder.
Example

##vso[build.uploadlog]c:\msbuild.log

UpdateBuildNumber: Override the automatically generated build number


##vso[build.updatebuildnumber]build number

Usage
You can automatically generate a build number from tokens you specify in the pipeline options. However, if you
want to use your own logic to set the build number, then you can use this logging command.
Example

##vso[build.updatebuildnumber]my-new-build-number

AddBuildTag: Add a tag to the build


##vso[build.addbuildtag]build tag

Usage
Add a tag for current build.
Example

##vso[build.addbuildtag]Tag_UnitTestPassed

Release commands
UpdateReleaseName: Rename current release
##vso[release.updatereleasename]release name

Usage
Update the release name for the running release.
NOTE
Supported in Azure DevOps and Azure DevOps Server beginning in version 2020.

Example

##vso[release.updatereleasename]my-new-release-name
Artifact policy checks

Artifact policies are enforced before deploying to critical environments such as production. These policies are
evaluated against all the deployable artifacts in the given pipeline run and block the deployment if the artifacts
don't comply. Adding a check that evaluates artifacts requires a custom policy to be configured. This guide describes
how custom policies can be created.

NOTE
Currently, the supported artifact types are container images and Kubernetes environments.

Prerequisites
Use Rego for defining policies; it is designed to be easy to read and write.
Familiarize yourself with the Rego query language. The basics will do.
To support structured document models like JSON, Rego extends Datalog. Rego queries are assertions on data
stored in OPA (Open Policy Agent). These queries can be used to define policies that enumerate instances of data that violate the
expected state of the system.

Creating custom policies


Below are some sample policies. Based on your requirements, you can build your own set of policies.
Check specific project/pipeline
This policy checks if the images are built by Azure Pipelines and Pipeline-foo. For this to work, the pipeline
definition should override the name field to something like:
AzureDevOps_$(BuildDefinitionName)_$(Date:yyyyMMdd)$(Rev:.r) . See more about naming pipeline runs
here.
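
For example, the run name override in the pipeline YAML, matching the format expected by this policy, would be:

name: AzureDevOps_$(BuildDefinitionName)_$(Date:yyyyMMdd)$(Rev:.r)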

allowedBuilder := "AzureDevOps_pipeline-foo"

checkBuilder[errors] {
    trace("Check if images are built by Azure Pipelines")
    resourceUri := values[index].build.resourceUri
    image := fetchImage(resourceUri)
    builder := values[index].build.build.provenance.builderVersion
    trace(sprintf("%s: builder", [builder]))
    not startswith(builder, allowedBuilder)  # compare against the allowedBuilder variable, not the literal string
    errors := sprintf("%s: image not built by Azure Pipeline [%s]", [image, builder])
}

fetchRegistry(uri) = reg {
    out := regex.find_n("//.*/", uri, 1)
    reg = trim(out[0], "/")
}

fetchImage(uri) = img {
    out := regex.find_n("/.*@", uri, 1)
    img := trim(out[0], "/@")
}
Check allowed registries
This policy checks if the images are from allowed registries only.

allowlist = {
    "gcr.io/myrepo",
    "raireg1.azurecr.io"
}

checkregistries[errors] {
    trace(sprintf("Allowed registries: %s", [concat(", ", allowlist)]))
    resourceUri := values[index].image.resourceUri
    registry := fetchRegistry(resourceUri)
    image := fetchImage(resourceUri)
    not allowlist[registry]
    errors := sprintf("%s: source registry not permitted", [image])
}

fetchRegistry(uri) = reg {
    out := regex.find_n("//.*/", uri, 1)
    reg = trim(out[0], "/")
}

fetchImage(uri) = img {
    out := regex.find_n("/.*@", uri, 1)
    img := trim(out[0], "/@")
}

Check forbidden ports


This policy checks for any forbidden ports exposed in the container image.

forbiddenPorts = {
    "80",
    "22"
}

checkExposedPorts[errors] {
    trace(sprintf("Checking for forbidden exposed ports: %s", [concat(", ", forbiddenPorts)]))
    layerInfos := values[index].image.image.layerInfo
    layerInfos[x].directive == "EXPOSE"
    resourceUri := values[index].image.resourceUri
    image := fetchImage(resourceUri)
    ports := layerInfos[x].arguments
    trace(sprintf("exposed ports: %s", [ports]))
    forbiddenPorts[ports]
    errors := sprintf("%s: image exposes forbidden port %s", [image, ports])
}

fetchRegistry(uri) = reg {
    out := regex.find_n("//.*/", uri, 1)
    reg = trim(out[0], "/")
}

fetchImage(uri) = img {
    out := regex.find_n("/.*@", uri, 1)
    img := trim(out[0], "/@")
}

Check prior deployments


This policy checks if the image has been pre-deployed to one/more of the environments before being deployed to
specific environment/resources with Check configured.
predeployedEnvironments = {
    "env/resource1",
    "env2/resource3"
}

checkDeployedEnvironments[errors] {
    trace(sprintf("Checking if the image has been pre-deployed to one of: [%s]", [concat(", ", predeployedEnvironments)]))
    deployments := values[index].deployment
    deployedAddress := deployments[i].deployment.address
    trace(sprintf("deployed to : %s", [deployedAddress]))
    resourceUri := deployments[i].resourceUri
    image := fetchImage(resourceUri)
    not predeployedEnvironments[deployedAddress]
    trace(sprintf("%s: fails pre-deployed environment condition. found %s", [image, deployedAddress]))
    errors := sprintf("image %s fails pre-deployed environment condition. found %s", [image, deployedAddress])
}

fetchRegistry(uri) = reg {
    out := regex.find_n("//.*/", uri, 1)
    reg = trim(out[0], "/")
}

fetchImage(uri) = img {
    out := regex.find_n("/.*@", uri, 1)
    img := trim(out[0], "/@")
}
Securing Azure Pipelines

Azure Pipelines poses unique security challenges. You can use a pipeline to run scripts or deploy code to production
environments. But you want to ensure your CI/CD pipelines don't become avenues to run malicious code. You also
want to ensure only code you intend to deploy is deployed. Security must be balanced with giving teams the
flexibility and power they need to run their own pipelines.

NOTE
Azure Pipelines is one among a collection of Azure DevOps services, all built on the same secure infrastructure in Azure. To
understand the main concepts around security for all of Azure DevOps services, see Azure DevOps Data Protection Overview
and Azure DevOps Security and Identity.

Traditionally, organizations implemented security through draconian lock-downs. Code, pipelines, and production
environments had severe restrictions on access and use. In small organizations with a small number of users and
projects, this stance was relatively easy to manage. However, that's not the case in larger organizations. Where
many users have contributor access to code, one must "assume breach". Assuming breach means behaving as if an
adversary has contributor access to some (if not all) of the repositories.
The goal in this case is to prevent that adversary from running malicious code in the pipeline. Malicious code may
steal secrets or corrupt production environments. Another goal is to prevent lateral exposure to other projects,
pipelines, and repositories from the compromised pipeline.
This series of topics outlines recommendations to help you put together a secure YAML-based CI/CD pipeline. It
also covers the places where you can make trade-offs between security and flexibility. The series also assumes
familiarity with Azure Pipelines, the core Azure DevOps security constructs, and Git.
Topics covered:
Incremental approach to improving security
Repository protection
Pipeline resources
Project structure
Security through templates
Variables and parameters
Shared infrastructure
Other security considerations
Plan how to secure your YAML pipelines
2/26/2020 • 2 minutes to read • Edit Online

We recommend that you use an incremental approach to secure your pipelines. Ideally, you would implement all of
the guidance that we offer. But don't be daunted by the number of recommendations. And don't hold off making
some improvements just because you can't make all the changes right now.

Security recommendations depend on each other


Security recommendations have complex interdependencies. Your security posture depends heavily on which
recommendations you choose to implement. The recommendations that you choose, in turn, depend on the
concerns of your DevOps and security teams. They also depend on the policies and practices of your organization.
You might choose to tighten security in one critical area and accept less security but more convenience in another
area. For example, if you use extends templates to require all builds to run in containers, then you might not need
a separate agent pool for each project.

Begin with a nearly empty template


A good place to start is by enforcing extension from a nearly empty template. This way, as you start to apply
security practices, you have a centralized place that already catches every pipeline.
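
A minimal sketch of such a template might look like the following; the file name, pool, and parameter name are illustrative assumptions rather than a prescribed layout:

# central-template.yml (illustrative)
# The template owns the job structure; extending pipelines can only contribute steps.
parameters:
- name: buildSteps
  type: stepList
  default: []

jobs:
- job: build
  pool:
    vmImage: ubuntu-latest
  steps:
  - ${{ each step in parameters.buildSteps }}:
    - ${{ step }}

As you harden your process, mandatory scanning or audit steps can be added to this one file, and every extending pipeline picks them up.
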
For more information, see Templates.

Next steps
After you plan your security approach, consider how your repositories provide protection.
Repository protection
5/14/2020 • 2 minutes to read • Edit Online

Source code, the pipeline's YAML file, and necessary scripts & tools are all stored in a version control repository.
Permissions and branch policies must be employed to ensure changes to the code and pipeline are safe. Also, you
should review default access control for repositories.
Because of Git's design, protection at a branch level will only carry you so far. Users with push access to a repo can
usually create new branches. If you use GitHub open-source projects, anyone with a GitHub account can fork your
repository and propose contributions back. Since pipelines are associated with a repository and not with specific
branches, you must assume the code and YAML files are untrusted.

Forks
If you build public repositories from GitHub, you must consider your stance on fork builds. Forks are especially
dangerous since they come from outside your organization. To protect your products from contributed code,
consider the following recommendations.

NOTE
The following recommendations apply primarily to building public repos from GitHub.

Don't provide secrets to fork builds


By default, the pipelines you create do not build forks. If you decide to build forks, secrets and protected resources
are not made available to the jobs in those pipelines by default. Don't turn off this latter protection.
NOTE
Even if you enable fork builds to access secrets, Azure Pipelines restricts the access token used for fork builds. It has more
limited access to open resources than a normal access token. You cannot disable this protection.

Consider manually triggering fork builds


You can turn off automatic fork builds and instead use pull request comments as a way to manually build these contributions. This setting gives you an opportunity to review the code before triggering a build.
Use Microsoft-hosted agents for fork builds
Don't run builds from forks on self-hosted agents. By doing so, you are effectively providing a path to external
organizations to run outside code on machines inside your corporate network. Use Microsoft-hosted agents or
some form of network isolation for your self-hosted agents.

User branches
Users in your organization with the right permissions can create new branches containing new or updated code.
That code can run through the same pipeline as your protected branches. Further, if the YAML file in the new
branch is changed, then the updated YAML will be used to run the pipeline. While this design allows for great
flexibility and self-service, not all changes are safe (whether made maliciously or not).
If your pipeline consumes source code or is defined in Azure Repos, you must fully understand the Azure Repos
permissions model. In particular, a user with Create Branch permission at the repository level can introduce code
to the repo even if that user lacks Contribute permission.

Next steps
Next, learn about the additional protection offered by checks on protected resources.
Pipeline resources
11/2/2020 • 3 minutes to read • Edit Online

Azure Pipelines offers security mechanisms beyond just protecting the YAML file and source code. When pipelines
run, access to resources goes through a system called checks. Checks can suspend or even fail a pipeline run in
order to keep resources safe. A pipeline can access two types of resources, protected and open.

Protected resources
Your pipelines often have access to secrets. For instance, to sign your build, you need a signing certificate. To
deploy to a production environment, you need a credential to that environment. In Azure Pipelines, all of the
following are considered protected resources:
agent pools
variable groups
secure files
service connections
environments
"Protected" means:
They can be made accessible to specific users and specific pipelines within the project. They cannot be accessed
by users and pipelines outside of a project.
You can run additional manual or automated checks every time a pipeline uses one of these resources.

Protecting repository resources


Repositories can optionally be protected. At the organization or project level, you may choose to limit the scope of
the Azure Pipelines access token to mentioned repositories. When you do this, Azure Pipelines will add two
additional protections:
1. The access token given to the agent for running jobs will only have access to repositories explicitly mentioned
in the resources section of the pipeline.
2. Repositories added to the pipeline will have to be authorized by someone with read access to the repository
the first time that pipeline uses the repository.
This setting is on by default for all organizations created after May 2020. Organizations created before that should
enable it in Organization settings .
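
With this setting enabled, the job's access token can only reach repositories that the pipeline declares. A minimal sketch (the project and repository names are placeholders):

resources:
  repositories:
  - repository: tools            # alias used by the checkout step
    type: git
    name: MyProject/ToolsRepo    # must be authorized the first time the pipeline uses it

steps:
- checkout: self
- checkout: tools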

Open resources
All the other resources in a project are considered open resources. Open resources include:
artifacts
pipelines
test plans
work items
You'll learn more about which pipelines can access what resources in the section on projects.
User permissions
The first line of defense for protected resources is user permissions. In general, ensure that you only give
permissions to users who require them. All protected resources have a similar security model. A member of the user
role for a resource can:
Remove approvers and checks configured on that resource
Grant access to other users or pipelines to use that resource

Pipeline permissions
When you use YAML pipelines, user permissions are not enough to secure your protected resources. You can
easily copy the name of a protected resource (for example, a service connection for your production environment)
and include that in a different pipeline. Pipeline permissions protect against such copying. For each of the
protected resources, ensure that you have disabled the option to grant access to "all pipelines". Instead, explicitly grant access to specific pipelines that you trust.
Checks
In YAML, a combination of user and pipeline permissions is not enough to fully secure your protected resources.
Pipeline permissions to resources are granted to the whole pipeline. Nothing prevents an adversary from creating
another branch in your repository, injecting malicious code, and using the same pipeline to access that resource.
Even without malicious intent, most pipelines need a second set of eyes to look over changes (especially to the
pipeline itself) before deploying to production. Checks allow you to pause the pipeline run until certain
conditions are met:
Manual approval check . Every run that uses a project protected resource is blocked for your manual
approval before proceeding. This gives you the opportunity to review the code and ensure that it is coming
from the right branch.
Protected branch check . If you have manual code review processes in place for some of your branches, you
can extend this protection to pipelines. Configure a protected branch check on each of your resources. This will
automatically stop your pipeline from running on top of any user branches.
Next steps
Next, consider how you group resources into a project structure.
Recommendations to securely structure projects in
your pipeline
2/26/2020 • 2 minutes to read • Edit Online

Beyond the scale of individual resources, you should also consider groups of resources. In Azure DevOps,
resources are grouped by team projects. It's important to understand what resources your pipeline can access
based on project settings and containment.
Every job in your pipeline receives an access token. This token has permissions to read open resources. In some
cases, pipelines might also update those resources. In other words, your user account might not have access to a
certain resource, but scripts and tasks that run in your pipeline might have access to that resource. The security
model in Azure DevOps also allows access to these resources from other projects in the organization. If you
choose to shut off pipeline access to some of these resources, then your decision applies to all pipelines in a
project. A specific pipeline can't be granted access to an open resource.

Separate projects
Given the nature of open resources, you should consider managing each product and team in a separate project.
This practice ensures that a pipeline from one product can't access open resources from another product. In this
way, you prevent lateral exposure. When multiple teams or products share a project, you can't granularly isolate
their resources from one another.
If your Azure DevOps organization was created before August 2019, then runs might be able to access open
resources in all of your organization's projects. Your organization administrator must review a key security setting
in Azure Pipelines that enables project isolation for pipelines. You can find this setting at Azure DevOps >
Organization settings > Pipelines > Settings . Or go directly to this Azure DevOps location:
https://dev.azure.com/ORG-NAME/_settings/pipelinessettings.
Next steps
After you've set up the right project structure, enhance runtime security by using templates.
Security through templates
11/2/2020 • 5 minutes to read • Edit Online

Checks on protected resources are the basic building block of security for Azure Pipelines. Checks work no matter
the structure - the stages and jobs - of your pipeline. If several pipelines in your team or organization have the
same structure, you can further simplify security using templates.
Azure Pipelines offers two kinds of templates: includes and extends . Included templates behave like #include in
C++: it's as if you paste the template's code right into the outer file, which references it. To continue the C++
metaphor, extends templates are more like inheritance: the template provides the outer structure of the pipeline
and a set of places where the template consumer can make targeted alterations.

Use extends templates


For the most secure pipelines, we recommend starting with extends templates. By providing the outer structure, a
template can prevent malicious code from getting into your pipeline. You can still use includes , both in the
template and in the final pipeline, to factor out common pieces of configuration. To use an extends template, your
pipeline might look like the below example.

# template.yml
parameters:
- name: usersteps
  type: stepList
  default: []
steps:
- ${{ each step in parameters.usersteps }}:
  - ${{ step }}

# azure-pipelines.yml
resources:
  repositories:
  - repository: templates
    type: git
    name: MyProject/MyTemplates
    ref: refs/tags/v1

extends:
  template: template.yml@templates
  parameters:
    usersteps:
    - script: echo This is my first step
    - script: echo This is my second step

When you set up extends templates, consider anchoring them to a particular Git branch or tag. That way, if
breaking changes need to be made, existing pipelines won't be affected. The examples above use this feature.

Security features enforced through YAML


There are several protections built into the YAML syntax, and an extends template can enforce the usage of any or
all of them.
Step targets
Restrict some steps to run in a container instead of the host. Without access to the agent's host, user steps can't
modify agent configuration or leave malicious code for later execution. Run code on the host first to make the
container more secure. For instance, we recommend limiting access to the network. Without open access to the
network, user steps will be unable to access packages from unauthorized sources, or upload code and secrets to a
network location.

resources:
  containers:
  - container: builder
    image: mysecurebuildcontainer:latest
steps:
- script: echo This step runs on the agent host, and it could use docker commands to tear down or limit the container's network
- script: echo This step runs inside the builder container
  target: builder

Agent logging command restrictions


Restrict what services the Azure Pipelines agent will provide to user steps. Steps request services using "logging
commands" (specially formatted strings printed to stdout). In restricted mode, most of the agent's services such as
uploading artifacts and attaching test results are unavailable.

# this task will fail because its `target` property instructs the agent not to allow publishing artifacts
- task: PublishBuildArtifacts@1
  inputs:
    artifactName: myartifacts
  target:
    commands: restricted

Conditional insertion of stages or jobs


Restrict stages and jobs to run under specific conditions. Conditions can help, for example, to ensure that you are
only building certain branches.

jobs:
- job: buildNormal
  steps:
  - script: echo Building the normal, non-sensitive part
- ${{ if eq(variables['Build.SourceBranch'], 'refs/heads/master') }}:
  - job: buildMasterOnly
    steps:
    - script: echo Building the restricted part that only builds for master branch

Require certain syntax with extends templates


Templates can iterate over and alter/disallow any YAML syntax. Iteration can force the use of particular YAML
syntax including the above features.
A template can rewrite user steps and only allow certain approved tasks to run. You can, for example, prevent
inline script execution.

WARNING
In the example below, only the literal step type "script" is prevented. For full lockdown of ad-hoc scripts, you would also need
to block "bash", "pwsh", "powershell", and the tasks which back these steps.

# template.yml
parameters:
- name: usersteps
  type: stepList
  default: []
steps:
- ${{ each step in parameters.usersteps }}:
  - ${{ each pair in step }}:
      ${{ if ne(pair.key, 'script') }}:
        ${{ pair.key }}: ${{ pair.value }}

# azure-pipelines.yml
extends:
  template: template.yml
  parameters:
    usersteps:
    - task: MyTask@1
    - script: echo This step will be stripped out and not run!
    - task: MyOtherTask@2

Type-safe parameters


Templates and their parameters are turned into constants before the pipeline runs. Template parameters provide
type safety to input parameters. For instance, it can restrict which pools can be used in a pipeline by offering an
enumeration of possible options rather than a freeform string.

# template.yml
parameters:
- name: userpool
  type: string
  default: Azure Pipelines
  values:
  - Azure Pipelines
  - private-pool-1
  - private-pool-2

pool: ${{ parameters.userpool }}

steps:
- script: # ... removed for clarity

# azure-pipelines.yml
extends:
  template: template.yml
  parameters:
    userpool: private-pool-1

Set required templates


To require that a specific template gets used, you can set the required template check for a resource or
environment. The required template check can be used when extending from a template.
You can check on the status of a check when viewing a pipeline job. When a pipeline doesn't extend from the required template, the check will fail and the run will stop; you will see that your check failed.
When the required template is used, you'll see that your check passed.

Here the template params.yml is required with an approval on the resource. To trigger the pipeline to fail,
comment out the reference to params.yml .

# params.yml
parameters:
- name: yesNo
  type: boolean
  default: false
- name: image
  displayName: Pool Image
  type: string
  default: ubuntu-latest
  values:
  - windows-latest
  - vs2017-win2016
  - ubuntu-latest
  - ubuntu-16.04
  - macOS-latest
  - macOS-10.14

steps:
- script: echo ${{ parameters.yesNo }}
- script: echo ${{ parameters.image }}

# azure-pipeline.yml

resources:
  containers:
  - container: my-container
    endpoint: my-service-connection
    image: mycontainerimages

extends:
  template: params.yml
  parameters:
    yesNo: true
    image: 'windows-latest'

Additional steps
A template can add steps without the pipeline author having to include them. These steps can be used to run
credential scanning or static code checks.

# template to insert a step before and after user steps in every job
parameters:
  jobs: []

jobs:
- ${{ each job in parameters.jobs }}: # Each job
  - ${{ each pair in job }}:          # Insert all properties other than "steps"
      ${{ if ne(pair.key, 'steps') }}:
        ${{ pair.key }}: ${{ pair.value }}
    steps:                            # Wrap the steps
    - task: CredScan@1                # Pre steps
    - ${{ job.steps }}                # User steps
    - task: PublishMyTelemetry@1      # Post steps
      condition: always()

Next steps
Next, learn about taking inputs safely through variables and parameters.
How to securely use variables and parameters in
your pipeline
2/26/2020 • 2 minutes to read • Edit Online

This article discusses how to securely use variables and parameters to gather input from pipeline users.

Variables
Variables can be a convenient way to collect information from the user up front. You can also use variables to pass
data from step to step within a pipeline.
But use variables with caution. Newly created variables, whether they're defined in YAML or written by a script, are
read-write by default. A downstream step can change the value of a variable in a way that you don't expect.
For instance, imagine your script reads:

msbuild.exe myproj.proj -property:Configuration=$(MyConfig)

A preceding step could set MyConfig to Debug & deltree /y c: . Although this example would only delete the
contents of your build agent, you can imagine how this setting could easily become far more dangerous.
You can make variables read-only. System variables like Build.SourcesDirectory , task output variables, and queue-
time variables are always read-only. Variables that are created in YAML or created at run time by a script can be
designated as read-only. When a script or task creates a new variable, it can pass the isReadonly=true flag in its
logging command to make the variable read-only.
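
For example, a script step might create a read-only variable with the setvariable logging command (the variable name and value here are illustrative):

steps:
- bash: echo "##vso[task.setvariable variable=myConfig;isReadonly=true]Release"
  displayName: Create a read-only variable
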
In YAML, you can specify read-only variables by using a specific key:

variables:
- name: myReadOnlyVar
  value: myValue
  readonly: true

Queue-time variables are exposed to the end user who manually runs a pipeline. As originally designed, this
concept was only for the UI. The underlying API would accept user overrides of any variable, even variables that
weren't designated as queue-time variables. This arrangement was confusing and insecure. So we've added a
setting that makes the API accept only variables that can be set at queue time. We recommend that you turn on
this setting.

Parameters
Unlike variables, pipeline parameters can't be changed by a pipeline while it's running. Parameters have data types
such as number and string , and they can be restricted to a subset of values. Restricting the parameters is useful
when a user-configurable part of the pipeline should take a value only from a constrained list. The setup ensures
that the pipeline won't take arbitrary data.
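
A short sketch of a restricted parameter (the names and values are illustrative):

parameters:
- name: deployEnvironment
  displayName: Target environment
  type: string
  default: staging
  values:       # a manual run can only pick from this list
  - staging
  - production

steps:
- script: echo Deploying to ${{ parameters.deployEnvironment }}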

Next steps
After you secure your inputs, you also need to secure your shared infrastructure.
Recommendations to secure shared infrastructure in
Azure Pipelines
2/26/2020 • 2 minutes to read • Edit Online

Protected resources in Azure Pipelines are an abstraction of real infrastructure. Follow these recommendations to
protect the underlying infrastructure.

Use Microsoft-hosted pools


Microsoft-hosted pools offer isolation and a clean VM for each run of a pipeline. If possible, use Microsoft-hosted
pools rather than self-hosted pools.

Separate agents for each project


An agent can be bound to only one pool. You might want to share agents across projects by sharing the pool with
multiple projects. In other words, multiple projects might run jobs on the same agent, one after another. Although
this practice saves infrastructure costs, it can allow lateral movement.
To eliminate that form of lateral movement and to prevent one project from "poisoning" an agent for another
project, keep separate agent pools with separate agents for each project.

Use low-privileged accounts to run agents


It's tempting but dangerous to run the agent under an identity that can directly access Azure DevOps resources.
This problematic setup is common in organizations that use Azure Active Directory (Azure AD). If you run the agent
under an identity that's backed by Azure AD, then it can directly access Azure DevOps APIs without using the job's
access token. You should instead run the agent as a nonprivileged local account such as Network Service.
Azure DevOps has a group that's misleadingly named Project Collection Service Accounts. By inheritance,
members of Project Collection Service Accounts are also members of Project Collection Administrators. Customers
sometimes run their build agents by using an identity that's backed by Azure AD and that's a member of Project
Collection Service Accounts. If adversaries run a pipeline on one of these build agents, then they can take over the
entire Azure DevOps organization.
We've also seen self-hosted agents run under highly privileged accounts. Often, these agents use privileged
accounts to access secrets or production environments. But if adversaries run a compromised pipeline on one of
these build agents, then they can access those secrets. Then the adversaries can move laterally through other
systems that are accessible through those accounts.
To keep your systems secure, use the lowest-privilege account to run self-hosted agents. For example, use your
machine account or a managed service identity. Let Azure Pipelines manage access to secrets and environments.

Minimize the scope of service connections


Service connections must be able to access only the resources that they require. For instance, an Azure service
connection should use Azure Resource Manager and service principals that are scoped to the resources that they
need to access. They shouldn't have broad contributor rights for the entire Azure subscription.
When you create a new Azure Resource Manager Service Connection, always select a resource group. Ensure that
your resource group contains only the VMs or resources that the build requires. Similarly, when you configure the
GitHub app, grant access only to the repositories that you want to build by using Azure Pipelines.
Next steps
Consider a few general recommendations for security.
Other security considerations
11/2/2020 • 2 minutes to read • Edit Online

There are a handful of other things you should consider when securing pipelines.

Relying on PATH
Relying on the agent's PATH setting is dangerous. It may not point where you think it does, since a previous script
or tool could have altered it. For security-critical scripts and binaries, always use a fully qualified path to the
program.
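
For example, a pipeline step can invoke the tool by its absolute path instead of relying on PATH (the path shown assumes a Linux agent and is illustrative):

steps:
- script: /usr/bin/openssl dgst -sha256 ./artifact.zip   # not just "openssl"
  displayName: Hash the artifact using a fully qualified tool path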

Logging of secrets
Azure Pipelines attempts to scrub secrets from logs wherever possible. This filtering is on a best-effort basis and
cannot catch every way that secrets can be leaked. Avoid echoing secrets to the console, using them in command
line parameters, or logging them to files.

Lock down containers


Containers have a few system-provided volume mounts mapping in the tasks, the workspace, and external
components required to communicate with the host agent. You can mark any or all of these volumes read-only.

resources:
  containers:
  - container: example
    image: ubuntu:18.04
    mountReadOnly:
      externals: true
      tasks: true
      tools: true
      work: false # the default; shown here for completeness

Most people should mark the first three read-only and leave work as read-write. If you know you won't write to
the work directory in a given job or step, go ahead and make work read-only as well. If you have tasks in your
pipeline which self-modify, you may need to leave tasks read-write.

Control available tasks


You can disable the ability to install and run tasks from the Marketplace. This will allow you greater control over
the code which executes in a pipeline. You may also disable all the in-the-box tasks (except Checkout, which is a
special action on the agent). We recommend that you don't disable in-the-box tasks under most circumstances.
Tasks directly installed with tfx are always available. With both of these features enabled, only those tasks are
available.

Use the Auditing service


A number of pipeline events are recorded in the Auditing service. Review the audit log periodically to ensure no
malicious changes have slipped past. Visit https://dev.azure.com/ORG-NAME/_settings/audit to get started.

Next steps
Return to the overview and make sure you've covered every topic.
Learn how to add continuous security validation to
your CI/CD pipeline
11/2/2020 • 9 minutes to read • Edit Online

Azure DevOps Services | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Are you planning Azure DevOps continuous integration and deployment pipelines? You probably have a few
questions, such as:
How do you ensure your application is safe?
How do you add continuous security validation to your CI/CD pipeline?
DevOps practices are allowing businesses to stay ahead of the competition by delivering new features faster than
ever before. As the frequency of production deployments increases, this business agility cannot come at the
expense of security. With continuous delivery, how do you ensure your applications are secure and stay secure?
How can you find and fix security issues early in the process? This begins with practices commonly referred to as
DevSecOps. DevSecOps incorporates the security team and their capabilities into your DevOps practices making
security a responsibility of everyone on the team. This article will walk you through how to help ensure your
application is secure by adding continuous security validation to your CI/CD pipeline.
Security needs to shift from an afterthought to being evaluated at every step of the process. Securing applications
is a continuous process that encompasses secure infrastructure, designing an architecture with layered security,
continuous security validation, and monitoring for attacks.
Continuous security validation should be added at each step from development through production to help ensure
the application is always secure. The goal of this approach is to switch the conversation with the security team from
approving each release to approving the CI/CD process and having the ability to monitor and audit the process at
any time. When building greenfield applications, the diagram below highlights the key validation points in the
CI/CD pipeline. Depending on your platform and where your application is at in its lifecycle, you may need to
consider implementing the tools gradually. Especially if your product is mature and you haven't previously run any
security validation against your site or application.
IDE / Pull Request
Validation in the CI/CD begins before the developer commits his or her code. Static code analysis tools in the IDE
provide the first line of defense to help ensure that security vulnerabilities are not introduced into the CI/CD
process. The process for committing code into a central repository should have controls to help prevent security
vulnerabilities from being introduced. Using Git source control in Azure DevOps with branch policies provides a
gated commit experience that can provide this validation. By enabling branch policies on the shared branch, a pull
request is required to initiate the merge process and ensure that all defined controls are being executed. The pull
request should require a code review, which is the one manual but important check for identifying new issues
being introduced into your code. Along with this manual check, commits should be linked to work items for
auditing why the code change was made and require a continuous integration (CI) build process to succeed before
the push can be completed.

CI (Continuous Integration)
The CI build should be executed as part of the pull request (PR-CI) process discussed above and once the merge is
complete. Typically, the primary difference between the two runs is that the PR-CI process doesn't need to do any of
the packaging/staging that is done in the CI build. These CI builds should run static code analysis tests to ensure
that the code is following all rules for both maintenance and security. Several tools can be used for this.
Visual Studio Code Analysis and the Roslyn Security Analyzers
Checkmarx - A Static Application Security Testing (SAST) tool
BinSkim - A binary static analysis tool that provides security and correctness results for Windows portable
executables
Other 3rd party tools
Many of the tools seamlessly integrate into the Azure Pipelines build process. Visit the VSTS Marketplace for more
information on the integration capabilities of these tools.
In addition to code quality being verified with the CI build, two other tedious or ignored validations are scanning
3rd party packages for vulnerabilities and OSS license usage. Often when we ask about 3rd party package
vulnerabilities and the licenses, the response is fear or uncertainty. Those organizations that are trying to manage
3rd party packages vulnerabilities and/or OSS licenses, explain that their process for doing so is tedious and
manual. Fortunately, there are a couple of tools by WhiteSource Software that can make this identification process
almost instantaneous. The tool runs through each build and reports all of the vulnerabilities and the licenses of the
3rd party packages. WhiteSource Bolt is a new option, which includes a 6-month license with your Visual Studio
Subscription. Bolt provides a report of these items but doesn't include the advanced management and alerting
capabilities that the full product offers. With new vulnerabilities being regularly discovered, your build reports
could change even though your code doesn't. Checkmarx includes a similar WhiteSource Bolt integration so there
could be some overlap between the two tools. See, Manage your open source usage and security as reported by
your CI/CD pipeline for more information about WhiteSource and the Azure Pipelines integration.

Application Deployment to DEV and TEST


Once your code quality is verified, and the application is deployed to a lower environment like development or QA,
the process should verify that there are not any security vulnerabilities in the running application. This can be
accomplished by executing automated penetration test against the running application to scan it for vulnerabilities.
There are different levels of tests that are categorized into passive tests and active tests. Passive tests scan the target
site as is but don't try to manipulate the requests to expose additional vulnerabilities. These can run fast and are
usually a good candidate for a CI process that you want to complete in a few minutes. An active scan, by contrast, simulates many of the techniques that hackers commonly use to attack websites. These tests are also referred to as dynamic or fuzz tests because they try a large number of different combinations to see how the site reacts and to verify that it doesn't reveal any information. These tests can run for much longer, and typically
you don't want to cut these off at any particular time. These are better executed nightly as part of a separate Azure
DevOps release.
One tool to consider for penetration testing is OWASP ZAP. OWASP is a worldwide not-for-profit organization
dedicated to helping improve the quality of software. ZAP is a free penetration testing tool for beginners to
professionals. ZAP includes an API and a weekly docker container image that can be integrated into your
deployment process. The detailed how-to steps are outside the scope of this article. Refer to the OWASP ZAP VSTS
extension repo for details on how to set up the integration. Here we're going to explain the benefits of including this
into your process.
The application CI/CD pipeline should run within a few minutes, so you don't want to include any long-running
processes. The baseline scan is designed to identify vulnerabilities within a couple of minutes making it a good
option for the application CI/CD pipeline. The Nightly OWASP ZAP can spider the website and run the full Active
Scan to evaluate the most combinations of possible vulnerabilities. OWASP ZAP can be installed on any machine in
your network, but we like to use the OWASP Zap/Weekly docker container within Azure Container Services. This
allows for the latest updates to the image and also makes it possible to spin up multiple instances of the image so
several applications within an enterprise can be scanned at the same time. The following figure outlines the steps
for both the Application CI/CD pipeline and the longer running Nightly OWASP ZAP pipeline.
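
As a rough sketch, the baseline scan could run as a single step in the application pipeline using the weekly ZAP container image; the target URL is a placeholder, and you would tune rules and failure thresholds to your needs:

steps:
- script: |
    docker pull owasp/zap2docker-weekly
    docker run -t owasp/zap2docker-weekly zap-baseline.py -t https://my-test-app.azurewebsites.net
  displayName: OWASP ZAP baseline scan
  continueOnError: true   # report findings without blocking the rest of the pipeline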

In addition to validating the application, the infrastructure should also be validated to check for any vulnerabilities.
When using the public cloud such as Azure, deploying the application and shared infrastructure is easy, so it is
important to validate that everything has been done securely. Azure includes many tools to help report and prevent
these vulnerabilities including Security Center and Azure Policies. Also, we have set up a scanner that can ensure
any public endpoints and ports have been added to an allow list or else it will raise an infrastructure issue. This is
run as part of the Network pipeline to provide immediate verification, but it also needs to be executed each night to
ensure that there aren't any resources publicly exposed that should not be.
Once the scans have completed, the Azure Pipelines release is updated with a report that includes the results and
bugs are created in the team's backlog. Resolved bugs are closed if the vulnerability has been fixed, and are moved back to in-progress if the vulnerability still exists.
The benefit of using this is that the vulnerabilities are created as bugs that provide actionable work that can be
tracked and measured. False positives can be suppressed using OWASP ZAP's context file, so only vulnerabilities
that are true vulnerabilities are surfaced.

Even with continuous security validation running against every change to help ensure new vulnerabilities are not
introduced, hackers are continuously changing their approaches, and new vulnerabilities are being discovered.
Good monitoring tools allow you to help detect, prevent, and remediate issues discovered while your application is
running in production. Azure provides a number of tools that provide detection, prevention, and alerting using
rules such as OWASP Top 10 / modSecurity and now even using machine learning to detect anomalies and unusual
behavior to help identify attackers.
Minimize security vulnerabilities by taking a holistic and layered approach to security including secure
infrastructure, application architecture, continuous validation, and monitoring. DevSecOps practices enable your
entire team to incorporate these security capabilities throughout the entire lifecycle of your application. Establishing
continuous security validation into your CI/CD pipeline can allow your application to stay secure while you are
improving the deployment frequency to meet needs of your business to stay ahead of the competition.
Reference information
BinSkim - A binary static analysis tool that provides security and correctness results for Windows portable
executables
Checkmarx - A Static Application Security Testing (SAST) tool
Manage your open source usage and security as reported by your CI/CD pipeline
OWASP
OWASP ZAP VSTS extension
WhiteSource Software
Visual Studio Code Analysis and the Roslyn Security Analyzers

Authors: Mike Douglas | Find the origin of this article and connect with the ALM | DevOps Rangers here

(c) 2017 Microsoft Corporation. All rights reserved. This document is provided "as-is." Information and views
expressed in this document, including URL and other Internet Web site references, may change without notice. You
bear the risk of using it.
This document does not provide you with any legal rights to any intellectual property in any Microsoft product. You
may copy and use this document for your internal, reference purposes.
Build and Deployment Automation Case Study for
World Wide Time Keeping: Higher Quality and Faster
Delivery in an Increasingly Agile World
11/2/2020 • 8 minutes to read • Edit Online

Author: Vaibhav Rajeev Thombre

October 2015
In an Agile world, delivering quick and frequent releases for large, complex systems with multiple components
becomes cumbersome and time-consuming if done manually, because each component has a high degree of
complexity and requires a lot of resource intervention and configuration to ensure that it works as expected.
That's why many teams opt for Build and Deployment Automation to ensure faster releases and reduce manual
intervention. However, automating multiple components of a system has its own challenges. Even though releases
can be automated in silos, if we need a one-click deployment for the entire system, we need to have an automation
framework that can automate an entire custom workflow.
Throughout this paper, we give insight on our project - World Wide Time Keeping - and how we implemented build
and deployment automation using Gated Check-ins, Code Analysis, and Fortify Integrations. We discuss build and
deployment automation by using PowerShell scripts and how we can create custom workflows and deploy all at
once using Release Management. We also talk about how these can help you cut down your engineering cycle time
and play an important role in hitting Production Ready at Code Complete (PRCC) goals. This lets you have a
Continuous Integration Continuous Delivery (CICD) Project and helps you go faster, without introducing issues.
This content is useful for SWE teams who are working in an Agile model for large, complex systems and want to cut
down their release cycles and deliver faster. We assume that readers have a fundamental knowledge of Engineering
Cycles and their phases (Develop/Test/Build/Deploy) and a fundamental knowledge of Agile practices and delivery
cycles.

Build automation
Many teams have multiple requirements for build, but the following practices can be applied to most teams. You
may adopt the whole approach or just implement the components that work out best for you.
Daily Builds: Have a build pipeline for scheduled builds. Aim for a daily schedule with builds released to the
internal SWE environment by the end of each day.
One-click builds for non-internal environments: For Integration/UAT environments, you automate the builds.
Instead of scheduling them on a per day basis, you can trigger them by queuing them in VSTF. (The reason for not
scheduling them is that a build is not required on Integration/UAT environments on a daily basis. Rather, they tend
to happen on an as-needed basis. This will depend on your team's needs and you can adopt the rhythm that works
best for your team.)
Gated Check-ins: Set up gated check-ins to ensure that only code that complies and passed unit testing gets
checked in. It ensures that code quality remains high and that there are no broken builds. Integrate Fortify and
Code Analysis to get further insight into code quality.
Code Analysis Integrations: To get insight into whether the code is of good quality or if any changes need to be
made, integrate Code Analysis into the build pipelines and set the threshold to low. The changes can be identified
and fixed early, which is required in the Agile world.
Fortify Integrations: Use Fortify for security-based checks of the build pipelines associated with your check-ins
and daily builds. This ensures that any security vulnerabilities are identified as soon as possible and can be fixed
quickly.

Deployment automation
Use deployment scripts
Deployments for internal SWE environment: Set up the internal SWE environments deployments with the
daily automated builds by integrating the build pipelines with the deployment scripts. All the checked-in changes
will then be deployed at the end of each day, without any manual intervention.
This way, the latest build is present in the SWE environment in case you would like to demo the product to
stakeholders.
Deployments for Integration/UAT environments: For Integration/UAT environments, you can integrate the
scripts with the build pipelines without scheduling them and trigger them on an as-needed basis. Because you have
set up one-click builds for them, when the build completes successfully, the scripts get executed at the end and the
product is deployed. Therefore, you do not have to manually deploy the system. Instead it's deployed automatically
by simply queuing a build.
The release pipeline
In theory, a release pipeline is a process that dictates how you deliver software to your end users. In practice, a
release pipeline is an implementation of that pattern. The pipeline begins with code in version control and ends
with code deployed to the production environment. In between, a lot can happen. Code is compiled, environments
are configured, many types of tests run, and finally, the code is considered "done". By done, we mean that the code
is in production. Anything you successfully put through the release pipeline should be something you would give to
your customers. Here is a diagram based on the one you will see on Jez Humble's Continuous Delivery website. It is
an example of what can occur as code moves through a release pipeline.
Use Release Management
If your team is working on Azure-based components - web apps, services, web jobs, and so on - you can use
Release Management for automating deployments.
Release Management consists of various pre-created components which you can configure and use either
independently or in conjunction with other components through workflows.
You might face pain points when you manually deploy an entire system. For a large complex system with multiple
components, like service, web jobs, and dacpac scripts, here are example pain points:
A large amount of time goes into configuration of each component
Deployment needs to be done separately for each, adding to the overall deployment time.
Multiple resources have to be engaged to ensure that the deployments happen as expected.
How Release Management (RM) solves them:
RM allows you to create custom workflows which sequence the deployment to ensure that the components
get deployed as soon as their dependencies have been deployed.
Configurations can be stored in RM to ensure that configuration per deployment is not required.
It automates the entire workflow which ensures manual intervention is not required and resources can be
utilized for functional tasks.
Key takeaways
Set up Automated Builds scheduled for the rhythm that works best for your product and Implement Gated
Check-ins.
Integrate Code Analysis and Fortify into the build setup to improve the code quality and security of the
application
Set up daily automated deployments to the internal SWE environments and set up one click deployments to
environments like UAT and Prod.
Use Release Management to set up custom workflows for your releases and triggering them with a single
click.
To use Release Management, you need to set up the following components:
RM Server: The central repository for configuration and release information.
Build Agent : This is a machine (physical or VM) that you set up at your end on which you will run all your
builds and deployments.
Environments : This signifies the environment which will be used in conjunction with your machine that you
have set up.
Release Paths : You need to create Release Paths for the multiple releases that you want to automate for
multiple environments - internal SWE envs, INT, UAT, and so on.
Build Components: The build component is used to configure the build and change any environment-specific
configurations. It picks up the build from the remote machine in which VSTF auto-generates the builds as
per the build pipeline and runs the configuration changes that are defined within it.
Release Templates : Release template defines the workflow that you have set up as per your specific needs
of deployment. It also defines the sequence in which the RM components are to get executed. You need to
integrate your build pipeline from Team Foundation Server (TFS) with the release template to enable
continuous delivery. You can either pick up the latest build or select the build.
Conclusion
In this paper, we discussed the various engineering practices we can use for enabling faster product delivery with
higher quality. We discussed:
Build Automation : Builds can be set up for triggering on a schedule or on an ad-hoc basis just by a single
click. It can vary based on the rhythm that works best for your team. Gated check-ins should be set up on top
of the build pipelines to accept only the check-ins which meet the criteria bar.
Code Analysis and For tify Integration : The build pipelines should be integrated with Code Analysis and
Fortify to trigger on a schedule and also with the Gated Check-ins. Code Analysis will improve the code
quality and Fortify will point out the security-based gaps in the application, if any.
Deployment Automation : You can integrate PowerShell scripts with your build pipelines to achieve
deployment automation. You can also use Release Management to set up custom workflows and integrate it
with your TFS to pick up the latest builds or even select builds.
We also discussed the benefits that we found by taking up these practices:
Minimal wastage of time due to automations of build, deploy phases
Higher code quality due to Gated check-ins (with integrated Test Automation), Code Analysis, and Fortify
Integration
Faster delivery
Will enable you to hit Production Ready at Code Complete (PRCC)
Will enable you to hit Continuous Integration & Continuous Delivery targets (CI/CD)

References
[1] Visual Studio team, Automate deployments with Release Management, MSDN Article
[2] Visual Studio team, Build and Deploy Continuously, MSDN Article
[3] Visual Studio team, Building a Release Pipeline with Team Foundation Server 2012, MSDN Article
(c) 2015 Microsoft Corporation. All rights reserved. This document is provided "as-is." Information and views
expressed in this document, including URL and other Internet Web site references, may change without notice. You
bear the risk of using it.
This document does not provide you with any legal rights to any intellectual property in any Microsoft product. You
may copy and use this document for your internal, reference purposes.
Explore how to progressively expose your Azure
DevOps extension releases in production to validate,
before impacting all users
11/2/2020 • 7 minutes to read • Edit Online

Azure DevOps Services | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2013
In today's fast-paced, feature-driven markets, it's important to continuously deliver value and receive feedback on
features quickly and continuously. Partnering with end users to get early versions of features vetted out is valuable.
Are you planning to build and deploy Azure DevOps extensions to production? You probably have a few questions,
such as:
How do you embrace DevOps to deliver changes and value faster?
How do you mitigate the risk of deploying to production?
How do you automate the build and deployment?
This topic aims to answer these questions and share learnings using rings with Azure DevOps extensions. For an
insight into the Microsoft guidelines, read Configuring your release pipelines for safe deployments.

One or more rings to rule your deployments


Deployment rings were first discussed in Jez Humble's book. They support the production-first DevOps mindset
and limit impact on end users, while gradually deploying and validating changes in production. Impact (also called
blast radius ), is evaluated through observation, testing, analysis of telemetry, and user feedback.

Considerations
Before you convert your deployment infrastructure to a ringed deployment model, it's important to consider:
Who are your primary types of users? For example, early adopters and users.
What's your application topology?
What's the value of embracing ringed deployment model?
What's the cost to convert your current infrastructure to a ringed deployment model?

User types
In the shown example, users fall into three general buckets in production:
Canaries who voluntarily test bleeding edge features as soon as they are available.
Early adopters who voluntarily preview releases, considered more refined than the canary bits.
Users who consume the products, after passing through canaries and early adopters.
NOTE
It's important to weigh out which users in your value chain are best suited for each of these buckets. Communicating the
opportunity to provide feedback, as well as the risk levels at each tier, is critical to setting expectations and ensuring success.

Application topology
Next you need to map the topology of your application to the ringed deployment model. Limit the impact of
change on end users and to continuously deliver value. Value includes both the value delivered to the end user and
the value (return-on-investment) of converting your existing infrastructure.

NOTE
The ringed deployment model is not a silver bullet! Start small, prototype, and continuously compare impact, value, and cost.

At the application level, the composition of Azure DevOps extensions is innocuous, easy to digest, scale, and deploy
independently. Each extension:
Has one or more web and script files
Interfaces with Core client
Interfaces with REST client and REST APIs
Persists state in cache or resilient storage

At the infrastructure level, the extensions are published to the Visual Studio marketplace. Once installed in
an organization, they are hosted by the Azure DevOps service portal, with state persisted to Azure storage and/or the
extension data storage.
The extension topology is perfectly suited for the ring deployment model and to publish the extension to each
deployment ring:
A private development version for the canary ring
A private preview version for the early adopter ring
A public production version for the Users ring

TIP
By publishing your extension as private, you're effectively limiting and controlling its exposure to the users you explicitly invite.
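
As a sketch, a Canaries-stage deployment might publish the extension privately with the tfx CLI and share it only with invited organizations; the organization name and the secret variable are assumptions:

steps:
- script: >
    tfx extension publish
    --manifest-globs vss-extension.json
    --share-with fabrikam-canaries
    --token "$MARKETPLACE_PAT"
  env:
    MARKETPLACE_PAT: $(MarketplacePAT)   # marketplace PAT stored as a secret pipeline variable
  displayName: Publish private extension to the Canaries ring

Because the manifest doesn't mark the extension as public, it stays private and is visible only to the organizations it's shared with.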

Moving changes through deployment rings


Let's observe how a change triggers and moves through the ring-based deployment process, using the Azure
DevOps Developer Tools Build Tasks extension.
Azure DevOps Developer Tools Build Tasks extension is the secret sauce, used to package and publish Azure
DevOps extensions to the Visual Studio Marketplace.

1. A developer from the Countdown Widget extension project commits a change to the GitHub repository.
2. The commit triggers a continuous integration build.
3. The new build triggers a continuous deployment trigger, which automatically starts the Canaries
environment deployment.
4. The Canaries deployment publishes a private extension to the marketplace and shares it with predefined
organizations. Only the Canaries are impacted by the change.
5. The Canaries deployment triggers the Early Adopter environment deployment. A pre-deployment
approval gate requires any one of the authorized users to approve the release.
6. The Early Adopter deployment publishes a private extension to the marketplace and shares it with
predefined organizations. Both the Canaries and Early Adopter are impacted by the change.
7. The Early Adopter deployment triggers the Users environment deployment. A stricter pre-deployment
approval gate requires all of the authorized users to approve the release.

8. The Users deployment publishes a public extension to the marketplace. At this stage, everyone who has
installed the extension in their organization is affected by the change.
9. It's key to realize that the impact ("blast radius") increases as your change moves through the rings. Exposing
the change to the Canaries and the Early Adopters , is giving two opportunities to validate the change and
hotfix critical bugs before a release to production.

NOTE
Review CI/CD Pipelines and Approvals for detailed documentation of pipelines and the approval features for releases.

Dealing with monitoring and noise


You need effective monitoring and actionable alerts to detect and mitigate issues. Determine what type of data is
important, for example infrastructure issues, violations, and feature usage. Focus on actionable alerts to avoid users
ignoring them and missing high priority issues.

TIP
Start with high-level views of your data, visual dashboards that you can watch from afar, and drill down as needed. Perform
regular housekeeping of your views and remove all noise. A visual dashboard tells a far better story than hundreds of
notification emails, often filtered and forgotten by email rules.

Using the Team Project Health and out-of-the-box extensions, you can build an overview of your pipelines, lead and
cycle times, and other information. In the sample dashboard, it's evident that there are 34 successful builds, 21
successful releases, 1 failed release, and 2 releases in progress.
What's the value?
Using a ring-deployment strategy you can gather feedback to validate your hypothesis. You can decommission old
releases and distribute new releases without the risk of affecting all users.
Here's a summary of how the ALM | DevOps Ranger engineering process evolved with ring deployment models.

BEFORE USING RINGS       IMPACTED AREA            WITH RINGS
Manual and error prone   Build                    Automated and consistent
Manual and error prone   Release                  Automated and consistent
Hours                    Time to build (TTB)      Seconds
Days                     Time to release (TTR)    Minutes
Call from user           Issue detection          Proactive
Days to weeks            Issue resolution         Minutes to days

Key takeaways:
Consistent and reliable automation
Reduced response times
Canaries experience the pain, not the users

Is there a dependency on feature flags?


No, rings and feature flags are symbiotic. Feature flags give you fine-grained control of the features included in your
change. For example, if you're not fully confident about a feature, you can use a feature flag to hide it in one or all of
the deployment rings. You could, for instance, enable all features in the Canaries ring and fine-tune a subset for the
Early Adopters and production users, as shown. See Feature Flags or Rings for more information.

LaunchDarkly provides an extension for Azure DevOps Services & Team Foundation Server. It integrates with Azure
Pipelines and gives you "run-time" control of features deployed with your ring deployment process.

Conclusion
Now that you've covered the concepts of rings, you should be confident to explore ways to improve your CI/CD
pipelines. While the use of rings adds a level of complexity, having a game plan to address feature management
and rapid customer feedback is invaluable.

Q&A
How do you know that a change can be deployed to the next ring?
Your goal should be to have a consistent checklist for the users approving a release. See aka.ms/vsarDoD for an
example definition of done checklist.
How long do you wait before you push a change to the next ring?
There is no fixed duration or "cool off" period. It depends on how long it takes for you to complete all release
validations successfully.
How do you manage a hotfix?
The ring deployment model allows you to process a hotfix like any other change. The sooner an issue is caught, the
sooner a hotfix can be deployed, with no impact to downstream rings.
How do you deal with variables that span (shared) release environments?
Refer to Default and custom release variables.
How can you manage secrets used by the pipeline?
Refer to Azure Key Vault to safeguard cryptographic keys and other secrets used by your pipelines.

Reference information
CI/CD pipeline examples
Configuring your release pipelines for safe deployments
DevOps @ Microsoft

NOTE

Authors: Josh Garverick, Willy Schaub | Find the origin of this article and connect with the ALM | DevOps Rangers
here

(c) 2017 Microsoft Corporation. All rights reserved. This document is provided "as-is." Information and views
expressed in this document, including URL and other Internet Web site references, may change without notice. You
bear the risk of using it.
This document does not provide you with any legal rights to any intellectual property in any Microsoft product. You
may copy and use this document for your internal, reference purposes.
Explore how to progressively expose your features in
production for some or all users
11/2/2020 • 6 minutes to read

Azure DevOps Services | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2013
In today's fast-paced, feature-driven markets, it's important to continuously deliver value and receive quick feedback
on features. Partnering with end users to vet early versions of features is valuable.
Are you planning to continuously integrate features into your application while they're under development? You
probably have a few questions, such as:
How can you toggle features to hide, disable, or enable features at run-time?
How can you revert a change deployed to production without rolling back your release?
How can you present users with variants of a feature, to determine which one performs better?
This topic aims to answer these questions and share an implementation of feature flags (FF) and A|B testing used
with Azure DevOps extensions.

Considerations
Before you introduce feature flags to your engineering process, it's important to consider:
Which users are you planning to target? For example, do you want to target specific or all users?
Would you like users to decide which features they want to use?
What's the value of embracing feature flags as part of your engineering process?
What's the cost to implement feature flags in your engineering process?
Before you flip your first feature flag in production, take the time to read:
"A Rough Patch", by Brian Harry
"Feature Flags with Branching", by LaunchDarkly

What are Feature Flags (FF)?


NOTE
A feature flag is also known as a feature toggle, feature switch, feature flipper, or conditional feature. They were popularized
by Martin Fowler.

Feature flags support a customer-first DevOps mindset, to enable (expose) and disable (hide) features in a solution,
even before they are complete and ready for release.
View a feature flag as an ON | OFF switch for a specific feature. As shown, you can deploy a solution to production
that includes both an email and a print feature. If the feature flag is set (ON), you'll email, else you'll print.
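To make the switch concrete, here's a minimal sketch in TypeScript of what that run-time check could look like in client-side extension code. The getFlag, sendEmail, and printReport helpers and the flag name are hypothetical placeholders for illustration, not part of any specific SDK; in practice you'd replace getFlag with a call to your feature flag service.

// Hypothetical helper standing in for a feature flag service lookup
// (for example, a client SDK call); falls back to a default value.
async function getFlag(flagName: string, defaultValue: boolean): Promise<boolean> {
  // Query the feature flag service here; return the default if it is unreachable.
  return defaultValue;
}

// Stub implementations of the two features shipped in the same build.
async function sendEmail(report: string): Promise<void> {
  console.log(`Emailing report: ${report}`);
}

function printReport(report: string): void {
  console.log(`Printing report: ${report}`);
}

async function deliverReport(report: string): Promise<void> {
  const emailEnabled = await getFlag("send-report-by-email", false);
  if (emailEnabled) {
    await sendEmail(report); // flag ON: the email feature is exposed
  } else {
    printReport(report);     // flag OFF: the solution falls back to printing
  }
}

deliverReport("Countdown summary").catch(console.error);

Flipping the flag changes the behavior at run time, without redeploying the extension.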
When you combine a feature flag with an experiment, led by a hypothesis, you introduce A|B testing. For example,
you could run an experiment to determine if the email (A) or the print (B) feature will result in a higher user
satisfaction.

NOTE
A|B testing is also known as Split Testing. It's based on a hypothesis that's defined as:
For {user} who {action} the {solution} is a {how} that {value} unlike {competition} we {do better}

As shown, the email feature (option A) is more popular with your users and wins.

Evaluating Feature Flag solutions


As outlined in how to implement feature flags and A|B testing, the ALM | DevOps Rangers evaluated a number of FF
frameworks and solutions.
They chose the LaunchDarkly solution for several reasons:
It's a "software as a service" (SaaS) solution
No custom solution to maintain
No upgrades - you're always using the latest and greatest
No servers - LaunchDarkly takes care of the machines that LaunchDarkly runs on
Always on and optimized for the Internet
It's integrated with Azure DevOps Services and Team Foundation Server (TFS)
It's simple and cost-effective for an open-source project

Common scenarios
You have a CI/CD pipeline for every Azure DevOps extension you're hosting on the marketplace. You are using a
ring deployment model and manual release approval checkpoints. The checkpoints are manual and time-consuming,
but necessary to minimize the chance of breaking the early-adopter and production user environments and forcing
an expensive rollback. You're looking for an engineering process that enables you to:
Continuously deploy to production
Never roll back in production
Fine-tune the user experience in production
You have probably guessed it - feature flags!
Enable or disable a feature for everyone
You would like to include hidden features in your release and enable them for all users in production. For example,
you want to be able to collect verbose logging data for troubleshooting. Using a feature flag, you can enable and
disable verbose logging as needed.

Enable or disable a feature for selected users


With this scenario, you can target specific users or groups of users. For example, you could enable the verbose
logging feature for a specific user experiencing a problem or enable a preview feature for early adopters.

Enable | disable a feature as selected by user


Lastly, you'd like to give the users a list of preview features and allow each user to decide which features to enable
and when. This scenario is key for feature validation, A|B testing, and giving the user flexibility and choice.

Manage features with feature flags in your engineering process


To protect the flags from malicious users, you need to generate and pass the hash of the user key to the
LaunchDarkly API calls. As Azure DevOps extensions can only use client-side code, the ALM | DevOps Rangers
chose Azure Functions to help generate the hash, as shown. Read how we checked and fixed the 503 error and
Performance issue in our Azure Function for details.
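
The sketch below shows one way such a hash-generating function could look, using the Azure Functions Node.js programming model in TypeScript. The application setting name (LAUNCHDARKLY_SDK_KEY), the request shape, and the HMAC SHA-256 scheme are illustrative assumptions based on LaunchDarkly's documented secure mode, not the Rangers' actual implementation; verify the exact hashing contract against the service's documentation.

import { AzureFunction, Context, HttpRequest } from "@azure/functions";
import { createHmac } from "crypto";

// HTTP-triggered function: computes the secure-mode hash server-side so
// the SDK key is never exposed to the client-side extension code.
const secureModeHash: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
  const userKey = req.query.userKey || (req.body && req.body.userKey);
  if (!userKey) {
    context.res = { status: 400, body: "Pass a userKey in the query string or request body." };
    return;
  }

  // Assumed application setting holding the server-side SDK key.
  const sdkKey = process.env["LAUNCHDARKLY_SDK_KEY"];
  if (!sdkKey) {
    context.res = { status: 500, body: "The SDK key is not configured." };
    return;
  }

  // Secure mode: HMAC SHA-256 of the user key, keyed with the SDK key.
  const hash = createHmac("sha256", sdkKey).update(userKey).digest("hex");
  context.res = { status: 200, body: { hash } };
};

export default secureModeHash;

The client-side extension then passes this hash along with the user key to the feature flag API calls, as described above.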

Administration of feature flags is straightforward.


1. You have a different environment for each extension, allowing you to have different feature flag values for Early
Adopters and Users.
2. Optionally target specific users
3. Optionally target users that match custom rules
4. You have a default for each feature flag
You have granular control of each feature flag.

What's the value?


You're able to:
Decouple deployment of releases and exposure of features
Make changes (enable|disable features) without redeployment
Fine-tune a user's features and experience
Enable a user to optionally select preview features
Hide an incomplete or faulty feature

What's the cost?


Aside from the licensing and maintenance cost of a feature flag service, you're adding technical debt to your code:
With a true or false feature flag, you're doubling your code and test paths
With a multi-value feature flag, you'll add even more code and test paths
You'll need to identify and remove stale feature flags
Understand and test the implications of flipping a feature flag

TIP
To minimize the costs associated with the use of feature flags, keep feature flags short-lived and prevent multiple feature flags
from interfering with each other by affecting the same functionality.

Conclusion
Now that you've covered the concepts and considerations of feature flags, you should be confident to explore ways
to improve your CI/CD pipelines. While feature flags come at a cost, having a game plan to manage exposed
features at run-time is invaluable.

Q&A
How does the Azure DevOps team use feature flags?
Buck’s feature flags blog post and the presentation/article are great sources to get an understanding of the custom-
built feature flag system used with Team Foundation Server (TFS) and Azure DevOps Services.
How do the ALM | DevOps Rangers use feature flags?
The Rangers use the LaunchDarkly SaaS solution. You can find their learnings in this blog series.
When should you remove feature flags?
As Buck states, “Many feature flags go away and the teams themselves take care of that." The feature teams decide
when to go delete the feature flags. It can get unwieldy after a while, so there’s some natural motivation to go clean
it up.
Is there a dependency on deployment rings?
No, rings and feature flags are symbiotic. Read Feature Flags or Rings for details.

Reference information
CI/CD pipeline examples
DevOps @ Microsoft
How to implement feature flags and A|B testing

Authors: Willy Schaub | Find the origin of this article and connect with the ALM | DevOps Rangers here

(c) 2017 Microsoft Corporation. All rights reserved. This document is provided "as-is." Information and views
expressed in this document, including URL and other Internet Web site references, may change without notice. You
bear the risk of using it.
This document does not provide you with any legal rights to any intellectual property in any Microsoft product. You
may copy and use this document for your internal, reference purposes.
Get started with Azure DevOps CLI
11/2/2020 • 2 minutes to read

Azure DevOps Services | Azure DevOps Server 2020


With the Azure DevOps extension for the Azure Command Line Interface (CLI), you can manage many Azure
DevOps Services from the command line. CLI commands enable you to streamline your tasks with a faster and
more flexible interactive canvas, bypassing user interface workflows.

NOTE
The Azure DevOps Command Line Interface (CLI) is available for Azure DevOps Server 2020 and Azure DevOps
Services.

To start using the Azure DevOps extension for Azure CLI, perform the following steps:
1. Install Azure CLI: Follow the instructions provided in Install the Azure CLI to set up your Azure CLI
environment. At a minimum, your Azure CLI version must be 2.10.1. You can use az --version to
validate.
2. Add the Azure DevOps extension:

az extension add --name azure-devops

You can use az extension list or az extension show --name azure-devops to confirm the installation.
3. Sign in: Run az login to sign in. Only interactive sign-in, or sign-in with a user name and
password, is supported with az login . To sign in using a Personal Access Token (PAT), see Sign in via Azure
DevOps Personal Access Token (PAT). When connecting to an on-premises server instance, signing in
with a PAT may be required to run select commands.
4. Configure defaults: We recommend you set the default configuration for your organization and
project. Otherwise, you can set these within the individual commands themselves.

az devops configure --defaults organization=https://dev.azure.com/contoso project=ContosoWebApp

If you're connecting to an Azure DevOps Server, specify the URL for your server instance. For
example:

az devops configure --defaults organization=https://ServerName/CollectionName project=ProjectName

Command usage
Adding the Azure DevOps Extension adds devops, pipelines, artifacts, boards, and repos groups. For
usage and help content for any command, enter the -h parameter, for example:
$ az devops -h

Group
az devops : Manage Azure DevOps organization level operations.
Related Groups
az pipelines: Manage Azure Pipelines
az boards: Manage Azure Boards
az repos: Manage Azure Repos
az artifacts: Manage Azure Artifacts.

Subgroups:
admin : Manage administration operations.
extension : Manage extensions.
project : Manage team projects.
security : Manage security related operations.
service-endpoint : Manage service endpoints/service connections.
team : Manage teams.
user : Manage users.
wiki : Manage wikis.

Commands:
configure : Configure the Azure DevOps CLI or view your configuration.
feedback : Displays information on how to provide feedback to the Azure DevOps CLI team.
invoke : This command will invoke request for any DevOps area and resource. Please use
only json output as the response of this command is not fixed. Helpful docs -
https://docs.microsoft.com/rest/api/azure/devops/.
login : Set the credential (PAT) to use for a particular organization.
logout : Clear the credential for all or a particular organization.

Open items in browser


You can use the --open switch to open any artifact in the Azure DevOps portal in your default browser.
For example:

az pipelines build show --id 1 --open

This command shows the details of the build with ID 1 on the command line and also opens it in the default
browser.

Related articles
Sign in via Azure DevOps Personal Access Token (PAT)
Output formats
Command Reference
Azure DevOps CLI Extension GitHub Repo
