What Is Azure Pipelines PDF
Azure Pipelines
What is Azure Pipelines?
CI, CD, YAML & Classic
Get started
Sign up for Azure Pipelines
Create your first pipeline
Create your first pipeline from the CLI
Clone or import a pipeline
Customize your pipeline
Multi-stage pipelines user experience
Pipeline basics
Key concepts
Repositories
Supported repositories
Azure Repos Git
GitHub
GitHub Enterprise Server
Bitbucket Cloud
Bitbucket Server
TFVC
Subversion
Multiple repositories
Triggers
Types of triggers
Scheduled triggers
Pipeline completion triggers
Release triggers (classic)
Tasks & templates
Task types & usage
Task groups
Template types & usage
Add a custom task extension
Jobs & stages
Specify jobs in your pipeline
Define container jobs
Add stages, dependencies & conditions
Deployment jobs
Author a custom pipeline decorator
Pipeline decorator context
Specify conditions
Specify demands
Library, variables & secure files
Library & shared resources
Define variables
Use predefined variables
Use runtime parameters
Use classic release and artifacts variables
Use secrets from Azure Key Vault
Approvals, checks, & gates
Release approval and gates overview
Define approvals & checks
Define a gate
Use approvals and gates
Use approvals for release deployment control
Pipeline runs
Pipeline run sequence
Job access tokens
Pipeline reports
View pipeline reports
Add pipeline widgets to a dashboard
Test Results Trend (Advanced)
Ecosystems & integration
Ecosystem support
.NET Core
.NET Framework
JavaScript and Node.js apps
Python
Python to web app
Anaconda
C/C++
Java
Java apps
Java to web App
Java to web app with MicroProfile
Java to Azure Function
Android
Go
PHP
PHP to web app
Ruby
Xamarin
Xcode
GitHub Actions
Build apps
Build multiple branches
Build on multiple platforms
Use service containers
Cross-platform scripts
Run a PowerShell script
Run Git commands
Reduce build time using caching
Configure build run numbers
Classic Build options
Run pipeline tests
About pipeline tests
Set up parallel testing (VSTest)
Set up parallel testing (Test Runner)
Enable Test Impact Analysis (TIA)
Enable flaky test management
Run UI tests
Run UI tests with Selenium
Requirements traceability
Review test results
Review test results
Review test Analytics
Review code coverage
Review pull request code coverage
Deploy apps
Deploy apps to environments
Define and target environments
Kubernetes resource
Virtual machine resource
Deploy apps using VMs
Linux virtual machines
Deploy apps to Azure
Deploy a Linux web app - ARM template
Deploy a data pipeline with Azure
Data pipeline overview
Build a data pipeline
Azure Government Cloud
Azure Resource Manager
Azure SQL database
Azure App Service
Azure Stack
Function App on Container
Function App on Linux
Function App on Windows
Web App on Linux
Web App on Linux Container
Web App on Windows
Deploy apps (Classic)
Release pipelines
Deploy from multiple branches
Deploy pull request builds
Classic CD pipelines
Pipelines with PowerShell DSC
Stage templates in Azure Pipelines
Deploy apps to Azure (Classic)
Azure Web App (Classic)
Azure Web App for Containers (Classic)
Azure Kubernetes Service (Classic)
Azure IoT Edge (Classic)
Azure Cosmos DB CI/CD (Classic)
Azure Policy Compliance (Classic)
Deploy apps to VMs (Classic)
Linux VMs (Classic)
Windows VMs (Classic)
IIS servers (WinRM) (Classic)
Extend IIS Server deployments (Classic)
SCVMM (Classic)
VMware (Classic)
Deploy apps using containers
Build images
Push images
Content trust
Kubernetes
Deploy manifests
Bake manifests
Multi-cloud deployments
Deployment strategies
Azure Container Registry
Azure Kubernetes Service
Kubernetes canary deployments
Azure Machine Learning
Consume & publish artifacts
About artifacts
Publish & download artifacts
Build artifacts
Releases in Azure Pipelines
Release artifacts and artifact sources
Maven
npm
NuGet
Python
Symbols
Universal
Restore NuGet packages
Restore & publish NuGet packages (Jenkins)
Create & use resources
About resources
Add resources to a pipeline
Add & use variable groups
Secure files
Manage service connections
Manage agents & agent pools
About agents & agent pools
Add & manage agent pools
Microsoft-hosted agents
Self-hosted Linux agents
Self-hosted macOS agents
Self-hosted Windows agents
Windows agents (TFS 2015)
Scale set agents
Run an agent behind a web proxy
Run an agent in Docker
Use a self-signed certificate
Create & use deployment groups
Provision deployment groups
Provision agents for deployment groups
Add a deployment group job to a release pipeline
Deploying to Azure VMs using deployment groups in Azure Pipelines
Configure security & settings
Set retention policies
Configure and pay for parallel jobs
Pipeline permissions and security roles
Add users to contribute to pipelines
Grant version control permissions to the build service
Integrate with 3rd party software
Microsoft Teams
Slack
Integrate with ServiceNow (Classic)
Integrate with Jenkins (Classic)
Automate infrastructure deployment with Terraform
Migrate
Migrate from Jenkins
Migrate from Travis
Migrate from XAML builds
Migrate from Lab Management
Pipeline tasks
Task index
Build tasks
.NET Core CLI
Android build
Android signing
Ant
Azure IoT Edge
CMake
Docker
Docker Compose
Go
Gradle
Grunt
gulp
Index Sources & Publish Symbols
Jenkins Queue Job
Maven
MSBuild
Visual Studio Build
Xamarin.Android
Xamarin.iOS
Xcode
Xcode Package iOS
Utility tasks
Archive files
Azure Network Load Balancer
Bash
Batch script
Command line
Copy and Publish Build Artifacts
Copy Files
cURL Upload Files
Decrypt File
Delay
Delete Files
Download Build Artifacts
Download Fileshare Artifacts
Download GitHub Release
Download Package
Download Pipeline Artifact
Download Secure File
Extract Files
File Transform
FTP Upload
GitHub Release
Install Apple Certificate
Install Apple Provisioning Profile
Install SSH Key
Invoke Azure Function
Invoke REST API
Jenkins Download Artifacts
Manual Intervention
PowerShell
Publish Build Artifacts
Publish Pipeline Artifact
Publish to Azure Service Bus
Python Script
Query Azure Monitor Alerts
Query Work Items
Service Fabric PowerShell
Shell script
Update Service Fabric Manifests
Test tasks
App Center Test
Cloud-based Apache JMeter Load Test
Cloud-based Load Test
Cloud-based Web Performance Test
Container Structure Test Task
Publish Code Coverage Results
Publish Test Results
Run Functional Tests
Visual Studio Test
Visual Studio Test Agent Deployment
Package tasks
CocoaPods
Conda Environment
Maven Authenticate
npm
npm Authenticate
NuGet
NuGet Authenticate
PyPI Publisher
Python Pip Authenticate
Python Twine Upload Authenticate
Universal Packages
Xamarin Component Restore
Previous versions
NuGet Installer 0.*
NuGet Restore 1.*
NuGet Packager 0.*
NuGet Publisher 0.*
NuGet Command 0.*
Pip Authenticate 0.*
Twine Authenticate 0.*
Deploy tasks
App Center Distribute
Azure App Service Deploy
Azure App Service Manage
Azure App Service Settings
Azure CLI
Azure Cloud PowerShell Deployment
Azure File Copy
Azure Function App
Azure Function App for Container
Azure Key Vault
Azure Monitor Alerts
Azure MySQL Deployment
Azure Policy
Azure PowerShell
Azure Resource Group Deployment
Azure SQL Database Deployment
Azure Web App
Azure virtual machine scale set deployment
Azure Web App for Container
Build Machine Image (Packer)
Chef
Chef Knife
Copy Files Over SSH
Docker
Docker Compose
Helm Deploy
IIS Web App Deploy (Machine Group)
IIS Web App Manage (Machine Group)
Kubectl task
Kubernetes manifest task
PowerShell on Target Machines
Service Fabric App Deployment
Service Fabric Compose Deploy
SSH
Windows Machine File Copy
WinRM SQL Server DB Deployment
MySQL Database Deployment On Machine Group
Tool tasks
Docker Installer
Go Tool Installer
Helm Installer
Java Tool Installer
Kubectl Installer
Node.js Tool Installer
NuGet Tool Installer
Use .NET Core
Use Python Version
Use Ruby Version
Visual Studio Test Platform Installer
Troubleshooting
Troubleshoot pipeline runs
Review logs
Debug deployment issues
Troubleshoot Azure connections
Reference
YAML schema
Expressions
File matching patterns
File and variable transform
Logging commands
Artifact policy checks
Case studies & best practices
Pipelines security walkthrough
Overview
Approach to securing YAML pipelines
Repository protection
Pipeline resources
Project structure
Security through templates
Variables and parameters
Shared infrastructure
Other security considerations
Add continuous security validation
Build & deploy automation
Progressively expose releases using deployment rings
Progressively expose features in production
Developer resources
REST API reference
Azure DevOps CLI
Microsoft Learn
Create a build pipeline
Implement a code workflow in your build pipeline by using Git and GitHub
Run quality tests in your build pipeline
Manage build dependencies with Azure Artifacts
Automated testing
What is Azure Pipelines?
Azure DevOps Services | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2017
Azure Pipelines is a cloud service that you can use to automatically build and test your code project and make it
available to other users. It works with just about any language or project type.
Azure Pipelines combines continuous integration (CI) and continuous delivery (CD) to constantly and consistently
test and build your code and ship it to any target.
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Azure Pipelines supports continuous integration (CI) and continuous delivery (CD) to constantly and consistently
test and build your code and ship it to any target. You accomplish this by defining a pipeline. You define pipelines
using the YAML syntax or through the user interface (Classic).
Azure Pipelines supports continuous integration (CI) and continuous delivery (CD) to constantly and consistently
test and build your code and ship it to any target. You accomplish this by defining a pipeline using the user
interface, also referred to as Classic.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
The pipeline is versioned with your code. It follows the same branching structure. You get validation of your
changes through code reviews in pull requests and branch build policies.
Every branch you use can modify the build policy by modifying the azure-pipelines.yml file.
A change to the build process might cause a break or result in an unexpected outcome. Because the change is in
version control with the rest of your codebase, you can more easily identify the issue.
Follow these basic steps:
1. Configure Azure Pipelines to use your Git repo.
2. Edit your azure-pipelines.yml file to define your build.
3. Push your code to your version control repository. This action kicks off the default trigger to build and deploy
and then monitor the results.
Your code is now updated, built, tested, and packaged. It can be deployed to any target.
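For illustration, a minimal azure-pipelines.yml for this flow might look like the following sketch; the script step is a placeholder for your real build and test commands:

trigger:
- master

pool:
  vmImage: 'ubuntu-16.04'

steps:
- script: echo "Building the app"
  displayName: 'Run a one-line build script'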
YAML pipelines aren't available in TFS 2018 and earlier versions.
Feature availability
Certain pipeline features are only available when using YAML or when defining build or release pipelines with the
Classic interface. The following table indicates which features are supported and for which tasks and methods.
TFS 2015 through TFS 2018 supports the Classic interface only. The following table indicates which pipeline
features are available when defining build or release pipelines.
FEATURE | CLASSIC BUILD | CLASSIC RELEASE | NOTES
Related articles
Key concepts for new Azure Pipelines users
Sign up for Azure Pipelines
Azure Pipelines
Sign up for an Azure DevOps organization and Azure Pipelines to begin managing CI/CD to deploy your code
with high-performance pipelines.
For more information on Azure Pipelines, see What is Azure Pipelines.
2. Enter your email address, phone number, or Skype ID for your Microsoft account. If you're a Visual Studio subscriber and you get Azure DevOps as a benefit, use the Microsoft account associated with your subscription. Select Next.
3. Enter your password and select Sign in.
IMPORTANT
If your GitHub email address is associated with an Azure AD-backed organization in Azure DevOps, you can't sign in with your GitHub account; instead, you must sign in with your Azure AD account.
1. Choose Start free with GitHub. If you're already part of an Azure DevOps organization, choose Start free.
2. Enter your GitHub account credentials, and then select Sign in.
An organization is created based on the account you used to sign in. Use the following URL to sign in to
your organization at any time:
https://dev.azure.com/{yourorganization}
Create a project
If you signed up for Azure DevOps with an existing MSA or GitHub identity, you're automatically prompted to
create a project. Create either a public or private project. To learn more about public projects, see What is a public
project?.
1. Enter a name for your project, select the visibility, and optionally provide a description. Then choose Create project.
Special characters aren't allowed in the project name (such as / : \ ~ & % ; @ ' " ? < > | # $ * } { , + = [ ]).
The project name also can't begin with an underscore, can't begin or end with a period, and must be 64 characters or less. Set your project visibility to either public or private. Public visibility allows anyone on the internet to view your project. With private visibility, only people you give access to the project can view it.
2. When your project is created, the Kanban board automatically appears.
You're now set to create your first pipeline, or invite other users to collaborate with your project.
1. From your project web portal, choose the Azure DevOps icon, and then select Organization settings.
2. Select Users > Add users.
3. Complete the form by entering or selecting the following information:
Users: Enter the email addresses (Microsoft accounts) or GitHub IDs for the users. You can add several
email addresses by separating them with a semicolon (;). An email address appears in red when it's
accepted.
Access level: Assign one of the following access levels:
Basic: Assign to users who must have access to all Azure Pipelines features. You can grant up to five users Basic access for free.
Stakeholder: Assign to users who need limited access to view, add, and modify work items. You can assign an unlimited number of users Stakeholder access for free.
Add to project: Select the project you named in the preceding procedure.
Azure DevOps groups: Select one of the following security groups, which determine the permissions the users have to perform certain tasks. To learn more, see Azure Pipelines resources.
Project Readers: Assign to users who only require read-only access.
Project Contributors: Assign to users who will contribute fully to the project.
Project Administrators: Assign to users who will configure project resources.
NOTE
Add email addresses for personal Microsoft accounts and IDs for GitHub accounts unless you plan to use Azure
Active Directory (Azure AD) to authenticate users and control organization access. If a user doesn't have a Microsoft
or GitHub account, ask the user to sign up for a Microsoft account or a GitHub account.
Next steps
Create your first pipeline
Related articles
What is Azure Pipelines?
Key concepts for new Azure Pipelines users
Create your first pipeline
Create your first pipeline
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
This is a step-by-step guide to using Azure Pipelines to build a GitHub repository.
NOTE
If you want to create a new pipeline by copying another pipeline, see Clone or import a pipeline.
https://github.com/MicrosoftDocs/pipelines-java
NOTE
Even in a private project, anonymous badge access is enabled by default. With anonymous badge access enabled,
users outside your organization might be able to query information such as project names, branch names, job names,
and build status through the badge status API.
Because you just changed the Readme.md file in this repository, Azure Pipelines automatically builds your
code, according to the configuration in the azure-pipelines.yml file at the root of your repository. Back in
Azure Pipelines, observe that a new run appears. Each time you make an edit, Azure Pipelines starts a new
run.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called
definitions, runs are called builds, service connections are called service endpoints, stages are called environments, and
jobs are called phases.
NOTE
This guidance applies to TFS version 2017.3 and newer.
We'll show you how to use the classic editor in Azure DevOps Server 2019 to create a build and release that
prints "Hello world".
We'll show you how to use the classic editor in TFS to create a build and a release that prints "Hello world".
Prerequisites
A self-hosted Windows agent.
2. If your project is empty, you will be greeted with a screen to help you add code to your repository.
Choose the bottom choice to initialize your repo with a readme file:
1. Navigate to your repository by clicking Code in the top navigation.
2. If your project is empty, you will be greeted with a screen to help you add code to your repository.
Choose the bottom choice to initialize your repo with a readme file:
3. In the dialog box, name your new file and create it.
HelloWorld.ps1
In this tutorial, our focus is on CI/CD, so we're keeping the code part simple. We're working in an Azure
Repos Git repository directly in your web browser.
When you're ready to begin building and deploying a real app, you can use a wide range of version
control clients and services with Azure Pipelines CI builds. Learn more.
3. Make sure that the source, project, repository, and default branch match the location in which you created the script.
4. Start with an Empty job.
5. On the left side, select Pipeline and specify whatever Name you want to use. For the Agent pool, select Hosted VS2017.
6. On the left side, select the plus sign ( + ) to add a task to Job 1. On the right side, select the Utility category, select the PowerShell task from the list, and then choose Add.
7. On the left side, select your new PowerShell script task.
8. For the Script Path argument, select the ... button to browse your repository and select the script you
created.
A build pipeline is the entity through which you define your automated build process. In the build pipeline, you compose a set of tasks, each of which performs a step in your build. The task catalog provides a rich set of tasks for you to get started. You can also add PowerShell or shell scripts to your build pipeline.
Path to Publish: Select the ... button to browse and select the script you created.
Artifact Name: Enter drop.
Artifact Type: Select Server.
Artifacts are the files that you want your build to produce. Artifacts can be nearly anything your team needs to test or deploy your app. For example, you might have .DLL and .EXE executable files and a .PDB symbols file of a C# or C++ .NET Windows app.
To enable you to produce artifacts, we provide tools such as copying with pattern matching, and a staging
directory in which you can gather your artifacts before publishing them. See Artifacts in Azure Pipelines.
Enable continuous integration (CI)
1. Select the Triggers tab.
2. Enable Continuous integration.
A continuous integration trigger on a build pipeline indicates that the system should automatically queue
a new build whenever a code change is committed. You can make the trigger more general or more
specific, and also schedule your build (for example, on a nightly basis). See Build triggers.
Choose the link to watch the new build as it happens. Once the agent is allocated, you'll start seeing
the live logs of the build. Notice that the PowerShell script is run as part of the build, and that "Hello
world" is printed to the console.
4. Go to the build summary. On the Artifacts tab of the build, notice that the script is published as an artifact.
1. Select Save & queue, and then select Save & queue.
2. On the dialog box, select Save & queue once more.
This queues a new build on the Microsoft-hosted agent.
3. You see a link to the new build on the top of the page.
Choose the link to watch the new build as it happens. Once the agent is allocated, you'll start seeing
the live logs of the build. Notice that the PowerShell script is run as part of the build, and that "Hello
world" is printed to the console.
4. Go to the build summary.
5. On the Artifacts tab of the build, notice that the script is published as an artifact.
You can view a summary of all the builds or drill into the logs for each build at any time by navigating to
the Builds tab in Azure Pipelines. For each build, you can also view a list of commits that were built and
the work items associated with each commit. You can also run tests in each build and analyze the test
failures.
1. Select Save & queue, and then select Save & queue.
2. On the dialog box, select the Queue button.
This queues a new build on the agent. Once the agent is allocated, you'll start seeing the live logs of
the build. Notice that the PowerShell script is run as part of the build, and that "Hello world" is printed
to the console.
4. On the Artifacts tab of the build, notice that the script is published as an artifact.
You can view a summary of all the builds or drill into the logs for each build at any time by navigating to
the Builds tab in Build and Release. For each build, you can also view a list of commits that were built
and the work items associated with each commit. You can also run tests in each build and analyze the test
failures.
Arguments
We just introduced the concept of build variables in these steps. We printed the value of a variable that is
automatically predefined and initialized by the system. You can also define custom variables and use
them either in arguments to your tasks, or as environment variables within your scripts. To learn more
about variables, see Build variables.
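For example, in the PowerShell task's Arguments field you could pass predefined variables to the script parameters used later in this walkthrough; the exact choice of variables is illustrative:

-greeter $(Build.RequestedFor) -trigger $(Build.Reason)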
13. On the Pipeline tab, select the QA stage and select Clone.
14. Rename the cloned stage Production.
15. Rename the release pipeline Hello world.
A release pipeline is a collection of stages to which the application build artifacts are deployed. It also
defines the actual deployment pipeline for each stage, as well as how the artifacts are promoted from one
stage to another.
Also, notice that we used some variables in our script arguments. In this case, we used release variables
instead of the build variables we used for the build pipeline.
Deploy a release
Run the script in each stage.
1. Create a new release.
When Create new release appears, select Create.
2. Open the release that you created.
You can track the progress of each release to see if it has been deployed to all the stages. You can track
the commits that are part of each release, the associated work items, and the results of any test runs that
you've added to the release pipeline.
Param(
   [string]$greeter,
   [string]$trigger
)
Write-Host "Hello world" from $greeter
Write-Host Trigger: $trigger
Write-Host "Now that you've got CI/CD, you can automatically deploy your app every time your team checks in code."
Next steps
You've just learned how to create your first Azure Pipeline. Learn more about configuring pipelines in the
language of your choice:
.NET Core
Go
Java
Node.js
Python
Containers
Or, you can proceed to customize the pipeline you just created.
To run your pipeline in a container, see Container jobs.
For details about building GitHub repositories, see Build GitHub repositories.
To learn what else you can do in YAML pipelines, see YAML schema reference.
Clean up
If you created any test pipelines, they are easy to delete when you are done with them.
Browser
Azure DevOps CLI
To delete a pipeline, navigate to the summary page for that pipeline, and choose Delete from the ... menu at the top-right of the page. Type the name of the pipeline to confirm, and choose Delete.
You've learned the basics of creating and running a pipeline. Now you're ready to configure your build
pipeline for the programming language you're using. Go ahead and create a new build pipeline, and this time,
use one of the following templates.
LANGUAGE        TEMPLATE TO USE
.NET            ASP.NET
Go              Go
Java            Gradle
JavaScript      Node.js
Xcode           Xcode
FAQ
Where can I read articles about DevOps and CI/CD?
What is Continuous Integration?
What is Continuous Delivery?
What is DevOps?
What kinds of version control can I use?
When you're ready to get going with CI/CD for your app, you can use the version control system of your
choice:
Clients
Visual Studio Code for Windows, macOS, and Linux
Visual Studio with Git for Windows or Visual Studio for Mac
Eclipse
Xcode
IntelliJ
Command line
Services
Azure Pipelines
Git service providers such as GitHub and Bitbucket Cloud
Subversion
Clients
Visual Studio Code for Windows, macOS, and Linux
Visual Studio with Git for Windows or Visual Studio for Mac
Visual Studio with TFVC
Eclipse
Xcode
IntelliJ
Command line
Services
Azure Pipelines
Git service providers such as GitHub and Bitbucket Cloud
Subversion
How do I replicate a pipeline?
If your pipeline has a pattern that you want to replicate in other pipelines, clone it, export it, or save it as a
template.
After you clone a pipeline, you can make changes and then save it.
After you export a pipeline, you can import it from the All pipelines tab.
After you create a template, your team members can use it to follow the pattern in new pipelines.
TIP
If you're using the New Build Editor, then your custom templates are shown at the bottom of the list.
How do I work with drafts?
If you're editing a build pipeline and you want to test some changes that are not yet ready for production, you
can save it as a draft.
Or, if you decide to discard the draft, you can delete it from the All pipelines tab shown above.
How can I delete a pipeline?
To delete a pipeline, navigate to the summary page for that pipeline, and choose Delete from the ... menu in
the top-right of the page. Type the name of the pipeline to confirm, and choose Delete.
What else can I do when I queue a build?
You can queue builds automatically or manually.
When you manually queue a build, you can, for a single run of the build:
Specify the pool into which the build goes.
Add and modify some variables.
Add demands.
In a Git repository
Build a branch or a tag.
Build a commit.
In a TFVC repository
Specify the source version as a label or changeset.
Run a private build of a shelveset. (You can use this option on either a Microsoft-hosted agent or
a self-hosted agent.)
You can queue builds automatically or manually.
When you manually queue a build, you can, for a single run of the build:
Specify the pool into which the build goes.
Add and modify some variables.
Add demands.
In a Git repository
Build a branch or a tag.
Build a commit.
Where can I learn more about build pipeline settings?
To learn more about build pipeline settings, see:
Getting sources
Tasks
Variables
Triggers
Options
Retention
History
To learn more about build pipeline settings, see:
Getting sources
Tasks
Variables
Triggers
Retention
History
How do I programmatically create a build pipeline?
REST API Reference: Create a build pipeline
NOTE
You can also manage builds and build pipelines from the command line or scripts using the Azure Pipelines CLI.
Create your first pipeline from the CLI
Azure Pipelines
This is a step-by-step guide to using Azure Pipelines from the Azure CLI (command-line interface) to build a GitHub
repository. You can use Azure Pipelines to build an app written in any language. For this quickstart, you'll use Java.
Prerequisites
A GitHub account, where you can create a repository. If you don't have one, you can create one for free.
An Azure DevOps organization. If you don't have one, you can create one for free. (An Azure DevOps
organization is different from your GitHub organization. Give them the same name if you want alignment
between them.)
If your team already has one, then make sure you're an administrator of the Azure DevOps project that you
want to use.
Azure CLI version 2.0.49 or newer.
To install, see Install the Azure CLI.
To check the version from the command prompt:
az --version
Make sure your Azure DevOps defaults include the organization and project from the command prompt:
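For example, set the defaults with az devops configure; the organization URL and project name here are placeholders:

az devops configure --defaults organization=https://dev.azure.com/your-organization project=your-project

Sign in to your Azure account: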
az login
After you've forked it, clone it to your dev machine. Learn how: Fork a repo.
3. Navigate to the cloned directory.
4. Create a new pipeline:
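With the Azure DevOps CLI extension installed, the command is along these lines; the pipeline name is illustrative:

az pipelines create --name "myGithubname.pipelines-java"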
The repository and branch details are picked up from the git configuration available in the cloned directory.
5. Enter your GitHub user name and password to authenticate Azure Pipelines.
Enter your GitHub username (Leave blank for using already generated PAT): Contoso
6. Provide a name for the service connection created to enable Azure Pipelines to communicate with the
GitHub Repository.
7. Select the Maven pipeline template from the list of recommended templates.
8. The pipeline YAML is generated. You can open the YAML in your default editor to view and make changes.
Do you want to view/edit the template yaml before proceeding?
[1] Continue with the generated yaml
[2] View or edit the yaml
Please enter a choice [Default choice(1)]:2
9. Provide where you want to commit the YAML file that is generated.
Parameters
branch : Name of the branch on which the pipeline run is to be queued, for example, refs/heads/master.
commit-id : Commit-id on which the pipeline run is to be queued.
folder-path : Folder path of pipeline. Default is root level folder.
id : Required if name is not supplied. ID of the pipeline to queue.
name : Required if ID is not supplied, but ignored if ID is supplied. Name of the pipeline to queue.
open : Open the pipeline results page in your web browser.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .
variables : Space separated "name=value" pairs for the variables you would like to set.
Example
The following command runs the pipeline named myGithubname.pipelines-java in the branch pipeline and
shows the result in table format.
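Based on the parameters described above, the command would be along these lines:

az pipelines run --name "myGithubname.pipelines-java" --branch pipeline --output table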
Run ID    Number      Status      Result    Pipeline ID  Pipeline Name                 Source Branch  Queued Time                 Reason
--------  ----------  ----------  --------  -----------  ----------------------------  -------------  --------------------------  ------
123       20200123.2  notStarted            12           myGithubname.pipelines-java   pipeline       2020-01-23 11:55:56.633450  manual
Update a pipeline
You can update an existing pipeline with the az pipelines update command. To get started, see Get started with
Azure DevOps CLI.
Parameters
branch : Name of the branch on which the pipeline run is to be configured, for example, refs/heads/master.
description : New description for the pipeline.
id : Required if name is not supplied. ID of the pipeline to update.
name : Required if ID is not supplied. Name of the pipeline to update.
new-folder-path : New full path of the folder to which the pipeline is moved, for example,
user1/production_pipelines.
new-name : New updated name of the pipeline.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .
queue-id : Queue ID of the agent pool where the pipeline needs to run.
yaml-path : Path of the pipeline's yaml file in the repo.
Example
The following command updates the pipeline with the ID of 12 with a new name and description and shows the
result in table format.
az pipelines update --id 12 --description "rename pipeline" --new-name updatedname.pipelines-java --output table
Show pipeline
You can view the details of an existing pipeline with the az pipelines show command. To get started, see Get started
with Azure DevOps CLI.
Parameters
folder-path : Folder path of pipeline. Default is root level folder.
id : Required if name is not supplied. ID of the pipeline to show details.
name : Required if ID is not supplied, but ignored if ID is supplied. Name of the pipeline to show details.
open : Open the pipeline summary page in your web browser.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .
Example
The following command shows the details of the pipeline with the ID of 12 and returns the result in table format.
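Using the parameters listed above, the command would be something like:

az pipelines show --id 12 --output table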
NOTE
Even in a private project, anonymous badge access is enabled by default. With anonymous badge access enabled, users
outside your organization might be able to query information such as project names, branch names, job names, and build
status through the badge status API.
Because you just changed the Readme.md file in this repository, Azure Pipelines automatically builds your code,
according to the configuration in the azure-pipelines.yml file at the root of your repository. Back in Azure
Pipelines, observe that a new run appears. Each time you make an edit, Azure Pipelines starts a new run.
Next steps
You've just learned how to create your first Azure Pipeline. Learn more about configuring pipelines in the language
of your choice:
.NET Core
Go
Java
Node.js
Python
Containers
Or, you can proceed to customize the pipeline you just created.
To run your pipeline in a container, see Container jobs.
For details about building GitHub repositories, see Build GitHub repositories.
To learn what else you can do in YAML pipelines, see YAML schema reference.
Clean up
If you created any test pipelines, they are easy to delete when you are done with them.
Browser
Azure DevOps CLI
To delete a pipeline, navigate to the summary page for that pipeline, and choose Delete from the ... menu at the
top-right of the page. Type the name of the pipeline to confirm, and choose Delete.
Clone or import a pipeline
One approach to creating a pipeline is to copy an existing pipeline and use it as a starting point. For YAML
pipelines, the process is as easy as copying the YAML from one pipeline to another. For pipelines created in the
classic editor, the procedure depends on whether the pipeline to copy is in the same project as the new pipeline. If
the pipeline to copy is in the same project, you can clone it, and if it is in a different project you can export it from
that project and import it into your project.
Clone a pipeline
YAML
Classic
For YAML pipelines, the process for cloning is to copy the YAML from the source pipeline and use it as the basis for
the new pipeline.
1. Navigate to your pipeline, and choose Edit.
2. Copy the pipeline YAML from the editor, and paste it into the YAML editor for your new pipeline.
3. To customize your newly cloned pipeline, see Customize your pipeline.
1. Navigate to the pipeline details for your pipeline, and choose Edit.
2. Copy the pipeline YAML from the editor, and paste it into the YAML editor for your new pipeline.
3. To customize your newly cloned pipeline, see Customize your pipeline.
This version of TFS doesn't support YAML pipelines.
Next steps
Learn to customize the pipeline you just cloned or imported.
Customize your pipeline
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019
This is a step-by-step guide on common ways to customize your pipeline.
Prerequisite
Follow instructions in Create your first pipeline to create a working pipeline.
trigger:
- master

pool:
  vmImage: 'Ubuntu-16.04'

steps:
- task: Maven@3
  inputs:
    mavenPomFile: 'pom.xml'
    mavenOptions: '-Xmx3072m'
    javaHomeOption: 'JDKVersion'
    jdkVersionOption: '1.8'
    jdkArchitectureOption: 'x64'
    publishJUnitResults: false
    testResultsFiles: '**/surefire-reports/TEST-*.xml'
    goals: 'package'
NOTE
The contents of your YAML file may be different depending on the sample repo you started with, or upgrades
made in Azure Pipelines.
This pipeline runs whenever your team pushes a change to the master branch of your repo. It runs on a
Microsoft-hosted Linux machine. The pipeline process has a single step, which is to run the Maven task.
pool:
  vmImage: "ubuntu-16.04"

To choose a different platform like Windows or Mac, change the vmImage value:

pool:
  vmImage: "vs2017-win2016"

pool:
  vmImage: "macos-latest"
Select Save and then confirm the changes to see your pipeline run on a different platform.
Add steps
You can add additional scripts or tasks as steps to your pipeline. A task is a pre-packaged script. You can use
tasks for building, testing, publishing, or deploying your app. For Java, the Maven task we used handles testing
and publishing results, however, you can use a task to publish code coverage results too.
Open the YAML editor for your pipeline.
Add the following snippet to the end of your YAML file.
- task: PublishCodeCoverageResults@1
  inputs:
    codeCoverageTool: "JaCoCo"
    summaryFileLocation: "$(System.DefaultWorkingDirectory)/**/site/jacoco/jacoco.xml"
    reportDirectory: "$(System.DefaultWorkingDirectory)/**/site/jacoco"
    failIfCoverageEmpty: true
To build on multiple platforms, replace this pool definition:

pool:
  vmImage: "ubuntu-16.04"

with a pool that takes the image from a variable:

pool:
  vmImage: $(imageName)
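For this to produce a job per platform, the pipeline also needs a matrix that defines imageName for each platform. A sketch, with illustrative image names:

strategy:
  matrix:
    linux:
      imageName: "ubuntu-16.04"
    mac:
      imageName: "macos-latest"
    windows:
      imageName: "vs2017-win2016"
  maxParallel: 3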
Select Save and then confirm the changes to see your build run up to three jobs on three different
platforms.
Each agent can run only one job at a time. To run multiple jobs in parallel you must configure multiple agents.
You also need sufficient parallel jobs.
To build using multiple JDK versions, add a matrix like the following:

strategy:
  matrix:
    jdk10:
      jdk_version: "1.10"
    jdk11:
      jdk_version: "1.11"
  maxParallel: 2

Then, in the Maven task, replace this line:

    jdkVersionOption: "1.11"

with a reference to the matrix variable:

    jdkVersionOption: $(jdk_version)

Make sure to change the $(imageName) variable back to the platform of your choice.
If you want to build on multiple platforms and versions, replace the entire content in your
azure-pipelines.yml file before the publishing task with the following snippet:
trigger:
- master

strategy:
  matrix:
    jdk10_linux:
      imageName: "ubuntu-16.04"
      jdk_version: "1.10"
    jdk11_windows:
      imageName: "vs2017-win2016"
      jdk_version: "1.11"
  maxParallel: 2

pool:
  vmImage: $(imageName)

steps:
- task: Maven@3
  inputs:
    mavenPomFile: "pom.xml"
    mavenOptions: "-Xmx3072m"
    javaHomeOption: "JDKVersion"
    jdkVersionOption: $(jdk_version)
    jdkArchitectureOption: "x64"
    publishJUnitResults: true
    testResultsFiles: "**/TEST-*.xml"
    goals: "package"
Select Save and then confirm the changes to see your build run two jobs on two different platforms and SDKs.
Customize CI triggers
You can use a trigger: to specify the events when you want to run the pipeline. YAML pipelines are configured
by default with a CI trigger on your default branch (which is usually master). You can set up triggers for specific
branches or for pull request validation. For a pull request validation trigger just replace the trigger: step with
pr: as shown in the two examples below.
If you'd like to set up triggers, add either of the following snippets at the beginning of your
azure-pipelines.yml file.
trigger:
- master
- releases/*
pr:
- master
- releases/*
You can specify the full name of the branch (for example, master ) or a prefix-matching wildcard (for
example, releases/* ).
Customize settings
There are pipeline settings that you wouldn't want to manage in your YAML file. Follow these steps to view and
modify these settings:
1. From your web browser, open the project for your organization in Azure DevOps and choose Pipelines /
Pipelines from the navigation sidebar.
2. Select the pipeline you want to configure settings for from the list of pipelines.
3. Open the overflow menu by clicking the action button with the vertical ellipsis and select Settings.
Processing of new run requests
Sometimes you'll want to prevent new runs from starting on your pipeline.
By default, the processing of new run requests is Enabled . This setting allows standard processing of all
trigger types, including manual runs.
Paused pipelines allow run requests to be processed, but those requests are queued without actually starting.
When new request processing is enabled, run processing resumes starting with the first request in the queue.
Disabled pipelines prevent users from starting new runs. All triggers are also disabled while this setting is
applied.
Other settings
YAML file path. If you ever need to direct your pipeline to use a different YAML file, you can specify the path
to that file. This setting can also be useful if you need to move/rename your YAML file.
Automatically link work items included in this run. The changes associated with a given pipeline run
may have work items associated with them. Select this option to link those work items to the run. When this
option is selected, you'll need to specify a specific branch. Work items will only be associated with runs of that
branch.
To get notifications when your runs fail, see how to Manage notifications for a team
You've just learned the basics of customizing your pipeline. Next we recommend that you learn more about
customizing a pipeline for the language you use:
.NET Core
Containers
Go
Java
Node.js
Python
Or, to grow your CI pipeline to a CI/CD pipeline, include a deployment job with steps to deploy your app to an
environment.
To learn more about the topics in this guide see Jobs, Tasks, Catalog of Tasks, Variables, Triggers, or
Troubleshooting.
To learn what else you can do in YAML pipelines, see YAML schema reference.
Multi-stage pipelines user experience
The multi-stage pipelines experience brings improvements and ease of use to the Pipelines portal UI. This article
shows you how to view and manage your pipelines using this new experience.
Navigating pipelines
You can view and manage your pipelines by choosing Pipelines from the left-hand menu.
You can drill down and view pipeline details, run details, pipeline analytics, job details, logs, and more.
At the top of each page is a breadcrumb navigation bar. Select the different areas of the bar to navigate to different
areas of the portal. The breadcrumb navigation is a convenient way to go back one or more steps.
1. This area of the breadcrumb navigation shows you what page you're currently viewing. In this example, the page is the run summary for run number 20191209.3.
2. One level up is a link to the pipeline details for that run.
3. The next level up is the pipelines landing page.
4. This link is to the FabrikamFiber project, which contains the pipeline for this run.
5. The root breadcrumb link is to the Azure DevOps fabrikam-tailspin organization, which contains the project
that contains the pipeline.
Many pages also contain a back button that takes you to the previous page.
Select Runs to view all pipeline runs. You can optionally filter the displayed runs.
Select a pipeline run to view information about that run.
Runs
Select Runs to view the runs for that pipeline. You can optionally filter the displayed runs.
You can choose to Retain or Delete a run from the context menu. For more information on run retention, see Build
and release retention policies.
Branches
Select Branches to view the history of runs for that branch. Hover over the History to view a summary for each run, and select a run to navigate to the details page for that run.
Analytics
Select Analytics to view pipeline metrics such as pass rate and run duration. Choose View full report for more
information on each metric.
View pipeline run details
From the pipeline run summary you can view the status of your run, both while it is running and when it is
complete.
From the summary pane you can download artifacts, and navigate to linked commits, test results, and work items.
Cancel and re -run a pipeline
If the pipeline is running, you can cancel it by choosing Cancel. If the run has completed, you can re-run the pipeline by choosing Run new.
NOTE
You can't delete a run if the run is retained. If you don't see Delete, choose Stop retaining run, and then delete the run. If you see both Delete and View retention releases, one or more configured retention policies still apply to your run. Choose View retention releases, delete the policies (only the policies for the selected run are removed), and then delete the run.
Manage security
You can configure pipelines security on a project level from the context menu on the pipelines landing page, and on
a pipeline level on the pipeline details page.
To support security of your pipeline operations, you can add users to a built-in security group, set individual
permissions for a user or group, or add users to pre-defined roles. You can manage security for Azure Pipelines
in the web portal, either from the user or admin context. For more information on configuring pipelines security,
see Pipeline permissions and security roles.
Next steps
Learn more about configuring pipelines in the language of your choice:
.NET Core
Go
Java
Node.js
Python
Containers and Container jobs
Learn more about building Azure Repos and GitHub repositories.
To learn what else you can do in YAML pipelines, see Customize your pipeline, and for a complete reference see
YAML schema reference.
Key concepts for new Azure Pipelines users
Learn about the key concepts and components that are used in Azure Pipelines. Understanding the basic terms
and parts of Azure Pipelines helps you further explore how it can help you deliver better code more efficiently
and reliably.
Key concepts overview
Agent
When your build or deployment runs, the system begins one or more jobs. An agent is computing infrastructure
with installed agent software that runs one job at a time.
For more in-depth information about the different types of agents and how to use them, see Build and release
agents.
Approvals
Approvals define a set of validations required before a deployment can be performed. Manual approval is a
common check performed to control deployments to production environments. When checks are configured on
an environment, pipelines will stop before starting a stage that deploys to the environment until all the checks
are completed successfully.
Artifact
An artifact is a collection of files or packages published by a run. Artifacts are made available to subsequent tasks,
such as distribution or deployment. For more information, see Artifacts in Azure Pipelines.
Continuous delivery
Continuous delivery (CD) is a process by which code is built, tested, and deployed to one or more test and
production stages. Deploying and testing in multiple stages helps drive quality. Continuous integration systems
produce deployable artifacts, which include infrastructure and apps. Automated release pipelines consume these
artifacts to release new versions and fixes to existing systems. Monitoring and alerting systems run constantly to
drive visibility into the entire CD process. This process ensures that errors are caught often and early.
Continuous integration
Continuous integration (CI) is the practice used by development teams to simplify the testing and building of
code. CI helps to catch bugs or problems early in the development cycle, which makes them easier and faster to
fix. Automated tests and builds are run as part of the CI process. The process can run on a set schedule, whenever
code is pushed, or both. Items known as artifacts are produced from CI systems. They're used by the continuous
delivery release pipelines to drive automatic deployments.
Deployment group
A deployment group is a set of deployment target machines that have agents installed. A deployment group is
just another grouping of agents, like an agent pool. You can set the deployment targets in a pipeline for a job
using a deployment group. Learn more about provisioning agents for deployment groups.
Environment
An environment is a collection of resources, where you deploy your application. It can contain one or more virtual
machines, containers, web apps, or any service that's used to host the application being developed. A pipeline
might deploy the app to one or more environments after build is completed and tests are run.
Job
A stage contains one or more jobs. Each job runs on an agent. A job represents an execution boundary of a set of
steps. All of the steps run together on the same agent. For example, you might build two configurations - x86 and
x64. In this case, you have one build stage and two jobs.
Pipeline
A pipeline defines the continuous integration and deployment process for your app. It's made up of one or more
stages. It can be thought of as a workflow that defines how your test, build, and deployment steps are run.
Run
A run represents one execution of a pipeline. It collects the logs associated with running the steps and the results
of running tests. During a run, Azure Pipelines will first process the pipeline and then hand off the run to one or
more agents. Each agent will run jobs. Learn more about the pipeline run sequence.
Script
A script runs code as a step in your pipeline using command line, PowerShell, or Bash. You can write cross-
platform scripts for macOS, Linux, and Windows. Unlike a task, a script is custom code that is specific to your
pipeline.
Stage
A stage is a logical boundary in the pipeline. It can be used to mark separation of concerns (e.g., Build, QA, and
production). Each stage contains one or more jobs.
Step
A step is the smallest building block of a pipeline. For example, a pipeline might consist of build and test steps. A
step can either be a script or a task. A task is simply a pre-created script offered as a convenience to you. To view
the available tasks, see the Build and release tasks reference. For information on creating custom tasks, see Create
a custom task.
Task
A task is the building block for defining automation in a pipeline. A task is a packaged script or procedure that has
been abstracted with a set of inputs.
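As an illustrative sketch of how these concepts nest in a YAML pipeline (names are placeholders), a stage contains jobs, a job contains steps, and each step is either a script or a task:

stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    - script: echo "Compile and test"     # a step that runs a script
      displayName: 'Run a script step'
    - task: PublishBuildArtifacts@1       # a step that runs a task
      inputs:
        pathToPublish: '$(Build.ArtifactStagingDirectory)'
        artifactName: 'drop'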
Trigger
A trigger is something that's set up to tell the pipeline when to run. You can configure a pipeline to run upon a
push to a repository, at scheduled times, or upon the completion of another build. All of these actions are known
as triggers. For more information, see build triggers and release triggers.
About the authors
Dave Jarvis contributed to the key concepts overview graphic.
Supported source repositories
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
Azure Pipelines, Azure DevOps Server, and TFS integrate with a number of version control systems. When you
use any of these version control systems, you can configure a pipeline to build, test, and deploy your application.
YAML pipelines are a new form of pipelines that have been introduced in Azure DevOps Server 2019 and in
Azure Pipelines. YAML pipelines only work with certain version control systems. The following table shows all the
supported version control systems and the ones that support YAML pipelines.
REPOSITORY TYPE | AZURE PIPELINES (YAML) | AZURE PIPELINES (CLASSIC EDITOR) | AZURE DEVOPS SERVER 2019, TFS 2018, TFS 2017, TFS 2015.4 | TFS 2015 RTM
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
Azure Pipelines can automatically build and validate every pull request and commit to your Azure Repos Git
repository.
CI triggers
Continuous integration (CI) triggers cause a pipeline to run whenever you push an update to the specified
branches or you push specified tags.
YAML
Classic
YAML pipelines are configured by default with a CI trigger on all branches.
Branches
You can control which branches get CI triggers with a simple syntax:
trigger:
- master
- releases/*
You can specify the full name of the branch (for example, master ) or a wildcard (for example, releases/* ). See
Wildcards for information on the wildcard syntax.
NOTE
You cannot use variables in triggers, as variables are evaluated at runtime (after the trigger has fired).
NOTE
If you use templates to author YAML files, then you can only specify triggers in the main YAML file for the pipeline. You
cannot specify triggers in the template files.
For more complex triggers that use exclude or batch, you must use the full syntax as shown in the following example.
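A sketch of that full syntax, matching the behavior described below:

# specific branch build
trigger:
  branches:
    include:
    - master
    - releases/*
    exclude:
    - releases/old*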
In the above example, the pipeline will be triggered if a change is pushed to master or to any releases branch.
However, it won't be triggered if a change is made to a releases branch that starts with old .
If you specify an exclude clause without an include clause, then it is equivalent to specifying * in the include
clause.
In addition to specifying branch names in the branches lists, you can also configure triggers based on tags by
using the following format:
trigger:
  branches:
    include:
    - refs/tags/{tagname}
    exclude:
    - refs/tags/{othertagname}
trigger:
  branches:
    include:
    - '*' # must quote since "*" is a YAML reserved character; we want a string
IMPORTANT
When you specify a trigger, it replaces the default implicit trigger, and only pushes to branches that are explicitly configured
to be included will trigger a pipeline. Includes are processed first, and then excludes are removed from that list.
Batching CI runs
If you have many team members uploading changes often, you may want to reduce the number of runs you start.
If you set batch to true , when a pipeline is running, the system waits until the run is completed, then starts
another run with all changes that have not yet been built.
# specific branch build with batching
trigger:
  batch: true
  branches:
    include:
    - master
To clarify this example, let us say that a push A to master caused the above pipeline to run. While that pipeline is
running, additional pushes B and C occur into the repository. These updates do not start new independent runs
immediately. But after the first run is completed, all pushes until that point of time are batched together and a new
run is started.
NOTE
If the pipeline has multiple jobs and stages, then the first run should still reach a terminal state by completing or skipping all
its jobs and stages before the second run can start. For this reason, you must exercise caution when using this feature in a
pipeline with multiple stages or approvals. If you wish to batch your builds in such cases, it is recommended that you split
your CI/CD process into two pipelines - one for build (with batching) and one for deployments.
Paths
You can specify file paths to include or exclude. Note that the wildcard syntax is different between branches/tags
and file paths.
When you specify paths, you must explicitly specify branches to trigger on. You can't trigger a pipeline with only a
path filter; you must also have a branch filter, and the changed files that match the path filter must be from a
branch that matches the branch filter.
Tips:
Paths are always specified relative to the root of the repository.
If you don't set path filters, then the root folder of the repo is implicitly included by default.
If you exclude a path, you cannot also include it unless you qualify it to a deeper folder. For example, if you
exclude /tools, you could still include /tools/trigger-runs-on-these.
The order of path filters doesn't matter.
Paths in Git are case-sensitive. Be sure to use the same case as the real folders.
NOTE
You cannot use variables in paths, as variables are evaluated at runtime (after the trigger has fired).
Tags
In addition to specifying tags in the branches lists as covered in the previous section, you can directly specify tags
to include or exclude:
# specific tag
trigger:
tags:
include:
- v2.*
exclude:
- v2.0
If you don't specify any tag triggers, then by default, tags will not trigger pipelines.
IMPORTANT
If you specify tags in combination with branch filters, the trigger will fire if either the branch filter is satisfied or the tag filter
is satisfied. For example, if a pushed tag satisfies the branch filter, the pipeline triggers even if the tag is excluded by the tag
filter, because the push satisfied the branch filter.
Opting out of CI
Disabling the CI trigger
You can opt out of CI triggers entirely by specifying trigger: none .
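For example, this minimal snippet (based on the syntax described in this article) disables the CI trigger entirely:
# A pipeline that never starts from a push
trigger: none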
IMPORTANT
When you push a change to a branch, the YAML file in that branch is evaluated to determine if a CI run should be started.
You can also tell Azure Pipelines to skip running a pipeline that a commit would normally trigger. Just include
[skip ci] in the commit message or description of the HEAD commit and Azure Pipelines will skip running CI.
You can also use any of the variations below.
[skip ci] or [ci skip]
skip-checks: true or skip-checks:true
[skip azurepipelines] or [azurepipelines skip]
[skip azpipelines] or [azpipelines skip]
[skip azp] or [azp skip]
***NO_CI***
trigger:
branches:
include:
- master
- releases/*
- feature/*
exclude:
- releases/old*
- feature/*-working
paths:
include:
- '*' # same as '/' for the repository root
exclude:
- 'docs/*' # same as 'docs/'
PR triggers
Pull request (PR) triggers cause a pipeline to run whenever a pull request is opened with one of the specified
target branches, or when changes are pushed to such a pull request. In Azure Repos Git, this functionality is
implemented using branch policies. To enable pull request validation in Azure Git Repos, navigate to the branch
policies for the desired branch, and configure the Build validation policy for that branch. For more information, see
Configure branch policies.
NOTE
To configure validation builds for an Azure Repos Git repository, you must be a project administrator of its project.
NOTE
Draft pull requests do not trigger a pipeline even if you configure a branch policy.
IMPORTANT
Limit job authorization scope to referenced Azure DevOps repositories is enabled by default for new
organizations and projects created after May 2020.
When Limit job authorization scope to referenced Azure DevOps repositories is enabled, your YAML
pipelines must explicitly reference any Azure Repos Git repositories you want to use in the pipeline as a checkout
step in the job that uses the repository. You won't be able to fetch code using scripting tasks and git commands for
an Azure Repos Git repository unless that repo is first explicitly referenced.
There are a few exceptions where you don't need to explicitly reference an Azure Repos Git repository before using
it in your pipeline when Limit job authorization scope to referenced Azure DevOps repositories is
enabled.
If you do not have an explicit checkout step in your pipeline, it is as if you have a checkout: self step, and the
self repository is checked out.
If you are using a script to perform read-only operations on a repository in a public project, you don't need to
reference the public project repository in a checkout step.
If you are using a script that provides its own authentication to the repo, such as a PAT, you don't need to
reference that repository in a checkout step.
For example, when Limit job authorization scope to referenced Azure DevOps repositories is enabled, if
your pipeline is in the FabrikamProject/Fabrikam repo in your organization, and you want to use a script to check
out the FabrikamProject/FabrikamTools repo, you must also reference this repository in a checkout step.
If you are already checking out the FabrikamTools repository in your pipeline using a checkout step, you may
subsequently use scripts to interact with that repository, such as checking out different branches.
steps:
- checkout: git://FabrikamFiber/FabrikamTools # Azure Repos Git repository in the same organization
NOTE
For many scenarios, multi-repo checkout can be leveraged, removing the need to use scripts to check out additional
repositories in your pipeline. For more information, see Check out multiple repositories in your pipeline.
Checkout
When a pipeline is triggered, Azure Pipelines pulls your source code from the Azure Repos Git repository. You can
control various aspects of how this happens.
Preferred version of Git
The Windows agent comes with its own copy of Git. If you prefer to supply your own Git rather than use the
included copy, set System.PreferGitFromPath to true . This setting is always true on non-Windows agents.
Checkout path
YAML
Classic
If you are checking out a single repository, by default, your source code will be checked out into a directory called
s . For YAML pipelines, you can change this by specifying checkout with a path . The specified path is relative to
$(Agent.BuildDirectory) . For example: if the checkout path value is mycustompath and $(Agent.BuildDirectory) is
C:\agent\_work\1 , then the source code will be checked out into C:\agent\_work\1\mycustompath .
If you are using multiple checkout steps and checking out multiple repositories, and not explicitly specifying the
folder using path , each repository is placed in a subfolder of s named after the repository. For example if you
check out two repositories named tools and code , the source code will be checked out into
C:\agent\_work\1\s\tools and C:\agent\_work\1\s\code .
Please note that the checkout path value cannot be set to go up any directory levels above
$(Agent.BuildDirectory) , so path\..\anotherpath will result in a valid checkout path (i.e.
C:\agent\_work\1\anotherpath ), but a value like ..\invalidpath will not (i.e. C:\agent\_work\invalidpath ).
You can configure the path setting in the Checkout step of your pipeline.
steps:
- checkout: self # self represents the repo where the initial Pipelines YAML file was found
  clean: boolean # whether to fetch clean each time
  fetchDepth: number # the depth of commits to ask Git to fetch
  lfs: boolean # whether to download Git-LFS files
  submodules: true | recursive # set to 'true' for a single level of submodules or 'recursive' to get submodules of submodules
  path: string # path to check out source code, relative to the agent's build directory (e.g. _work\1)
  persistCredentials: boolean # set to 'true' to leave the OAuth token in the Git config after the initial fetch
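As an illustrative sketch that reuses the mycustompath value from the example above, a checkout step with a custom path might look like this:
steps:
- checkout: self
  path: mycustompath # sources are placed in $(Agent.BuildDirectory)/mycustompath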
Submodules
YAML
Classic
You can configure the submodules setting in the Checkout step of your pipeline if you want to download files from
submodules.
steps:
- checkout: self # self represents the repo where the initial Pipelines YAML file was found
  clean: boolean # whether to fetch clean each time
  fetchDepth: number # the depth of commits to ask Git to fetch
  lfs: boolean # whether to download Git-LFS files
  submodules: true | recursive # set to 'true' for a single level of submodules or 'recursive' to get submodules of submodules
  path: string # path to check out source code, relative to the agent's build directory (e.g. _work\1)
  persistCredentials: boolean # set to 'true' to leave the OAuth token in the Git config after the initial fetch
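For instance, a minimal sketch of a checkout step that also fetches nested submodules:
steps:
- checkout: self
  submodules: recursive # fetch submodules, including submodules of submodules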
The build pipeline will check out your Git submodules as long as they are:
Unauthenticated: A public, unauthenticated repo with no credentials required to clone or fetch.
Authenticated:
Contained in the same project as the Azure Repos Git repo specified above. The same credentials that
are used by the agent to get the sources from the main repository are also used to get the sources
for submodules.
Added by using a URL relative to the main repository. For example
This one would be checked out:
git submodule add ../../../FabrikamFiberProject/_git/FabrikamFiber FabrikamFiber
In this example the submodule refers to a repo (FabrikamFiber) in the same Azure DevOps
organization, but in a different project (FabrikamFiberProject). The same credentials that are
used by the agent to get the sources from the main repository are also used to get the
sources for submodules. This requires that the job access token has access to the repository in
the second project. If you restricted the job access token as explained in the section above,
then you won't be able to do this.
This one would not be checked out:
git submodule add https://fabrikam-fiber@dev.azure.com/fabrikam-fiber/FabrikamFiberProject/_git/FabrikamFiber FabrikamFiber
NOTE
Q: Why can't I use a Git credential manager on the agent? A: Storing the submodule credentials in a Git credential
manager installed on your private build agent is usually not effective as the credential manager may prompt you to re-enter
the credentials whenever the submodule is updated. This isn't desirable during automated builds when user interaction isn't
possible.
Shallow fetch
You may want to limit how far back in history to download. Effectively this results in git fetch --depth=n . If your
repository is large, this option might make your build pipeline more efficient. Your repository might be large if it
has been in use for a long time and has sizeable history. It also might be large if you added and later deleted large
files.
YAML
Classic
You can configure the fetchDepth setting in the Checkout step of your pipeline.
steps:
- checkout: self # self represents the repo where the initial Pipelines YAML file was found
  clean: boolean # whether to fetch clean each time
  fetchDepth: number # the depth of commits to ask Git to fetch
  lfs: boolean # whether to download Git-LFS files
  submodules: true | recursive # set to 'true' for a single level of submodules or 'recursive' to get submodules of submodules
  path: string # path to check out source code, relative to the agent's build directory (e.g. _work\1)
  persistCredentials: boolean # set to 'true' to leave the OAuth token in the Git config after the initial fetch
In these cases this option can help you conserve network and storage resources. It might also save time. The
reason it doesn't always save time is because in some situations the server might need to spend time calculating
the commits to download for the depth you specify.
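For example, a sketch of a shallow checkout that fetches only the most recent commit:
steps:
- checkout: self
  fetchDepth: 1 # download only the latest commit instead of the full history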
NOTE
When the pipeline is started, the branch to build is resolved to a commit ID. Then, the agent fetches the branch and checks
out the desired commit. There is a small window between when a branch is resolved to a commit ID and when the agent
performs the checkout. If the branch updates rapidly and you set a very small value for shallow fetch, the commit may not
exist when the agent attempts to check it out. If that happens, increase the shallow fetch depth setting.
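Don't sync sources
You may want to skip fetching new commits. This option can be useful in cases when you want to:
Git init, config, and fetch using your own custom options.
Use a build pipeline to just run automation (for example some scripts) that do not depend on code in
version control.
YAML
Classic
You can configure the Don't sync sources setting in the Checkout step of your pipeline, by setting
checkout: none .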
steps:
- checkout: none # Don't sync sources
NOTE
When you use this option, the agent also skips running Git commands that clean the repo.
Clean build
You can perform different forms of cleaning the working directory of your self-hosted agent before a build runs.
In general, for faster performance of your self-hosted agents, don't clean the repo. In this case, to get the best
performance, make sure you're also building incrementally by disabling any Clean option of the task or tool you're
using to build.
If you do need to clean the repo (for example to avoid problems caused by residual files from a previous build),
your options are below.
NOTE
Cleaning is not effective if you're using a Microsoft-hosted agent because you'll get a new agent every time.
YAML
Classic
You can configure the clean setting in the Checkout step of your pipeline.
steps:
- checkout: self # self represents the repo where the initial Pipelines YAML file was found
  clean: boolean # whether to fetch clean each time
  fetchDepth: number # the depth of commits to ask Git to fetch
  lfs: boolean # whether to download Git-LFS files
  submodules: true | recursive # set to 'true' for a single level of submodules or 'recursive' to get submodules of submodules
  path: string # path to check out source code, relative to the agent's build directory (e.g. _work\1)
  persistCredentials: boolean # set to 'true' to leave the OAuth token in the Git config after the initial fetch
When clean is set to true the build pipeline performs an undo of any changes in $(Build.SourcesDirectory) .
More specifically, the following Git commands are executed prior to fetching the source.
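git clean -ffdx
git reset --hard HEAD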
For more options, you can configure the workspace setting of a Job.
jobs:
- job: string # name of the job, A-Z, a-z, 0-9, and underscore
  ...
  workspace:
    clean: outputs | resources | all # what to clean up before the job runs
In the Tag format you can use user-defined and predefined variables that have a scope of "All." For example:
$(Build.DefinitionName)_$(Build.DefinitionVersion)_$(Build.BuildId)_$(Build.BuildNumber)_$(My.Variable)
The first four variables are predefined. My.Variable can be defined by you on the variables tab.
The build pipeline labels your sources with a Git tag.
Some build variables might yield a value that is not a valid label. For example, variables such as
$(Build.RequestedFor) and $(Build.DefinitionName) can contain white space. If the value contains white space,
the tag is not created.
After the sources are tagged by your build pipeline, an artifact with the Git ref refs/tags/{tag} is automatically
added to the completed build. This gives your team additional traceability and a more user-friendly way to
navigate from the build to the code that was built.
FAQ
Problems related to Azure Repos integration fall into three categories:
Failing triggers : My pipeline is not being triggered when I push an update to the repo.
Failing checkout : My pipeline is being triggered, but it fails in the checkout step.
Wrong version : My pipeline runs, but it is using an unexpected version of the source/YAML.
Failing triggers
I just created a new YAML pipeline with CI/PR triggers, but the pipeline is not being triggered.
Follow each of these steps to troubleshoot your failing triggers:
Are your YAML CI or PR triggers being overridden by pipeline settings in the UI? While editing your
pipeline, choose ... and then Triggers .
Check the Override the YAML trigger from here setting for the types of trigger (Continuous
integration or Pull request validation ) available for your repo.
Are you configuring the PR trigger in the YAML file or in branch policies for the repo? For an Azure Repos Git
repo, you cannot configure a PR trigger in the YAML file. You need to use branch policies.
Is your pipeline paused or disabled? Open the editor for the pipeline, and then select Settings to check. If
your pipeline is paused or disabled, then triggers do not work.
Have you updated the YAML file in the correct branch? If you push an update to a branch, then the YAML file
in that same branch governs the CI behavior. If you push an update to a source branch, then the YAML file
resulting from merging the source branch with the target branch governs the PR behavior. Make sure that
the YAML file in the correct branch has the necessary CI or PR configuration.
Have you configured the trigger correctly? When you define a YAML trigger, you can specify both include
and exclude clauses for branches, tags, and paths. Ensure that the include clause matches the details of your
commit and that the exclude clause doesn't exclude them. Check the syntax for the triggers and make sure
that it is accurate.
Have you used variables in defining the trigger or the paths? That is not supported.
Did you use templates for your YAML file? If so, make sure that your triggers are defined in the main YAML
file. Triggers defined inside template files are not supported.
Have you excluded the branches or paths to which you pushed your changes? Test by pushing a change to
an included path in an included branch. Note that paths in triggers are case-sensitive. Make sure that you
use the same case as those of real folders when specifying the paths in triggers.
Do you have wildcards in your path filters? Understand the limitations of wildcards in your paths as
described in this article.
Did you just push a new branch? If so, the new branch may not start a new run. See the section "Behavior of
triggers when new branches are created".
My CI or PR triggers have been working fine. But, they stopped working now.
First go through the troubleshooting steps in the previous question. Then, follow these additional steps:
Do you have merge conflicts in your PR? For a PR that did not trigger a pipeline, open it and check whether
it has a merge conflict. Resolve the merge conflict.
Are you experiencing a delay in the processing of push or PR events? You can usually verify this by seeing if
the issue is specific to a single pipeline or is common to all pipelines or repos in your project. If a push or a
PR update to any of the repos exhibits this symptom, we might be experiencing delays in processing the
update events. Check if we are experiencing a service outage on our status page. If the status page shows
an issue, then our team must have already started working on it. Check the page frequently for updates on
the issue.
I do not want users to override the list of branches for triggers when they update the YAML file. How can I do this?
Users with permissions to contribute code can update the YAML file and include/exclude additional branches. As a
result, users can include their own feature or user branch in their YAML file and push that update to a feature or
user branch. This may cause the pipeline to be triggered for all updates to that branch. If you want to prevent this
behavior, then you can:
1. Edit the pipeline in the Azure Pipelines UI.
2. Navigate to the Triggers menu.
3. Select Override the YAML continuous Integration trigger from here .
4. Specify the branches to include or exclude for the trigger.
When you follow these steps, any CI triggers specified in the YAML file are ignored.
I have multiple repositories in my YAML pipeline. How do I set up triggers for each repository?
See triggers in Using multiple repositories.
Failing checkout
I see the following error in the log file during checkout step. How do I fix it?
remote: TF401019: The Git repository with name or identifier XYZ does not exist or you do not have permissions
for the operation you are attempting.
fatal: repository 'XYZ' not found
##[error] Git fetch failed with exit code: 128
Related articles
Scheduled triggers
Pipeline completion triggers
Build GitHub repositories
Azure Pipelines
Azure Pipelines can automatically build and validate every pull request and commit to your GitHub repository.
This article describes how to configure the integration between GitHub and Azure Pipelines.
If you're new to Azure Pipelines integration with GitHub, follow the steps in Create your first pipeline to get your
first pipeline working with a GitHub repository, and then come back to this article to learn more about
configuring and customizing the integration between GitHub and Azure Pipelines.
Azure DevOps' structure consists of organizations that contain projects . See Plan your organizational structure.
Following this pattern, your GitHub repositories and Azure DevOps Projects will have matching URL paths. For
example:
SERVICE          URL
GitHub           https://github.com/python/cpython
Azure DevOps     https://dev.azure.com/python/cpython
Users
Your GitHub users do not automatically get access to Azure Pipelines. Azure Pipelines is unaware of GitHub
identities. For this reason, there is no way to configure Azure Pipelines to automatically notify users of a build
failure or a PR validation failure using their GitHub identity and email address. You must explicitly create new
users in Azure Pipelines to replicate GitHub users. Once you create new users, you can configure their
permissions in Azure DevOps to reflect their permissions in GitHub. You can also configure notifications in Azure
DevOps using their Azure DevOps identity.
GitHub organization roles
GitHub organization member roles are found at https://github.com/orgs/your-organization/people (replace
your-organization ).
If your GitHub repository grants permission to teams, you can create matching teams in the Teams section of
your Azure DevOps project settings. Then, add the teams to the security groups above, just like users.
Pipeline-specific permissions
To grant permissions to users or teams for specific pipelines in an Azure DevOps project, follow these steps:
1. Visit the project's Pipelines page (for example, https://dev.azure.com/your-organization/your-project/_build ).
2. Select the pipeline for which to set specific permissions.
3. From the '...' context menu, select Security .
4. Click Add... to add a specific user, team, or group and customize their permissions for the pipeline.
Write access to code: Only upon your deliberate action, Azure Pipelines will simplify creating a pipeline by
committing a YAML file to a selected branch of your GitHub repository.
Read access to metadata: Azure Pipelines will retrieve GitHub metadata for displaying the repository, branches,
and issues associated with a build in the build's summary.
Read and write access to checks: Azure Pipelines will read and write its own build, test, and code coverage
results to be displayed in GitHub.
Read and write access to pull requests: Only upon your deliberate action, Azure Pipelines will simplify creating
a pipeline by creating a pull request for a YAML file that was committed to a selected branch of your GitHub
repository. Azure Pipelines will retrieve pull request metadata to display in build summaries associated with
pull requests.
This means that the GitHub App is likely already installed for your organization. When you create a pipeline for a
repository in the organization, the GitHub App will automatically be used to connect to GitHub.
Create pipelines in multiple Azure DevOps organizations and projects
Once the GitHub App is installed, pipelines can be created for the organization's repositories in different Azure
DevOps organizations and projects. However, if you create pipelines for a single repository in multiple Azure
DevOps organizations, only the first organization's pipelines can be automatically triggered by GitHub commits or
pull requests. Manual or scheduled builds are still possible in secondary Azure DevOps organizations.
OAuth authentication
OAuth is the simplest authentication type to get started with for repositories in your personal GitHub account.
GitHub status updates will be performed on behalf of your personal GitHub identity. For pipelines to keep
working, your repository access must remain active. Some GitHub features, like Checks, are unavailable with
OAuth and require the GitHub App.
To use OAuth, click Choose a different connection below the list of repositories while creating a pipeline. Then,
click Authorize to sign into GitHub and authorize with OAuth. An OAuth connection will be saved in your Azure
DevOps project for later use, as well as used in the pipeline being created.
Permissions needed in GitHub
To create a pipeline for a GitHub repository with continuous integration and pull request triggers, you must have
the required GitHub permissions configured. Otherwise, the repository will not appear in the repository list
while creating a pipeline. Depending on the authentication type and ownership of the repository, ensure that the
appropriate access is configured.
If the repo is in your personal GitHub account, at least once, authenticate to GitHub with OAuth using your
personal GitHub account credentials. This can be done in Azure DevOps project settings under Pipelines >
Service connections > New service connection > GitHub > Authorize. Grant Azure Pipelines access to your
repositories under "Permissions" here.
If the repo is in someone else's personal GitHub account, at least once, the other person must authenticate
to GitHub with OAuth using their personal GitHub account credentials. This can be done in Azure DevOps
project settings under Pipelines > Service connections > New service connection > GitHub > Authorize.
The other person must grant Azure Pipelines access to their repositories under "Permissions" here. You
must be added as a collaborator in the repository's settings under "Collaborators". Accept the invitation to
be a collaborator using the link that is emailed to you.
If the repo is in a GitHub organization that you own, at least once, authenticate to GitHub with OAuth using
your personal GitHub account credentials. This can be done in Azure DevOps project settings under
Pipelines > Service connections > New service connection > GitHub > Authorize. Grant Azure Pipelines
access to your organization under "Organization access" here. You must be added as a collaborator, or your
team must be added, in the repository's settings under "Collaborators and teams".
If the repo is in a GitHub organization that someone else owns, at least once, a GitHub organization owner
must authenticate to GitHub with OAuth using their personal GitHub account credentials. This can be done
in Azure DevOps project settings under Pipelines > Service connections > New service connection >
GitHub > Authorize. The organization owner must grant Azure Pipelines access to the organization under
"Organization access" here. You must be added as a collaborator, or your team must be added, in the
repository's settings under "Collaborators and teams". Accept the invitation to be a collaborator using the
link that is emailed to you.
Revoke OAuth access
After authorizing Azure Pipelines to use OAuth, to later revoke it and prevent further use, visit OAuth Apps in your
GitHub settings. You can also delete it from the list of GitHub service connections in your Azure DevOps project
settings.
Personal access token (PAT ) authentication
PATs are effectively the same as OAuth, but allow you to control which permissions are granted to Azure Pipelines.
Builds and GitHub status updates will be performed on behalf of your personal GitHub identity. For builds to keep
working, your repository access must remain active.
To create a PAT, visit Personal access tokens in your GitHub settings. The required permissions are repo ,
admin:repo_hook , read:user , and user:email . These are the same permissions required when using OAuth
above. Copy the generated PAT to the clipboard and paste it into a new GitHub service connection in your Azure
DevOps project settings. For future recall, name the service connection after your GitHub username. It will be
available in your Azure DevOps project for later use when creating pipelines.
Permissions needed in GitHub
To create a pipeline for a GitHub repository with continuous integration and pull request triggers, you must have
the required GitHub permissions configured. Otherwise, the repository will not appear in the repository list
while creating a pipeline. Depending on the authentication type and ownership of the repository, ensure that the
following access is configured.
If the repo is in your personal GitHub account, the PAT must have the required access scopes under
Personal access tokens: repo , admin:repo_hook , read:user , and user:email .
If the repo is in someone else's personal GitHub account, the PAT must have the required access scopes
under Personal access tokens: repo , admin:repo_hook , read:user , and user:email . You must be added as
a collaborator in the repository's settings under "Collaborators". Accept the invitation to be a collaborator
using the link that is emailed to you.
If the repo is in a GitHub organization that you own, the PAT must have the required access scopes under
Personal access tokens: repo , admin:repo_hook , read:user , and user:email . You must be added as a
collaborator, or your team must be added, in the repository's settings under "Collaborators and teams".
If the repo is in a GitHub organization that someone else owns, the PAT must have the required access
scopes under Personal access tokens: repo , admin:repo_hook , read:user , and user:email . You must be
added as a collaborator, or your team must be added, in the repository's settings under "Collaborators and
teams". Accept the invitation to be a collaborator using the link that is emailed to you.
Revoke PAT access
After authorizing Azure Pipelines to use a PAT, to later delete it and prevent further use, visit Personal access
tokens in your GitHub settings. You can also delete it from the list of GitHub service connections in your Azure
DevOps project settings.
CI triggers
Continuous integration (CI) triggers cause a pipeline to run whenever you push an update to the specified
branches or you push specified tags.
YAML
Classic
YAML pipelines are configured by default with a CI trigger on all branches.
Branches
You can control which branches get CI triggers with a simple syntax:
trigger:
- master
- releases/*
You can specify the full name of the branch (for example, master ) or a wildcard (for example, releases/* ). See
Wildcards for information on the wildcard syntax.
NOTE
You cannot use variables in triggers, as variables are evaluated at runtime (after the trigger has fired).
NOTE
If you use templates to author YAML files, then you can only specify triggers in the main YAML file for the pipeline. You
cannot specify triggers in the template files.
For more complex triggers that use exclude or batch , you must use the full syntax as shown in the following
example.
# specific branch build
trigger:
branches:
include:
- master
- releases/*
exclude:
- releases/old*
In the above example, the pipeline will be triggered if a change is pushed to master or to any releases branch.
However, it won't be triggered if a change is made to a releases branch that starts with old .
If you specify an exclude clause without an include clause, then it is equivalent to specifying * in the include
clause.
In addition to specifying branch names in the branches lists, you can also configure triggers based on tags by
using the following format:
trigger:
branches:
include:
- refs/tags/{tagname}
exclude:
- refs/tags/{othertagname}
trigger:
branches:
include:
- '*' # must quote since "*" is a YAML reserved character; we want a string
IMPORTANT
When you specify a trigger, it replaces the default implicit trigger, and only pushes to branches that are explicitly configured
to be included will trigger a pipeline. Includes are processed first, and then excludes are removed from that list.
Batching CI runs
If you have many team members uploading changes often, you may want to reduce the number of runs you start.
If you set batch to true , when a pipeline is running, the system waits until the run is completed, then starts
another run with all changes that have not yet been built.
To clarify this example, let us say that a push A to master caused the above pipeline to run. While that pipeline is
running, additional pushes B and C are made to the repository. These updates do not start new independent runs
immediately. Instead, after the first run is completed, all pushes up to that point in time are batched together and a new
run is started.
NOTE
If the pipeline has multiple jobs and stages, then the first run should still reach a terminal state by completing or skipping all
its jobs and stages before the second run can start. For this reason, you must exercise caution when using this feature in a
pipeline with multiple stages or approvals. If you wish to batch your builds in such cases, it is recommended that you split
your CI/CD process into two pipelines - one for build (with batching) and one for deployments.
Paths
You can specify file paths to include or exclude. Note that the wildcard syntax is different between branches/tags
and file paths.
When you specify paths, you must explicitly specify branches to trigger on. You can't trigger a pipeline with only a
path filter; you must also have a branch filter, and the changed files that match the path filter must be from a
branch that matches the branch filter.
Tips:
Paths are always specified relative to the root of the repository.
If you don't set path filters, then the root folder of the repo is implicitly included by default.
If you exclude a path, you cannot also include it unless you qualify it to a deeper folder. For example, if you
exclude /tools, you could still include /tools/trigger-runs-on-these.
The order of path filters doesn't matter.
Paths in Git are case-sensitive. Be sure to use the same case as the real folders.
NOTE
You cannot use variables in paths, as variables are evaluated at runtime (after the trigger has fired).
Tags
In addition to specifying tags in the branches lists as covered in the previous section, you can directly specify tags
to include or exclude:
# specific tag
trigger:
tags:
include:
- v2.*
exclude:
- v2.0
If you don't specify any tag triggers, then by default, tags will not trigger pipelines.
IMPORTANT
If you specify tags in combination with branch filters, the trigger will fire if either the branch filter is satisfied or the tag filter
is satisfied. For example, if a pushed tag satisfies the branch filter, the pipeline triggers even if the tag is excluded by the tag
filter, because the push satisfied the branch filter.
Opting out of CI
Disabling the CI trigger
You can opt out of CI triggers entirely by specifying trigger: none .
IMPORTANT
When you push a change to a branch, the YAML file in that branch is evaluated to determine if a CI run should be started.
trigger:
branches:
include:
- master
- releases/*
- feature/*
exclude:
- releases/old*
- feature/*-working
paths:
include:
- '*' # same as '/' for the repository root
exclude:
- 'docs/*' # same as 'docs/'
PR triggers
Pull request (PR) triggers cause a pipeline to run whenever a pull request is opened with one of the specified
target branches, or when updates are made to such a pull request.
YAML
Classic
Branches
You can specify the target branches when validating your pull requests. For example, to validate pull requests that
target master and releases/* , you can use the following pr trigger.
pr:
- master
- releases/*
This configuration starts a new run the first time a new pull request is created, and after every update made to the
pull request.
You can specify the full name of the branch (for example, master ) or a wildcard (for example, releases/* ).
NOTE
You cannot use variables in triggers, as variables are evaluated at runtime (after the trigger has fired).
NOTE
If you use templates to author YAML files, then you can only specify triggers in the main YAML file for the pipeline. You
cannot specify triggers in the template files.
GitHub creates a new ref when a pull request is created. The ref points to a merge commit, which is the merged
code between the source and target branches of the pull request. The PR validation pipeline builds the commit
this ref points to. This means that the YAML file that is used to run the pipeline is also a merge between the source
and the target branch. As a result, the changes you make to the YAML file in the source branch of the pull request can
override the behavior defined by the YAML file in the target branch.
If no pr triggers appear in your YAML file, pull request validations are automatically enabled for all branches, as
if you wrote the following pr trigger. This configuration triggers a build when any pull request is created, and
when commits come into the source branch of any active pull request.
pr:
branches:
include:
- '*' # must quote since "*" is a YAML reserved character; we want a string
IMPORTANT
When you specify a pr trigger, it replaces the default implicit pr trigger, and only pushes to branches that are explicitly
configured to be included will trigger a pipeline.
For more complex triggers that need to exclude certain branches, you must use the full syntax as shown in the
following example.
# specific branch
pr:
branches:
include:
- master
- releases/*
exclude:
- releases/old*
Paths
You can specify file paths to include or exclude. For example:
# specific path
pr:
branches:
include:
- master
- releases/*
paths:
include:
- docs/*
exclude:
- docs/README.md
NOTE
You cannot use variables in paths, as variables are evaluated at runtime (after the trigger has fired).
Multiple PR updates
You can specify whether additional updates to a PR should cancel in-progress validation runs for the same PR. The
default is true .
Draft PR validation
By default, pull request triggers fire on draft pull requests as well as pull requests that are ready for review. To
disable pull request triggers for draft pull requests, set the drafts property to false .
pr:
  autoCancel: boolean # indicates whether additional pushes to a PR should cancel in-progress runs for the same PR. Defaults to true
  branches:
    include: [ string ] # branch names which will trigger a build
    exclude: [ string ] # branch names which will not
  paths:
    include: [ string ] # file paths which must match to trigger a build
    exclude: [ string ] # file paths which will not trigger a build
  drafts: boolean # whether to build draft PRs, defaults to true
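As an illustrative sketch, a pr trigger that keeps earlier validation runs and skips draft pull requests might look like this (the branch name is a placeholder):
pr:
  autoCancel: false # don't cancel in-progress runs when the PR is updated
  drafts: false # don't run validation builds for draft pull requests
  branches:
    include:
    - master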
# no PR triggers
pr: none
NOTE
If your pr trigger isn't firing, follow the troubleshooting steps in the FAQ.
NOTE
Draft pull requests do not trigger a pipeline when the drafts property is set to false.
Protected branches
You can run a validation build with each commit or pull request that targets a branch, and even prevent pull
requests from merging until a validation build succeeds.
To configure mandatory validation builds for a GitHub repository, you must be its owner, a collaborator with the
Admin role, or a GitHub organization member with the Write role.
1. First, create a pipeline for the repository and build it at least once so that its status is posted to GitHub,
thereby making GitHub aware of the pipeline's name.
2. Next, follow GitHub's documentation for configuring protected branches in the repository's settings.
For the status check, select the name of your pipeline in the Status checks list.
IMPORTANT
If your pipeline doesn't show up in this list, please ensure the following:
You are using GitHub app authentication
Your pipeline has run at least once in the last week
IMPORTANT
These settings affect the security of your pipeline.
When you create a pipeline, it is automatically triggered for pull requests from forks of your repository. You can
change this behavior, carefully considering how it affects security. To enable or disable this behavior:
1. Go to your Azure DevOps project. Select Pipelines , locate your pipeline, and select Edit .
2. Select the Triggers tab. After enabling the Pull request trigger , enable or disable the Build pull requests
from forks of this repositor y check box.
By default with GitHub pipelines, secrets associated with your build pipeline are not made available to pull request
builds of forks. These secrets are enabled by default with GitHub Enterprise Server pipelines. Secrets include:
A security token with access to your GitHub repository.
These items, if your pipeline uses them:
Service connection credentials
Files from the secure files library
Build variables marked secret
To bypass this precaution on GitHub pipelines, enable the Make secrets available to builds of forks check
box. Be aware of this setting's effect on security.
Important security considerations
A GitHub user can fork your repository, change it, and create a pull request to propose changes to your
repository. This pull request could contain malicious code to run as part of your triggered build. For example, an
ill-intentioned script or unit test change might leak secrets or compromise the agent machine that's performing
the build. We recommend the following actions to address this risk:
Do not enable the Make secrets available to builds of forks check box if your repository is public or
untrusted users can submit pull requests that automatically trigger builds. Otherwise, secrets might leak
during a build.
Use a Microsoft-hosted agent pool to build pull requests from forks. Microsoft-hosted agent machines are
immediately deleted after they complete a build, so there is no lasting impact if they're compromised.
If you must use a self-hosted agent, do not store any secrets or perform other builds and releases that use
secrets on the same agent, unless your repository is private and you trust pull request creators. Otherwise,
secrets might leak, and the repository contents or secrets of other builds and releases might be revealed.
Comment triggers
Repository collaborators can comment on a pull request to manually run a pipeline. You might use this to run an
optional test suite or validation build. The following commands can be issued to Azure Pipelines in comments:
COMMAND: /AzurePipelines run
RESULT: Run all pipelines that are associated with this repository and whose triggers do not exclude this pull request.
COMMAND: /AzurePipelines run <pipeline-name>
RESULT: Run the specified pipeline unless its triggers exclude this pull request.
NOTE
For brevity, you can comment using /azp instead of /AzurePipelines .
IMPORTANT
Responses to these commands will appear in the pull request discussion only if your pipeline uses the Azure Pipelines
GitHub App.
Checkout
When a pipeline is triggered, Azure Pipelines pulls your source code from the Azure Repos Git repository. You can
control various aspects of how this happens.
Preferred version of Git
The Windows agent comes with its own copy of Git. If you prefer to supply your own Git rather than use the
included copy, set System.PreferGitFromPath to true . This setting is always true on non-Windows agents.
Checkout path
YAML
Classic
If you are checking out a single repository, by default, your source code will be checked out into a directory called
s . For YAML pipelines, you can change this by specifying checkout with a path . The specified path is relative to
$(Agent.BuildDirectory) . For example: if the checkout path value is mycustompath and $(Agent.BuildDirectory) is
C:\agent\_work\1 , then the source code will be checked out into C:\agent\_work\1\mycustompath .
If you are using multiple checkout steps and checking out multiple repositories, and not explicitly specifying the
folder using path , each repository is placed in a subfolder of s named after the repository. For example if you
check out two repositories named tools and code , the source code will be checked out into
C:\agent\_work\1\s\tools and C:\agent\_work\1\s\code .
Please note that the checkout path value cannot be set to go up any directory levels above
$(Agent.BuildDirectory) , so path\..\anotherpath will result in a valid checkout path (i.e.
C:\agent\_work\1\anotherpath ), but a value like ..\invalidpath will not (i.e. C:\agent\_work\invalidpath ).
You can configure the path setting in the Checkout step of your pipeline.
steps:
- checkout: self # self represents the repo where the initial Pipelines YAML file was found
  clean: boolean # whether to fetch clean each time
  fetchDepth: number # the depth of commits to ask Git to fetch
  lfs: boolean # whether to download Git-LFS files
  submodules: true | recursive # set to 'true' for a single level of submodules or 'recursive' to get submodules of submodules
  path: string # path to check out source code, relative to the agent's build directory (e.g. _work\1)
  persistCredentials: boolean # set to 'true' to leave the OAuth token in the Git config after the initial fetch
Submodules
YAML
Classic
You can configure the submodules setting in the Checkout step of your pipeline if you want to download files from
submodules.
steps:
- checkout: self # self represents the repo where the initial Pipelines YAML file was found
  clean: boolean # whether to fetch clean each time
  fetchDepth: number # the depth of commits to ask Git to fetch
  lfs: boolean # whether to download Git-LFS files
  submodules: true | recursive # set to 'true' for a single level of submodules or 'recursive' to get submodules of submodules
  path: string # path to check out source code, relative to the agent's build directory (e.g. _work\1)
  persistCredentials: boolean # set to 'true' to leave the OAuth token in the Git config after the initial fetch
The build pipeline will check out your Git submodules as long as they are:
Unauthenticated: A public, unauthenticated repo with no credentials required to clone or fetch.
Authenticated:
Contained in the same project as the Azure Repos Git repo specified above. The same credentials
that are used by the agent to get the sources from the main repository are also used to get the
sources for submodules.
Added by using a URL relative to the main repository. For example
This one would be checked out:
git submodule add ../../../FabrikamFiberProject/_git/FabrikamFiber FabrikamFiber
In this example the submodule refers to a repo (FabrikamFiber) in the same Azure DevOps
organization, but in a different project (FabrikamFiberProject). The same credentials that are
used by the agent to get the sources from the main repository are also used to get the
sources for submodules. This requires that the job access token has access to the repository
in the second project. If you restricted the job access token as explained in the section above,
then you won't be able to do this.
This one would not be checked out:
git submodule add https://fabrikam-fiber@dev.azure.com/fabrikam-fiber/FabrikamFiberProject/_git/FabrikamFiber FabrikamFiber
NOTE
Q: Why can't I use a Git credential manager on the agent? A: Storing the submodule credentials in a Git credential
manager installed on your private build agent is usually not effective as the credential manager may prompt you to
re-enter the credentials whenever the submodule is updated. This isn't desirable during automated builds when user
interaction isn't possible.
Shallow fetch
You may want to limit how far back in history to download. Effectively this results in git fetch --depth=n . If your
repository is large, this option might make your build pipeline more efficient. Your repository might be large if it
has been in use for a long time and has sizeable history. It also might be large if you added and later deleted large
files.
YAML
Classic
You can configure the fetchDepth setting in the Checkout step of your pipeline.
steps:
- checkout: self # self represents the repo where the initial Pipelines YAML file was found
  clean: boolean # whether to fetch clean each time
  fetchDepth: number # the depth of commits to ask Git to fetch
  lfs: boolean # whether to download Git-LFS files
  submodules: true | recursive # set to 'true' for a single level of submodules or 'recursive' to get submodules of submodules
  path: string # path to check out source code, relative to the agent's build directory (e.g. _work\1)
  persistCredentials: boolean # set to 'true' to leave the OAuth token in the Git config after the initial fetch
In these cases this option can help you conserve network and storage resources. It might also save time. The
reason it doesn't always save time is because in some situations the server might need to spend time calculating
the commits to download for the depth you specify.
NOTE
When the pipeline is started, the branch to build is resolved to a commit ID. Then, the agent fetches the branch and checks
out the desired commit. There is a small window between when a branch is resolved to a commit ID and when the agent
performs the checkout. If the branch updates rapidly and you set a very small value for shallow fetch, the commit may not
exist when the agent attempts to check it out. If that happens, increase the shallow fetch depth setting.
Don't sync sources
You may want to skip fetching new commits. This option can be useful in cases when you want to:
Git init, config, and fetch using your own custom options.
Use a build pipeline to just run automation (for example some scripts) that do not depend on code in
version control.
YAML
Classic
You can configure the Don't sync sources setting in the Checkout step of your pipeline, by setting
checkout: none .
steps:
- checkout: none # Don't sync sources
NOTE
When you use this option, the agent also skips running Git commands that clean the repo.
Clean build
You can perform different forms of cleaning the working directory of your self-hosted agent before a build runs.
In general, for faster performance of your self-hosted agents, don't clean the repo. In this case, to get the best
performance, make sure you're also building incrementally by disabling any Clean option of the task or tool
you're using to build.
If you do need to clean the repo (for example to avoid problems caused by residual files from a previous build),
your options are below.
NOTE
Cleaning is not effective if you're using a Microsoft-hosted agent because you'll get a new agent every time.
YAML
Classic
You can configure the clean setting in the Checkout step of your pipeline.
steps:
- checkout: self # self represents the repo where the initial Pipelines YAML file was found
  clean: boolean # whether to fetch clean each time
  fetchDepth: number # the depth of commits to ask Git to fetch
  lfs: boolean # whether to download Git-LFS files
  submodules: true | recursive # set to 'true' for a single level of submodules or 'recursive' to get submodules of submodules
  path: string # path to check out source code, relative to the agent's build directory (e.g. _work\1)
  persistCredentials: boolean # set to 'true' to leave the OAuth token in the Git config after the initial fetch
When clean is set to true the build pipeline performs an undo of any changes in $(Build.SourcesDirectory) .
More specifically, the following Git commands are executed prior to fetching the source.
git clean -ffdx
git reset --hard HEAD
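A minimal sketch of a checkout step that opts into this cleaning:
steps:
- checkout: self
  clean: true # run the clean commands above before fetching sources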
For more options, you can configure the workspace setting of a Job.
jobs:
- job: string # name of the job, A-Z, a-z, 0-9, and underscore
  ...
  workspace:
    clean: outputs | resources | all # what to clean up before the job runs
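For example, a sketch of a job that wipes its entire workspace before running (the job name and script step are placeholders):
jobs:
- job: Build
  workspace:
    clean: all # remove the entire workspace folder before this job runs
  steps:
  - script: echo "building"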
From the classic editor, choose YAML , choose the Get sources task, and then configure the desired properties
there.
In the Tag format you can use user-defined and predefined variables that have a scope of "All." For example:
$(Build.DefinitionName)_$(Build.DefinitionVersion)_$(Build.BuildId)_$(Build.BuildNumber)_$(My.Variable)
The first four variables are predefined. My.Variable can be defined by you on the variables tab.
The build pipeline labels your sources with a Git tag.
Some build variables might yield a value that is not a valid label. For example, variables such as
$(Build.RequestedFor) and $(Build.DefinitionName) can contain white space. If the value contains white space,
the tag is not created.
After the sources are tagged by your build pipeline, an artifact with the Git ref refs/tags/{tag} is automatically
added to the completed build. This gives your team additional traceability and a more user-friendly way to
navigate from the build to the code that was built.
Pre-defined variables
When you build a GitHub repository, most of the pre-defined variables are available to your jobs. However, since
Azure Pipelines does not recognize the identity of a user making an update in GitHub, the following variables are
set to the system identity instead of the user's identity:
Build.RequestedFor
Build.RequestedForId
Build.RequestedForEmail
Status updates
There are two types of statuses that Azure Pipelines posts back to GitHub - basic statuses and GitHub Check Runs.
GitHub Checks functionality is only available with GitHub Apps.
Pipeline statuses show up in various places in the GitHub UI.
For PRs, they are displayed on the PR conversations tab.
For individual commits, they are displayed when hovering over the status mark after the commit time on the
repo's commits tab.
PAT or OAuth GitHub connections
For pipelines using PAT or OAuth GitHub connections, statuses are posted back to the commit/PR that triggered
the run. The GitHub status API is used to post such updates. These statuses contain limited information: pipeline
status (failed, success), URL to link back to the build pipeline, and a brief description of the status.
Statuses for PAT or OAuth GitHub connections are only sent at the run level. In other words, you can have a single
status updated for an entire run. If you have multiple jobs in a run, you cannot post a separate status for each job.
However, multiple pipelines can post separate statuses to the same commit.
GitHub Checks
For pipelines set up using the Azure Pipelines GitHub App, the status is posted back in the form of GitHub Checks.
GitHub Checks allow for sending detailed information about the pipeline status as well as test, code coverage, and
errors. The GitHub Checks API can be found here.
For every pipeline using the GitHub App, Checks are posted back for the overall run as well as each job in that
run.
GitHub allows three options when one or more Check Runs fail for a PR/commit. You can choose to "re-run" the
individual Check, re-run all the failing Checks on that PR/commit, or re-run all the Checks, whether they
succeeded initially or not.
Clicking on the "Re-run" link next to the Check Run name will result in Azure Pipelines retrying the run that
generated the Check Run. The resultant run will have the same run number and will use the same version of the
source code, configuration, and YAML file as the initial build. Only those jobs that failed in the initial run and any
dependent downstream jobs will be run again. Clicking on the "Re-run all failing checks" link will have the same
effect. This is the same behavior as clicking "Re-try run" in the Azure Pipelines UI. Clicking on "Re-run all checks"
will result in a new run, with a new run number and will pick up changes in the configuration or YAML file.
FAQ
Problems related to GitHub integration fall into the following categories:
Connection types : I am not sure what connection type I am using to connect my pipeline to GitHub.
Failing triggers : My pipeline is not being triggered when I push an update to the repo.
Failing checkout : My pipeline is being triggered, but it fails in the checkout step.
Wrong version : My pipeline runs, but it is using an unexpected version of the source/YAML.
Missing status updates : My GitHub PRs are blocked because Azure Pipelines did not report a status update.
Connection types
To troubleshoot triggers, how do I know the type of GitHub connection I'm using for my pipeline?
Troubleshooting problems with triggers very much depends on the type of GitHub connection you use in your
pipeline. There are two ways to determine the type of connection - from GitHub and from Azure Pipelines.
From GitHub: If a repo is set up to use the GitHub app, then the statuses on PRs and commits will be Check
Runs. If the repo has Azure Pipelines set up with OAuth or PAT connections, the statuses will be the "old"
style of statuses. A quick way to determine if the statuses are Check Runs or simple statuses is to look at
the "conversation" tab on a GitHub PR.
If the "Details" link redirects to the Checks tab, it is a Check Run and the repo is using the app.
If the "Details" link redirects to the Azure DevOps pipeline, then the status is an "old style" status and the
repo is not using the app.
From Azure Pipelines: You can also determine the type of connection by inspecting the pipeline in Azure
Pipelines UI. Open the editor for the pipeline. Select Triggers to open the classic editor for the pipeline.
Then, select YAML tab and then the Get sources step. You'll notice a banner Authorized using
connection: indicating the service connection that was used to integrate the pipeline with GitHub. The
name of the service connection is a hyperlink. Select it to navigate to the service connection properties.
The properties of the service connection will indicate the type of connection being used:
Azure Pipelines app indicates GitHub app connection
oauth indicates OAuth connection
personalaccesstoken indicates PAT authentication
How do I switch my pipeline to use GitHub app instead of OAuth?
Using a GitHub app instead of OAuth or PAT connection is the recommended integration between GitHub and
Azure Pipelines. To switch to GitHub app, follow these steps:
1. Navigate here and install the app in the GitHub organization of your repository.
2. During installation, you'll be redirected to Azure DevOps to choose an Azure DevOps organization and project.
Choose the organization and project that contain the classic build pipeline you want to use the app for. This
choice associates the GitHub App installation with your Azure DevOps organization. If you choose incorrectly,
you can visit this page to uninstall the GitHub app from your GitHub org and start over.
3. On the next page that appears, you do not need to proceed with creating a new pipeline.
4. Edit your pipeline by visiting the Pipelines page (e.g.,
https://dev.azure.com/YOUR_ORG_NAME/YOUR_PROJECT_NAME/_build), selecting your pipeline, and clicking
Edit.
5. If this is a YAML pipeline, select the Triggers menu to open the classic editor.
6. Select the "Get sources" step in the pipeline.
7. On the green bar with text "Authorized using connection", click "Change" and select the GitHub App connection
with the same name as the GitHub organization in which you installed the app.
8. On the toolbar, select "Save and queue" and then "Save and queue". Click the link to the pipeline run that was
queued to make sure it succeeds.
9. Create (or close and reopen) a pull request in your GitHub repository to verify that a build is successfully
queued in its "Checks" section.
Why isn't a GitHub repository displayed for me to choose in Azure Pipelines?
Depending on the authentication type and ownership of the repository, specific permissions are required.
If you're using the GitHub App, see GitHub App authentication.
If you're using OAuth, see OAuth authentication.
If you're using PATs, see Personal access token (PAT) authentication.
When I select a repository during pipeline creation, I get an error "The repository {repo-name} is in use with the Azure Pipelines
GitHub App in another Azure DevOps organization."
This means that your repository is already associated with a pipeline in a different organization. CI and PR events
from this repository won't work as they will be delivered to the other organization. Here are the steps you should
take to remove the mapping to the other organization before proceeding to create a pipeline.
1. Open a pull request in your GitHub repository, and make the comment /azp where. This reports back the
Azure DevOps organization that the repository is mapped to.
2. To change the mapping, uninstall the app from the GitHub organization, and re-install it. As you re-install it,
make sure to select the correct organization when you are redirected to Azure DevOps.
Failing triggers
I just created a new YAML pipeline with CI/PR triggers, but the pipeline is not being triggered.
Follow each of these steps to troubleshoot your failing triggers:
Are your YAML CI or PR triggers being overridden by pipeline settings in the UI? While editing your
pipeline, choose ... and then Triggers .
Check the Override the YAML trigger from here setting for the types of trigger (Continuous
integration or Pull request validation ) available for your repo.
Are you using the GitHub app connection to connect the pipeline to GitHub? See Connection types to
determine the type of connection you have. If you are using a GitHub app connection, follow these steps:
Is the mapping set up properly between GitHub and Azure DevOps? Open a pull request in your
GitHub repository, and make the comment /azp where. This reports back the Azure DevOps
organization that the repository is mapped to.
If no organizations are set up to build this repository using the app, go to
https://github.com/<org_name>/<repo_name>/settings/installations and complete the
configuration of the app.
If a different Azure DevOps organization is reported, then someone has already established a
pipeline for this repo in a different organization. We currently have the limitation that we can
only map a GitHub repo to a single DevOps org. Only the pipelines in the first Azure DevOps
org can be automatically triggered. To change the mapping, uninstall the app from the GitHub
organization, and re-install it. As you re-install it, make sure to select the correct organization
when you are redirected to Azure DevOps.
Are you using OAuth or PAT to connect the pipeline to GitHub? See Connection types to determine the type
of connection you have. If you are using a GitHub connection, follow these steps:
1. OAuth and PAT connections rely on webhooks to communicate updates to Azure Pipelines. In
GitHub, navigate to the settings for your repository, then to Webhooks. Verify that the webhooks
exist. Usually you should see three webhooks - push, pull_request, and issue_comment. If you don't,
then you must re-create the service connection and update the pipeline to use the new service
connection.
2. Select each of the webhooks in GitHub and verify that the payload that corresponds to the user's
commit exists and was sent successfully to Azure DevOps. You may see an error here if the event
could not be communicated to Azure DevOps.
The traffic from Azure DevOps could be throttled by GitHub. When Azure Pipelines receives a notification
from GitHub, it tries to contact GitHub and fetch more information about the repo and YAML file. If you
have a repo with a large number of updates and pull requests, this call may fail due to such throttling. In
this case, see if you can reduce the frequency of builds by using batching or stricter path/branch filters.
Is your pipeline paused or disabled? Open the editor for the pipeline, and then select Settings to check. If
your pipeline is paused or disabled, then triggers do not work.
Have you updated the YAML file in the correct branch? If you push an update to a branch, then the YAML
file in that same branch governs the CI behavior. If you push an update to a source branch, then the YAML
file resulting from merging the source branch with the target branch governs the PR behavior. Make sure
that the YAML file in the correct branch has the necessary CI or PR configuration.
Have you configured the trigger correctly? When you define a YAML trigger, you can specify both include
and exclude clauses for branches, tags, and paths. Ensure that the include clause matches the details of
your commit and that the exclude clause doesn't exclude them. Check the syntax for the triggers and make
sure that it is accurate.
Have you used variables in defining the trigger or the paths? That is not supported.
Did you use templates for your YAML file? If so, make sure that your triggers are defined in the main YAML
file. Triggers defined inside template files are not supported.
Have you excluded the branches or paths to which you pushed your changes? Test by pushing a change to
an included path in an included branch. Note that paths in triggers are case-sensitive. Make sure that you
use the same case as those of real folders when specifying the paths in triggers.
Do you have wildcards in your path filters? Understand the limitations of wildcards in your paths as
described in this article.
Did you just push a new branch? If so, the new branch may not start a new run. See the section "Behavior
of triggers when new branches are created".
My CI or PR triggers have been working fine. But, they stopped working now.
First go through the troubleshooting steps in the previous question. Then, follow these additional steps:
Do you have merge conflicts in your PR? For a PR that did not trigger a pipeline, open it and check whether
it has a merge conflict. Resolve the merge conflict.
Are you experiencing a delay in the processing of push or PR events? You can usually verify this by seeing
if the issue is specific to a single pipeline or is common to all pipelines or repos in your project. If a push or
a PR update to any of the repos exhibits this symptom, we might be experiencing delays in processing the
update events. Check if we are experiencing a service outage on our status page. If the status page shows
an issue, then our team must have already started working on it. Check the page frequently for updates on
the issue.
I do not want users to override the list of branches for triggers when they update the YAML file. How can I do this?
Users with permissions to contribute code can update the YAML file and include/exclude additional branches. As a
result, users can include their own feature or user branch in their YAML file and push that update to a feature or
user branch. This may cause the pipeline to be triggered for all updates to that branch. If you want to prevent this
behavior, then you can:
1. Edit the pipeline in the Azure Pipelines UI.
2. Navigate to the Triggers menu.
3. Select Override the YAML continuous Integration trigger from here .
4. Specify the branches to include or exclude for the trigger.
When you follow these steps, any CI triggers specified in the YAML file are ignored.
Failing checkout
I see the following error in the log file during checkout step. How do I fix it?
This could be caused by an outage of GitHub. Try to access the repository in GitHub and make sure that you are
able to.
Wrong version
A wrong version of the YAML file is being used in the pipeline. Why is that?
For CI triggers, the YAML file that is in the branch you are pushing is evaluated to see if a CI build should be
run.
For PR triggers, the YAML file resulting from merging the source and target branches of the PR is evaluated to
see if a PR build should be run.
Missing status updates
My PR in GitHub is blocked since Azure Pipelines did not update the status.
This could be a transient error that resulted in Azure DevOps not being able to communicate with GitHub. Retry
the check in GitHub if you use the GitHub app. Or, make a trivial update to the PR to see if the problem can be
resolved.
Related articles
Scheduled triggers
Pipeline completion triggers
Build GitHub Enterprise Server repositories
You can integrate your on-premises GitHub Enterprise Server with Azure Pipelines. Your on-premises server may
be exposed to the Internet or it may not be.
If your GitHub Enterprise Server is reachable from the servers that run Azure Pipelines service, then:
you can set up classic build and YAML pipelines
you can configure CI, PR, and scheduled triggers
If your GitHub Enterprise Server is not reachable from the servers that run Azure Pipelines service, then:
you can only set up classic build pipelines
you can only start manual or scheduled builds
you cannot set up YAML pipelines
you cannot configure CI or PR triggers for your classic build pipelines
If your on-premises server is reachable from Microsoft-hosted agents, then you can use them to run your
pipelines. Otherwise, you must set up self-hosted agents that can access your on-premises server and fetch the
code.
Central Canada
shprodcca1ip1 40.82.185.225
tfsprodcca1ip1 40.82.190.38
Central US
tfsprodcus1ip1 13.86.38.60
tfsprodcus2ip1 13.86.33.223
shprodcus1ip1 13.86.39.243
tfsprodcus4ip1 52.158.209.56
tfsprodcus5ip1 13.89.136.165
tfsprodcus3ip1 13.86.36.181
East Asia
shprodea1ip1 20.189.72.51
tfsprodea1ip1 40.81.25.218
East Australia
tfsprodeausu7ip1 40.82.217.103
shprodeausu7ip1 40.82.220.184
East US
tfsprodeus2su5ip1 20.41.47.137
tfsprodeus2su3ip1 20.44.80.98
shprodeus2su1ip1 20.36.242.132
tfsprodeus2su1ip1 20.44.80.197
South Brazil
shprodsbr1ip1 20.40.112.11
tfsprodsbr1ip1 20.40.114.3
South India
tfsprodsin1ip1 40.81.75.130
shprodsin1ip1 40.81.76.87
South UK
tfsproduks1ip1 40.81.159.67
shproduks1ip1 40.81.156.105
West Central US
shprodwcus0ip1 52.159.49.185
Western Europe
tfsprodweu2ip1 52.236.147.103
shprodweusu4ip1 52.142.238.243
tfsprodweu5ip1 51.144.61.32
tfsprodweu3ip1 52.236.147.236
tfsprodweu6ip1 40.74.28.0
tfsprodweusu4ip1 52.142.235.223
Western US 2
tfsprodwus22ip1 40.91.93.92
tfsprodwus23ip1 40.91.93.56
tfsprodwus24ip1 40.91.88.106
tfsprodwus25ip1 51.143.58.182
tfsprodwus2su6ip1 40.91.75.130
Add the corresponding range of IP addresses to your firewall exception rules.
FAQ
Problems related to GitHub Enterprise integration fall into the following categories:
Failing triggers : My pipeline is not being triggered when I push an update to the repo.
Failing checkout : My pipeline is being triggered, but it fails in the checkout step.
Wrong version : My pipeline runs, but it is using an unexpected version of the source/YAML.
Failing triggers
I just created a new YAML pipeline with CI/PR triggers, but the pipeline is not being triggered.
Follow each of these steps to troubleshoot your failing triggers:
Are your YAML CI or PR triggers being overridden by pipeline settings in the UI? While editing your pipeline,
choose ... and then Triggers .
Check the Override the YAML trigger from here setting for the types of trigger (Continuous
integration or Pull request validation ) available for your repo.
Webhooks are used to communicate updates from GitHub Enterprise to Azure Pipelines. In GitHub
Enterprise, navigate to the settings for your repository, then to Webhooks. Verify that the webhooks exist.
Usually you should see two webhooks - push, pull_request. If you don't, then you must re-create the service
connection and update the pipeline to use the new service connection.
Select each of the webhooks in GitHub Enterprise and verify that the payload that corresponds to the user's
commit exists and was sent successfully to Azure DevOps. You may see an error here if the event could not
be communicated to Azure DevOps.
When Azure Pipelines receives a notification from GitHub, it tries to contact GitHub and fetch more
information about the repo and YAML file. If the GitHub Enterprise Server is behind a firewall, this traffic
may not reach your server. See Azure DevOps IP Addresses and verify that you have granted exceptions to
all the required IP addresses. These IP addresses may have changed since you have originally set up the
exception rules.
Is your pipeline paused or disabled? Open the editor for the pipeline, and then select Settings to check. If
your pipeline is paused or disabled, then triggers do not work.
Have you updated the YAML file in the correct branch? If you push an update to a branch, then the YAML file
in that same branch governs the CI behavior. If you push an update to a source branch, then the YAML file
resulting from merging the source branch with the target branch governs the PR behavior. Make sure that
the YAML file in the correct branch has the necessary CI or PR configuration.
Have you configured the trigger correctly? When you define a YAML trigger, you can specify both include
and exclude clauses for branches, tags, and paths. Ensure that the include clause matches the details of your
commit and that the exclude clause doesn't exclude them. Check the syntax for the triggers and make sure
that it is accurate.
Have you used variables in defining the trigger or the paths? That is not supported.
Did you use templates for your YAML file? If so, make sure that your triggers are defined in the main YAML
file. Triggers defined inside template files are not supported.
Have you excluded the branches or paths to which you pushed your changes? Test by pushing a change to
an included path in an included branch. Note that paths in triggers are case-sensitive. Make sure that you
use the same case as those of real folders when specifying the paths in triggers.
Do you have wildcards in your path filters? Understand the limitations of wildcards in your paths as
described in this article.
Did you just push a new branch? If so, the new branch may not start a new run. See the section "Behavior of
triggers when new branches are created".
My CI or PR triggers have been working fine. But, they stopped working now.
First go through the troubleshooting steps in the previous question. Then, follow these additional steps:
Do you have merge conflicts in your PR? For a PR that did not trigger a pipeline, open it and check whether
it has a merge conflict. Resolve the merge conflict.
Are you experiencing a delay in the processing of push or PR events? You can usually verify this by seeing if
the issue is specific to a single pipeline or is common to all pipelines or repos in your project. If a push or a
PR update to any of the repos exhibits this symptom, we might be experiencing delays in processing the
update events. Check if we are experiencing a service outage on our status page. If the status page shows an
issue, then our team must have already started working on it. Check the page frequently for updates on the
issue.
Failing checkout
Do you use Microsoft-hosted agents? If so, these agents may not be able to reach your GitHub Enterprise Server.
See Not reachable from Microsoft-hosted agents for more information.
Wrong version
A wrong version of the YAML file is being used in the pipeline. Why is that?
For CI triggers, the YAML file that is in the branch you are pushing is evaluated to see if a CI build should be run.
For PR triggers, the YAML file resulting from merging the source and target branches of the PR is evaluated to
see if a PR build should be run.
Build Bitbucket Cloud repositories
Azure Pipelines
Azure Pipelines can automatically build and validate every pull request and commit to your Bitbucket Cloud
repository. This article describes how to configure the integration between Bitbucket Cloud and Azure Pipelines.
Bitbucket and Azure Pipelines are two independent services that integrate well together. Your Bitbucket Cloud users
do not automatically get access to Azure Pipelines. You must add them explicitly to Azure Pipelines.
OAuth authentication
OAuth is the simplest authentication type to get started with for repositories in your Bitbucket account. Bitbucket
status updates will be performed on behalf of your personal Bitbucket identity. For pipelines to keep working, your
repository access must remain active.
To use OAuth, log in to Bitbucket when prompted during pipeline creation. Then, click Authorize to authorize with
OAuth. An OAuth connection will be saved in your Azure DevOps project for later use, as well as used in the
pipeline being created.
Password authentication
Builds and Bitbucket status updates will be performed on behalf of your personal identity. For builds to keep
working, your repository access must remain active.
To create a password connection, visit Service connections in your Azure DevOps project settings. Create a new
Bitbucket service connection and provide the user name and password to connect to your Bitbucket Cloud
repository.
CI triggers
Continuous integration (CI) triggers cause a pipeline to run whenever you push an update to the specified
branches or you push specified tags.
YAML pipelines are configured by default with a CI trigger on all branches.
Branches
You can control which branches get CI triggers with a simple syntax:
trigger:
- master
- releases/*
You can specify the full name of the branch (for example, master ) or a wildcard (for example, releases/* ). See
Wildcards for information on the wildcard syntax.
NOTE
You cannot use variables in triggers, as variables are evaluated at runtime (after the trigger has fired).
NOTE
If you use templates to author YAML files, then you can only specify triggers in the main YAML file for the pipeline. You
cannot specify triggers in the template files.
For more complex triggers that use exclude or batch , you must use the full syntax as shown in the following
example.
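For instance, a trigger along these lines (a minimal sketch consistent with the behavior described next) uses include and exclude branch filters:
# specific branch build
trigger:
  branches:
    include:
    - master
    - releases/*
    exclude:
    - releases/old*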
In the above example, the pipeline will be triggered if a change is pushed to master or to any releases branch.
However, it won't be triggered if a change is made to a releases branch that starts with old .
If you specify an exclude clause without an include clause, then it is equivalent to specifying * in the include
clause.
In addition to specifying branch names in the branches lists, you can also configure triggers based on tags by
using the following format:
trigger:
branches:
include:
- refs/tags/{tagname}
exclude:
- refs/tags/{othertagname}
trigger:
branches:
include:
- '*' # must quote since "*" is a YAML reserved character; we want a string
IMPORTANT
When you specify a trigger, it replaces the default implicit trigger, and only pushes to branches that are explicitly configured
to be included will trigger a pipeline. Includes are processed first, and then excludes are removed from that list.
Batching CI runs
If you have many team members uploading changes often, you may want to reduce the number of runs you start.
If you set batch to true , when a pipeline is running, the system waits until the run is completed, then starts
another run with all changes that have not yet been built.
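As a minimal sketch, a batched CI trigger on master might look like this:
# specific branch build with batching
trigger:
  batch: true
  branches:
    include:
    - master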
To clarify this example, let us say that a push A to master caused the above pipeline to run. While that pipeline is
running, additional pushes B and C occur into the repository. These updates do not start new independent runs
immediately. But after the first run is completed, all pushes until that point of time are batched together and a new
run is started.
NOTE
If the pipeline has multiple jobs and stages, then the first run should still reach a terminal state by completing or skipping all
its jobs and stages before the second run can start. For this reason, you must exercise caution when using this feature in a
pipeline with multiple stages or approvals. If you wish to batch your builds in such cases, it is recommended that you split
your CI/CD process into two pipelines - one for build (with batching) and one for deployments.
Paths
You can specify file paths to include or exclude. Note that the wildcard syntax is different between branches/tags
and file paths.
# specific path build
trigger:
branches:
include:
- master
- releases/*
paths:
include:
- docs/*
exclude:
- docs/README.md
When you specify paths, you must explicitly specify branches to trigger on. You can't trigger a pipeline with only a
path filter; you must also have a branch filter, and the changed files that match the path filter must be from a
branch that matches the branch filter.
Tips:
Paths are always specified relative to the root of the repository.
If you don't set path filters, then the root folder of the repo is implicitly included by default.
If you exclude a path, you cannot also include it unless you qualify it to a deeper folder. For example, if you exclude /tools, then you could include /tools/trigger-runs-on-these.
The order of path filters doesn't matter.
Paths in Git are case-sensitive. Be sure to use the same case as the real folders.
NOTE
You cannot use variables in paths, as variables are evaluated at runtime (after the trigger has fired).
NOTE
For Bitbucket Cloud repos, using branches syntax is the only way to specify tag triggers. The tags: syntax is not
supported for Bitbucket.
Opting out of CI
Disabling the CI trigger
You can opt out of CI triggers entirely by specifying trigger: none .
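For example:
# disable CI triggers for this pipeline
trigger: none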
IMPORTANT
When you push a change to a branch, the YAML file in that branch is evaluated to determine if a CI run should be started.
trigger:
branches:
include:
- master
- releases/*
- feature/*
exclude:
- releases/old*
- feature/*-working
paths:
include:
- '*' # same as '/' for the repository root
exclude:
- 'docs/*' # same as 'docs/'
PR triggers
Pull request (PR) triggers cause a pipeline to run whenever a pull request is opened with one of the specified target
branches, or when updates are made to such a pull request.
Branches
You can specify the target branches when validating your pull requests. For example, to validate pull requests that
target master and releases/* , you can use the following pr trigger.
pr:
- master
- releases/*
This configuration starts a new run the first time a new pull request is created, and after every update made to the
pull request.
You can specify the full name of the branch (for example, master ) or a wildcard (for example, releases/* ).
NOTE
You cannot use variables in triggers, as variables are evaluated at runtime (after the trigger has fired).
NOTE
If you use templates to author YAML files, then you can only specify triggers in the main YAML file for the pipeline. You
cannot specify triggers in the template files.
Each new run builds the latest commit from the source branch of the pull request. This is different from how Azure
Pipelines builds pull requests in other repositories (e.g., Azure Repos or GitHub), where it builds the merge commit.
Unfortunately, Bitbucket does not expose information about the merge commit, which contains the merged code
between the source and target branches of the pull request.
If no pr triggers appear in your YAML file, pull request validations are automatically enabled for all branches, as if
you wrote the following pr trigger. This configuration triggers a build when any pull request is created, and when
commits come into the source branch of any active pull request.
pr:
branches:
include:
- '*' # must quote since "*" is a YAML reserved character; we want a string
IMPORTANT
When you specify a pr trigger, it replaces the default implicit pr trigger, and only pull requests that target an explicitly included branch (and updates to those pull requests) will trigger a pipeline.
For more complex triggers that need to exclude certain branches, you must use the full syntax as shown in the
following example.
# specific branch
pr:
branches:
include:
- master
- releases/*
exclude:
- releases/old*
Paths
You can specify file paths to include or exclude. For example:
# specific path
pr:
branches:
include:
- master
- releases/*
paths:
include:
- docs/*
exclude:
- docs/README.md
NOTE
You cannot use variables in paths, as variables are evaluated at runtime (after the trigger has fired).
Multiple PR updates
You can specify whether additional updates to a PR should cancel in-progress validation runs for the same PR. The
default is true .
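A minimal sketch using the autoCancel setting, which controls this behavior:
# do not cancel in-progress runs when the PR is updated
pr:
  autoCancel: false
  branches:
    include:
    - master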
# no PR triggers
pr: none
NOTE
If your pr trigger isn't firing, ensure that you have not overridden YAML PR triggers in the UI.
FAQ
Problems related to Bitbucket integration fall into the following categories:
Failing triggers : My pipeline is not being triggered when I push an update to the repo.
Wrong version : My pipeline runs, but it is using an unexpected version of the source/YAML.
Failing triggers
I just created a new YAML pipeline with CI/PR triggers, but the pipeline is not being triggered.
Follow each of these steps to troubleshoot your failing triggers:
Are your YAML CI or PR triggers being overridden by pipeline settings in the UI? While editing your pipeline,
choose ... and then Triggers .
Check the Override the YAML trigger from here setting for the types of trigger (Continuous
integration or Pull request validation ) available for your repo.
Webhooks are used to communicate updates from Bitbucket to Azure Pipelines. In Bitbucket, navigate to the
settings for your repository, then to Webhooks. Verify that the webhooks exist.
Is your pipeline paused or disabled? Open the editor for the pipeline, and then select Settings to check. If
your pipeline is paused or disabled, then triggers do not work.
Have you updated the YAML file in the correct branch? If you push an update to a branch, then the YAML file
in that same branch governs the CI behavior. If you push an update to a source branch, then the YAML file
resulting from merging the source branch with the target branch governs the PR behavior. Make sure that
the YAML file in the correct branch has the necessary CI or PR configuration.
Have you configured the trigger correctly? When you define a YAML trigger, you can specify both include
and exclude clauses for branches, tags, and paths. Ensure that the include clause matches the details of your
commit and that the exclude clause doesn't exclude them. Check the syntax for the triggers and make sure
that it is accurate.
Have you used variables in defining the trigger or the paths? That is not supported.
Did you use templates for your YAML file? If so, make sure that your triggers are defined in the main YAML
file. Triggers defined inside template files are not supported.
Have you excluded the branches or paths to which you pushed your changes? Test by pushing a change to
an included path in an included branch. Note that paths in triggers are case-sensitive. Make sure that you
use the same case as those of real folders when specifying the paths in triggers.
Do you have wildcards in your path filters? Understand the limitations of wildcards in your paths as
described in this article.
Did you just push a new branch? If so, the new branch may not start a new run. See the section "Behavior of
triggers when new branches are created".
My CI or PR triggers have been working fine. But, they stopped working now.
First go through the troubleshooting steps in the previous question. Then, follow these additional steps:
Do you have merge conflicts in your PR? For a PR that did not trigger a pipeline, open it and check whether
it has a merge conflict. Resolve the merge conflict.
Are you experiencing a delay in the processing of push or PR events? You can usually verify this by seeing if
the issue is specific to a single pipeline or is common to all pipelines or repos in your project. If a push or a
PR update to any of the repos exhibits this symptom, we might be experiencing delays in processing the
update events. Check if we are experiencing a service outage on our status page. If the status page shows an
issue, then our team must have already started working on it. Check the page frequently for updates on the
issue.
I do not want users to override the list of branches for triggers when they update the YAML file. How can I do this?
Users with permissions to contribute code can update the YAML file and include/exclude additional branches. As a
result, users can include their own feature or user branch in their YAML file and push that update to a feature or
user branch. This may cause the pipeline to be triggered for all updates to that branch. If you want to prevent this
behavior, then you can:
1. Edit the pipeline in the Azure Pipelines UI.
2. Navigate to the Triggers menu.
3. Select Override the YAML continuous Integration trigger from here .
4. Specify the branches to include or exclude for the trigger.
When you follow these steps, any CI triggers specified in the YAML file are ignored.
Wrong version
A wrong version of the YAML file is being used in the pipeline. Why is that?
For CI triggers, the YAML file that is in the branch you are pushing is evaluated to see if a CI build should be run.
For PR triggers, the YAML file resulting from merging the source and target branches of the PR is evaluated to
see if a PR build should be run.
Build on-premises Bitbucket repositories
NOTE
To integrate Bitbucket Cloud with Azure Pipelines, see Bitbucket Cloud.
You can integrate your on-premises Bitbucket server or another Git server with Azure Pipelines. Your on-premises
server may be exposed to the Internet or it may not be.
If your on-premises server is reachable from the servers that run Azure Pipelines service, then:
you can set up classic build and configure CI triggers
If your on-premises server is not reachable from the servers that run Azure Pipelines service, then:
you can set up classic build pipelines and start manual builds
you cannot configure CI triggers
NOTE
YAML pipelines do not work with on-premises Bitbucket repositories.
NOTE
PR triggers are not available with on-premises Bitbucket repositories.
If your on-premises server is reachable from the hosted agents, then you can use the hosted agents to run manual,
scheduled, or CI builds. Otherwise, you must set up self-hosted agents that can access your on-premises server and
fetch the code.
Central Canada
shprodcca1ip1 40.82.185.225
tfsprodcca1ip1 40.82.190.38
Central US
tfsprodcus1ip1 13.86.38.60
tfsprodcus2ip1 13.86.33.223
shprodcus1ip1 13.86.39.243
tfsprodcus4ip1 52.158.209.56
tfsprodcus5ip1 13.89.136.165
tfsprodcus3ip1 13.86.36.181
East Asia
shprodea1ip1 20.189.72.51
tfsprodea1ip1 40.81.25.218
East Australia
tfsprodeausu7ip1 40.82.217.103
shprodeausu7ip1 40.82.220.184
East US
tfsprodeus2su5ip1 20.41.47.137
tfsprodeus2su3ip1 20.44.80.98
shprodeus2su1ip1 20.36.242.132
tfsprodeus2su1ip1 20.44.80.197
South Brazil
shprodsbr1ip1 20.40.112.11
tfsprodsbr1ip1 20.40.114.3
South India
tfsprodsin1ip1 40.81.75.130
shprodsin1ip1 40.81.76.87
South UK
tfsproduks1ip1 40.81.159.67
shproduks1ip1 40.81.156.105
West Central US
shprodwcus0ip1 52.159.49.185
Western Europe
tfsprodweu2ip1 52.236.147.103
shprodweusu4ip1 52.142.238.243
tfsprodweu5ip1 51.144.61.32
tfsprodweu3ip1 52.236.147.236
tfsprodweu6ip1 40.74.28.0
tfsprodweusu4ip1 52.142.235.223
Western US 2
tfsprodwus22ip1 40.91.93.92
tfsprodwus23ip1 40.91.93.56
tfsprodwus24ip1 40.91.88.106
tfsprodwus25ip1 51.143.58.182
tfsprodwus2su6ip1 40.91.75.130
FAQ
Problems related to Bitbucket Server integration fall into the following categories:
Failing triggers : My pipeline is not being triggered when I push an update to the repo.
Failing checkout : My pipeline is being triggered, but it fails in the checkout step.
Failing triggers
I pushed a change to my server, but the pipeline is not being triggered.
Follow each of these steps to troubleshoot your failing triggers:
Is your Bitbucket server accessible from Azure Pipelines? Azure Pipelines periodically polls Bitbucket server
for changes. If the Bitbucket server is behind a firewall, this traffic may not reach your server. See Azure
DevOps IP Addresses and verify that you have granted exceptions to all the required IP addresses. These IP
addresses may have changed since you have originally set up the exception rules. You can only start manual
runs if you used an external Git connection and if your server is not accessible from Azure Pipelines.
Is your pipeline paused or disabled? Open the editor for the pipeline, and then select Settings to check. If
your pipeline is paused or disabled, then triggers do not work.
Have you excluded the branches or paths to which you pushed your changes? Test by pushing a change to
an included path in an included branch. Note that paths in triggers are case-sensitive. Make sure that you
use the same case as those of real folders when specifying the paths in triggers.
I did not push any updates to my code; however, the pipeline is still being triggered.
The continuous integration trigger for Bitbucket works through polling. After each polling interval, Azure
Pipelines attempts to contact the Bitbucket server to check if there have been any updates to the code. If Azure
Pipelines is unable to reach the Bitbucket server (possibly due to a network issue), then we start a new run
anyway assuming that there might have been code changes. In a few cases, Azure Pipelines may also create a
dummy failed build with an error message to indicate that it was unable to reach the server.
Failing checkout
When I attempt to start a new run manually, there is a delay of 4-8 minutes before it starts.
Your Bitbucket server is not reachable from Azure Pipelines. Make sure that you have not selected the option to attempt accessing this Git server from Azure Pipelines in the Bitbucket service connection. If that option is selected, Azure Pipelines will attempt to contact your server, and since your server is unreachable, it eventually times out and starts the run anyway. Unchecking that option speeds up your manual runs.
The checkout step fails with the error that the server cannot be resolved.
Do you use Microsoft-hosted agents? If so, these agents may not be able to reach your Bitbucket server. See Not
reachable from Microsoft-hosted agents for more information.
Build TFVC repositories
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
NOTE
Azure Pipelines, TFS 2017.2 and newer: Click Advanced settings to see some of the following options.
Repository name
Ignore this text box (TFS 2017 RTM or older).
Mappings (workspace)
Include with a type value of Map only the folders that your build pipeline requires. If a subfolder of a mapped folder contains files that the build pipeline does not require, map it with a type value of Cloak.
Make sure that you Map all folders that contain files that your build pipeline requires. For example, if you add another project, you might have to add another mapping to the workspace.
Cloak folders you don't need. By default, the root folder of the project is mapped in the workspace. This configuration results in the build agent downloading all the files in the version control folder of your project. If this folder contains lots of data, your build could waste build system resources and slow down your build pipeline by downloading large amounts of data that it does not require.
When you remove projects, look for mappings that you can remove from the workspace.
If this is a CI build, in most cases you should make sure that these mappings match the filter settings of your CI
trigger on the Triggers tab.
For more information on how to optimize a TFVC workspace, see Optimize your workspace.
Clean the local repo on the agent
You can perform different forms of cleaning the working directory of your self-hosted agent before a build runs.
In general, for faster performance of your self-hosted agents, don't clean the repo. In this case, to get the best
performance, make sure you're also building incrementally by disabling any Clean option of the task or tool you're
using to build.
If you do need to clean the repo (for example to avoid problems caused by residual files from a previous build),
your options are below.
NOTE
Cleaning is not relevant if you are using a Microsoft-hosted agent because you get a new agent every time in that case.
Sources and output directory: Same operation as the Sources option above, plus: deletes and recreates $(Build.BinariesDirectory).
all if you want to delete $(Agent.BuildDirectory), which is the entire working folder that contains the sources folder, binaries folder, artifacts folder, and so on.
source if you want to delete $(Build.SourcesDirectory).
binary if you want to delete $(Build.BinariesDirectory).
TFS 2015 RTM
Select true to delete the repository folder.
Label sources
You may want to label your source code files to enable your team to easily identify which version of each file is
included in the completed build. You also have the option to specify whether the source code should be labeled for
all builds or only for successful builds.
NOTE
You can only use this feature when the source repository in your build is a GitHub repository, or a Git or TFVC repository
from your project.
In the Label format you can use user-defined and predefined variables that have a scope of "All." For example:
$(Build.DefinitionName)_$(Build.DefinitionVersion)_$(Build.BuildId)_$(Build.BuildNumber)_$(My.Variable)
The first four variables are predefined. My.Variable can be defined by you on the variables tab.
The build pipeline labels your sources with a TFVC label.
CI triggers
Select Enable continuous integration on the Triggers tab to enable this trigger if you want the build to run
whenever someone checks in code.
Batch changes
Select this check box if you have many team members uploading changes often and you want to reduce the
number of builds you are running. If you select this option, when a build is running, the system waits until the
build is completed and then queues another build of all changes that have not yet been built.
Path filters
Select the version control paths you want to include and exclude. In most cases, you should make sure that these
filters are consistent with your TFVC mappings. You can use path filters to reduce the set of files that you want to
trigger a build.
Tips:
Paths are always specified relative to the root of the workspace.
If you don't set path filters, then the root folder of the workspace is implicitly included by default.
If you exclude a path, you cannot also include it unless you qualify it to a deeper folder. For example, if you exclude /tools, then you could include /tools/trigger-runs-on-these.
The order of path filters doesn't matter.
Gated check-in
You can use gated check-in to protect against breaking changes.
By default Use workspace mappings for filters is selected. Builds are triggered whenever a change is checked
in under a path specified in your source mappings.
Otherwise, you can clear this check box and specify the paths in the trigger.
How it affects your developers
When developers try to check in, they are prompted to build their changes.
FAQ
I get the following error when running a pipeline:
The shelveset <xyz> could not be found for check-in
Is your job authorization scope set to collection? TFVC repositories are usually spread across the projects in your collection. You may be reading or writing to a folder that can only be accessed when the scope is the entire collection. You can set this in organization settings or in project settings under the Pipelines tab.
I get the following error when running a pipeline:
The underlying connection was closed: An unexpected error occurred on a receive. ##[error]Exit code 100
returned from process: file name 'tf', arguments 'vc workspace /new /location:local /permission:Public
This is usually an intermittent error caused when the service is experiencing technical issues. Please re-run the
pipeline.
What is scorch?
Scorch is a TFVC power tool that ensures source control on the server and the local disk are identical. See
Microsoft Visual Studio Team Foundation Server 2015 Power Tools.
Build Subversion repositories
You can integrate your on-premises Subversion server with Azure Pipelines. The Subversion server must be
accessible to Azure Pipelines.
NOTE
YAML pipelines do not work with Subversion repositories.
If your server is reachable from the hosted agents, then you can use the hosted agents to run manual, scheduled, or
CI builds. Otherwise, you must set up self-hosted agents that can access your on-premises server and fetch the
code.
To integrate with Subversion, create a Subversion service connection and use that to create a pipeline. CI triggers work through polling. In other words, Azure Pipelines periodically checks the Subversion server to see whether there are any updates to the code. If there are, then Azure Pipelines starts a new run.
If the Subversion server cannot be reached from Azure Pipelines, work with your IT department to open a network
path between Azure Pipelines and your server. For example, you can add exceptions to your firewall rules to allow
traffic from Azure Pipelines to flow through. See the section on Azure DevOps IPs to see which IP addresses you
need to allow. Furthermore, you need to have a public DNS entry for the Subversion server so that Azure Pipelines
can resolve the FQDN of your server to an IP address.
Reachable from Microsoft-hosted agents
A decision you have to make is whether to use Microsoft-hosted agents or self-hosted agents to run your pipelines.
This often comes down to whether Microsoft-hosted agents can reach your server. To check whether they can,
create a simple pipeline to use Microsoft-hosted agents and make sure to add a step to checkout source code from
your server. If this passes, then you can continue using Microsoft-hosted agents.
Not reachable from Microsoft-hosted agents
If the simple test pipeline mentioned in the above section fails with an error, then the Subversion server is probably
not reachable from Microsoft-hosted agents. This is probably caused by a firewall blocking traffic from these
servers. You have two options in this case:
Work with your IT department to open a network path between Microsoft-hosted agents and Subversion
server. See the section on networking in Microsoft-hosted agents.
Switch to using self-hosted agents or scale-set agents. These agents can be set up within your network and
hence will have access to the Subversion server. These agents only require outbound connections to Azure
Pipelines. There is no need to open a firewall for inbound connections. Make sure that the name of the
server you specified when creating the service connection is resolvable from the self-hosted agents.
Central Canada
tfsprodcca1ip1 40.82.190.38
Central US
tfsprodcus1ip1 13.86.38.60
tfsprodcus2ip1 13.86.33.223
shprodcus1ip1 13.86.39.243
tfsprodcus4ip1 52.158.209.56
tfsprodcus5ip1 13.89.136.165
tfsprodcus3ip1 13.86.36.181
East Asia
shprodea1ip1 20.189.72.51
tfsprodea1ip1 40.81.25.218
East Australia
tfsprodeausu7ip1 40.82.217.103
shprodeausu7ip1 40.82.220.184
East US
tfsprodeus2su5ip1 20.41.47.137
tfsprodeus2su3ip1 20.44.80.98
shprodeus2su1ip1 20.36.242.132
tfsprodeus2su1ip1 20.44.80.197
South Brazil
shprodsbr1ip1 20.40.112.11
tfsprodsbr1ip1 20.40.114.3
South India
tfsprodsin1ip1 40.81.75.130
shprodsin1ip1 40.81.76.87
South UK
tfsproduks1ip1 40.81.159.67
shproduks1ip1 40.81.156.105
West Central US
shprodwcus0ip1 52.159.49.185
Western Europe
tfsprodweu2ip1 52.236.147.103
shprodweusu4ip1 52.142.238.243
tfsprodweu5ip1 51.144.61.32
tfsprodweu3ip1 52.236.147.236
tfsprodweu6ip1 40.74.28.0
tfsprodweusu4ip1 52.142.235.223
Western US 2
tfsprodwus22ip1 40.91.93.92
tfsprodwus23ip1 40.91.93.56
tfsprodwus24ip1 40.91.88.106
tfsprodwus25ip1 51.143.58.182
tfsprodwus2su6ip1 40.91.75.130
FAQ
Problems related to Subversion server integration fall into the following categories:
Failing triggers : My pipeline is not being triggered when I push an update to the repo.
Failing checkout : My pipeline is being triggered, but it fails in the checkout step.
Failing triggers
I pushed a change to my server, but the pipeline is not being triggered.
Follow each of these steps to troubleshoot your failing triggers:
Is your Subversion server accessible from Azure Pipelines? Azure Pipelines periodically polls Subversion
server for changes. If the Subversion server is behind a firewall, this traffic may not reach your server. See
Azure DevOps IP Addresses and verify that you have granted exceptions to all the required IP addresses.
These IP addresses may have changed since you have originally set up the exception rules.
Is your pipeline paused or disabled? Open the editor for the pipeline, and then select Settings to check. If
your pipeline is paused or disabled, then triggers do not work.
I did not push any updates to my code; however, the pipeline is still being triggered.
The continuous integration trigger for Subversion works through polling. After each polling interval, Azure
Pipelines attempts to contact the Subversion server to check if there have been any updates to the code. If Azure
Pipelines is unable to reach the server (possibly due to a network issue), then we start a new run anyway
assuming that there might have been code changes. In a few cases, Azure Pipelines may also create a dummy
failed build with an error message to indicate that it was unable to reach the server.
Failing checkout
The checkout step fails with the error that the server cannot be resolved.
Do you use Microsoft-hosted agents? If so, these agents may not be able to reach your Subversion server. See Not reachable from Microsoft-hosted agents for more information.
Check out multiple repositories in your pipeline
NOTE
When you check out Azure Repos Git repositories other than the one containing the pipeline, you may be prompted to authorize access to that resource before the pipeline runs for the first time. For more information, see Why am I prompted to authorize resources the first time I try to check out a different repository? in the FAQ section.
resources:
repositories:
- repository: MyGitHubRepo # The name used to reference this repository in the checkout step
type: github
endpoint: MyGitHubServiceConnection
name: MyGitHubOrgOrUser/MyGitHubRepo
- repository: MyBitbucketRepo
type: bitbucket
endpoint: MyBitbucketServiceConnection
name: MyBitbucketOrgOrUser/MyBitbucketRepo
- repository: MyAzureReposGitRepository # In a different organization
endpoint: MyAzureReposGitServiceConnection
type: git
name: OtherProject/MyAzureReposGitRepo
trigger:
- main
pool:
vmImage: 'ubuntu-latest'
steps:
- checkout: self
- checkout: MyGitHubRepo
- checkout: MyBitbucketRepo
- checkout: MyAzureReposGitRepository
If the self repository is named CurrentRepo, a script step that lists the contents of $(Build.SourcesDirectory) produces the following output: CurrentRepo MyAzureReposGitRepo MyBitbucketRepo MyGitHubRepo. In this example, the names of the repositories are used for the folders, because no path is specified in the checkout step. For more information on repository folder names and locations, see the following Checkout path section.
NOTE
Only Azure Repos Git repositories in the same organization can use the inline syntax. Azure Repos Git repositories in a
different organization, and other supported repository types require a service connection and must be declared as a
repository resource.
steps:
- checkout: git://MyProject/MyRepo # Azure Repos Git repository in the same organization
NOTE
In the previous example, the self repository is not checked out. If you specify any checkout steps, you must include
checkout: self in order for self to be checked out.
Checkout path
Unless a path is specified in the checkout step, source code is placed in a default directory. This directory is
different depending on whether you are checking out a single repository or multiple repositories.
Single repository: If you have a single checkout step in your job, or you have no checkout step (which is equivalent to checkout: self), your source code is checked out into a directory called s located as a subfolder of $(Agent.BuildDirectory). If $(Agent.BuildDirectory) is C:\agent\_work\1, your code is checked out to C:\agent\_work\1\s.
Multiple repositories: If you have multiple checkout steps in your job, your source code is checked out into directories named after the repositories as a subfolder of s in $(Agent.BuildDirectory). If $(Agent.BuildDirectory) is C:\agent\_work\1 and your repositories are named tools and code, your code is checked out to C:\agent\_work\1\s\tools and C:\agent\_work\1\s\code.
NOTE
If no path is specified in the checkout step, the name of the repository is used for the folder, not the
repository value which is used to reference the repository in the checkout step.
If a path is specified for a checkout step, that path is used, relative to $(Agent.BuildDirectory).
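For example, a job might place each repository under a custom folder (the folder names here are illustrative):
steps:
- checkout: self
  path: PutMyCodeHere     # checked out to $(Agent.BuildDirectory)/PutMyCodeHere
- checkout: MyGitHubRepo  # a repository resource declared earlier in the pipeline
  path: tools/github      # checked out to $(Agent.BuildDirectory)/tools/github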
NOTE
If you are using default paths, adding a second repository checkout step changes the default path of the code for the first
repository. For example, the code for a repository named tools would be checked out to C:\agent\_work\1\s when
tools is the only repository, but if a second repository is added, tools would then be checked out to
C:\agent\_work\1\s\tools . If you have any steps that depend on the source code being in the original location, those
steps must be updated.
When using a repository resource, specify the ref using the ref property. The following example checks out the
features/tools/ branch of the designated repository.
resources:
repositories:
- repository: MyGitHubRepo
type: github
endpoint: MyGitHubServiceConnection
name: MyGitHubOrgOrUser/MyGitHubRepo
ref: features/tools
steps:
- checkout: MyGitHubRepo
Triggers
You can trigger a pipeline when an update is pushed to the self repository or to any of the repositories declared
as resources. This is useful, for instance, in the following scenarios:
You consume a tool or a library from a different repository. You want to run tests for your application
whenever the tool or library is updated.
You keep your YAML file in a separate repository from the application code. You want to trigger the pipeline
every time an update is pushed to the application repository.
IMPORTANT
Repository resource triggers only work for Azure Repos Git repositories in the same organization at present. They do not
work for GitHub or Bitbucket repository resources.
If you do not specify a trigger section in a repository resource, then the pipeline won't be triggered by changes
to that repository. If you specify a trigger section, then the behavior for triggering is similar to how CI triggers
work for the self repository.
If you specify a trigger section for multiple repository resources, then a change to any of them will start a new
run.
The trigger for self repository can be defined in a trigger section at the root of the YAML file, or in a
repository resource for self . For example, the following two are equivalent.
trigger:
- main
steps:
...
resources:
repositories:
- repository: self
type: git
name: MyProject/MyGitRepo
trigger:
- main
steps:
...
NOTE
It is an error to define the trigger for self repository twice. Do not define it both at the root of the YAML file and in the
resources section.
When a pipeline is triggered, Azure Pipelines has to determine the version of the YAML file that should be used
and a version for each repository that should be checked out. If a change to the self repository triggers a
pipeline, then the commit that triggered the pipeline is used to determine the version of the YAML file. If a change
to any other repository resource triggers the pipeline, then the latest version of YAML from the default branch
of self repository is used.
When an update to one of the repositories triggers a pipeline, then the following variables are set based on
triggering repository:
Build.Repository.ID
Build.Repository.Name
Build.Repository.Provider
Build.Repository.Uri
Build.SourceBranch
Build.SourceBranchName
Build.SourceVersion
Build.SourceVersionMessage
For the triggering repository, the commit that triggered the pipeline determines the version of the code that is
checked out. For other repositories, the ref defined in the YAML for that repository resource determines the
default version that is checked out.
Consider the following example, where the self repository contains the YAML file and repositories A and B
contain additional source code.
trigger:
- main
- feature
resources:
repositories:
- repository: A
type: git
name: MyProject/A
ref: main
trigger:
- main
- repository: B
type: git
name: MyProject/B
ref: release
trigger:
- main
- release
The following table shows which versions are checked out for each repository by a pipeline using the above YAML file, unless you explicitly override the behavior during checkout.
CHANGE MADE TO | PIPELINE TRIGGERED | VERSION OF YAML | VERSION OF SELF | VERSION OF A | VERSION OF B
main in self | Yes | commit from main that triggered the pipeline | commit from main that triggered the pipeline | latest from main | latest from release
feature in self | Yes | commit from feature that triggered the pipeline | commit from feature that triggered the pipeline | latest from main | latest from release
main in A | Yes | latest from main | latest from main | commit from main that triggered the pipeline | latest from release
main in B | Yes | latest from main | latest from main | latest from main | commit from main that triggered the pipeline
release in B | Yes | latest from main | latest from main | latest from main | commit from release that triggered the pipeline
You can also trigger the pipeline when you create or update a pull request in any of the repositories. To do this,
declare the repository resources in the YAML files as in the examples above, and configure a branch policy in the
repository (Azure Repos only).
Repository details
When you check out multiple repositories, some details about the self repository are available as variables.
When you use multi-repo triggers, some of those variables have information about the triggering repository
instead. Details about all of the repositories consumed by the job are available as a template context object called
resources.repositories .
For example, to get the ref of a non-self repository, you could write a pipeline like this:
resources:
repositories:
- repository: other
type: git
name: MyProject/OtherTools
variables:
tools.ref: $[ resources.repositories['other'].ref ]
steps:
- checkout: self
- checkout: other
- bash: echo "Tools version: $TOOLS_REF"
FAQ
Why can't I check out a repository from another project? It used to work.
Why am I prompted to authorize resources the first time I try to check out a different repository?
Why can't I check out a repository from another project? It used to work.
Azure Pipelines provides a Limit job authorization scope to current project setting that, when enabled, doesn't permit the pipeline to access resources outside of the project that contains the pipeline. This setting can be set at either the organization or project level. If this setting is enabled, you won't be able to check out a repository in another project unless you explicitly grant access. For more information, see Job authorization scope.
Why am I prompted to authorize resources the first time I try to check out a different repository?
When you check out Azure Repos Git repositories other than the one containing the pipeline, you may be
prompted to authorize access to that resource before the pipeline runs for the first time. These prompts are
displayed on the pipeline run summary page.
Choose View or Authorize resources , and follow the prompts to authorize the resources.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are
called phases.
Use triggers to run a pipeline automatically. Azure Pipelines supports many types of triggers. Based on your
pipeline's type, select the appropriate trigger from the list below:
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
IMPORTANT
Scheduled triggers defined using the pipeline settings UI take precedence over YAML scheduled triggers.
If your YAML pipeline has both YAML scheduled triggers and UI defined scheduled triggers, only the UI defined scheduled
triggers are run. To run the YAML defined scheduled triggers in your YAML pipeline, you must remove the scheduled
triggers defined in the pipeline setting UI. Once all UI scheduled triggers are removed, a push must be made in order for the
YAML scheduled triggers to start running.
Scheduled triggers cause a pipeline to run on a schedule defined using cron syntax.
NOTE
If you want to run your pipeline by only using scheduled triggers, you must disable PR and continuous integration triggers
by specifying pr: none and trigger: none in your YAML file. If you're using Azure Repos Git, PR builds are configured
using branch policy and must be disabled there.
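A minimal sketch of that combination follows (the branch name and cron value are illustrative assumptions, not taken from this article):
# Illustrative sketch: run only on a schedule
trigger: none # disable continuous integration triggers
pr: none      # disable pull request triggers (GitHub/Bitbucket; Azure Repos PR builds are controlled by branch policy)
schedules:
- cron: "0 0 * * *"
  displayName: Nightly build
  branches:
    include:
    - main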
schedules:
- cron: string # cron syntax defining a schedule
  displayName: string # friendly name given to a specific schedule
  branches:
    include: [ string ] # which branches the schedule applies to
    exclude: [ string ] # which branches to exclude from the schedule
  always: boolean # whether to always run the pipeline or only if there have been source code changes since the last successful scheduled run. The default is false.
The first schedule, Daily midnight build , runs a pipeline at midnight every day, but only if the code has changed
since the last successful scheduled run, for master and all releases/* branches, except those under
releases/ancient/* .
The second schedule, Weekly Sunday build , runs a pipeline at noon on Sundays, whether the code has changed
or not since the last run, for all releases/* branches.
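The YAML for those two schedules isn't reproduced in this extract; a sketch consistent with the description above would look like the following (cron values assume UTC):
schedules:
- cron: "0 0 * * *"
  displayName: Daily midnight build
  branches:
    include:
    - master
    - releases/*
    exclude:
    - releases/ancient/*
- cron: "0 12 * * 0"
  displayName: Weekly Sunday build
  branches:
    include:
    - releases/*
  always: true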
NOTE
The time zone for cron schedules is UTC, so in these examples, the midnight build and the noon build are at midnight and
noon in UTC.
NOTE
If you specify an exclude clause without an include clause for branches , it is equivalent to specifying * in the
include clause.
NOTE
You cannot use pipeline variables when specifying schedules.
NOTE
If you use templates in your YAML file, then the schedules must be specified in the main YAML file and not in the template
files.
In this example, the scheduled runs for the following schedule are displayed.
schedules:
- cron: "0 0 * * *"
  displayName: Daily midnight build
  branches:
    include:
    - master
The Scheduled runs window displays the times converted to the local time zone set on the computer used to
browse to the Azure DevOps portal. In this example the screenshot was taken in the EST time zone.
IMPORTANT
Scheduled runs for a branch are added only if the branch matches the branch filters for the scheduled triggers in the YAML
file in that particular branch.
Next, a new branch is created based off of master , named new-feature . The scheduled triggers from the YAML
file in the new branch are read, and since there is no match for the new-feature branch, no changes are made to
the scheduled builds, and the new-feature branch is not built using a scheduled trigger.
If new-feature is added to the branches list and this change is pushed to the new-feature branch, the YAML file is
read, and since new-feature is now in the branches list, a scheduled build is added for the new-feature branch.
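For example, the YAML file in the new-feature branch would then contain something like the following sketch (the cron value is carried over from the example above):
# YAML file in the new-feature branch, with new-feature added to the branch filters
schedules:
- cron: "0 0 * * *"
  displayName: Daily midnight build
  branches:
    include:
    - master
    - new-feature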
Now consider that a branch named release is created based off master , and then release is added to the
branch filters in the YAML file in the master branch, but not in the newly created release branch.
# YAML file in the release branch
schedules:
- cron: "0 0 * * *"
  displayName: Daily midnight build
  branches:
    include:
    - master

# YAML file in the master branch with release added to the branches list
schedules:
- cron: "0 0 * * *"
  displayName: Daily midnight build
  branches:
    include:
    - master
    - release
Because release was added to the branch filters in the master branch, but not to the branch filters in the
release branch, the release branch won't be built on that schedule. Only when release is added to
the branch filters in the YAML file in the release branch will the scheduled build be added to the scheduler.
mm HH DD MM DW
 \  \  \  \  \__ Days of week
  \  \  \  \____ Months
   \  \  \______ Days
    \  \________ Hours
     \__________ Minutes
Field | Accepted values
Minutes | 0 through 59
Hours | 0 through 23
Days | 1 through 31
Months | 1 through 12, full English names, first three letters of English names
Days of week | 0 through 6 (starting with Sunday), full English names, first three letters of English names
schedules:
- cron: ...
  ...
  always: true
Every Monday - Friday at 3:00 AM (UTC + 5:30 time zone), build branches that meet the features/india/* branch filter criteria
Every Monday - Friday at 3:00 AM (UTC - 5:00 time zone), build branches that meet the features/nc/* branch filter criteria
schedules:
- cron: "30 21 * * Sun-Thu"
  displayName: M-F 3:00 AM (UTC + 5:30) India daily build
  branches:
    include:
    - /features/india/*
- cron: "0 8 * * Mon-Fri"
  displayName: M-F 3:00 AM (UTC - 5) NC daily build
  branches:
    include:
    - /features/nc/*
In the first schedule, M-F 3:00 AM (UTC + 5:30) India daily build , the cron syntax ( mm HH DD MM DW ) is
30 21 * * Sun-Thu .
Minutes and Hours - 30 21 - This maps to 21:30 UTC ( 9:30 PM UTC ). Since the specified time zone in the
classic editor is UTC + 5:30 , we need to subtract 5 hours and 30 minutes from the desired build time of 3:00
AM to arrive at the desired UTC time to specify for the YAML trigger.
Days and Months are specified as wildcards since this schedule doesn't specify to run only on certain days of
the month or on a specific month.
Days of the week - Sun-Thu - because of the timezone conversion, for our builds to run at 3:00 AM in the UTC
+ 5:30 India time zone, we need to specify starting them the previous day in UTC time. We could also specify
the days of the week as 0-4 or 0,1,2,3,4 .
In the second schedule, M-F 3:00 AM (UTC - 5) NC daily build , the cron syntax is 0 8 * * Mon-Fri .
Minutes and Hours - 0 8 - This maps to 8:00 AM UTC. Since the specified time zone in the classic editor is
UTC - 5:00, we need to add 5 hours to the desired build time of 3:00 AM to arrive at the desired UTC time
to specify for the YAML trigger.
Days and Months are specified as wildcards since this schedule doesn't specify to run only on certain days of
the month or on a specific month.
Days of the week - Mon-Fri - Because our timezone conversions don't span multiple days of the week for our
desired schedule, we don't need to do any conversion here. We could also specify the days of the week as 1-5
or 1,2,3,4,5 .
IMPORTANT
The UTC time zones in YAML scheduled triggers don't account for daylight savings time.
Every Monday - Friday at 3:00 AM UTC, build branches that meet the master and releases/* branch filter criteria
Every Sunday at 3:00 AM UTC, build the releases/lastversion branch, even if the source or pipeline hasn't changed
The equivalent YAML scheduled triggers are:
schedules:
- cron: "0 3 * * Mon-Fri"
  displayName: M-F 3:00 AM (UTC) daily build
  branches:
    include:
    - master
    - /releases/*
- cron: "0 3 * * Sun"
  displayName: Sunday 3:00 AM (UTC) weekly latest version build
  branches:
    include:
    - /releases/lastversion
  always: true
In the first schedule, M-F 3:00 AM (UTC) daily build , the cron syntax is 0 3 * * Mon-Fri .
Minutes and Hours - 0 3 - This maps to 3:00 AM UTC . Since the specified time zone in the classic editor is
UTC , we don't need to do any time zone conversions.
Days and Months are specified as wildcards since this schedule doesn't specify to run only on certain days of
the month or on a specific month.
Days of the week - Mon-Fri - because there is no timezone conversion, the days of the week map directly from
the classic editor schedule. We could also specify the days of the week as 1,2,3,4,5 .
In the second schedule, Sunday 3:00 AM (UTC) weekly latest version build , the cron syntax is 0 3 * * Sun .
Minutes and Hours - 0 3 - This maps to 3:00 AM UTC . Since the specified time zone in the classic editor is
UTC , we don't need to do any time zone conversions.
Days and Months are specified as wildcards since this schedule doesn't specify to run only on certain days of
the month or on a specific month.
Days of the week - Sun - Because our timezone conversions don't span multiple days of the week for our
desired schedule, we don't need to do any conversion here. We could also specify the days of the week as 0 .
We also specify always: true since this build is scheduled to run whether or not the source code has been
updated.
Scheduled builds are not yet supported in YAML syntax. After you create your YAML build pipeline, you can use
pipeline settings to specify a scheduled trigger.
YAML pipelines are not yet available on TFS.
FAQ
I defined a schedule in the YAML file. But it didn't run. What happened?
My YAML schedules were working fine. But, they stopped working now. How do I debug this?
My code hasn't changed, yet a scheduled build is triggered. Why?
I see the planned run in the Scheduled runs panel. However, it does not run at that time. Why?
Schedules defined in YAML pipeline work for one branch but not the other. How do I fix this?
I defined a schedule in the YAML file. But it didn't run. What happened?
Check the next few runs that Azure Pipelines has scheduled for your pipeline. You can find these by
selecting the Scheduled runs action in your pipeline. The list is filtered down to only show you the
upcoming few runs over the next few days. If this does not meet your expectation, it is probably the case
that you have mistyped your cron schedule, or you do not have the schedule defined in the correct branch.
Read the topic above to understand how to configure schedules. Reevaluate your cron syntax. All the times
for cron schedules are in UTC.
Make a small trivial change to your YAML file and push that update into your repository. If there was any
problem in reading the schedules from the YAML file earlier, it should be fixed now.
If you have any schedules defined in the UI, then your YAML schedules are not honored. Ensure that you do
not have any UI schedules by navigating to the editor for your pipeline and then selecting Triggers .
There is a limit on the number of runs you can schedule for a pipeline. Read more about limits.
If there are no changes to your code, then Azure Pipelines may not start new runs. Learn how to override
this behavior.
My YAML schedules were working fine. But, they stopped working now. How do I debug this?
If you did not specify always: true, your pipeline is scheduled only when there are updates made to
your code. Check whether there have been any code changes and how you configured the schedules.
There is a limit on how many times you can schedule your pipeline. Check if you have exceeded those
limits.
Check if someone enabled additional schedules in the UI. Open the editor for your pipeline, and select
Triggers . If they defined schedules in the UI, then your YAML schedules won't be honored.
Check if your pipeline is paused or disabled. Select Settings for your pipeline.
Check the next few runs that Azure Pipelines has scheduled for your pipeline. You can find these by
selecting the Scheduled runs action in your pipeline. If you do not see the schedules that you expected,
make a small trivial change to your YAML file, and push the update to your repository. This should re-sync
the schedules.
If you use GitHub for storing your code, it is possible that Azure Pipelines may have been throttled by
GitHub when it tried to start a new run. Check if you can start a new run manually.
My code hasn't changed, yet a scheduled build is triggered. Why?
You might have enabled an option to always run a scheduled build even if there are no code changes. If
you use a YAML file, verify the syntax for the schedule in the YAML file. If you use classic pipelines, verify if
you checked this option in the scheduled triggers.
You might have updated the build pipeline or some property of the pipeline. This will cause a new run to be
scheduled even if you have not updated your source code. Verify the History of changes in the pipeline
using the classic editor.
You might have updated the service connection used to connect to the repository. This will cause a new run
to be scheduled even if you have not updated your source code.
Azure Pipelines first checks if there are any updates to your code. If Azure Pipelines is unable to reach your
repository or get this information, it will either start a scheduled run anyway or it will create a failed run to
indicate that it is unable to reach the repository. If you notice that a run was created and that failed
immediately, this is likely the reason. It is a dummy build to let you know that Azure Pipelines is unable to
reach your repository.
I see the planned run in the Scheduled runs panel. However, it does not run at that time. Why?
The Scheduled runs panel shows all potential schedules. However, it may not actually run unless you have
made real updates to the code. To force a schedule to always run, ensure that you have set the always
property in the YAML pipeline, or checked the option to always run in a classic pipeline.
Schedules defined in YAML pipeline work for one branch but not the other. How do I fix this?
Schedules are defined in YAML files, and these files are associated with branches. If you want a pipeline to be
scheduled for a particular branch, say features/X, then make sure that the YAML file in that branch has the cron
schedule defined in it, and that it has the correct branch inclusions for the schedule. The YAML file in features/X
branch should have the following in this example:
schedules:
- cron: "0 12 * * 0" # replace with your schedule
  branches:
    include:
    - features/X
Trigger one pipeline after another
Large products have several components that are dependent on each other. These components are often
independently built. When an upstream component (a library, for example) changes, the downstream
dependencies have to be rebuilt and revalidated.
In situations like these, add a pipeline trigger to run your pipeline upon the successful completion of the
triggering pipeline .
YAML
Classic
To trigger a pipeline upon the completion of another, specify the triggering pipeline as a pipeline resource.
NOTE
Previously, you may have navigated to the classic editor for your YAML pipeline and configured build completion
triggers in the UI. While that model still works, it is no longer recommended. The recommended approach is to specify
pipeline triggers directly within the YAML file. Build completion triggers as defined in the classic editor have various
drawbacks, which have now been addressed in pipeline triggers. For instance, there is no way to trigger a pipeline on the
same branch as that of the triggering pipeline using build completion triggers.
In the following example, we have two pipelines - app-ci (the pipeline defined by the YAML snippet) and
security-lib-ci (the pipeline referenced by the pipeline resource). We want the app-ci pipeline to run
automatically every time a new version of the security library is built in the master branch or any releases branch.
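The YAML snippet itself is not reproduced in this extract; a sketch of the app-ci pipeline resource consistent with the description above would be (the branch filters are assumptions):
# Sketch: app-ci azure-pipelines.yml
resources:
  pipelines:
  - pipeline: securitylib   # name (identifier) of the pipeline resource
    source: security-lib-ci # name of the pipeline referenced by this pipeline resource
    trigger:
      branches:
        include:
        - master
        - releases/*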
pipeline: securitylib specifies the name of the pipeline resource, and is used when referring to the
pipeline resource from other parts of the pipeline, such as pipeline resource variables.
source: security-lib-ci specifies the name of the pipeline referenced by this pipeline resource. You can
retrieve a pipeline's name from the Azure DevOps portal in several places, such as the Pipelines landing
page. To configure the pipeline name setting, edit the YAML pipeline, choose Triggers from the settings
menu, and navigate to the YAML pane.
NOTE
If the triggering pipeline is in another Azure DevOps project, you must specify the project name using
project: OtherProjectName . For more information, see pipeline resource.
NOTE
If your filters aren't working, try using the prefix refs/heads/ . For example, use refs/heads/releases/old* instead of
releases/old* .
If the triggering pipeline and the triggered pipeline use the same repository, then both the pipelines will run using
the same commit when one triggers the other. This is helpful if your first pipeline builds the code, and the second
pipeline tests it. However, if the two pipelines use different repositories, then the triggered pipeline will use the
version of the code in the branch specified by the Default branch for manual and scheduled builds setting,
as described in the following section.
Branch considerations for pipeline completion triggers
Pipeline completion triggers use the Default branch for manual and scheduled builds setting to determine
which branch's version of a YAML pipeline's branch filters to evaluate when determining whether to run a pipeline
as the result of another pipeline completing. By default this setting points to the default branch of the repository.
When a pipeline completes, the Azure DevOps runtime evaluates the pipeline resource trigger branch filters of
any pipelines with pipeline completion triggers that reference the completed pipeline. A pipeline can have
multiple versions in different branches, so the runtime evaluates the branch filters in the pipeline version in the
branch specified by the Default branch for manual and scheduled builds setting. If there is a match, the
pipeline runs, but the version of the pipeline that runs may be in a different branch depending on whether the
triggered pipeline is in the same repository as the completed pipeline.
If the two pipelines are in different repositories, the triggered pipeline version in the branch specified by
Default branch for manual and scheduled builds is run.
If the two pipelines are in the same repository, the triggered pipeline version in the same branch as the
triggering pipeline is run, even if that branch is different than the Default branch for manual and
scheduled builds , and even if that version does not have branch filters that match the completed pipeline's
branch. This is because the branch filters from the Default branch for manual and scheduled builds
branch are used to determine if the pipeline should run, and not the branch filters in the version that is in the
completed pipeline branch.
If your pipeline completion triggers don't seem to be firing, check the value of the Default branch for manual
and scheduled builds setting for the triggered pipeline. The branch filters in that branch's version of the
pipeline are used to determine whether the pipeline completion trigger initiates a run of the pipeline. By default,
Default branch for manual and scheduled builds is set to the default branch of the repository, but you can
change it after the pipeline is created.
A typical scenario in which the pipeline completion trigger doesn't fire is when a new branch is created, the
pipeline completion trigger branch filters are modified to include this new branch, but when the first pipeline
completes on a branch that matches the new branch filters, the second pipeline doesn't trigger. This happens if the
branch filters in the pipeline version in the Default branch for manual and scheduled builds branch don't
match the new branch. To resolve this trigger issue you have the following two options.
Update the branch filters in the pipeline in the Default branch for manual and scheduled builds branch
so that they match the new branch.
Update the Default branch for manual and scheduled builds setting to a branch that has a version of the
pipeline with the branch filters that match the new branch.
To view and update the Default branch for manual and scheduled builds setting:
1. Navigate to the pipeline details for your pipeline, and choose Edit .
To prevent triggering two runs of B in this example, you must remove its CI trigger or pipeline trigger.
Triggers in pipeline resources are not available in Azure DevOps Server 2019. Choose the Classic tab in the documentation
for information on build completion triggers.
Release triggers
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
NOTE
This topic covers classic release pipelines. To understand triggers in YAML pipelines, see pipeline triggers.
Release triggers are an automation tool to deploy your application. When the trigger conditions are met, the
pipeline will deploy your artifacts to the environment/stages you already specified.
Build branch filters allow you to trigger a release only for a build that is from one of the branches selected here.
You also have the option to specify branch tags. If you do so, a release will be triggered only if a new build
tagged with the keywords specified here is available.
NOTE
Automatically creating a release does not mean it will be automatically deployed to a stage. You must set up stage
triggers to deploy your app to the various stages.
Scheduled release triggers
Scheduled release triggers allow you to create new releases at specific times.
Select the schedule icon under the Artifacts section. Toggle the Enabled/Disabled button and specify your
release schedule. You can set up multiple schedules to trigger a release.
To use a pull request trigger, you must also enable it for specific stages. We will go through stage triggers in the
next section. You may also want to set up branch policies for your branches.
Stage triggers
Stage triggers allow you to set up specific conditions to trigger deployment to a specific stage.
Select trigger : Set the trigger that will start the deployment to this stage automatically. Select "Release"
to deploy to the stage every time a new release is created. Use the "Stage" option to deploy after
deployments to selected stages are successful. To allow only manual deployments, select "Manual".
Artifacts filter : Select artifact condition(s) to trigger a new deployment. A release will be deployed to
this stage only if all artifact conditions are met.
Gates : Allow you to set up specific gates to evaluate before the deployment.
Deployment queue settings : Allow you to configure actions when multiple releases are queued for
deployment.
NOTE
TFS 2015 : The following features are not available in TFS 2015 - continuous deployment triggers for multiple artifact
sources, multiple scheduled triggers, combining scheduled and continuous deployment triggers in the same pipeline, and
continuous deployment based on the branch or tag of a build.
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
A task is the building block for defining automation in a pipeline. A task is simply a packaged script or procedure
that has been abstracted with a set of inputs.
When you add a task to your pipeline, it may also add a set of demands to the pipeline. The demands define the
prerequisites that must be installed on the agent for the task to run. When you run the build or deployment, an
agent that meets these demands will be chosen.
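Demands can also be stated explicitly on the pool. A minimal sketch follows (the pool name and capability are illustrative, not from this article):
# Illustrative: require an agent that reports the npm capability
pool:
  name: Default   # a self-hosted agent pool; the name is an example
  demands:
  - npm           # the chosen agent must have this capability installed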
When you run a job, all the tasks are run in sequence, one after the other. To run the same set of tasks in parallel
on multiple agents, or to run some tasks without using an agent, see jobs.
By default, all tasks run in the same context, whether that's on the host or in a job container. You may optionally
use step targets to control context for an individual task.
Learn more about how to specify properties for a task with the YAML schema.
When you run a job, all the tasks are run in sequence, one after the other, on an agent. To run the same set of
tasks in parallel on multiple agents, or to run some tasks without using an agent, see jobs.
Custom tasks
We provide some built-in tasks to enable fundamental build and deployment scenarios. We have also provided
guidance for creating your own custom task.
In addition, Visual Studio Marketplace offers a number of extensions; each of which, when installed to your
subscription or collection, extends the task catalog with one or more tasks. Furthermore, you can write your own
custom extensions to add tasks to Azure Pipelines or TFS.
In YAML pipelines, you refer to tasks by name. If a name matches both an in-box task and a custom task, the in-
box task will take precedence. You can use the task GUID or a fully-qualified name for the custom task to avoid
this risk:
steps:
- task: myPublisherId.myExtensionId.myContributionId.myTaskName@1 #format example
- task: qetza.replacetokens.replacetokens-task.replacetokens@3 #working example
To find myPublisherId and myExtensionId , select Get on a task in the marketplace. The values after the itemName
in your URL string are myPublisherId and myExtensionId . You can also find the fully-qualified name by adding
the task to a Release pipeline and selecting View YAML when editing the task.
Task versions
Tasks are versioned, and you must specify the major version of the task used in your pipeline. This can help to
prevent issues when new versions of a task are released. Tasks are typically backwards compatible, but in some
scenarios you may encounter unpredictable errors when a task is automatically updated.
When a new minor version is released (for example, 1.2 to 1.3), your build or release will automatically use the
new version. However, if a new major version is released (for example 2.0), your build or release will continue to
use the major version you specified until you edit the pipeline and manually change to the new major version.
The build or release log will include an alert that a new major version is available.
You can set which minor version gets used by specifying the full version number of a task after the @ sign
(for example, MyTask@1.2.3 rather than MyTask@1). You can only use task versions that exist for your organization.
YAML
Classic
In YAML, you specify the major version using @ in the task name. For example, to pin to version 2 of the
PublishTestResults task:
steps:
- task: PublishTestResults@2
The timeout period begins when the task starts running. It does not include the time the task is queued or is
waiting for an agent.
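As a minimal sketch of setting a timeout on an individual step (the script and the 10-minute value are illustrative):
steps:
- script: ./run-long-tests.sh   # illustrative script
  timeoutInMinutes: 10          # cancel this step if it runs longer than 10 minutes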
In this YAML, PublishTestResults@2 will run even if the previous step fails because of the succeededOrFailed()
condition.
steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.7'
    architecture: 'x64'
- task: PublishTestResults@2
  inputs:
    testResultsFiles: "**/TEST-*.xml"
  condition: succeededOrFailed()
NOTE
For the full schema, see YAML schema for task .
Conditions
Only when all previous dependencies have succeeded. This is the default if there is not a condition set in
the YAML.
Even if a previous dependency has failed, unless the run was canceled. Use succeededOrFailed() in the
YAML for this condition.
Even if a previous dependency has failed, even if the run was canceled. Use always() in the YAML for this
condition.
Only when a previous dependency has failed. Use failed() in the YAML for this condition.
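A short sketch of the always() case (the script contents are illustrative): a cleanup step that runs even when earlier steps fail or the run is canceled.
steps:
- script: ./build.sh     # illustrative build step
- script: ./cleanup.sh   # illustrative cleanup step
  condition: always()    # run even if earlier steps failed or the run was canceled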
resources:
  containers:
  - container: pycontainer
    image: python:3.8

steps:
- task: SampleTask@1
  target: host
- task: AnotherTask@1
  target: pycontainer
Here, the SampleTask runs on the host and AnotherTask runs in a container.
YAML pipelines aren't available in TFS.
TIP
Want a visual walkthrough? See our April 19 news release.
YAML
Classic
Create an azure-pipelines.yml file in your project's base directory with the following contents.
pool:
  vmImage: 'Ubuntu 16.04'

steps:
# Node install
- task: NodeTool@0
  displayName: Node install
  inputs:
    versionSpec: '6.x' # The version we're installing
# Write the installed version to the command line
- script: which node
Create a new build pipeline and run it. Observe how the build is run. The Node.js Tool Installer downloads the
Node.js version if it is not already on the agent. The Command Line script logs the location of the Node.js version
on disk.
YAML pipelines aren't available in TFS.
Tool installer tasks
For a list of our tool installer tasks, see Tool installer tasks.
Disabling in-box and Marketplace tasks
On the organization settings page, you can disable Marketplace tasks, in-box tasks, or both. Disabling
Marketplace tasks can help increase security of your pipelines. If you disable both in-box and Marketplace tasks,
only tasks you install using tfx will be available.
Related articles
Jobs
Task groups
Built-in task catalog
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
NOTE
Task groups are not supported in YAML pipelines. Instead, in that case you can use templates. See YAML schema reference.
A task group allows you to encapsulate a sequence of tasks, already defined in a build or a release pipeline, into a
single reusable task that can be added to a build or release pipeline, just like any other task. You can choose to
extract the parameters from the encapsulated tasks as configuration variables, and abstract the rest of the task
information.
The new task group is automatically added to the task catalogue, ready to be added to other release and build
pipelines. Task groups are stored at the project level, and are not accessible outside the project scope.
Task groups are a way to standardize and centrally manage deployment steps for all your applications. When you
include a task group in your definitions, and then make a change centrally to the task group, the change is
automatically reflected in all the definitions that use the task group. There is no need to change each one
individually.
2. Select a sequence of tasks in a build or release pipeline (when using a mouse, click on the checkmarks of
each one). Then open the shortcut menu and choose Create task group .
3. Specify a name and description for the new task group, and the category (tab in the Add tasks panel) you
want to add it to.
4. After you choose Create , the new task group is created and replaces the selected tasks in your pipeline.
5. All the '$(vars)' from the underlying tasks, excluding the predefined variables, will surface as the mandatory
parameters for the newly created task group.
For example, let's say you have a task input $(foobar), which you don't intend to parameterize. However,
when you create a task group, the task input is converted into task group parameter 'foobar'. Now, you can
provide the default value for the task group parameter 'foobar' as $(foobar). This ensures that at runtime,
the expanded task gets the same input it's intended to.
6. Save your updated pipeline.
Use the Export shortcut command to save a copy of the task group as a JSON pipeline, and the Import icon to
import previously saved task group definitions. Use this feature to transfer task groups between projects and
enterprises, or replicate and save copies of your task groups.
Select a task group name to open the details page.
In the Tasks page you can edit the tasks that make up the task group. For each encapsulated task you can
change the parameter values for the non-variable parameters, edit the existing parameter variables, or convert
parameter values to and from variables. When you save the changes, all definitions that use this task group will
pick up the changes.
All the variable parameters of the task group will show up as mandatory parameters in the pipeline definition. You
can also set the default value for the task group parameters.
In the History tab you can see the history of changes to the group.
In the References tab you can expand lists of all the build and release pipelines, and other task groups,
that use (reference) this task group. This is useful to ensure changes do not have unexpected effects on
other processes.
2. The string -test is appended to the task group version number. When you are happy with the changes,
choose Publish draft . You can choose whether to publish it as a preview or as a production-ready version.
3. You can now use the updated task group in your build and release processes; either by changing the
version number of the task group in an existing pipeline or by adding it from the Add tasks panel.
As with the built-in tasks, the default when you add a task group is the highest non-preview version.
4. After you have finished testing the updated task group, choose Publish preview . The Preview string is
removed from the version number string. It will now appear in definitions as a "production-ready" version.
5. In a build or release pipeline that already contains this task group, you can now select the new "production-
ready" version. When you add the task group from the Add tasks panel, it automatically selects the new
"production-ready" version.
Related topics
Tasks
Task jobs
Templates let you define reusable content, logic, and parameters. Templates function in two ways. You can insert
reusable content with a template or you can use a template to control what is allowed in a pipeline.
If a template is used to include content, it functions like an include directive in many programming languages.
Content from one file is inserted into another file. When a template controls what is allowed in a pipeline, the
template defines logic that another file must follow.
Use templates to define your logic once and then reuse it several times. Templates combine the content of
multiple YAML files into a single pipeline. You can pass parameters into a template from your parent pipeline.
Parameters
You can specify parameters and their data types in a template and pass those parameters to a pipeline. You can
also use parameters outside of templates.
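For instance, a minimal sketch of a runtime parameter declared directly in a pipeline rather than in a template (the parameter name and values are illustrative):
# Illustrative: a runtime parameter declared directly in azure-pipelines.yml
parameters:
- name: configuration
  type: string
  default: Release
  values:
  - Debug
  - Release

steps:
- script: echo Building the ${{ parameters.configuration }} configuration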
Passing parameters
Parameters must contain a name and data type. In azure-pipelines.yml , when the parameter yesNo is set to a
boolean value, the build succeeds. When yesNo is set to a string such as apples , the build fails.
# File: simple-param.yml
parameters:
- name: yesNo # name of the parameter; required
  type: boolean # data type of the parameter; required
  default: false

steps:
- script: echo ${{ parameters.yesNo }}

# File: azure-pipelines.yml
trigger:
- master

extends:
  template: simple-param.yml
  parameters:
    yesNo: false # set to a non-boolean value to have the build fail
parameters:
- name: experimentalTemplate
  type: boolean
  default: false
steps:
- ${{ if eq(parameters.experimentalTemplate, true) }}:
  - template: experimental.yml
- ${{ if not(eq(parameters.experimentalTemplate, true)) }}:
  - template: stable.yml
The step, stepList, job, jobList, deployment, deploymentList, stage, and stageList data types all use standard
YAML schema format. This example includes string, number, boolean, object, step, and stepList.
parameters:
- name: myString
  type: string
  default: a string
- name: myMultiString
  type: string
  default: default
  values:
  - default
  - ubuntu
- name: myNumber
  type: number
  default: 2
  values:
  - 1
  - 2
  - 4
  - 8
  - 16
- name: myBoolean
  type: boolean
  default: true
- name: myObject
  type: object
  default:
    foo: FOO
    bar: BAR
    things:
    - one
    - two
    - three
    nested:
      one: apple
      two: pear
      count: 3
- name: myStep
  type: step
  default:
    script: echo my step
- name: mySteplist
  type: stepList
  default:
  - script: echo step one
  - script: echo step two

trigger: none

jobs:
- job: stepList
  steps: ${{ parameters.mySteplist }}
- job: myStep
  steps:
  - ${{ parameters.myStep }}
You can iterate through an object and print out each string in the object.
parameters:
- name: listOfStrings
  type: object
  default:
  - one
  - two

steps:
- ${{ each value in parameters.listOfStrings }}:
  - script: echo ${{ value }}
# File: start.yml
parameters:
- name: buildSteps # the name of the parameter is buildSteps
  type: stepList # data type is StepList
  default: [] # default value of buildSteps

stages:
- stage: secure_buildstage
  pool: Hosted VS2017
  jobs:
  - job: secure_buildjob
    steps:
    - script: echo This happens before code
      displayName: 'Base: Pre-build'
    - script: echo Building
      displayName: 'Base: Build'

# File: azure-pipelines.yml
trigger:
- master

extends:
  template: start.yml
  parameters:
    buildSteps:
    - bash: echo Test #Passes
      displayName: succeed
    - bash: echo "Test"
      displayName: succeed
    - task: CmdLine@2
      displayName: Test 3 - Will Fail
      inputs:
        script: echo "Script Test"
Extend from a template with resources
You can also use extends to extend from a template in your Azure pipeline that contains resources.
# File: azure-pipelines.yml
trigger:
- none

extends:
  template: resource-template.yml

# File: resource-template.yml
resources:
  pipelines:
  - pipeline: my-pipeline
    source: sourcePipeline

steps:
- script: echo "Testing resource template"
Insert a template
You can copy content from one YAML and reuse it in a different YAML. This saves you from having to manually
include the same logic in multiple places. The include-npm-steps.yml file template contains steps that are
reused in azure-pipelines.yml .
# File: templates/include-npm-steps.yml
steps:
- script: npm install
- script: yarn install
- script: npm run compile

# File: azure-pipelines.yml
jobs:
- job: Linux
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - template: templates/include-npm-steps.yml # Template reference
- job: Windows
  pool:
    vmImage: 'windows-latest'
  steps:
  - template: templates/include-npm-steps.yml # Template reference
Step reuse
You can insert a template to reuse one or more steps across several jobs. In addition to the steps from the
template, each job can define additional steps.
# File: templates/npm-steps.yml
steps:
- script: npm install
- script: npm test

# File: azure-pipelines.yml
jobs:
- job: Linux
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
  - template: templates/npm-steps.yml # Template reference
- job: macOS
  pool:
    vmImage: 'macOS-10.14'
  steps:
  - template: templates/npm-steps.yml # Template reference
- job: Windows
  pool:
    vmImage: 'vs2017-win2016'
  steps:
  - script: echo This script runs before the template's steps, only on Windows.
  - template: templates/npm-steps.yml # Template reference
  - script: echo This step runs after the template's steps.
Job reuse
Much like steps, jobs can be reused with templates.
# File: templates/jobs.yml
jobs:
- job: Ubuntu
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - bash: echo "Hello Ubuntu"
- job: Windows
  pool:
    vmImage: 'windows-latest'
  steps:
  - bash: echo "Hello Windows"

# File: azure-pipelines.yml
jobs:
- template: templates/jobs.yml # Template reference
Stage reuse
Stages can also be reused with templates.
# File: templates/stages1.yml
stages:
- stage: Angular
  jobs:
  - job: angularinstall
    steps:
    - script: npm install angular

# File: templates/stages2.yml
stages:
- stage: Build
  jobs:
  - job: build
    steps:
    - script: npm run build

# File: azure-pipelines.yml
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

stages:
- stage: Install
  jobs:
  - job: npminstall
    steps:
    - task: Npm@1
      inputs:
        command: 'install'
- template: templates/stages1.yml
- template: templates/stages2.yml
# File: templates/npm-with-params.yml
parameters:
- name: name # defaults for any parameters that aren't specified
  default: ''
- name: vmImage
  default: ''

jobs:
- job: ${{ parameters.name }}
  pool:
    vmImage: ${{ parameters.vmImage }}
  steps:
  - script: npm install
  - script: npm test
When you consume the template in your pipeline, specify values for the template parameters.
# File: azure-pipelines.yml
jobs:
- template: templates/npm-with-params.yml # Template reference
  parameters:
    name: Linux
    vmImage: 'ubuntu-16.04'
You can also use parameters with step or stage templates. For example, steps with parameters:
# File: templates/steps-with-params.yml
parameters:
- name: 'runExtendedTests' # defaults for any parameters that aren't specified
  type: boolean
  default: false

steps:
- script: npm test
- ${{ if eq(parameters.runExtendedTests, true) }}:
  - script: npm test --extended
When you consume the template in your pipeline, specify values for the template parameters.
# File: azure-pipelines.yml
steps:
- script: npm install
- template: templates/steps-with-params.yml # Template reference
  parameters:
    runExtendedTests: true
NOTE
Scalar parameters without a specified type are treated as strings. For example, eq(true, parameters['myparam']) will
return true , even if the myparam parameter is the word false , if myparam is not explicitly made boolean . Non-
empty strings are cast to true in a Boolean context. That expression could be rewritten to explicitly compare strings:
eq(parameters['myparam'], 'true') .
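A small sketch of that explicit string comparison (the parameter name and value are illustrative):
parameters:
- name: myparam
  type: string
  default: 'false'

steps:
- ${{ if eq(parameters['myparam'], 'true') }}:
  - script: echo Runs only when myparam is the string 'true'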
Parameters are not limited to scalar strings. See the list of data types. For example, using the object type:
# azure-pipelines.yml
jobs:
- template: process.yml
  parameters:
    pool: # this parameter is called `pool`
      vmImage: ubuntu-latest # and it's a mapping rather than a string

# process.yml
parameters:
- name: 'pool'
  type: object
  default: {}

jobs:
- job: build
  pool: ${{ parameters.pool }}
Variable reuse
Variables can be defined in one YAML and included in another template. This could be useful if you want to
store all of your variables in one file. If you are using a template to include variables in a pipeline, the included
template can only be used to define variables. You can use steps and more complex logic when you are
extending from a template. Use parameters instead of variables when you want to restrict type.
In this example, the variable favoriteVeggie is included in azure-pipelines.yml .
# File: vars.yml
variables:
  favoriteVeggie: 'brussels sprouts'

# File: azure-pipelines.yml
variables:
- template: vars.yml # Template reference

steps:
- script: echo My favorite vegetable is ${{ variables.favoriteVeggie }}.
# Repo: Contoso/BuildTemplates
# File: common.yml
parameters:
- name: 'vmImage'
  default: 'ubuntu 16.04'

jobs:
- job: Build
  pool:
    vmImage: ${{ parameters.vmImage }}
  steps:
  - script: npm install
  - script: npm test
Now you can reuse this template in multiple pipelines. Use the resources specification to provide the location
of the core repo. When you refer to the core repo, use @ and the name you gave it in resources .
# Repo: Contoso/LinuxProduct
# File: azure-pipelines.yml
resources:
  repositories:
  - repository: templates
    type: github
    name: Contoso/BuildTemplates

jobs:
- template: common.yml@templates # Template reference

# Repo: Contoso/WindowsProduct
# File: azure-pipelines.yml
resources:
  repositories:
  - repository: templates
    type: github
    name: Contoso/BuildTemplates
    ref: refs/tags/v1.0 # optional ref to pin to

jobs:
- template: common.yml@templates # Template reference
  parameters:
    vmImage: 'vs2017-win2016'
For type: github, name is <identity>/<repo> as in the examples above. For type: git (Azure Repos), name is
<project>/<repo> . If that project is in a separate Azure DevOps organization, you'll need to configure a service
connection with access to the project and include that in YAML:
resources:
  repositories:
  - repository: templates
    name: Contoso/BuildTemplates
    endpoint: myServiceConnection # Azure DevOps service connection

jobs:
- template: common.yml@templates
Repositories are resolved only once, when the pipeline starts up. After that, the same resource is used for the
duration of the pipeline. Only the template files are used. Once the templates are fully expanded, the final
pipeline runs as if it were defined entirely in the source repo. This means that you can't use scripts from the
template repo in your pipeline.
If you want to use a particular, fixed version of the template, be sure to pin to a ref . The refs are either
branches ( refs/heads/<name> ) or tags ( refs/tags/<name> ). If you want to pin a specific commit, first create a tag
pointing to that commit, then pin to that tag.
You may also use @self to refer to the repository where the main pipeline was found. This is convenient for
use in extends templates if you want to refer back to contents in the extending pipeline's repository. For
example:
# Repo: Contoso/Central
# File: template.yml
jobs:
- job: PreBuild
  steps: []
# Refers back to the repository where the extending pipeline was found
- template: BuildJobs.yml@self
- job: PostBuild
  steps: []
# Repo: Contoso/MyProduct
# File: azure-pipelines.yml
resources:
  repositories:
  - repository: templates
    type: git
    name: Contoso/Central

extends:
  template: template.yml@templates

# Repo: Contoso/MyProduct
# File: BuildJobs.yml
jobs:
- job: Build
  steps: []
Template expressions
Use template expressions to specify how values are dynamically resolved during pipeline initialization. Wrap
your template expression inside this syntax: ${{ }} .
Template expressions can expand template parameters, and also variables. You can use parameters to influence
how a template is expanded. The parameters object works like the variables object in an expression. Only
predefined variables can be used in template expressions.
NOTE
Expressions are only expanded for stages , jobs , steps , and containers (inside resources ). You cannot, for
example, use an expression inside trigger or a resource like repositories . Additionally, on Azure DevOps 2020 RTW,
you can't use template expressions inside containers .
parameters:
- name: 'solution'
  default: '**/*.sln'
  type: string

steps:
- task: msbuild@1
  inputs:
    solution: ${{ parameters['solution'] }} # index syntax
- task: vstest@2
  inputs:
    solution: ${{ parameters.solution }} # property dereference syntax
Then you reference the template and pass it the optional solution parameter:
# File: azure-pipelines.yml
steps:
- template: steps/msbuild.yml
  parameters:
    solution: my.sln
Context
Within a template expression, you have access to the parameters context that contains the values of parameters
passed in. Additionally, you have access to the variables context that contains all the variables specified in the
YAML file plus many of the predefined variables (noted on each variable in that topic). Importantly, it doesn't
have runtime variables such as those stored on the pipeline or given when you start a run. Template expansion
happens very early in the run, so those variables aren't available.
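As a short sketch of the distinction (the variable name is illustrative): ${{ variables.x }} is resolved at template expansion time, while $(x) macro syntax is resolved while the run executes, so only the latter sees values set during the run.
variables:
  buildConfiguration: Release

steps:
# Resolved when the pipeline is compiled, before the run starts
- script: echo Compile-time value is ${{ variables.buildConfiguration }}
# Resolved while the run is executing
- script: echo Runtime value is $(buildConfiguration)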
Required parameters
You can add a validation step at the beginning of your template to check for the parameters you require.
Here's an example that checks for the solution parameter using Bash (which enables it to work on any
platform):
# File: steps/msbuild.yml
parameters:
- name: 'solution'
  default: ''
  type: string

steps:
- bash: |
    if [ -z "$SOLUTION" ]; then
      echo "##vso[task.logissue type=error;]Missing template parameter \"solution\""
      echo "##vso[task.complete result=Failed;]"
    fi
  env:
    SOLUTION: ${{ parameters.solution }}
  displayName: Check for required parameters
- task: msbuild@1
  inputs:
    solution: ${{ parameters.solution }}
- task: vstest@2
  inputs:
    solution: ${{ parameters.solution }}
To show that the template fails if it's missing the required parameter:
# File: azure-pipelines.yml
# This will fail since it doesn't set the "solution" parameter to anything,
# so the template will use its default of an empty string
steps:
- template: steps/msbuild.yml
coalesce
Evaluates to the first non-empty, non-null string argument
Min parameters: 2. Max parameters: N
Example:
parameters:
- name: 'restoreProjects'
  default: ''
  type: string
- name: 'buildProjects'
  default: ''
  type: string

steps:
- script: echo ${{ coalesce(parameters.restoreProjects, parameters.buildProjects, 'Nothing to see') }}
Insertion
You can use template expressions to alter the structure of a YAML pipeline. For instance, to insert into a
sequence:
# File: jobs/build.yml
parameters:
- name: 'preBuild'
  type: stepList
  default: []
- name: 'preTest'
  type: stepList
  default: []
- name: 'preSign'
  type: stepList
  default: []

jobs:
- job: Build
  pool:
    vmImage: 'vs2017-win2016'
  steps:
  - script: cred-scan
  - ${{ parameters.preBuild }}
  - task: msbuild@1
  - ${{ parameters.preTest }}
  - task: vstest@2
  - ${{ parameters.preSign }}
  - script: sign
# File: .vsts.ci.yml
jobs:
- template: jobs/build.yml
  parameters:
    preBuild:
    - script: echo hello from pre-build
    preTest:
    - script: echo hello from pre-test
# Default values
parameters:
- name: 'additionalVariables'
  type: object
  default: {}

jobs:
- job: build
  variables:
    configuration: debug
    arch: x86
    ${{ insert }}: ${{ parameters.additionalVariables }}
  steps:
  - task: msbuild@1
  - task: vstest@2

jobs:
- template: jobs/build.yml
  parameters:
    additionalVariables:
      TEST_SUITE: L0,L1
Conditional insertion
If you want to conditionally insert into a sequence or a mapping in a template, use insertions and expression
evaluation. You can also use if statements outside of templates as long as you use template syntax.
For example, to insert into a sequence in a template:
# File: steps/build.yml
parameters:
- name: 'toolset'
  default: msbuild
  type: string
  values:
  - msbuild
  - dotnet

steps:
# msbuild
- ${{ if eq(parameters.toolset, 'msbuild') }}:
  - task: msbuild@1
  - task: vstest@2
# dotnet
- ${{ if eq(parameters.toolset, 'dotnet') }}:
  - task: dotnet@1
    inputs:
      command: build
  - task: dotnet@1
    inputs:
      command: test

# File: azure-pipelines.yml
steps:
- template: steps/build.yml
  parameters:
    toolset: dotnet
# File: steps/build.yml
parameters:
- name: 'debug'
  type: boolean
  default: false

steps:
- script: tool
  env:
    ${{ if eq(parameters.debug, true) }}:
      TOOL_DEBUG: true
      TOOL_DEBUG_DIR: _dbg

steps:
- template: steps/build.yml
  parameters:
    debug: true

pool:
  vmImage: 'ubuntu-latest'

steps:
- script: echo "start"
- ${{ if eq(variables.foo, 'test') }}:
  - script: echo "this is a test"
Iterative insertion
The each directive allows iterative insertion based on a YAML sequence (array) or mapping (key-value pairs).
For example, you can wrap the steps of each job with additional pre- and post-steps:
# job.yml
parameters:
- name: 'jobs'
  type: jobList
  default: []

jobs:
- ${{ each job in parameters.jobs }}: # Each job
  - ${{ each pair in job }}: # Insert all properties other than "steps"
      ${{ if ne(pair.key, 'steps') }}:
        ${{ pair.key }}: ${{ pair.value }}
    steps: # Wrap the steps
    - task: SetupMyBuildTools@1 # Pre steps
    - ${{ job.steps }} # Users steps
    - task: PublishMyTelemetry@1 # Post steps
      condition: always()

# azure-pipelines.yml
jobs:
- template: job.yml
  parameters:
    jobs:
    - job: A
      steps:
      - script: echo This will get sandwiched between SetupMyBuildTools and PublishMyTelemetry.
    - job: B
      steps:
      - script: echo So will this!
You can also manipulate the properties of whatever you're iterating over. For example, to add additional
dependencies:
# job.yml
parameters:
- name: 'jobs'
  type: jobList
  default: []

jobs:
- job: SomeSpecialTool # Run your special tool in its own job first
  steps:
  - task: RunSpecialTool@1
- ${{ each job in parameters.jobs }}: # Then do each job
  - ${{ each pair in job }}: # Insert all properties other than "dependsOn"
      ${{ if ne(pair.key, 'dependsOn') }}:
        ${{ pair.key }}: ${{ pair.value }}
    dependsOn: # Inject dependency
    - SomeSpecialTool
    - ${{ if job.dependsOn }}:
      - ${{ job.dependsOn }}
# azure-pipelines.yml
jobs:
- template: job.yml
  parameters:
    jobs:
    - job: A
      steps:
      - script: echo This job depends on SomeSpecialTool, even though it's not explicitly shown here.
    - job: B
      dependsOn:
      - A
      steps:
      - script: echo This job depends on both Job A and on SomeSpecialTool.
Escape a value
If you need to escape a value that literally contains ${{ , then wrap the value in an expression string. For
example, ${{ 'my${{value' }} or ${{ 'my${{value with a '' single quote too' }}
Imposed limits
Templates and template expressions can cause explosive growth to the size and complexity of a pipeline. To help
prevent runaway growth, Azure Pipelines imposes the following limits:
No more than 100 separate YAML files may be included (directly or indirectly)
No more than 20 levels of template nesting (templates including other templates)
No more than 10 megabytes of memory consumed while parsing the YAML (in practice, this is typically
between 600KB - 2MB of on-disk YAML, depending on the specific features used)
Template parameters
You can pass parameters to templates. The parameters section defines what parameters are available in the
template and their default values. Templates are expanded just before the pipeline runs so that values
surrounded by ${{ }} are replaced by the parameters it receives from the enclosing pipeline. As a result, only
predefined variables can be used in parameters.
To use parameters across multiple pipelines, see how to create a variable group.
Job, stage, and step templates with parameters
# File: templates/npm-with-params.yml
parameters:
  name: '' # defaults for any parameters that aren't specified
  vmImage: ''

jobs:
- job: ${{ parameters.name }}
  pool:
    vmImage: ${{ parameters.vmImage }}
  steps:
  - script: npm install
  - script: npm test
When you consume the template in your pipeline, specify values for the template parameters.
# File: azure-pipelines.yml
jobs:
- template: templates/npm-with-params.yml # Template reference
  parameters:
    name: Linux
    vmImage: 'ubuntu-16.04'
You can also use parameters with step or stage templates. For example, steps with parameters:
# File: templates/steps-with-params.yml
parameters:
  runExtendedTests: 'false' # defaults for any parameters that aren't specified

steps:
- script: npm test
- ${{ if eq(parameters.runExtendedTests, 'true') }}:
  - script: npm test --extended
When you consume the template in your pipeline, specify values for the template parameters.
# File: azure-pipelines.yml
steps:
- script: npm install
- template: templates/steps-with-params.yml # Template reference
  parameters:
    runExtendedTests: 'true'
Parameters are not limited to scalar strings. As long as the place where the parameter expands expects a
mapping, the parameter can be a mapping. Likewise, sequences can be passed where sequences are expected.
For example:
# azure-pipelines.yml
jobs:
- template: process.yml
  parameters:
    pool: # this parameter is called `pool`
      vmImage: ubuntu-latest # and it's a mapping rather than a string

# process.yml
parameters:
  pool: {}

jobs:
- job: build
  pool: ${{ parameters.pool }}
# Repo: Contoso/BuildTemplates
# File: common.yml
parameters:
  vmImage: 'ubuntu 16.04'

jobs:
- job: Build
  pool:
    vmImage: ${{ parameters.vmImage }}
  steps:
  - script: npm install
  - script: npm test
Now you can reuse this template in multiple pipelines. Use the resources specification to provide the location
of the core repo. When you refer to the core repo, use @ and the name you gave it in resources .
# Repo: Contoso/LinuxProduct
# File: azure-pipelines.yml
resources:
  repositories:
  - repository: templates
    type: github
    name: Contoso/BuildTemplates

jobs:
- template: common.yml@templates # Template reference

# Repo: Contoso/WindowsProduct
# File: azure-pipelines.yml
resources:
  repositories:
  - repository: templates
    type: github
    name: Contoso/BuildTemplates
    ref: refs/tags/v1.0 # optional ref to pin to

jobs:
- template: common.yml@templates # Template reference
  parameters:
    vmImage: 'vs2017-win2016'
For type: github, name is <identity>/<repo> as in the examples above. For type: git (Azure Repos), name is
<project>/<repo> . The project must be in the same organization; cross-organization references are not
supported.
Repositories are resolved only once, when the pipeline starts up. After that, the same resource is used for the
duration of the pipeline. Only the template files are used. Once the templates are fully expanded, the final
pipeline runs as if it were defined entirely in the source repo. This means that you can't use scripts from the
template repo in your pipeline.
If you want to use a particular, fixed version of the template, be sure to pin to a ref. Refs are either branches (
refs/heads/<name> ) or tags ( refs/tags/<name> ). If you want to pin a specific commit, first create a tag pointing
to that commit, then pin to that tag.
Expressions
Use template expressions to specify how values are dynamically resolved during pipeline initialization. Wrap
your template expression inside this syntax: ${{ }} .
Template expressions can expand template parameters, and also variables. You can use parameters to influence
how a template is expanded. The parameters object works like the variables object in an expression.
For example, you define a template:
# File: steps/msbuild.yml
parameters:
solution: '**/*.sln'
steps:
- task: msbuild@1
inputs:
solution: ${{ parameters['solution'] }} # index syntax
- task: vstest@2
inputs:
solution: ${{ parameters.solution }} # property dereference syntax
Then you reference the template and pass it the optional solution parameter:
# File: azure-pipelines.yml
steps:
- template: steps/msbuild.yml
parameters:
solution: my.sln
Context
Within a template expression, you have access to the parameters context which contains the values of
parameters passed in. Additionally, you have access to the variables context which contains all the variables
specified in the YAML file plus the system variables. Importantly, it doesn't have runtime variables such as those
stored on the pipeline or given when you start a run. Template expansion happens very early in the run, so
those variables aren't available.
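As a sketch of the difference, a variable defined in the YAML file is visible to a template expression, while values that only exist at run time must be read with runtime macro syntax instead (myConfig is an assumed variable name):
# azure-pipelines.yml
variables:
  myConfig: Release   # defined in the YAML file, so available to ${{ }}
steps:
- script: echo ${{ variables.myConfig }}   # resolved when the template is expanded
- script: echo $(Build.BuildNumber)        # runtime value; use macro syntax, not ${{ }}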
Required parameters
You can add a validation step at the beginning of your template to check for the parameters you require.
Here's an example that checks for the solution parameter using Bash (which enables it to work on any
platform):
# File: steps/msbuild.yml
parameters:
solution: ''
steps:
- bash: |
if [ -z "$SOLUTION" ]; then
echo "##vso[task.logissue type=error;]Missing template parameter \"solution\""
echo "##vso[task.complete result=Failed;]"
fi
env:
SOLUTION: ${{ parameters.solution }}
displayName: Check for required parameters
- task: msbuild@1
inputs:
solution: ${{ parameters.solution }}
- task: vstest@2
inputs:
solution: ${{ parameters.solution }}
To show that the template fails if it's missing the required parameter:
# File: azure-pipelines.yml
# This will fail since it doesn't set the "solution" parameter to anything,
# so the template will use its default of an empty string
steps:
- template: steps/msbuild.yml
coalesce
Evaluates to the first non-empty, non-null string argument
Min parameters: 2. Max parameters: N
Example:
parameters:
restoreProjects: ''
buildProjects: ''
steps:
- script: echo ${{ coalesce(parameters.restoreProjects, parameters.buildProjects, 'Nothing to see') }}
Insertion
You can use template expressions to alter the structure of a YAML pipeline. For instance, to insert into a
sequence:
# File: jobs/build.yml
parameters:
preBuild: []
preTest: []
preSign: []
jobs:
- job: Build
pool:
vmImage: 'vs2017-win2016'
steps:
- script: cred-scan
- ${{ parameters.preBuild }}
- task: msbuild@1
- ${{ parameters.preTest }}
- task: vstest@2
- ${{ parameters.preSign }}
- script: sign
# File: .vsts.ci.yml
jobs:
- template: jobs/build.yml
parameters:
preBuild:
- script: echo hello from pre-build
preTest:
- script: echo hello from pre-test
When an array is inserted into an array, the nested array is flattened. To insert into a mapping, use the special property ${{ insert }}:
# Default values
parameters:
additionalVariables: {}
jobs:
- job: build
variables:
configuration: debug
arch: x86
${{ insert }}: ${{ parameters.additionalVariables }}
steps:
- task: msbuild@1
- task: vstest@2
jobs:
- template: jobs/build.yml
parameters:
additionalVariables:
TEST_SUITE: L0,L1
Conditional insertion
If you want to conditionally insert into a sequence or a mapping, then use insertions and expression evaluation.
For example, to insert into a sequence:
# File: steps/build.yml
parameters:
toolset: msbuild
steps:
# msbuild
- ${{ if eq(parameters.toolset, 'msbuild') }}:
- task: msbuild@1
- task: vstest@2
# dotnet
- ${{ if eq(parameters.toolset, 'dotnet') }}:
- task: dotnet@1
inputs:
command: build
- task: dotnet@1
inputs:
command: test
# File: azure-pipelines.yml
steps:
- template: steps/build.yml
parameters:
toolset: dotnet
You can also conditionally insert into a mapping:
# File: steps/build.yml
parameters:
debug: false
steps:
- script: tool
env:
${{ if eq(parameters.debug, 'true') }}:
TOOL_DEBUG: true
TOOL_DEBUG_DIR: _dbg
steps:
- template: steps/build.yml
parameters:
debug: true
Iterative insertion
The each directive allows iterative insertion based on a YAML sequence (array) or mapping (key-value pairs).
For example, you can wrap the steps of each job with additional pre- and post-steps:
# job.yml
parameters:
jobs: []
jobs:
- ${{ each job in parameters.jobs }}: # Each job
- ${{ each pair in job }}: # Insert all properties other than "steps"
${{ if ne(pair.key, 'steps') }}:
${{ pair.key }}: ${{ pair.value }}
steps: # Wrap the steps
- task: SetupMyBuildTools@1 # Pre steps
- ${{ job.steps }} # User's steps
- task: PublishMyTelemetry@1 # Post steps
condition: always()
# azure-pipelines.yml
jobs:
- template: job.yml
parameters:
jobs:
- job: A
steps:
- script: echo This will get sandwiched between SetupMyBuildTools and PublishMyTelemetry.
- job: B
steps:
- script: echo So will this!
You can also manipulate the properties of whatever you're iterating over. For example, to add additional
dependencies:
# job.yml
parameters:
jobs: []
jobs:
- job: SomeSpecialTool # Run your special tool in its own job first
steps:
- task: RunSpecialTool@1
- ${{ each job in parameters.jobs }}: # Then do each job
- ${{ each pair in job }}: # Insert all properties other than "dependsOn"
${{ if ne(pair.key, 'dependsOn') }}:
${{ pair.key }}: ${{ pair.value }}
dependsOn: # Inject dependency
- SomeSpecialTool
- ${{ if job.dependsOn }}:
- ${{ job.dependsOn }}
# azure-pipelines.yml
jobs:
- template: job.yml
parameters:
jobs:
- job: A
steps:
- script: echo This job depends on SomeSpecialTool, even though it's not explicitly shown here.
- job: B
dependsOn:
- A
steps:
- script: echo This job depends on both Job A and on SomeSpecialTool.
Escaping
If you need to escape a value that literally contains ${{ , then wrap the value in an expression string. For
example ${{ 'my${{value' }} or ${{ 'my${{value with a '' single quote too' }}
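For instance, a step that needs to emit a literal ${{ in its output might look like this sketch:
steps:
- script: echo ${{ 'this text contains a literal ${{ sequence' }}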
Limits
Templates and template expressions can cause explosive growth to the size and complexity of a pipeline. To help
prevent runaway growth, Azure Pipelines imposes the following limits:
No more than 50 separate YAML files may be included (directly or indirectly)
No more than 10 megabytes of memory consumed while parsing the YAML (in practice, this is typically
between 600KB - 2MB of on-disk YAML, depending on the specific features used)
No more than 2000 characters per template expression are allowed
Add a custom pipelines task extension
Azure DevOps Services | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2017
Learn how to install extensions to your organization for custom build or release tasks in Azure DevOps. These
tasks appear next to Microsoft-provided tasks in the Add Step wizard.
To learn more about the new cross-platform build/release system, see What is Azure Pipelines?.
NOTE
This article covers agent tasks in agent-based extensions. For information on server tasks/server-based extensions, check
out the Server Task GitHub Documentation.
Prerequisites
To create extensions for Azure DevOps, you need the following software and tools:
An organization in Azure DevOps. For more information, see Create an organization.
A text editor. For many of the tutorials, we use Visual Studio Code, which provides IntelliSense and
debugging support. Go to code.visualstudio.com to download the latest version.
The latest version of Node.js. The production environment uses Node14, Node10, or Node6 (specified by using
"Node" in the task's "execution" object instead of "Node14" or "Node10").
TypeScript Compiler 2.2.0 or greater, although we recommend version 4.0.2 or newer for tasks that use
Node14. Go to npmjs.com to download the compiler.
The cross-platform CLI for Azure DevOps (tfx-cli) to package your extensions. You can install tfx-cli by using
npm, a component of Node.js, by running npm i -g tfx-cli.
A home directory for your project. The home directory of a build or release task extension should look like
the following example after you complete the steps in this tutorial:
|--- README.md
|--- images
|--- extension-icon.png
|--- buildAndReleaseTask // where your task scripts are placed
|--- vss-extension.json // extension's manifest
npm init
npm init creates the package.json file. You can accept all of the default npm init options.
TIP
The agent doesn't automatically install the required modules because it's expecting your task folder to include the node
modules. To mitigate this, copy the node_modules to buildAndReleaseTask . As your task gets bigger, it's easy to exceed
the size limit (50MB) of a VSIX file. Before you copy the node folder, you may want to run npm install --production or
npm prune --production , or you can write a script to build and pack everything.
Add azure-pipelines-task-lib
We provide a library, azure-pipelines-task-lib, that should be used to create tasks. Add it to your task folder by running npm install azure-pipelines-task-lib --save.
Create a .gitignore file and add node_modules to it. Your build process should do an npm install and a
typings install so that node_modules are built each time and don't need to be checked in.
Install the TypeScript version you want to use, for example npm install typescript@4.0.2 -g --save-dev. If you skip this step, TypeScript version 2.3.4 is used by default.
Create tsconfig.json compiler options
This file ensures that your TypeScript files are compiled to JavaScript files.
tsc --init
For example, to compile to the ES6 standard instead of ES5, open the newly generated tsconfig.json and update the target field to "es6".
NOTE
To have the command run successfully, make sure that TypeScript is installed globally with npm on your local machine.
Task implementation
Now that the scaffolding is complete, we can create our custom task.
task.json
Next, we create a task.json file in the buildAndReleaseTask folder. The task.json file describes the build or
release task and is what the build/release system uses to render configuration options to the user and to know
which scripts to execute at build/release time.
Copy the following code and replace the {{placeholders}} with your task's information. The most important
placeholder is the taskguid , and it must be unique. You can generate the taskguid by using Microsoft's online
GuidGen tool.
{
"$schema": "https://raw.githubusercontent.com/Microsoft/azure-pipelines-task-
lib/master/tasks.schema.json",
"id": "{{taskguid}}",
"name": "{{taskname}}",
"friendlyName": "{{taskfriendlyname}}",
"description": "{{taskdescription}}",
"helpMarkDown": "",
"category": "Utility",
"author": "{{taskauthor}}",
"version": {
"Major": 0,
"Minor": 1,
"Patch": 0
},
"instanceNameFormat": "Echo $(samplestring)",
"inputs": [
{
"name": "samplestring",
"type": "string",
"label": "Sample String",
"defaultValue": "",
"required": true,
"helpMarkDown": "A sample string"
}
],
"execution": {
"Node10": {
"target": "index.js"
}
}
}
The task.json file includes these notable components:
instanceNameFormat: How the task is displayed within the build or release step list. You can use variable values by using $(variablename).
inputs: Inputs to be used when your build or release task runs. This task expects an input with the name samplestring.
index.ts
Create an index.ts file by using the following code as a reference. This code runs when the task is called: it reads the samplestring input, fails the task if the input is missing or set to bad, and otherwise prints a greeting.
import tl = require('azure-pipelines-task-lib/task');

async function run() {
    try {
        const inputString: string | undefined = tl.getInput('samplestring', true);
        if (inputString == 'bad') {
            tl.setResult(tl.TaskResult.Failed, 'Bad input was given');
            return;
        }
        console.log('Hello', inputString);
    }
    catch (err) {
        tl.setResult(tl.TaskResult.Failed, err.message);
    }
}

run();
Compile
Enter "tsc" from the buildAndReleaseTask folder to compile an index.js file from index.ts .
Run the task
An agent can run the task with node index.js from PowerShell.
In the following example, the task fails because inputs weren't supplied ( samplestring is a required input).
node index.js
##vso[task.debug]agent.workFolder=undefined
##vso[task.debug]loading inputs and endpoints
##vso[task.debug]loaded 0
##vso[task.debug]task result: Failed
##vso[task.issue type=error;]Input required: samplestring
##vso[task.complete result=Failed;]Input required: samplestring
As a fix, we can set the samplestring input and run the task again.
$env:INPUT_SAMPLESTRING="Human"
node index.js
##vso[task.debug]agent.workFolder=undefined
##vso[task.debug]loading inputs and endpoints
##vso[task.debug]loading INPUT_SAMPLESTRING
##vso[task.debug]loaded 1
##vso[task.debug]Agent.ProxyUrl=undefined
##vso[task.debug]Agent.CAInfo=undefined
##vso[task.debug]Agent.ClientCert=undefined
##vso[task.debug]Agent.SkipCertValidation=undefined
##vso[task.debug]samplestring=Human
Hello Human
This time, the task succeeded because samplestring was supplied, and it correctly outputted "Hello Human"!
Step 2: Unit test your task scripts
The goal of unit testing is to quickly test the task script, not the external tools it's calling. We want to test all aspects
of both success and failure paths.
Install test tools
We use Mocha as the test driver in this walkthrough; install it with npm install mocha --save-dev. Create a tests folder containing a _suite.ts file. The suite imports path, assert, and azure-pipelines-task-lib/mock-test, wraps the tests in a describe block, and starts with empty before and after hooks:
before( function() {
});

after(() => {
});
TIP
Your test folder should be located in the buildAndReleaseTask folder. If you get a sync-request error, you can work around it
by adding sync-request to the buildAndReleaseTask folder with the command npm i --save-dev sync-request .
In a separate success.ts file in your test folder, set the samplestring input to a valid value and run the task mock runner (when compiled, the suite loads it as success.js):
tmr.setInput('samplestring', 'human');
tmr.run();
Next, add the following example success test to your _suite.ts file to run the task mock runner:
it('should succeed with simple inputs', function(done: Mocha.Done) {
    this.timeout(1000);

    let tp = path.join(__dirname, 'success.js');
    let tr: ttm.MockTestRunner = new ttm.MockTestRunner(tp);

    tr.run();
    console.log(tr.succeeded);
    assert.equal(tr.succeeded, true, 'should have succeeded');
    assert.equal(tr.warningIssues.length, 0, "should have no warnings");
    assert.equal(tr.errorIssues.length, 0, "should have no errors");
    console.log(tr.stdout);
    assert.equal(tr.stdout.indexOf('Hello human') >= 0, true, "should display Hello human");
    done();
});
Then create a failure.ts file in your test folder as the task's failure test; it sets the samplestring input to bad so the task reports an error:
import ma = require('azure-pipelines-task-lib/mock-answer');
import tmrm = require('azure-pipelines-task-lib/mock-run');
import path = require('path');

let taskPath = path.join(__dirname, '..', 'index.js');
let tmr: tmrm.TaskMockRunner = new tmrm.TaskMockRunner(taskPath);

tmr.setInput('samplestring', 'bad');
tmr.run();
Next, add the following to your _suite.ts file to run the task mock runner:
it('it should fail if tool returns 1', function(done: Mocha.Done) {
    this.timeout(1000);

    let tp = path.join(__dirname, 'failure.js');
    let tr: ttm.MockTestRunner = new ttm.MockTestRunner(tp);

    tr.run();
    console.log(tr.succeeded);
    assert.equal(tr.succeeded, false, 'should have failed');
    assert.equal(tr.warningIssues.length, 0, "should have no warnings");
    assert.equal(tr.errorIssues.length, 1, "should have 1 error issue");
    assert.equal(tr.errorIssues[0], 'Bad input was given', 'error issue output');
    assert.equal(tr.stdout.indexOf('Hello bad'), -1, "Should not display Hello bad");
    done();
});
To run the tests, compile them and invoke Mocha:
tsc
mocha tests/_suite.js
Both tests should pass. If you want to run the tests with more verbose output (what you'd see in the build console),
set the environment variable: TASK_TEST_TRACE=1 .
$env:TASK_TEST_TRACE=1
NOTE
The publisher here must be changed to your publisher name. If you want to create a publisher now, go to create your
publisher for instructions.
Contributions
properties.name: Name of the task. This name must match the folder name of the corresponding self-contained build or release pipeline task.
Files
NOTE
For more information about the extension manifest file , such as its properties and what they do, check out the extension
manifest reference.
NOTE
An extension or integration's version must be incremented on every update.
When you're updating an existing extension, either update the version in the manifest or pass the --rev-version
command line switch. This increments the patch version number of your extension and saves the new version to your
manifest. You must rev both the task version and extension version for an update to occur.
tfx extension create --manifest-globs vss-extension.json --rev-version only updates the extension version and
not the task version. For more information, see Build Task in GitHub.
After you have your packaged extension in a .vsix file, you're ready to publish your extension to the Marketplace.
IMPORTANT
Publishers must be verified to share extensions publicly. To learn more, see Package/Publish/Install.
Now that your extension is in the Marketplace and shared, anyone who wants to use it must install it.
Create a new Visual Studio Marketplace service connection and grant access permissions for all pipelines. For
more information about creating a service connection, see Service connections.
Use the following example to create a new pipeline with YAML. Learn more about how to Create your first pipeline
and YAML schema.
trigger:
- master
pool:
vmImage: "ubuntu-latest"
variables:
- group: variable-group # Rename to whatever you named your variable group in the prerequisite stage of step 6
stages:
- stage: Run_and_publish_unit_tests
jobs:
- job:
steps:
- task: TfxInstaller@3
inputs:
version: "v0.7.x"
- task: Npm@1
inputs:
command: 'install'
workingDir: '/TaskDirectory' # Update to the name of the directory of your task
- task: Bash@3
displayName: Compile Javascript
inputs:
targetType: "inline"
script: |
cd TaskDirectory # Update to the name of the directory of your task
tsc
- task: Npm@1
inputs:
command: 'custom'
workingDir: '/TestsDirectory' # Update to the name of the directory of your task's tests
customCommand: 'testScript' # See the definition in the explanation section below - it may be called test
- task: PublishTestResults@2
inputs:
testResultsFormat: 'JUnit'
testResultsFiles: '**/ResultsFile.xml'
- stage: Package_extension_and_publish_build_artifacts
jobs:
- job:
steps:
- task: TfxInstaller@3
inputs:
version: "v0.7.x"
- task: Npm@1
inputs:
command: 'install'
workingDir: '/TaskDirectory' # Update to the name of the directory of your task
- task: Bash@3
displayName: Compile Javascript
inputs:
targetType: "inline"
script: |
cd TaskDirectory # Update to the name of the directory of your task
tsc
- task: QueryAzureDevOpsExtensionVersion@3
inputs:
connectTo: 'VsTeam'
connectedServiceName: 'ServiceConnection' # Change to whatever you named the service connection
publisherId: '$(PublisherID)'
extensionId: '$(ExtensionID)'
versionAction: 'Patch'
outputVariable: 'Task.Extension.Version'
- task: PackageAzureDevOpsExtension@3
inputs:
rootFolder: '$(System.DefaultWorkingDirectory)'
publisherId: '$(PublisherID)'
extensionId: '$(ExtensionID)'
extensionName: '$(ExtensionName)'
extensionVersion: '$(Task.Extension.Version)'
updateTasksVersion: true
updateTasksVersionType: 'patch'
extensionVisibility: 'private' # Change to public if you're publishing to the marketplace
extensionPricing: 'free'
- task: CopyFiles@2
displayName: "Copy Files to: $(Build.ArtifactStagingDirectory)"
inputs:
Contents: "**/*.vsix"
TargetFolder: "$(Build.ArtifactStagingDirectory)"
- task: PublishBuildArtifacts@1
inputs:
PathtoPublish: '$(Build.ArtifactStagingDirectory)'
ArtifactName: '$(ArtifactName)'
publishLocation: 'Container'
- stage: Download_build_artifacts_and_publish_the_extension
jobs:
- job:
steps:
- task: TfxInstaller@3
inputs:
version: "v0.7.x"
- task: DownloadBuildArtifacts@0
inputs:
buildType: "current"
downloadType: "single"
artifactName: "$(ArtifactName)"
downloadPath: "$(System.DefaultWorkingDirectory)"
- task: PublishAzureDevOpsExtension@3
inputs:
connectTo: 'VsTeam'
connectedServiceName: 'ServiceConnection' # Change to whatever you named the service connection
fileType: 'vsix'
vsixFile: '/Publisher.*.vsix'
publisherId: '$(PublisherID)'
extensionId: '$(ExtensionID)'
extensionName: '$(ExtensionName)'
updateTasksVersion: false
extensionVisibility: 'private' # Change to public if you're publishing to the marketplace
extensionPricing: 'free'
For more help with triggers, such as CI and PR triggers, see Specify events that trigger pipelines.
NOTE
Each job uses a new user agent and requires dependencies to be installed.
Pipeline stages
This section will help you understand how the pipeline stages work.
Stage: Run and publish unit tests
This stage runs unit tests and publishes test results to Azure DevOps.
To run unit tests, add a custom script to the package.json file. For example:
"scripts": {
"testScript": "mocha ./TestFile --reporter xunit --reporter-option output=ResultsFile.xml"
},
1. Add "Use Node CLI for Azure DevOps (tfx-cli)" to install the tfx-cli onto your build agent.
2. Add the "npm" task with the "install" command and target the folder with the package.json file.
3. Add the "Bash" task to compile the TypeScript into JavaScript.
4. Add the "npm" task with the "custom" command, target the folder that contains the unit tests, and input
testScript as the command. Use the following inputs:
Command: custom
Working folder that contains package.json: /TestsDirectory
Command and arguments: testScript
5. Add the "Publish Test Results" task. If you're using the Mocha XUnit reporter, ensure that the result format is
"JUnit" and not "XUnit." Set the search folder to the root directory. Use the following inputs:
Test result format: JUnit
Test results files: **/ResultsFile.xml
Search folder: $(System.DefaultWorkingDirectory)
After the test results have been published, you can review them under the Tests tab of the pipeline run summary.
Helpful links
Extension Manifest Reference
Build/Release Task JSON Schema
Build/Release Task Examples
Specify jobs in your pipeline
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called
definitions, runs are called builds, service connections are called service endpoints, stages are called environments,
and jobs are called phases.
You can organize your pipeline into jobs. Every pipeline has at least one job. A job is a series of steps that
run sequentially as a unit. In other words, a job is the smallest unit of work that can be scheduled to run.
You can organize your build or release pipeline into jobs. Every pipeline has at least one job. A job is a
series of steps that run sequentially as a unit. In other words, a job is the smallest unit of work that can be
scheduled to run.
NOTE
You must install TFS 2018.2 to use jobs in build processes. In TFS 2018 RTM you can use jobs in release
deployment processes.
You can organize your release pipeline into jobs. Every release pipeline has at least one job. Jobs are not
supported in a build pipeline in this version of TFS.
NOTE
You must install Update 2 to use jobs in a release pipeline in TFS 2017. Jobs in build pipelines are available in Azure
Pipelines, TFS 2018.2, and newer versions.
In the simplest case, a pipeline has a single job, and you don't have to use the job keyword explicitly:
pool:
vmImage: 'ubuntu-16.04'
steps:
- bash: echo "Hello world"
You may want to specify additional properties on that job. In that case, you can use the job keyword.
jobs:
- job: myJob
timeoutInMinutes: 10
pool:
vmImage: 'ubuntu-16.04'
steps:
- bash: echo "Hello world"
Your pipeline may have multiple jobs. In that case, use the jobs keyword.
jobs:
- job: A
steps:
- bash: echo "A"
- job: B
steps:
- bash: echo "B"
Your pipeline may have multiple stages, each with multiple jobs. In that case, use the stages keyword.
stages:
- stage: A
jobs:
- job: A1
- job: A2
- stage: B
jobs:
- job: B1
- job: B2
- job: string # name of the job, A-Z, a-z, 0-9, and underscore
displayName: string # friendly name to display in the UI
dependsOn: string | [ string ]
condition: string
strategy:
parallel: # parallel strategy
matrix: # matrix strategy
maxParallel: number # maximum number simultaneous matrix legs to run
# note: `parallel` and `matrix` are mutually exclusive
# you may specify one or the other; including both is an error
# `maxParallel` is only valid with `matrix`
continueOnError: boolean # 'true' if future jobs should run even if this job fails; defaults to 'false'
pool: pool # agent pool
workspace:
clean: outputs | resources | all # what to clean up before the job runs
container: containerReference # container to run this job inside
timeoutInMinutes: number # how long to run the job before automatically cancelling
cancelTimeoutInMinutes: number # how much time to give 'run always even if cancelled tasks' before killing them
variables: { string: string } | [ variable | variableReference ]
steps: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
services: { string: string | container } # container resources to run as a service container
If the primary intent of your job is to deploy your app (as opposed to build or test your app), then you can
use a special type of job called deployment job .
The syntax for a deployment job is:
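The following is a minimal sketch (the job and environment names are placeholders); the full schema appears later in the Deployment jobs section of this document.
jobs:
- deployment: DeployWeb
  environment: smarthotel-dev
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo deploying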
Although you can add steps for deployment tasks in a job , we recommend that you instead use a
deployment job. A deployment job has a few benefits. For example, you can deploy to an environment,
which includes benefits such as being able to see the history of what you've deployed.
YAML is not supported in this version of TFS.
Types of jobs
Jobs can be of different types, depending on where they run.
YAML
Classic
Agent pool jobs run on an agent in an agent pool.
Server jobs run on the Azure DevOps Server.
Container jobs run in a container on an agent in an agent pool. For more information about
choosing containers, see Define container jobs.
YAML
Classic
Agent pool jobs run on an agent in an agent pool.
Server jobs run on the Azure DevOps Server.
Agent pool jobs run on an agent in the agent pool. These jobs are available in build and release
pipelines.
Server jobs run on TFS. These jobs are available in build and release pipelines.
Deployment group jobs run on machines in a deployment group. These jobs are available only in
release pipelines.
Agent pool jobs run on an agent in the agent pool. These jobs are only available in release pipelines.
Agent pool jobs
These are the most common type of jobs and they run on an agent in an agent pool. Use demands with
self-hosted agents to specify what capabilities an agent must have to run your job.
NOTE
Demands and capabilities are designed for use with self-hosted agents so that jobs can be matched with an agent
that meets the requirements of the job. When using Microsoft-hosted agents, you select an image for the agent
that matches the requirements of the job, so although it is possible to add capabilities to a Microsoft-hosted
agent, you don't need to use capabilities with Microsoft-hosted agents.
YAML
Classic
pool:
name: myPrivateAgents # your job runs on an agent in this pool
demands: agent.os -equals Windows_NT # the agent must have this capability to run the job
steps:
- script: echo hello world
Or multiple demands:
pool:
name: myPrivateAgents
demands:
- agent.os -equals Darwin
- anotherCapability -equals somethingElse
steps:
- script: echo hello world
jobs:
- job: string
timeoutInMinutes: number
cancelTimeoutInMinutes: number
strategy:
maxParallel: number
matrix: { string: { string: string } }
pool: server
jobs:
- job: string
pool: server
Dependencies
When you define multiple jobs in a single stage, you can specify dependencies between them. Pipelines
must contain at least one job with no dependencies.
NOTE
Each agent can run only one job at a time. To run multiple jobs in parallel you must configure multiple agents. You
also need sufficient parallel jobs.
YAML
Classic
The syntax for defining multiple jobs and their dependencies is:
jobs:
- job: string
dependsOn: string
condition: string
Example jobs that build sequentially:
jobs:
- job: Debug
steps:
- script: echo hello from the Debug build
- job: Release
dependsOn: Debug
steps:
- script: echo hello from the Release build
Example jobs that build in parallel (with no dependencies):
jobs:
- job: Windows
pool:
vmImage: 'vs2017-win2016'
steps:
- script: echo hello from Windows
- job: macOS
pool:
vmImage: 'macOS-10.14'
steps:
- script: echo hello from macOS
- job: Linux
pool:
vmImage: 'ubuntu-16.04'
steps:
- script: echo hello from Linux
Example of fan-out:
jobs:
- job: InitialJob
steps:
- script: echo hello from initial job
- job: SubsequentA
dependsOn: InitialJob
steps:
- script: echo hello from subsequent A
- job: SubsequentB
dependsOn: InitialJob
steps:
- script: echo hello from subsequent B
Example of fan-in:
jobs:
- job: InitialA
steps:
- script: echo hello from initial A
- job: InitialB
steps:
- script: echo hello from initial B
- job: Subsequent
dependsOn:
- InitialA
- InitialB
steps:
- script: echo hello from subsequent
Conditions
You can specify the conditions under which each job runs. By default, a job runs if it does not depend on
any other job, or if all of the jobs that it depends on have completed and succeeded. You can customize
this behavior by forcing a job to run even if a previous job fails or by specifying a custom condition.
YAML
Classic
Example to run a job based upon the status of running a previous job:
jobs:
- job: A
steps:
- script: exit 1
- job: B
dependsOn: A
condition: failed()
steps:
- script: echo this will run when A fails
- job: C
dependsOn:
- A
- B
condition: succeeded('B')
steps:
- script: echo this will run when B runs and succeeds
Example of using a custom condition:
jobs:
- job: A
steps:
- script: echo hello
- job: B
dependsOn: A
condition: and(succeeded(), eq(variables['build.sourceBranch'], 'refs/heads/master'))
steps:
- script: echo this only runs for master
You can specify that a job run based on the value of an output variable set in a previous job. In this case,
you can only use variables set in directly dependent jobs:
jobs:
- job: A
steps:
- script: "echo ##vso[task.setvariable variable=skipsubsequent;isOutput=true]false"
name: printvar
- job: B
condition: and(succeeded(), ne(dependencies.A.outputs['printvar.skipsubsequent'], 'true'))
dependsOn: A
steps:
- script: echo hello from B
Timeouts
To avoid taking up resources when your job is unresponsive or waiting too long, it's a good idea to set a
limit on how long your job is allowed to run. Use the job timeout setting to specify the limit in minutes for
running the job. Setting the value to zero means that the job can run:
Forever on self-hosted agents
For 360 minutes (6 hours) on Microsoft-hosted agents with a public project and public repository
For 60 minutes on Microsoft-hosted agents with a private project or private repository (unless
additional capacity is paid for)
The timeout period begins when the job starts running. It does not include the time the job is queued or is
waiting for an agent.
YAML
Classic
The timeoutInMinutes allows a limit to be set for the job execution time. When not specified, the default is
60 minutes. When 0 is specified, the maximum limit is used (described above).
The cancelTimeoutInMinutes allows a limit to be set for the job cancel time when the deployment task is
set to keep running if a previous task has failed. When not specified, the default is 5 minutes.
jobs:
- job: Test
timeoutInMinutes: 10 # how long to run the job before automatically cancelling
cancelTimeoutInMinutes: 2 # how much time to give 'run always even if cancelled tasks' before stopping them
Jobs targeting Microsoft-hosted agents have additional restrictions on how long they may run.
You can also set the timeout for each task individually - see task control options.
Multi-job configuration
From a single job you author, you can run multiple jobs on multiple agents in parallel. Some examples
include:
Multi-configuration builds: You can build multiple configurations in parallel. For example, you
could build a Visual C++ app for both debug and release configurations on both x86 and x64
platforms. To learn more, see Visual Studio Build - multiple configurations for multiple platforms.
Multi-configuration deployments: You can run multiple deployments in parallel, for example,
to different geographic regions.
Multi-configuration testing: You can run tests across multiple configurations in parallel.
YAML
Classic
The matrix strategy enables a job to be dispatched multiple times, with different variable sets. The
maxParallel tag restricts the amount of parallelism. The following job will be dispatched three times with
the values of Location and Browser set as specified. However, only two jobs will run at the same time.
jobs:
- job: Test
strategy:
maxParallel: 2
matrix:
US_IE:
Location: US
Browser: IE
US_Chrome:
Location: US
Browser: Chrome
Europe_Chrome:
Location: Europe
Browser: Chrome
NOTE
Matrix configuration names (like US_IE above) must contain only basic Latin alphabet letters (A-Z, a-z), numbers,
and underscores ( _ ). They must start with a letter. Also, they must be 100 characters or less.
It's also possible to use output variables to generate a matrix. This can be handy if you need to generate
the matrix using a script.
matrix will accept a runtime expression containing a stringified JSON object. That JSON object, when
expanded, must match the matrixing syntax. In the example below, we've hard-coded the JSON string, but
it could be generated by a scripting language or command-line program.
jobs:
- job: generator
steps:
- bash: echo "##vso[task.setVariable variable=legs;isOutput=true]{'a':{'myvar':'A'}, 'b':
{'myvar':'B'}}"
name: mtrx
# This expands to the matrix
# a:
# myvar: A
# b:
# myvar: B
- job: runner
dependsOn: generator
strategy:
matrix: $[ dependencies.generator.outputs['mtrx.legs'] ]
steps:
- script: echo $(myvar) # echos A or B depending on which leg is running
Slicing
An agent job can be used to run a suite of tests in parallel. For example, you can run a large suite of 1000
tests on a single agent. Or, you can use two agents and run 500 tests on each one in parallel.
To leverage slicing, the tasks in the job should be smart enough to understand the slice they belong to.
The Visual Studio Test task is one such task that supports test slicing. If you have installed multiple agents,
you can specify how the Visual Studio Test task will run in parallel on these agents.
YAML
Classic
The parallel strategy enables a job to be duplicated many times. Variables System.JobPositionInPhase
and System.TotalJobsInPhase are added to each job. The variables can then be used within your scripts to
divide work among the jobs. See Parallel and multiple execution using agent jobs.
The following job will be dispatched five times with the values of System.JobPositionInPhase and
System.TotalJobsInPhase set appropriately.
jobs:
- job: Test
strategy:
parallel: 5
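As a sketch of how slicing can be used, a script step could read these variables to pick its share of the work (run_tests.sh and its arguments are hypothetical):
jobs:
- job: SlicedTests
  strategy:
    parallel: 3
  steps:
  - script: ./run_tests.sh --slice $(System.JobPositionInPhase) --total $(System.TotalJobsInPhase)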
Job variables
If you are using YAML, variables can be specified on the job. The variables can be passed to task inputs
using the macro syntax $(variableName), or accessed within a script using environment variables.
YAML
Classic
Here's an example of defining variables in a job and using them within tasks.
variables:
mySimpleVar: simple var value
"my.dotted.var": dotted var value
"my var with spaces": var with spaces value
steps:
- script: echo Input macro = $(mySimpleVar). Env var = %MYSIMPLEVAR%
condition: eq(variables['agent.os'], 'Windows_NT')
- script: echo Input macro = $(mySimpleVar). Env var = $MYSIMPLEVAR
condition: in(variables['agent.os'], 'Darwin', 'Linux')
- bash: echo Input macro = $(my.dotted.var). Env var = $MY_DOTTED_VAR
- powershell: Write-Host "Input macro = $(my var with spaces). Env var = $env:MY_VAR_WITH_SPACES"
Workspace
When you run an agent pool job, it creates a workspace on the agent. The workspace is a directory in
which it downloads the source, runs steps, and produces outputs. The workspace directory can be
referenced in your job using the Pipeline.Workspace variable. Under it, various subdirectories are created:
When you run an agent pool job, it creates a workspace on the agent. The workspace is a directory in
which it downloads the source, runs steps, and produces outputs. The workspace directory can be
referenced in your job using the Agent.BuildDirectory variable. Under it, various subdirectories are created:
Build.SourcesDirectory is where tasks download the application's source code.
Build.ArtifactStagingDirectory is where tasks download artifacts needed for the pipeline or upload
artifacts before they are published.
Build.BinariesDirectory is where tasks write their outputs.
Common.TestResultsDirectory is where tasks upload their test results.
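For example, a step could print these locations through the corresponding predefined variables:
steps:
- script: |
    echo "Sources:  $(Build.SourcesDirectory)"
    echo "Staging:  $(Build.ArtifactStagingDirectory)"
    echo "Binaries: $(Build.BinariesDirectory)"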
YAML
Classic
When you run a pipeline on a self-hosted agent , by default, none of the subdirectories are cleaned in
between two consecutive runs. As a result, you can do incremental builds and deployments, provided that
tasks are implemented to make use of that. You can override this behavior using the workspace setting on
the job.
IMPORTANT
The workspace clean options are applicable only for self-hosted agents. When using Microsoft-hosted agents, jobs
are always run on a new agent.
- job: myJob
workspace:
clean: outputs | resources | all # what to clean up before the job runs
When you specify one of the clean options, they are interpreted as follows:
outputs : Delete Build.BinariesDirectory before running a new job.
resources : Delete Build.SourcesDirectory before running a new job.
all : Delete the entire Pipeline.Workspace directory before running a new job.
NOTE
Depending on your agent capabilities and pipeline demands, each job may be routed to a different agent in your
self-hosted pool. As a result, you may get a new agent for subsequent pipeline runs (or stages or jobs in the same
pipeline), so not cleaning is not a guarantee that subsequent runs, jobs, or stages will be able to access outputs
from previous runs, jobs, or stages. You can configure agent capabilities and pipeline demands to specify which
agents are used to run a pipeline job, but unless there is only a single agent in the pool that meets the demands,
there is no guarantee that subsequent jobs will use the same agent as previous jobs. For more information, see
Specify demands.
In addition to workspace clean, you can also configure cleaning by configuring the Clean setting in the
pipeline settings UI. When the Clean setting is true it is equivalent to specifying clean: true for every
checkout step in your pipeline. To configure the Clean setting:
1. Edit your pipeline, choose ..., and select Triggers .
2. Select YAML , Get sources , and configure your desired Clean setting. The default is false .
Artifact download
This example YAML file publishes the artifact WebSite and then downloads the artifact to
$(Pipeline.Workspace) . The Deploy job only runs if the Build job is successful.
YAML
Classic
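The Build job referenced below is not shown in this excerpt; a minimal sketch that publishes the WebSite artifact might look like the following (the build step itself is a placeholder):
jobs:
- job: Build
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
  - script: npm install && npm run build   # placeholder build steps
  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: '$(System.DefaultWorkingDirectory)'
      ArtifactName: WebSite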
# download the artifact and deploy it only if the build job succeeded
- job: Deploy
pool:
vmImage: 'ubuntu-16.04'
steps:
- checkout: none #skip checking out the default repository resource
- task: DownloadBuildArtifacts@0
displayName: 'Download Build Artifacts'
inputs:
artifactName: WebSite
downloadPath: $(System.DefaultWorkingDirectory)
dependsOn: Build
condition: succeeded()
Access an OAuth token
Scripts running in a job can use the current Azure Pipelines OAuth token to call the Azure DevOps REST API, provided the job is allowed to access the token. For example:
steps:
- powershell: |
$url = "$($env:SYSTEM_TEAMFOUNDATIONCOLLECTIONURI)$env:SYSTEM_TEAMPROJECTID/_apis/build/definitions/$($env:SYSTEM_DEFINITIONID)?api-version=4.1-preview"
Write-Host "URL: $url"
$pipeline = Invoke-RestMethod -Uri $url -Headers @{
Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN"
}
Write-Host "Pipeline = $($pipeline | ConvertTo-Json -Depth 100)"
env:
SYSTEM_ACCESSTOKEN: $(system.accesstoken)
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019
By default, jobs run on the host machine where the agent is installed. This is convenient and typically well-suited
for projects that are just beginning to adopt Azure Pipelines. Over time, you may find that you want more
control over the context where your tasks run. YAML pipelines offer container jobs for this level of control.
On Linux and Windows agents, jobs may be run on the host or in a container. (On macOS and Red Hat
Enterprise Linux 6, container jobs are not available.) Containers provide isolation from the host and allow you to
pin specific versions of tools and dependencies. Host jobs require less initial setup and infrastructure to
maintain.
Containers offer a lightweight abstraction over the host operating system. You can select the exact versions of
operating systems, tools, and dependencies that your build requires. When you specify a container in your
pipeline, the agent will first fetch and start the container. Then, each step of the job will run inside the container.
You cannot have nested containers. Containers are not supported when an agent is already running inside a
container.
If you need fine-grained control at the individual step level, step targets allow you to choose container or host
for each step.
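For example, a single job can mix container and host steps by setting a step-level target, as in this sketch (ubuntu:18.04 is used as the container image here):
pool:
  vmImage: 'ubuntu-18.04'
container: ubuntu:18.04

steps:
- script: echo this step runs inside the ubuntu:18.04 container
- script: echo this step runs directly on the host agent
  target: host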
Requirements
Linux-based containers
The Azure Pipelines system requires a few things in Linux-based containers:
Bash
glibc-based
Can run Node.js (which the agent provides)
Does not define an ENTRYPOINT
USER has access to groupadd and other privileged commands without sudo
NOTE
For Windows-based Linux containers, Node.js must be pre-installed.
Windows Containers
Azure Pipelines can also run Windows Containers. Windows Server version 1803 or higher is required. Docker
must be installed. Be sure your pipelines agent has permission to access the Docker daemon.
The Windows container must support running Node.js. A base Windows Nano Server container is missing
dependencies required to run Node. See this post for more information about what it takes to run Node on
Windows Nano Server.
Hosted agents
Only windows-2019 and ubuntu-* images support running containers. The macOS image does not support
running containers.
Single job
A simple example:
pool:
vmImage: 'ubuntu-18.04'
container: ubuntu:18.04
steps:
- script: printenv
This tells the system to fetch the ubuntu image tagged 18.04 from Docker Hub and then start the container.
When the printenv command runs, it will happen inside the ubuntu:18.04 container.
A Windows example:
pool:
vmImage: 'windows-2019'
container: mcr.microsoft.com/windows/servercore:ltsc2019
steps:
- script: set
NOTE
Windows requires that the kernel version of the host and container match. Since this example uses the Windows 2019
image, we will use the 2019 tag for the container.
Multiple jobs
Containers are also useful for running the same steps in multiple jobs. In the following example, the same steps
run in multiple versions of Ubuntu Linux. (And we don't have to mention the jobs keyword, since there's only a
single job defined.)
pool:
vmImage: 'ubuntu-18.04'
strategy:
matrix:
ubuntu14:
containerImage: ubuntu:14.04
ubuntu16:
containerImage: ubuntu:16.04
ubuntu18:
containerImage: ubuntu:18.04
container: $[ variables['containerImage'] ]
steps:
- script: printenv
Endpoints
Containers can be hosted on registries other than Docker Hub. To host an image on Azure Container Registry or
another private container registry, add a service connection to the private registry. Then you can reference it in a
container spec:
container:
image: myprivate/registry:ubuntu1604
endpoint: private_dockerhub_connection
steps:
- script: echo hello
or
container:
image: myprivate.azurecr.io/windowsservercore:1803
endpoint: my_acr_connection
steps:
- script: echo hello
Other container registries may also work. Amazon ECR doesn't currently work, as there are additional client
tools required to convert AWS credentials into something Docker can use to authenticate.
NOTE
The Red Hat Enterprise Linux 6 build of the agent won't run container jobs. Choose another Linux flavor, such as Red Hat
Enterprise Linux 7 or above.
Options
If you need to control container startup, you can specify options .
container:
image: ubuntu:18.04
options: --hostname container-test --ip 192.168.0.1
steps:
- script: echo hello
Running docker create --help will give you the list of supported options.
You can also define containers once in the resources section and reference each one later by its assigned alias:
resources:
containers:
- container: u14
image: ubuntu:14.04
- container: u16
image: ubuntu:16.04
- container: u18
image: ubuntu:18.04
jobs:
- job: RunInContainer
pool:
vmImage: 'ubuntu-18.04'
strategy:
matrix:
ubuntu14:
containerResource: u14
ubuntu16:
containerResource: u16
ubuntu18:
containerResource: u18
container: $[ variables['containerResource'] ]
steps:
- script: printenv
LABEL "com.azure.dev.pipelines.agent.handler.node.path"="/usr/local/bin/node"
Add requirements
Azure Pipelines assumes a Bash-based system with common administration packages installed. Alpine Linux in
particular doesn't come with several of the packages needed. Installing bash , sudo , and shadow will cover the
basic needs.
If you depend on any in-box or Marketplace tasks, you'll also need to supply the binaries they require.
Full example of a Dockerfile
FROM node:10-alpine

RUN apk add --no-cache bash sudo shadow

LABEL "com.azure.dev.pipelines.agent.handler.node.path"="/usr/local/bin/node"

CMD [ "node" ]
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
The concept of stages varies depending on whether you use YAML pipelines or classic release pipelines.
YAML
Classic
You can organize the jobs in your pipeline into stages. Stages are the major divisions in a pipeline: "build this
app", "run these tests", and "deploy to pre-production" are good examples of stages. They are a logical boundary
in your pipeline at which you can pause the pipeline and perform various checks.
Every pipeline has at least one stage even if you do not explicitly define it. Stages may be arranged into a
dependency graph: "run this stage before that one".
NOTE
Support for stages was added in Azure DevOps Server 2019.1.
Specify stages
YAML
Classic
NOTE
Support for stages was added in Azure DevOps Server 2019.1.
In the simplest case, you do not need any logical boundaries in your pipeline. In that case, you do not have to
explicitly use the stage keyword. You can directly specify the jobs in your YAML file.
jobs:
- job: A
  steps:
  - bash: echo "A"

- job: B
  steps:
  - bash: echo "B"
If you organize your pipeline into multiple stages, you use the stages keyword.
stages:
- stage: A
jobs:
- job: A1
- job: A2
- stage: B
jobs:
- job: B1
- job: B2
If you choose to specify a pool at the stage level, then all jobs defined in that stage will use that pool unless
otherwise specified at the job-level.
NOTE
In Azure DevOps Server 2019, pools can only be specified at job level.
stages:
- stage: A
pool: StageAPool
jobs:
- job: A1 # will run on "StageAPool" pool based on the pool defined on the stage
- job: A2 # will run on "JobPool" pool
pool: JobPool
stages:
- stage: string # name of the stage, A-Z, a-z, 0-9, and underscore
displayName: string # friendly name to display in the UI
dependsOn: string | [ string ]
condition: string
pool: string | pool
variables: { string: string } | [ variable | variableReference ]
jobs: [ job | templateReference]
Specify dependencies
YAML
Classic
NOTE
Support for stages was added in Azure DevOps Server 2019.1.
When you define multiple stages in a pipeline, by default, they run one after the other in the order in which you
define them in the YAML file. Pipelines must contain at least one stage with no dependencies.
The syntax for defining multiple stages and their dependencies is:
stages:
- stage: string
dependsOn: string
condition: string
# if you do not use a dependsOn keyword, stages run in the order they are defined
stages:
- stage: QA
jobs:
- job:
...
- stage: Prod
jobs:
- job:
...
stages:
- stage: FunctionalTest
jobs:
- job:
...
- stage: AcceptanceTest
dependsOn: [] # this removes the implicit dependency on previous stage and causes this to run in parallel
jobs:
- job:
...
stages:
- stage: Test
- stage: DeployUS1
dependsOn: Test # this stage runs after Test
- stage: DeployUS2
dependsOn: Test # this stage runs in parallel with DeployUS1, after Test
- stage: DeployEurope
dependsOn: # this stage runs after DeployUS1 and DeployUS2
- DeployUS1
- DeployUS2
YAML is not supported in this version of TFS.
Conditions
You can specify the conditions under which each stage runs. By default, a stage runs if it does not depend on any
other stage, or if all of the stages that it depends on have completed and succeeded. You can customize this
behavior by forcing a stage to run even if a previous stage fails or by specifying a custom condition.
NOTE
Conditions for failed ('JOBNAME/STAGENAME') and succeeded ('JOBNAME/STAGENAME') as shown in the following
example work only for YAML pipelines.
YAML
Classic
NOTE
Support for stages was added in Azure DevOps Server 2019.1.
Example to run a stage based upon the status of running a previous stage:
stages:
- stage: A

# stage B runs if A fails
- stage: B
  condition: failed()

Example of using a custom condition:
stages:
- stage: A
- stage: B
  condition: and(succeeded(), eq(variables['build.sourceBranch'], 'refs/heads/master'))
You cannot currently specify that a stage run based on the value of an output variable set in a previous stage.
YAML is not supported in this version of TFS.
Specify approvals
YAML
Classic
You can manually control when a stage should run using approval checks. This is commonly used to control
deployments to production environments. Checks are a mechanism available to the resource owner to control if
and when a stage in a pipeline can consume a resource. As an owner of a resource, such as an environment, you
can define checks that must be satisfied before a stage consuming that resource can start.
Currently, manual approval checks are supported on environments. For more information, see Approvals.
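Approvals themselves are configured on the environment in the web portal, not in YAML; a stage then picks up the check simply by targeting that environment from a deployment job, as in this sketch (the environment name production is assumed):
stages:
- stage: DeployProduction
  jobs:
  - deployment: Deploy
    environment: production   # the approval check defined on this environment must pass first
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo deploying to production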
Approvals are not yet supported in YAML pipelines in this version of Azure DevOps Server.
YAML is not supported in this version of TFS.
Deployment jobs
IMPORTANT
Job and stage names cannot contain keywords (example: deployment ).
Each job in a stage must have a unique name.
In YAML pipelines, we recommend that you put your deployment steps in a special type of job called a
deployment job. A deployment job is a collection of steps that are run sequentially against the environment. A
deployment job and a traditional job can exist in the same stage.
Deployment jobs provide the following benefits:
Deployment histor y : You get the deployment history across pipelines, down to a specific resource and
status of the deployments for auditing.
Apply deployment strategy : You define how your application is rolled out.
NOTE
We currently support only the runOnce, rolling, and the canary strategies.
Schema
Here is the full syntax to specify a deployment job:
jobs:
- deployment: string # Name of the deployment job, A-Z, a-z, 0-9, and underscore. The word "deploy" is a keyword and is unsupported as the deployment name.
displayName: string # Friendly name to display in the UI.
pool: # See pool schema.
name: string # Use only global level variables for defining a pool name. Stage/job level variables are not supported to define pool name.
demands: string | [ string ]
dependsOn: string
condition: string
continueOnError: boolean # 'true' if future jobs should run even if this job fails; defaults to 'false'
container: containerReference # Container to run the job inside.
services: { string: string | container } # Container resources to run as a service container.
timeoutInMinutes: nonEmptyString # How long to run the job before automatically cancelling.
cancelTimeoutInMinutes: nonEmptyString # How much time to give 'run always even if cancelled tasks' before killing them.
variables: { string: string } | [ variable | variableReference ]
environment: string # Target environment name and optionally a resource-name to record the deployment history; format: <environment-name>.<resource-name>
strategy: [ deployment strategy ] # See deployment strategy schema.
Deployment strategies
When you're deploying application updates, it's important that the technique you use to deliver the update will:
Enable initialization.
Deploy the update.
Route traffic to the updated version.
Test the updated version after routing traffic.
In case of failure, run steps to restore to the last known good version.
We achieve this by using lifecycle hooks that can run steps during deployment. Each of the lifecycle hooks
resolves into an agent job or a server job (or a container or validation job in the future), depending on the pool
attribute. By default, the lifecycle hooks will inherit the pool specified by the deployment job.
Deployment jobs use the $(Pipeline.Workspace) system variable.
Descriptions of lifecycle hooks
preDeploy : Used to run steps that initialize resources before application deployment starts.
deploy : Used to run steps that deploy your application. The download artifact task is auto-injected only into the
deploy hook for deployment jobs. To stop downloading artifacts, use - download: none, or choose specific
artifacts to download by specifying the Download Pipeline Artifact task explicitly (see the sketch after this list).
routeTraffic : Used to run steps that serve the traffic to the updated version.
postRouteTraffic : Used to run the steps after the traffic is routed. Typically, these tasks monitor the health of the
updated version for a defined interval.
on: failure or on: success : Used to run steps for rollback actions or clean-up.
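For example, a deploy hook that skips the auto-injected artifact download might look like this sketch:
strategy:
  runOnce:
    deploy:
      steps:
      - download: none   # skip the auto-injected download step
      - script: echo deploy using artifacts fetched some other way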
RunOnce deployment strategy
runOnce is the simplest deployment strategy, wherein all the lifecycle hooks, namely preDeploy, deploy,
routeTraffic, and postRouteTraffic, are executed once. Then, either on: success or on: failure is
executed.
strategy:
runOnce:
preDeploy:
pool: [ server | pool ] # See pool schema.
steps:
- script: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
deploy:
pool: [ server | pool ] # See pool schema.
steps:
...
routeTraffic:
pool: [ server | pool ]
steps:
...
postRouteTraffic:
pool: [ server | pool ]
steps:
...
on:
failure:
pool: [ server | pool ]
steps:
...
success:
pool: [ server | pool ]
steps:
...
Rolling deployment strategy
A rolling deployment replaces instances of the previous version of an application with instances of the new
version of the application on a fixed set of virtual machines (rolling set) in each iteration.
We currently support the rolling strategy only for VM resources.
For example, a rolling deployment typically waits for deployments on each set of virtual machines to complete
before proceeding to the next set of deployments. You could do a health check after each iteration and if a
significant issue occurs, the rolling deployment can be stopped.
Rolling deployments can be configured by specifying the keyword rolling: under the strategy: node. The
strategy.name variable is available in this strategy block and takes the name of the strategy, in this case,
rolling.
strategy:
rolling:
maxParallel: [ number or percentage as x% ]
preDeploy:
steps:
- script: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
deploy:
steps:
...
routeTraffic:
steps:
...
postRouteTraffic:
steps:
...
on:
failure:
steps:
...
success:
steps:
...
All the lifecycle hooks are supported and lifecycle hook jobs are created to run on each VM.
preDeploy , deploy , routeTraffic , and postRouteTraffic are executed once per batch size defined by
maxParallel . Then, either on: success or on: failure is executed.
With maxParallel: <# or % of VMs> , you can control the number/percentage of virtual machine targets to deploy
to in parallel. This ensures that the app is running on these machines and is capable of handling requests while
the deployment is taking place on the rest of the machines, which reduces overall downtime.
NOTE
There are a few known gaps in this feature. For example, when you retry a stage, it will re-run the deployment on all VMs
not just failed targets.
Canary deployment strategy
The canary deployment strategy supports the preDeploy lifecycle hook (executed once) and iterates with the deploy
, routeTraffic , and postRouteTraffic lifecycle hooks. It then exits with either the success or failure hook.
The following variables are available in this strategy:
strategy.name : Name of the strategy. For example, canary.
strategy.action : The action to be performed on the Kubernetes cluster. For example, deploy, promote, or reject.
strategy.increment : The increment value used in the current iteration. This variable is available only in the deploy,
routeTraffic, and postRouteTraffic lifecycle hooks.
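The canary strategy block has the same shape as the other strategies, with an increments list controlling the iterations. A sketch of the schema, following the pattern of the runOnce and rolling schemas above:
strategy:
  canary:
    increments: [ number ]
    preDeploy:
      pool: [ server | pool ] # See pool schema.
      steps:
      - script: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
    deploy:
      steps:
      ...
    routeTraffic:
      steps:
      ...
    postRouteTraffic:
      steps:
      ...
    on:
      failure:
        steps:
        ...
      success:
        steps:
        ...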
Examples
RunOnce deployment strategy
The following example YAML snippet showcases a simple use of a deploy job by using the runOnce deployment
strategy.
jobs:
# Track deployments on the environment.
- deployment: DeployWeb
displayName: deploy Web App
pool:
vmImage: 'Ubuntu-16.04'
# Creates an environment if it doesn't exist.
environment: 'smarthotel-dev'
strategy:
# Default deployment strategy, more coming...
runOnce:
deploy:
steps:
- script: echo my first deployment
With each run of this job, deployment history is recorded against the smarthotel-dev environment.
NOTE
It's also possible to create an environment with empty resources and use that as an abstract shell to record deployment
history, as shown in the previous example.
The next example demonstrates how a pipeline can refer both an environment and a resource to be used as the
target for a deployment job.
jobs:
- deployment: DeployWeb
  displayName: deploy Web App
  pool:
    vmImage: 'Ubuntu-16.04'
  # Records deployment against bookings resource - Kubernetes namespace.
  environment: 'smarthotel-dev.bookings'
  strategy:
    runOnce:
      deploy:
        steps:
        # No need to explicitly pass the connection details.
        - task: KubernetesManifest@0
          displayName: Deploy to Kubernetes cluster
          inputs:
            action: deploy
            namespace: $(k8sNamespace)
            manifests: |
              $(System.ArtifactsDirectory)/manifests/*
            imagePullSecrets: |
              $(imagePullSecret)
            containers: |
              $(containerRegistry)/$(imageRepository):$(tag)
# Set an output variable in a lifecycle hook of a deployment job executing canary strategy.
- deployment: A
  pool:
    vmImage: 'ubuntu-16.04'
  environment: staging
  strategy:
    canary:
      increments: [10,20]  # Creates multiple jobs, one for each increment. Output variable can be referenced with this.
      deploy:
        steps:
        - bash: echo "##vso[task.setvariable variable=myOutputVar;isOutput=true]this is the deployment variable value"
          name: setvarStep
        - bash: echo $(setvarStep.myOutputVar)
          name: echovar
For a runOnce job, specify the name of the job instead of the lifecycle hook:
# Set an output variable in a lifecycle hook of a deployment job executing runOnce strategy.
- deployment: A
  pool:
    vmImage: 'ubuntu-16.04'
  environment: staging
  strategy:
    runOnce:
      deploy:
        steps:
        - bash: echo "##vso[task.setvariable variable=myOutputVar;isOutput=true]this is the deployment variable value"
          name: setvarStep
        - bash: echo $(setvarStep.myOutputVar)
          name: echovar
When you define an environment in a deployment job, the syntax of the output variable varies depending on
how the environment gets defined. In this example, deployment A1 references its environment with shorthand
notation, and A2 includes the full syntax with a defined resource type.
stages:
- stage: MyStage
  jobs:
  - deployment: A1
    pool:
      vmImage: 'ubuntu-16.04'
    environment: env1
    strategy:
      runOnce:
        deploy:
          steps:
          - bash: echo "##vso[task.setvariable variable=myOutputVar;isOutput=true]this is the deployment variable value"
            name: setvarStep
          - bash: echo $(System.JobName)
  - deployment: A2
    pool:
      vmImage: 'ubuntu-16.04'
    environment:
      name: env1
      resourceType: virtualmachine
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "##vso[task.setvariable variable=myOutputVarTwo;isOutput=true]this is the second deployment variable value"
            name: setvarStepTwo
  - job: B1
    dependsOn: A1
    pool:
      vmImage: 'ubuntu-16.04'
    variables:
      myVarFromDeploymentJob: $[ dependencies.A1.outputs['A1.setvarStep.myOutputVar'] ]
    steps:
    - script: "echo $(myVarFromDeploymentJob)"
      name: echovar
  - job: B2
    dependsOn: A2
    pool:
      vmImage: 'ubuntu-16.04'
    variables:
      myVarFromDeploymentJob: $[ dependencies.A1.outputs['A1.setvarStepTwo.myOutputVar'] ]
      myOutputVarTwo: $[ dependencies.A2.outputs['Deploy_vmsfortesting.setvarStepTwo.myOutputVarTwo'] ]
    steps:
    - script: "echo $(myOutputVarTwo)"
      name: echovartwo
FAQ
My pipeline is stuck with the message "Job is pending...". How can I fix this?
This can happen when there is a name conflict between two jobs. Verify that any deployment jobs in the same
stage have a unique name and that job and stage names do not contain keywords. If renaming does not fix the
problem, review troubleshooting pipeline runs.
Use a decorator to inject steps into a pipeline
TIP
Check out our newest documentation on extension development using the Azure DevOps Extension SDK.
Pipeline decorators let you add steps to the beginning and end of every job. This process is different than adding
steps to a single definition because it applies to all pipelines in an organization.
Suppose our organization requires running a virus scanner on all build outputs that could be released. Pipeline
authors don't need to remember to add that step. We create a decorator that automatically injects the step. Our
pipeline decorator injects a custom task that does virus scanning at the end of every pipeline job.
{
"manifestVersion": 1,
"contributions": [
{
"id": "my-required-task",
"type": "ms.azure-pipelines.pipeline-decorator",
"targets": [
"ms.azure-pipelines-agent-job.post-job-tasks"
],
"properties": {
"template": "my-decorator.yml"
}
}
],
"files": [
{
"path": "my-decorator.yml",
"addressable": true,
"contentType": "text/plain"
}
]
}
Contribution options
Let's take a look at the properties and what they're used for:
targets : Decorators can run before your job, after it, or both. See the table below for available options.
Targets
TARGET                                                    DESCRIPTION
ms.azure-pipelines-agent-job.pre-job-tasks Run before other tasks in a classic build or YAML pipeline. Due
to differences in how source code checkout happens, this
target will run before checkout in a YAML pipeline but after
checkout in a classic build pipeline.
ms.azure-pipelines-agent-job.post-checkout-tasks Run after the last checkout task in a classic build or YAML
pipeline.
In this example, we use ms.azure-pipelines-agent-job.post-job-tasks only because we want to run at the end of all
build jobs.
This extension contributes a pipeline decorator. Next, we'll create a template YAML file to define the decorator's
behavior.
Decorator YAML
In the extension's properties, we chose the name "my-decorator.yml". Create that file in the root of your
contribution. It holds the set of steps to run after each job. We'll start with a basic example and work up to the full
task.
my-decorator.yml (initial version )
steps:
- task: CmdLine@2
displayName: 'Run my script (injected from decorator)'
inputs:
script: dir
NOTE
The decorator runs on every job in every pipeline in the organization. In later steps, we'll add logic to control when and how
the decorator runs.
Conditional injection
In our example, we only need to run the virus scanner if the build outputs might be released to the public. Let's say
that only builds from the default branch (typically master ) are ever released. We should limit the decorator to jobs
running against the default branch.
The updated file looks like this:
my-decorator.yml (revised version )
steps:
- ${{ if eq(resources.repositories['self'].ref, resources.repositories['self'].defaultBranch) }}:
  - script: dir
    displayName: 'Run my script (injected from decorator)'
You can start to see the power of this extensibility point. Use the context of the current job to conditionally inject
steps at runtime. Use YAML expressions to make decisions about what steps to inject and when. See pipeline
decorator expression context for a full list of available data.
There's another condition we need to consider: what if the user already included the virus scanning step? We
shouldn't waste time running it again. In this simple example, we'll pretend that any script task found in the job is
running the virus scanner. (In a real implementation, you'd have a custom task to check for that instead.)
The script task's ID is d9bafed4-0b18-4f58-968d-86655b4d2ce9 . If we see another script task, we shouldn't inject ours.
my-decorator.yml (final version )
steps:
- ${{ if and(eq(resources.repositories['self'].ref, resources.repositories['self'].defaultBranch), not(containsValue(job.steps.*.task.id, 'd9bafed4-0b18-4f58-968d-86655b4d2ce9'))) }}:
  - script: dir
    displayName: 'Run my script (injected from decorator)'
Debugging
While authoring your pipeline decorator, you'll likely need to debug. You also may want to see what data you have
available in the context.
You can set the system.debugContext variable to true when you queue a pipeline. Then, look at the pipeline
summary page.
In the pipeline run summary, the injected step appears in each job.
Select the injected task to see its logs, which report the available context and runtime values.
Helpful Links
Learn more about YAML expression syntax.
Pipeline decorator expression context
TIP
Check out our newest documentation on extension development using the Azure DevOps Extension SDK.
Resources
Pipeline resources are available on the resources object.
Repositories
Currently, there's only one key: repositories . repositories is a map from repo ID to information about the
repository.
In a designer build, the primary repo alias is __designer_repo . In a YAML pipeline, the primary repo is called self .
In a release pipeline, repositories aren't available. Release artifact variables are available.
For example, to print the name of the self repo in a YAML pipeline:
steps:
- script: echo ${{ resources.repositories['self'].name }}
resources['repositories']['self'] =
{
"alias": "self",
"id": "<repo guid>",
"type": "Git",
"version": "<commit hash>",
"name": "<repo name>",
"project": "<project guid>",
"defaultBranch": "<default ref of repo, like 'refs/heads/master'>",
"ref": "<current pipeline ref, like 'refs/heads/topic'>",
"versionInfo": {
"author": "<author of tip commit>",
"message": "<commit message of tip commit>"
},
"checkoutOptions": {}
}
Job
Job details are available on the job object.
The data looks similar to:
job =
{
"steps": [
{
"environment": null,
"inputs": {
"script": "echo hi"
},
"type": "Task",
"task": {
"id": "d9bafed4-0b18-4f58-968d-86655b4d2ce9",
"name": "CmdLine",
"version": "2.146.1"
},
"condition": null,
"continueOnError": false,
"timeoutInMinutes": 0,
"id": "5c09f0b5-9bc3-401f-8cfb-09c716403f48",
"name": "CmdLine",
"displayName": "CmdLine",
"enabled": true
}
]
}
Variables
Pipeline variables are also available.
For instance, if the pipeline had a variable called myVar , its value would be available to the decorator as
variables['myVar'] .
For example, to give a decorator an opt-out, we could look for a variable. Pipeline authors who wish to opt out of
the decorator can set this variable, and the decorator won't be injected. If the variable isn't present, then the
decorator is injected as usual.
my-decorator.yml
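The contents of this file are not shown here. A sketch consistent with the opt-out behavior described above, using the same skipInjecting variable as the pipeline below (the injected script and display name are illustrative assumptions), might be:

steps:
- ${{ if ne(variables['skipInjecting'], 'true') }}:
  - script: echo This step is injected unless skipInjecting is set to true
    displayName: 'Run my script (injected from decorator)'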
Then, in a pipeline in the organization, the author can request the decorator not to inject itself.
pipeline-with-opt-out.yml
variables:
skipInjecting: true
steps:
- script: echo This is the only step. No decorator is added.
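The following paragraphs refer to a short steps example that is not shown above. A sketch consistent with the resolved YAML further below (which uses the checkout, bash, and powershell keywords) would be:

steps:
- checkout: self
- bash: echo This is the Bash task
- powershell: Write-Host This is the PowerShell task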
Each of those steps maps to a task. Each task has a unique GUID. Task names and keywords map to task GUIDs
before decorators run. If a decorator wants to check for the existence of another task, it must search by task GUID
rather than by name or keyword.
For normal tasks (which you specify with the task keyword), you can look at the task's task.json to determine its
GUID. For special keywords like checkout and bash in the example above, you can use the following GUIDs:
KEYWORD       GUID                                   TASK NAME
checkout      6D15AF64-176C-496D-B583-FD2AE21D4DF4   (checkout is native to the agent; see the tip below)
bash          6C731C3C-3C68-459A-A5C9-BDE6E6595B5B   Bash
script        D9BAFED4-0B18-4F58-968D-86655B4D2CE9   CmdLine
powershell    E213FF0F-5D5C-4791-802D-52EA3E7BE1F1   PowerShell
After resolving task names and keywords, the above YAML becomes:
steps:
- task: 6D15AF64-176C-496D-B583-FD2AE21D4DF4@1
  inputs:
    repository: self
- task: 6C731C3C-3C68-459A-A5C9-BDE6E6595B5B@3
  inputs:
    targetType: inline
    script: echo This is the Bash task
- task: E213FF0F-5D5C-4791-802D-52EA3E7BE1F1@2
  inputs:
    targetType: inline
    script: Write-Host This is the PowerShell task
TIP
Each of these GUIDs can be found in the task.json for the corresponding in-box task. The only exception is checkout ,
which is a native capability of the agent. Its GUID is built into the Azure Pipelines service and agent.
Specify conditions
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called
definitions, runs are called builds, service connections are called service endpoints, stages are called environments, and
jobs are called phases.
YAML
Classic
You can specify conditions under which a step, job, or stage will run.
Only when all previous dependencies have succeeded. This is the default if there is not a condition set
in the YAML.
Even if a previous dependency has failed, unless the run was canceled. Use succeededOrFailed() in the
YAML for this condition.
Even if a previous dependency has failed, even if the run was canceled. Use always() in the YAML for
this condition.
Only when a previous dependency has failed. Use failed() in the YAML for this condition.
Custom conditions
By default, steps, jobs, and stages run if all previous steps/jobs have succeeded. It's as if you specified
"condition: succeeded()" (see Job status functions).
jobs:
- job: Foo
steps:
- script: echo Hello!
condition: always() # this step will always run, even if the pipeline is canceled
- job: Bar
dependsOn: Foo
condition: failed() # this job will only run if Foo fails
stages:
- stage: A
jobs:
- job: A1
steps:
- script: echo Hello Stage A!
- stage: B
condition: and(succeeded(), eq(variables.isMain, true))
jobs:
- job: B1
steps:
- script: echo Hello Stage B!
- script: echo $(isMain)
Conditions are evaluated to decide whether to start a stage, job, or step. This means that nothing computed at
runtime inside that unit of work will be available. For example, if you have a job which sets a variable using a
runtime expression using $[ ] syntax, you can't use that variable in your custom condition.
YAML is not yet supported in TFS.
In TFS 2017.3, custom task conditions are available in the user interface only for Build pipelines. You can
use the Release REST APIs to establish custom conditions for Release pipelines.
Conditions are written as expressions. The agent evaluates the expression beginning with the innermost
function and works its way out. The final result is a boolean value that determines if the task, job, or stage
should run or not. See the expressions topic for a full guide to the syntax.
Do any of your conditions make it possible for the task to run even after the build is canceled by a user? If so,
then specify a reasonable value for cancel timeout so that these kinds of tasks have enough time to complete
after the user cancels a run.
Examples
Run for the master branch, if succeeding
Run if the build is run by a branch policy for a pull request, if failing
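The bodies of these two examples are not shown above. Sketches using the predefined Build.SourceBranch and Build.Reason variables (documented later in this guide) could look like the following; the steps they attach to are assumptions for illustration.

steps:
- script: echo Deploying from master
  # Run for the master branch, if succeeding
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))

- script: echo A pull request validation build failed
  # Run if the build is run by a branch policy for a pull request, if failing
  condition: and(failed(), eq(variables['Build.Reason'], 'PullRequest'))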
variables:
- name: testNull
value: ''
jobs:
- job: A
steps:
- script: echo testNull is blank
condition: eq('${{ variables.testNull }}', '')
parameters:
- name: doThing
default: true
type: boolean
steps:
- script: echo I did a thing
condition: and(succeeded(), eq('${{ parameters.doThing }}', true))
However, when you pass a parameter to a template, the parameter doesn't yet have its pipeline-supplied value when
the condition gets evaluated. As a result, if you set the parameter value in both the template and the pipeline YAML
files, the default value from the template is the one used in your condition.
# parameters.yml
parameters:
- name: doThing
default: false # value passed to the condition
type: boolean
jobs:
- job: B
steps:
- script: echo I did a thing
condition: and(succeeded(), eq('${{ parameters.doThing }}', true))
# azure-pipeline.yml
parameters:
- name: doThing
default: true # will not be evaluated in time
type: boolean
trigger:
- none
extends:
template: parameters.yml
jobs:
- job: Foo
steps:
- bash: |
echo "This is job Foo."
echo "##vso[task.setvariable variable=doThing;isOutput=true]Yes" #set variable doThing to Yes
name: DetermineResult
- job: Bar
dependsOn: Foo
condition: eq(dependencies.Foo.outputs['DetermineResult.doThing'], 'Yes') #map doThing and check the
value
steps:
- script: echo "Job Foo ran and doThing is Yes."
FAQ
I've got a conditional step that runs even when a job is canceled. Does my conditional step affect a job that
I canceled in the queue?
No. If you cancel a job while it's in the queue, then the entire job is canceled, including conditional steps.
I've got a conditional step that should run even when the deployment is canceled. How do I specify this?
If you defined the pipelines using a YAML file, then this is supported. This scenario is not yet supported for
release pipelines.
How can I trigger a job if a previous job succeeded with issues?
You can use the result of the previous job. For example, in this YAML file, the condition
eq(dependencies.A.result,'SucceededWithIssues') allows the job to run because Job A succeeded with issues.
jobs:
- job: A
displayName: Job A
continueOnError: true # next job starts even if this one fails
steps:
- script: echo Job A ran
- script: exit 1
- job: B
dependsOn: A
condition: eq(dependencies.A.result,'SucceededWithIssues') # targets the result of the previous job
displayName: Job B
steps:
- script: echo Job B ran
I've got a conditional step that runs even when a job is canceled. How do I manage to cancel all jobs at
once?
You'll experience this issue if the condition configured in the stage doesn't include a job status check
function. To resolve the issue, add a job status check function, such as succeeded(), to the condition. With this
function configured, canceling a run while a job is still in the queue cancels the entire run, including all the
other stages. For more information, see Job status functions.
stages:
- stage: Stage1
displayName: Stage 1
dependsOn: []
condition: and(contains(variables['build.sourceBranch'], 'refs/heads/master'), succeeded())
jobs:
- job: ShowVariables
displayName: Show variables
steps:
- task: CmdLine@2
displayName: Show variables
inputs:
script: 'printenv'
- stage: Stage2
displayName: stage 2
dependsOn: Stage1
condition: contains(variables['build.sourceBranch'], 'refs/heads/master')
jobs:
- job: ShowVariables
displayName: Show variables 2
steps:
- task: CmdLine@2
displayName: Show variables 2
inputs:
script: 'printenv'
- stage: Stage3
displayName: stage 3
dependsOn: Stage2
condition: and(contains(variables['build.sourceBranch'], 'refs/heads/master'), succeeded())
jobs:
- job: ShowVariables
displayName: Show variables 3
steps:
- task: CmdLine@2
displayName: Show variables 3
inputs:
script: 'printenv'
Related articles
Specify jobs in your pipeline
Add stages, dependencies, & conditions
Specify demands
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
Use demands to make sure that the capabilities your pipeline needs are present on the agents that run it.
Demands are asserted automatically by tasks or manually by you.
NOTE
Demands and capabilities are designed for use with self-hosted agents so that jobs can be matched with an agent that
meets the requirements of the job. When using Microsoft-hosted agents, you select an image for the agent that matches
the requirements of the job, so although it is possible to add capabilities to a Microsoft-hosted agent, you don't need to
use capabilities with Microsoft-hosted agents.
Task demands
Some tasks won't run unless one or more demands are met by the agent. For example, the Visual Studio Build
task demands that msbuild and visualstudio are installed on the agent.
pool:
name: Default
demands: SpecialSoftware # Check if SpecialSoftware capability exists
pool:
name: Default
demands:
- SpecialSoftware # Check if SpecialSoftware capability exists
- Agent.OS -equals Linux # Check if Agent.OS == Linux
1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
2. Choose Manage pools .
NOTE
Microsoft-hosted agents don't display system capabilities. For a list of software installed on Microsoft-
hosted agents, see Use a Microsoft-hosted agent.
From the Agent pools tab, select the desired agent, and choose the Capabilities tab.
TIP
When you manually queue a build you can change the demands for that run.
Library
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
A library is a collection of build and release assets for a project. Assets defined in a library can be used in
multiple build and release pipelines of the project. The Library tab can be accessed directly in Azure Pipelines and
Team Foundation Server (TFS).
At present, the library contains two types of assets: variable groups and secure files.
Variable groups are available to only release pipelines in TFS 2017 and earlier. They are available to build and
release pipelines in TFS 2018 and in Azure Pipelines. Task groups and service connections are available to build
and release pipelines in TFS 2015 and newer, and in Azure Pipelines.
Library Security
All assets defined in the Library tab share a common security model. You can control who can define new items in
a library, and who can use an existing item. Roles are defined for library items, and membership of these roles
governs the operations you can perform on those items.
ROLE ON A LIBRARY ITEM                 PURPOSE
User Members of this role can use the item when authoring build
or release pipelines. For example, you must be a 'User' for a
variable group to be able to use it in a release pipeline.
The security settings for the Library tab control access for all items in the library. Role memberships for individual
items are automatically inherited from those of the Library node. In addition to the three roles listed above, the
Creator role on the library defines who can create new items in the library, but it does not include Reader and
User permissions and cannot be used to manage permissions for other users. By default, the following groups are
added to the Administrator role of the library: Build Administrators , Release Administrators , and Project
Administrators .
Define variables
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called
definitions, runs are called builds, service connections are called service endpoints, stages are called environments,
and jobs are called phases.
Variables give you a convenient way to get key bits of data into various parts of the pipeline. The most
common use of variables is to define a value that you can then use in your pipeline. All variables are stored
as strings and are mutable. The value of a variable can change from run to run or job to job of your pipeline.
When you define the same variable in multiple places with the same name, the most locally scoped variable
wins. So, a variable defined at the job level can override a variable set at the stage level. A variable defined
at the stage level will override a variable set at the pipeline root level. A variable set in the pipeline root level
will override a variable set in the Pipeline settings UI.
Variables are different from runtime parameters, which are typed and available during template parsing.
User-defined variables
When you define a variable, you can use different syntaxes (macro, template expression, or runtime) and
what syntax you use will determine where in the pipeline your variable will render.
In YAML pipelines, you can set variables at the root, stage, and job level. You can also specify variables
outside of a YAML pipeline in the UI. When you set a variable in the UI, that variable can be encrypted and
set as secret. Secret variables are not automatically decrypted in YAML pipelines and need to be passed to
your YAML file with env: or a variable at the root level.
User-defined variables can be set as read-only.
You can use a variable group to make variables available across multiple pipelines.
You can use templates to define variables that are used in multiple pipelines in one file.
System variables
In addition to user-defined variables, Azure Pipelines has system variables with predefined values. If you are
using YAML or classic build pipelines, see predefined variables for a comprehensive list of system variables.
If you are using classic release pipelines, see release variables.
System variables are set with their current value when you run the pipeline. Some variables are set
automatically. As a pipeline author or end user, you change the value of a system variable before the
pipeline is run.
System variables are read-only.
Environment variables
Environment variables are specific to the operating system you are using. They are injected into a pipeline in
platform-specific ways. The format corresponds to how environment variables get formatted for your
specific scripting platform.
On UNIX systems (macOS and Linux), environment variables have the format $NAME . On Windows, the
format is %NAME% for batch and $env:NAME in PowerShell.
System and user-defined variables also get injected as environment variables for your platform. When
variables are turned into environment variables, variable names become uppercase, and periods turn into
underscores. For example, the variable name any.variable becomes the variable name $ANY_VARIABLE .
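For example, a sketch of that transformation (the variable name any.variable is taken from the sentence above; the value is an assumption):

variables:
  any.variable: someValue

steps:
- bash: echo $ANY_VARIABLE   # periods become underscores and the name is uppercased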
Variable characters
User-defined variables can consist of letters, numbers, . , and _ characters. Don't use variable prefixes
that are reserved by the system. These are: endpoint , input , secret , and securefile . Any variable that
begins with one of these strings (regardless of capitalization) will not be available to your tasks and scripts.
variables:
- name: one
value: initialValue
steps:
- script: |
echo ${{ variables.one }} # outputs initialValue
echo $(one)
displayName: First variable pass
- bash: echo '##vso[task.setvariable variable=one]secondValue'
displayName: Set new variable value
- script: |
echo ${{ variables.one }} # outputs initialValue
echo $(one) # outputs secondValue
displayName: Second variable pass
NOTE
Variables are only expanded for stages , jobs , and steps . You cannot, for example, use macro syntax inside a
resource or trigger .
SYNTAX | EXAMPLE | WHEN IS IT PROCESSED? | WHERE DOES IT EXPAND IN A PIPELINE DEFINITION? | HOW DOES IT RENDER WHEN NOT FOUND?
macro | $(var) | runtime, before a task runs | value (right side) | prints $(var)
template expression | ${{ variables.var }} | compile time | key or value (left or right side) | empty string
runtime expression | $[variables.var] | runtime | value (right side) | empty string
Variable scopes
In the YAML file, you can set a variable at various scopes:
At the root level, to make it available to all jobs in the pipeline.
At the stage level, to make it available only to a specific stage.
At the job level, to make it available only to a specific job.
When a variable is defined at the top of a YAML, it will be available to all jobs and stages in the pipeline and
is a global variable. Global variables defined in a YAML are not visible in the pipeline settings UI.
Variables at the job level override variables at the root and stage level. Variables at the stage level override
variables at the root level.
variables:
global_variable: value # this is available to all jobs
jobs:
- job: job1
pool:
vmImage: 'ubuntu-16.04'
variables:
job_variable1: value1 # this is only available in job1
steps:
- bash: echo $(global_variable)
- bash: echo $(job_variable1)
- bash: echo $JOB_VARIABLE1 # variables are available in the script environment too
- job: job2
pool:
vmImage: 'ubuntu-16.04'
variables:
job_variable2: value2 # this is only available in job2
steps:
- bash: echo $(global_variable)
- bash: echo $(job_variable2)
- bash: echo $GLOBAL_VARIABLE
Specify variables
In the preceding examples, the variables keyword is followed by a list of key-value pairs. The keys are the
variable names and the values are the variable values.
There is another syntax, useful when you want to use variable templates or variable groups. This syntax
should be used at the root level of a pipeline.
In this alternate syntax, the variables keyword takes a list of variable specifiers. The variable specifiers are
name for a regular variable, group for a variable group, and template to include a variable template. The
following example demonstrates all three.
variables:
# a regular variable
- name: myvariable
value: myvalue
# a variable group
- group: myvariablegroup
# a reference to a variable template
- template: myvariabletemplate.yml
IMPORTANT
We make an effort to mask secrets from appearing in Azure Pipelines output, but you still need to take precautions.
Never echo secrets as output. Some operating systems log command line arguments. Never pass secrets on the
command line. Instead, we suggest that you map your secrets into environment variables.
We never mask substrings of secrets. If, for example, "abc123" is set as a secret, "abc" isn't masked from the logs. This
is to avoid masking secrets at too granular of a level, making the logs unreadable. For this reason, secrets should not
contain structured data. If, for example, "{ "foo": "bar" }" is set as a secret, "bar" isn't masked from the logs.
Unlike normal variables, secret variables are not automatically decrypted into environment variables for scripts. You
need to map them explicitly.
The following example shows how to use a secret variable called mySecret in PowerShell and Bash scripts.
Unlike a normal pipeline variable, there's no environment variable called MYSECRET .
variables:
  GLOBAL_MYSECRET: $(mySecret) # this will not work because the secret variable needs to be mapped as env
  GLOBAL_MY_MAPPED_ENV_VAR: $(nonSecretVariable) # this works because it's not a secret.

steps:
- powershell: |
    Write-Host "Using an input-macro works: $(mySecret)"
    Write-Host "Using the env var directly does not work: $env:MYSECRET"
    Write-Host "Using a global secret var mapped in the pipeline does not work either: $env:GLOBAL_MYSECRET"
    Write-Host "Using a global non-secret var mapped in the pipeline works: $env:GLOBAL_MY_MAPPED_ENV_VAR"
    Write-Host "Using the mapped env var for this task works and is recommended: $env:MY_MAPPED_ENV_VAR"
  env:
    MY_MAPPED_ENV_VAR: $(mySecret) # the recommended way to map to an env variable
- bash: |
    echo "Using an input-macro works: $(mySecret)"
    echo "Using the env var directly does not work: $MYSECRET"
    echo "Using a global secret var mapped in the pipeline does not work either: $GLOBAL_MYSECRET"
    echo "Using a global non-secret var mapped in the pipeline works: $GLOBAL_MY_MAPPED_ENV_VAR"
    echo "Using the mapped env var for this task works and is recommended: $MY_MAPPED_ENV_VAR"
  env:
    MY_MAPPED_ENV_VAR: $(mySecret) # the recommended way to map to an env variable
In the output from both tasks, the macro and the mapped environment variable print the secret value (masked as asterisks in the logs), while the unmapped environment variables print nothing.
You can also map secret variables using the variables definition. This example shows how to use secret
variables $(vmsUser) and $(vmsAdminPass) in an Azure file copy task.
variables:
VMS_USER: $(vmsUser)
VMS_PASS: $(vmsAdminPass)
pool:
vmImage: 'ubuntu-latest'
steps:
- task: AzureFileCopy@4
inputs:
SourcePath: 'my/path'
azureSubscription: 'my-subscription'
Destination: 'AzureVMs'
storage: 'my-storage'
resourceGroup: 'my-rg'
vmsAdminUserName: $(VMS_USER)
vmsAdminPassword: $(VMS_PASS)
The following example combines a variable group with variables defined in YAML, and uses a mapped secret environment variable to encode a Personal Access Token (PAT) in a PowerShell task.

variables:
- group: 'my-var-group' # variable group
- name: 'devopsAccount' # new variable defined in YAML
  value: 'contoso'
- name: 'projectName' # new variable defined in YAML
  value: 'contosoads'

steps:
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      # Encode the Personal Access Token (PAT)
      # $env:USER is a normal variable in the variable group
      # $env:MY_MAPPED_TOKEN is a mapped secret variable
      $base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $env:USER,$env:MY_MAPPED_TOKEN)))
IMPORTANT
By default with GitHub repositories, secret variables associated with your pipeline aren't made available to pull
request builds of forks. For more information, see Contributions from forks.
steps:
- task: MyTask@1 # this step generates the output variable
name: ProduceVar # because we're going to depend on it, we need to name the step
- script: echo $(ProduceVar.MyVar) # this step uses the output variable
jobs:
- job: A
steps:
- task: MyTask@1 # this step generates the output variable
name: ProduceVar # because we're going to depend on it, we need to name the step
- job: B
dependsOn: A
variables:
# map the output variable from A into this job
varFromA: $[ dependencies.A.outputs['ProduceVar.MyVar'] ]
steps:
- script: echo $(varFromA) # this step uses the mapped-in variable
stages:
- stage: One
  jobs:
  - job: A
    steps:
    - task: MyTask@1 # this step generates the output variable
      name: ProduceVar # because we're going to depend on it, we need to name the step
- stage: Two
  jobs:
  - job: B
    variables:
      # map the output variable from A into this job
      varFromA: $[ stageDependencies.One.A.outputs['ProduceVar.MyVar'] ]
    steps:
    - script: echo $(varFromA) # this step uses the mapped-in variable
List variables
You can list all of the variables in your pipeline with the az pipelines variable list command. To get started,
see Get started with Azure DevOps CLI.
Parameters
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
pipeline-id : Required if pipeline-name is not supplied. ID of the pipeline.
pipeline-name : Required if pipeline-id is not supplied, but ignored if pipeline-id is supplied. Name
of the pipeline.
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up by using
git config .
Example
The following command lists all of the variables in the pipeline with ID 12 and shows the result in table
format.
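A sketch of that command, assuming the default organization and project have already been configured:

az pipelines variable list --pipeline-id 12 --output table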
steps:
# Create a variable
- bash: |
echo "##vso[task.setvariable variable=sauce]crushed tomatoes"
Subsequent steps will also have the pipeline variable added to their environment.
steps:
# Create a variable
# Note that this does not update the environment of the current script.
- bash: |
echo "##vso[task.setvariable variable=sauce]crushed tomatoes"
# An environment variable called `SAUCE` has been added to all downstream steps
- bash: |
echo "my environment variable is $SAUCE"
- pwsh: |
Write-Host "my environment variable is $env:SAUCE"
jobs:
# Set an output variable from job A
- job: A
pool:
vmImage: 'vs2017-win2016'
steps:
- powershell: echo "##vso[task.setvariable variable=myOutputVar;isOutput=true]this is the value"
name: setvarStep
- script: echo $(setvarStep.myOutputVar)
name: echovar
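The snippet below references a stage named A with a job A1 and a step named printvar that isn't shown above; a sketch consistent with that reference would be:

stages:
- stage: A
  jobs:
  - job: A1
    steps:
    - bash: echo "##vso[task.setvariable variable=myStageOutputVar;isOutput=true]this is a stage output variable"
      name: printvar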
- stage: B
dependsOn: A
variables:
myVarfromStageA: $[ stageDependencies.A.A1.outputs['printvar.myStageOutputVar'] ]
jobs:
- job: B1
steps:
- script: echo $(myVarfromStageA)
If you're setting a variable from a matrix or slice, then, to reference the variable when you access it from a
downstream job, you must include:
The name of the job.
The step.
jobs:
# Map the variable from the job for the first slice
- job: B
dependsOn: A
pool:
vmImage: 'ubuntu-18.04'
variables:
myVarFromJobsA1: $[ dependencies.A.outputs['job1.setvarStep.myOutputVar'] ]
steps:
- script: "echo $(myVarFromJobsA1)"
name: echovar
Be sure to prefix the job name to the output variables of a deployment job. In this case, the job name is A :
jobs:
# Map the variable from the job for the first slice
- job: B
dependsOn: A
pool:
vmImage: 'ubuntu-18.04'
variables:
myVarFromDeploymentJob: $[ dependencies.A.outputs['A.setvarStep.myOutputVar'] ]
steps:
- bash: "echo $(myVarFromDeploymentJob)"
name: echovar
- job: B
  dependsOn: A
  variables:
    myVarFromJobsA1: $[ dependencies.A.outputs['job1.setvarStep.myOutputVar'] ] # remember to use single quotes
You can use any of the supported expressions for setting a variable. Here's an example of setting a variable
to act as a counter that starts at 100, gets incremented by 1 for every run, and gets reset to 100 every day.
jobs:
- job:
variables:
a: $[counter(format('{0:yyyyMMdd}', pipeline.startTime), 100)]
steps:
- bash: echo $(a)
For more information about counters, dependencies, and other expressions, see expressions.
YAML is not supported in TFS.
Expansion of variables
YAML
Classic
Azure DevOps CLI
When you set a variable with the same name in multiple scopes, the following precedence applies (highest
precedence first).
1. Job level variable set in the YAML file
2. Stage level variable set in the YAML file
3. Pipeline level variable set in the YAML file
4. Variable set at queue time
5. Pipeline variable set in Pipeline settings UI
In the following example, the same variable a is set at the pipeline level and job level in YAML file. It's also
set in a variable group G , and as a variable in the Pipeline settings UI.
variables:
a: 'pipeline yaml'
stages:
- stage: one
displayName: one
variables:
- name: a
value: 'stage yaml'
jobs:
- job: A
variables:
- name: a
value: 'job yaml'
steps:
- bash: echo $(a) # This will be 'job yaml'
When you set a variable with the same name in the same scope, the last set value will take precedence.
stages:
- stage: one
displayName: Stage One
variables:
- name: a
value: alpha
- name: a
value: beta
jobs:
- job: I
displayName: Job I
variables:
- name: b
value: uno
- name: b
value: dos
steps:
- script: echo $(a) #outputs beta
- script: echo $(b) #outputs dos
NOTE
When you set a variable in the YAML file, don't define it in the web editor as settable at queue time. You can't
currently change variables that are set in the YAML file at queue time. If you need a variable to be settable at queue
time, don't set it in the YAML file.
Variables are expanded once when the run is started, and again at the beginning of each step. For example:
jobs:
- job: A
variables:
a: 10
steps:
- bash: |
echo $(a) # This will be 10
echo '##vso[task.setvariable variable=a]20'
echo $(a) # This will also be 10, since the expansion of $(a) happens before the step
- bash: echo $(a) # This will be 20, since the variables are expanded just before the step
There are two steps in the preceding example. The expansion of $(a) happens once at the beginning of the
job, and once at the beginning of each of the two steps.
Because variables are expanded at the beginning of a job, you can't use them in a strategy. In the following
example, you can't use the variable a to expand the job matrix, because the variable is only available at the
beginning of each expanded job.
jobs:
- job: A
variables:
a: 10
strategy:
matrix:
x:
some_variable: $(a) # This does not work
If the variable a is an output variable from a previous job, then you can use it in a future job.
- job: A
steps:
- powershell: echo "##vso[task.setvariable variable=a;isOutput=true]10"
name: a_step
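The downstream job that consumes this output variable isn't shown above. A minimal sketch, assuming a job named B that maps the a_step output, would be:

- job: B
  dependsOn: A
  variables:
    some_variable: $[ dependencies.A.outputs['a_step.a'] ]   # output variable a from job A's a_step
  steps:
  - bash: echo $(some_variable)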
Recursive expansion
On the agent, variables referenced using $( ) syntax are recursively expanded. However, for service-side
operations such as setting display names, variables aren't expanded recursively. For example:
variables:
myInner: someValue
myOuter: $(myInner)
steps:
- script: echo $(myOuter) # prints "someValue"
displayName: Variable is $(myOuter) # display name is "Variable is $(myInner)"
Use predefined variables
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
Variables give you a convenient way to get key bits of data into various parts of your pipeline. This is the
comprehensive list of predefined variables.
These variables are automatically set by the system and are read-only. (The exceptions are Build.Clean and
System.Debug.) Learn more about working with variables.
NOTE
You can use release variables in your deploy tasks to share common information (for example, environment name and resource group).
Build.Clean
This is a deprecated variable that modifies how the build agent cleans up source. To learn how to clean up source,
see Clean the local repo on the agent.
System.AccessToken
System.AccessToken is a special variable that carries the security token used by the running build.
YAML
Classic
In YAML, you must explicitly map System.AccessToken into the pipeline using a variable. You can do this at the step
or task level:
steps:
- bash: echo This script could use $SYSTEM_ACCESSTOKEN
env:
SYSTEM_ACCESSTOKEN: $(System.AccessToken)
- powershell: Write-Host "This is a script that could use $env:SYSTEM_ACCESSTOKEN"
env:
SYSTEM_ACCESSTOKEN: $(System.AccessToken)
You can configure the default scope for System.AccessToken using build job authorization scope.
System.Debug
For more detailed logs to debug pipeline problems, define System.Debug and set it to true .
1. Edit your pipeline.
2. Select Variables .
3. Add a new variable with the name System.Debug and value true .
4. Save the new variable.
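You can also define it as a pipeline variable; a minimal sketch in YAML, assuming you want verbose logs for every run of this pipeline:

variables:
  system.debug: true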
Agent.BuildDirectory The local path on the agent where all folders for a given
build pipeline are created. This variable has the same value
as Pipeline.Workspace .
For example: /home/vsts/work/1
Agent.ContainerMapping A mapping from container resource names in YAML to their Docker IDs at runtime.
For example:
{
  "one_container": {
    "id": "bdbb357d73a0bd3550a1a5b778b62a4c88ed2051c7802a0659f1ff6e76910190"
  },
  "another_container": {
    "id": "82652975109ec494876a8ccbb875459c945982952e0a72ad74c91216707162bb"
  }
}
Agent.HomeDirectory The directory the agent is installed into. This contains the
agent software. For example: c:\agent .
Agent.JobName The name of the running job. This will usually be "Job" or
"__default", but in multi-config scenarios, will be the
configuration.
Agent.JobStatus The status of the build.
Canceled
Failed
Succeeded
SucceededWithIssues (partially successful)
The environment variable should be referenced as
AGENT_JOBSTATUS . The older agent.jobstatus is
available for backwards compatibility.
Agent.Name The name of the agent that is registered with the pool.
If you are using a self-hosted agent, then this name is
specified by you. See agents.
Agent.OS The operating system of the agent host. Valid values are:
Windows_NT
Darwin
Linux
Agent.ToolsDirectory The directory used by tasks such as Node Tool Installer and
Use Python Version to switch between multiple versions of a
tool. These tasks will add tools from this directory to PATH so
that subsequent build steps can use them.
Note: Build.ArtifactStagingDirectory
and Build.StagingDirectory are
interchangeable. This directory is
purged before each new build, so
you don't have to clean it up
yourself.
Agent.BuildDirectory The local path on the agent where all folders for a given
build pipeline are created. This variable has the same value
as Pipeline.Workspace .
For example: /home/vsts/work/1
Agent.HomeDirectory The directory the agent is installed into. This contains the
agent software. For example: c:\agent .
Agent.Name The name of the agent that is registered with the pool.
If you are using a self-hosted agent, then this name is
specified by you. See agents.
Agent.OS The operating system of the agent host. Valid values are:
Windows_NT
Darwin
Linux
If you're running in a container, the agent host and container
may be running different operating systems.
Agent.ToolsDirectory The directory used by tasks such as Node Tool Installer and
Use Python Version to switch between multiple versions of a
tool. These tasks will add tools from this directory to PATH so
that subsequent build steps can use them.
Note: Build.ArtifactStagingDirectory
and Build.StagingDirectory are
interchangeable. This directory is
purged before each new build, so
you don't have to clean it up
yourself.
Agent.BuildDirectory The local path on the agent where all folders for a given
build pipeline are created.
For example: c:\agent_work\1
Agent.HomeDirectory The directory the agent is installed into. This contains the
agent software. For example: c:\agent .
Agent.JobName The name of the running job. This will usually be "Job" or
"__default", but in multi-config scenarios, will be the
configuration.
Agent.Name The name of the agent that is registered with the pool.
If you are using a self-hosted agent, then this name is
specified by you. See agents.
Agent.OS The operating system of the agent host. Valid values are:
Windows_NT
Darwin
Linux
If you're running in a container, the agent host and container
may be running different operating systems.
Agent.TempDirectory A temporary folder that is cleaned after each pipeline job. This
directory is used by tasks such as .NET Core CLI task to hold
temporary items like test results before they are published.
Agent.ToolsDirectory The directory used by tasks such as Node Tool Installer and
Use Python Version to switch between multiple versions of a
tool. These tasks will add tools from this directory to PATH so
that subsequent build steps can use them.
Build.ArtifactStagingDirectory The local path on the agent where any artifacts are copied
to before being pushed to their destination. For example:
c:\agent_work\1\a
Build.BuildNumber The name of the completed build. You can specify the build
number format that generates this value in the pipeline
options.
Build.Repository.Clean The value you've selected for Clean in the source repository
settings.
Build.SourceBranch The branch the build was queued for. For example, refs/heads/master for a Git repo branch,
refs/pull/1/merge for a pull request, or $/teamproject/main for a TFVC repo branch.
When you use this variable in your build number format, the forward slash characters ( / ) are replaced with
underscore characters ( _ ).
Build.SourceBranchName The name of the branch the build was queued for.
Git repo branch or pull request: The last path segment
in the ref. For example, in refs/heads/master this
value is master . In refs/heads/feature/tools this
value is tools .
TFVC repo branch: The last path segment in the root
server path for the workspace. For example in
$/teamproject/main this value is main .
TFVC repo gated check-in or shelveset build is the
name of the shelveset. For example,
Gated_2016-06-
06_05.20.51.4369;[email protected]
or myshelveset;[email protected] .
Note: In TFVC, if you are running a gated check-in build or
manually building a shelveset, you cannot use this variable in
your build number format.
Build.SourcesDirectory The local path on the agent where your source code files
are downloaded. For example: c:\agent_work\1\s
Build.SourceVersion The latest version control change that is included in this build.
Git: The commit ID.
TFVC: the changeset.
This variable is agent-scoped. It can be used as an environment
variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.
Build.SourceVersionMessage The comment of the commit or changeset. We truncate the
message to the first line or 200 characters, whichever is
shorter.
This variable is agent-scoped. It can be used as an
environment variable in a script and as a parameter in a
build task, but not as part of the build number or as a
version control tag.
Note: This variable is available in TFS 2015.4.
Build.StagingDirectory The local path on the agent where any artifacts are copied
to before being pushed to their destination. For example:
c:\agent_work\1\a
Note: This variable yields a value that is invalid for build use in
a build number format.
Build.TriggeredBy.BuildId If the build was triggered by another build, then this variable is
set to the BuildID of the triggering build.
Build.TriggeredBy.DefinitionId If the build was triggered by another build, then this variable is
set to the DefinitionID of the triggering build.
Build.TriggeredBy.DefinitionName If the build was triggered by another build, then this variable is
set to the name of the triggering build pipeline.
Build.TriggeredBy.ProjectID If the build was triggered by another build, then this variable is
set to ID of the project that contains the triggering build.
Common.TestResultsDirectory The local path on the agent where the test results are created.
For example: c:\agent_work\1\TestResults
System.DefaultWorkingDirectory The local path on the agent where your source code files
are downloaded. For example: c:\agent_work\1\s
System.HostType Set to build if the pipeline is a build. For a release, the values
are deployment for a Deployment group job and release
for an Agent job.
System.PullRequest.IsFork If the pull request is from a fork of the repository, this variable
is set to True . Otherwise, it is set to False .
System.PullRequest.PullRequestId The ID of the pull request that caused this build. For example:
17 . (This variable is initialized only if the build ran because of
a Git PR affected by a branch policy.)
System.PullRequest.PullRequestNumber The number of the pull request that caused this build. This
variable is populated for pull requests from GitHub which have
a different pull request ID and pull request number.
System.PullRequest.SourceRepositoryURI The URL to the repo that contains the pull request. For
example:
https://dev.azure.com/ouraccount/_git/OurProject . (This
variable is initialized only if the build ran because of an Azure
Repos Git PR affected by a branch policy. It is not initialized for
GitHub PRs.)
System.PullRequest.TargetBranch The branch that is the target of a pull request. For example:
refs/heads/master . This variable is initialized only if the build
ran because of a Git PR affected by a branch policy.
Agent.BuildDirectory The local path on the agent where all folders for a given
build pipeline are created.
For example: c:\agent_work\1
Agent.HomeDirectory The directory the agent is installed into. This contains the
agent software. For example: c:\agent .
Agent.Name The name of the agent that is registered with the pool.
This name is specified by you. See agents.
Agent.TempDirectory A temporary folder that is cleaned after each pipeline job. This
directory is used by tasks such as .NET Core CLI task to hold
temporary items like test results before they are published.
Agent.ToolsDirectory The directory used by tasks such as Node Tool Installer and
Use Python Version to switch between multiple versions of a
tool. These tasks will add tools from this directory to PATH so
that subsequent build steps can use them.
Build.ArtifactStagingDirectory The local path on the agent where any artifacts are copied
to before being pushed to their destination. For example:
c:\agent_work\1\a
Build.BinariesDirectory The local path on the agent you can use as an output folder
for compiled binaries.
Build.Repository.Clean The value you've selected for Clean in the source repository
settings.
Build.Repository.LocalPath The local path on the agent where your source code files
are downloaded. For example: c:\agent_work\1\s
Build.SourceBranch The branch the build was queued for. Some examples:
Git repo branch: refs/heads/master
Git repo pull request: refs/pull/1/merge
TFVC repo branch: $/teamproject/main
TFVC repo gated check-in:
Gated_2016-06-
06_05.20.51.4369;[email protected]
TFVC repo shelveset build:
myshelveset;[email protected]
When you use this variable in your build number format, the
forward slash characters ( / ) are replaced with underscore
characters ( _ ).
Build.SourceBranchName The name of the branch the build was queued for.
Git repo branch or pull request: The last path segment
in the ref. For example, in refs/heads/master this
value is master . In refs/heads/feature/tools this
value is tools .
TFVC repo branch: The last path segment in the root
server path for the workspace. For example in
$/teamproject/main this value is main .
TFVC repo gated check-in or shelveset build is the
name of the shelveset. For example,
Gated_2016-06-
06_05.20.51.4369;[email protected]
or myshelveset;[email protected] .
Note: In TFVC, if you are running a gated check-in build or
manually building a shelveset, you cannot use this variable in
your build number format.
Build.SourcesDirectory The local path on the agent where your source code files
are downloaded. For example: c:\agent_work\1\s
Build.SourceVersion The latest version control change that is included in this build.
Git: The commit ID.
TFVC: the changeset.
This variable is agent-scoped. It can be used as an environment
variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.
Build.StagingDirectory The local path on the agent where any artifacts are copied
to before being pushed to their destination. For example:
c:\agent_work\1\a
Note: This variable yields a value that is invalid for build use in
a build number format.
Common.TestResultsDirectory The local path on the agent where the test results are created.
For example: c:\agent_work\1\TestResults
System.DefaultWorkingDirectory The local path on the agent where your source code files
are downloaded. For example: c:\agent_work\1\s
System.PullRequest.IsFork If the pull request is from a fork of the repository, this variable
is set to True . Otherwise, it is set to False . Available in TFS
2018.2 .
System.PullRequest.PullRequestId The ID of the pull request that caused this build. For example:
17 . (This variable is initialized only if the build ran because of
a Git PR affected by a branch policy.)
System.PullRequest.TargetBranch The branch that is the target of a pull request. For example:
refs/heads/master . This variable is initialized only if the build
ran because of a Git PR affected by a branch policy.
Agent.BuildDirectory The local path on the agent where all folders for a given
build pipeline are created.
For example: c:\agent_work\1
Agent.HomeDirectory The directory the agent is installed into. This contains the
agent software. For example: c:\agent .
Build.ArtifactStagingDirectory The local path on the agent where any artifacts are copied
to before being pushed to their destination. For example:
c:\agent_work\1\a
Build.BuildNumber The name of the completed build. You can specify the build
number format that generates this value in the pipeline
options.
Build.Reason The event that caused the build to run. Available in TFS
2017.3 .
Manual : A user manually queued the build.
IndividualCI : Continuous integration (CI)
triggered by a Git push or a TFVC check-in.
BatchedCI : Continuous integration (CI) triggered
by a Git push or a TFVC check-in when the Batch
changes option was selected.
Schedule : Scheduled trigger.
ValidateShelveset : A user manually queued the
build of a specific TFVC shelveset.
CheckInShelveset : Gated check-in trigger.
PullRequest : The build was triggered by a Git branch
policy that requires a build.
See Build pipeline triggers, Improve code quality with branch
policies.
Build.Repository.Clean The value you've selected for Clean in the source repository
settings.
Build.Repository.LocalPath The local path on the agent where your source code files
are downloaded. For example: c:\agent_work\1\s
Build.SourceBranch The branch the build was queued for. Some examples:
Git repo branch: refs/heads/master
Git repo pull request: refs/pull/1/merge
TFVC repo branch: $/teamproject/main
TFVC repo gated check-in:
Gated_2016-06-
06_05.20.51.4369;[email protected]
TFVC repo shelveset build:
myshelveset;[email protected]
When you use this variable in your build number format, the
forward slash characters ( / ) are replaced with underscore
characters ( _ ).
Build.SourcesDirectory The local path on the agent where your source code files
are downloaded. For example: c:\agent_work\1\s
Build.SourceVersion The latest version control change that is included in this build.
Git: The commit ID.
TFVC: the changeset.
This variable is agent-scoped. It can be used as an environment
variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.
Note: This variable yields a value that is invalid for build use in
a build number format.
Common.TestResultsDirectory The local path on the agent where the test results are created.
For example: c:\agent_work\1\TestResults
System.DefaultWorkingDirectory The local path on the agent where your source code files
are downloaded. For example: c:\agent_work\1\s
System.PullRequest.PullRequestId The ID of the pull request that caused this build. For example:
17 . (This variable is initialized only if the build ran because of
a Git PR affected by a branch policy.)
System.PullRequest.SourceRepositoryURI The URL to the repo that contains the pull request. For
example:
http://our-
server:8080/tfs/DefaultCollection/_git/OurProject
. (This variable is initialized only if the build ran because of an
Azure Repos Git PR affected by a branch policy.)
System.PullRequest.TargetBranch The branch that is the target of a pull request. For example:
refs/heads/master . This variable is initialized only if the build
ran because of a Git PR affected by a branch policy.
Agent.BuildDirectory The local path on the agent where all folders for a given
build pipeline are created.
For example:
TFS 2015.4:
C:\TfsData\Agents\Agent-MACHINENAME_work\1
TFS 2015 RTM user-installed agent:
C:\Agent_work\6c3842c6
TFS 2015 RTM built-in agent:
C:\TfsData\Build_work\6c3842c6
Agent.HomeDirectory The directory the agent is installed into. This contains the
agent software.
For example:
TFS 2015.4: C:\TfsData\Agents\Agent-MACHINENAME
TFS 2015 RTM user-installed agent: C:\Agent
TFS 2015 RTM built-in agent:
C:\Program Files\Microsoft Team Foundation
Server 14.0\Build
Agent.MachineName The name of the machine on which the agent is installed. This
variable is available in TFS 2015.4 , not in TFS 2015 RTM .
Agent.Name The name of the agent that is registered with the pool.
This name is specified by you. See agents.
Build.ArtifactStagingDirectory The local path on the agent where any artifacts are copied
to before being pushed to their destination.
For example:
TFS 2015.4: C:\TfsData\Agents\Agent-MACHINENAME_work\1\a
TFS 2015 RTM default agent: C:\TfsData\Build_work\6c3842c6\artifacts
TFS 2015 RTM agent installed by you: C:\Agent_work\6c3842c6\artifacts
Build.BuildNumber The name of the completed build. You can specify the build
number format that generates this value in the pipeline
options.
A typical use of this variable is to make it part of the label
format, which you specify on the repository tab.
Note: This value can contain whitespace or other invalid
label characters. In these cases, the label format will fail.
Build.BinariesDirectory The local path on the agent you can use as an output folder
for compiled binaries. Available in TFS 2015.4 .
For example:
C:\TfsData\Agents\Agent-MACHINENAME_work\1\b
Build.Repository.Clean The value you've selected for Clean in the source repository
settings.
Build.Repository.LocalPath The local path on the agent where your source code files
are downloaded. For example: c:\agent_work\1\s
Build.SourceBranch The branch the build was queued for. Some examples:
Git repo branch: refs/heads/master
Git repo pull request: refs/pull/1/merge
TFVC repo branch: $/teamproject/main
TFVC repo gated check-in:
Gated_2016-06-
06_05.20.51.4369;[email protected]
TFVC repo shelveset build:
myshelveset;[email protected]
When you use this variable in your build number format, the
forward slash characters ( / ) are replaced with underscore
characters ( _ ).
Build.SourceBranchName The name of the branch the build was queued for.
Git repo branch or pull request: The last path segment
in the ref. For example, in refs/heads/master this
value is master . In refs/heads/feature/tools this
value is tools .
TFVC repo branch: The last path segment in the root
server path for the workspace. For example in
$/teamproject/main this value is main .
TFVC repo gated check-in or shelveset build is the
name of the shelveset. For example,
Gated_2016-06-
06_05.20.51.4369;[email protected]
or myshelveset;[email protected] .
Note: In TFVC, if you are running a gated check-in build or
manually building a shelveset, you cannot use this variable in
your build number format.
Build.SourcesDirectory The local path on the agent where your source code files
are downloaded. For example: c:\agent_work\1\s
Build.SourcesDirectoryHash Note: This variable is available in TFS 2015 RTM, but not in TFS
2015.4.
Build.SourceVersion The latest version control change that is included in this build.
Git: The commit ID.
TFVC: the changeset.
This variable is agent-scoped. It can be used as an environment
variable in a script and as a parameter in a build task, but not
as part of the build number or as a version control tag.
Build.StagingDirectory
TFS 2015 RTM: The local path on the agent you can use as an output folder
for compiled binaries. For example: C:\TfsData\Build_work\6c3842c6\staging
TFS 2015.4: The local path on the agent where any artifacts are copied to
before being pushed to their destination. For example:
C:\TfsData\Agents\Agent-MACHINENAME_work\1\a
Note: This variable yields a value that is invalid for use in
a build number format.
Common.TestResultsDirectory The local path on the agent where the test results are created.
For example: c:\agent_work\1\TestResults . Available in
TFS 2015.4 .
System.AccessToken Available in TFS 2015.4 . Use the OAuth token to access the
REST API.
System.DefaultWorkingDirectory The local path on the agent where your source code files
are downloaded. For example: c:\agent_work\1\s
System.PullRequest.PullRequestId The ID of the pull request that caused this build. For example:
17 . (This variable is initialized only if the build ran because of
a Git PR affected by a branch policy.)
System.PullRequest.TargetBranch The branch that is the target of a pull request. For example:
refs/heads/master . This variable is initialized only if the build
ran because of a Git PR affected by a branch policy.
If the build is triggered in Git or TFVC by the continuous integration (CI) triggers, the build is queued by the
system identity (for example, [DefaultCollection]\Project Collection Service Accounts) and requested for the
person who pushed or checked in the changes.
If the build is triggered in Git by a branch policy build, the build is queued by the system identity (for example,
[DefaultCollection]\Project Collection Service Accounts) and requested for the person who checked in the changes.
If the build is triggered in TFVC by a gated check-in trigger, the build is both queued by and requested for the
person who checked in the changes.
If the build is triggered in Git or TFVC by scheduled triggers, the build is both queued by and requested for the
system identity (for example, [DefaultCollection]\Project Collection Service Accounts).
Runtime parameters let you have more control over what values can be passed to a pipeline. With runtime
parameters you can:
Supply different values to scripts and tasks at runtime
Control parameter types, ranges allowed, and defaults
Dynamically select jobs and stages with template expressions
You can specify parameters in templates and in the pipeline. Parameters have data types such as number and
string, and they can be restricted to a subset of values. The parameters section in a YAML pipeline defines what
parameters are available.
Parameters are only available at template parsing time. Parameters are expanded just before the pipeline runs so
that values surrounded by ${{ }} are replaced with parameter values. Use variables if you need your values to be
more widely available during your pipeline run.
Parameters must contain a name and data type. Parameters cannot be optional. A default value needs to be
assigned in your YAML file or when you run your pipeline.
parameters:
- name: image
  displayName: Pool Image
  type: string
  default: ubuntu-latest
  values:
  - windows-latest
  - vs2017-win2016
  - ubuntu-latest
  - ubuntu-16.04
  - macOS-latest
  - macOS-10.14

trigger: none

jobs:
- job: build
  displayName: build
  pool:
    vmImage: ${{ parameters.image }}
  steps:
  - script: echo building $(Build.BuildNumber) with ${{ parameters.image }}
When the pipeline runs, you select the Pool Image. If you do not make a selection, the default option,
ubuntu-latest , is used.
Use conditionals with parameters
You can also use parameters as part of conditional logic. With conditionals, part of a YAML will only run if it meets
the if criteria.
Use parameters to determine what steps run
This pipeline only runs a step when the boolean parameter test is true.
parameters:
- name: image
  displayName: Pool Image
  default: ubuntu-latest
  values:
  - windows-latest
  - vs2017-win2016
  - ubuntu-latest
  - ubuntu-16.04
  - macOS-latest
  - macOS-10.14
- name: test
  displayName: Run Tests?
  type: boolean
  default: false

trigger: none

jobs:
- job: build
  displayName: Build and Test
  pool:
    vmImage: ${{ parameters.image }}
  steps:
  - script: echo building $(Build.BuildNumber)
  - ${{ if eq(parameters.test, true) }}:
    - script: echo "Running all the tests"
parameters:
- name: configs
  type: string
  default: 'x86,x64'

trigger: none

jobs:
- ${{ if contains(parameters.configs, 'x86') }}:
  - job: x86
    steps:
    - script: echo Building x86...
- ${{ if contains(parameters.configs, 'x64') }}:
  - job: x64
    steps:
    - script: echo Building x64...
- ${{ if contains(parameters.configs, 'arm') }}:
  - job: arm
    steps:
    - script: echo Building arm...
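The following example defines a pipeline with three dependent stages, Build, UnitTest, and Deploy; you can apply
the same conditional insertion shown above at the stage level to control which stages are included: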
trigger: none

stages:
- stage: Build
  displayName: Build
  jobs:
  - job: Build
    steps:
    - script: echo running Build
- stage: UnitTest
  displayName: Unit Test
  dependsOn: Build
  jobs:
  - job: UnitTest
    steps:
    - script: echo running UnitTest
- stage: Deploy
  displayName: Deploy
  dependsOn: UnitTest
  jobs:
  - job: Deploy
    steps:
    - script: echo running Deploy
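You can also loop through parameters with an each expression. The following steps echo the key and value of
every parameter passed to the pipeline: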
steps:
- ${{ each parameter in parameters }}:
  - script: echo ${{ parameter.Key }}
  - script: echo ${{ parameter.Value }}
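The next example shows a pipeline (azure-pipeline.yaml) that extends a template, start.yaml, which prints a
message when its object parameter foo is empty: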
# azure-pipeline.yaml
trigger: none

extends:
  template: start.yaml

# start.yaml
parameters:
- name: foo
  type: object
  default: []

steps:
- checkout: none
- ${{ if eq(length(parameters.foo), 0) }}:
  - script: echo Foo is empty
    displayName: Foo is empty
The step, stepList, job, jobList, deployment, deploymentList, stage, and stageList data types all use standard YAML
schema format. This example includes string, number, boolean, object, step, and stepList.
parameters:
- name: myString
  type: string
  default: a string
- name: myMultiString
  type: string
  default: default
  values:
  - default
  - ubuntu
- name: myNumber
  type: number
  default: 2
  values:
  - 1
  - 2
  - 4
  - 8
  - 16
- name: myBoolean
  type: boolean
  default: true
- name: myObject
  type: object
  default:
    foo: FOO
    bar: BAR
    things:
    - one
    - two
    - three
    nested:
      one: apple
      two: pear
      count: 3
- name: myStep
  type: step
  default:
    script: echo my step
- name: mySteplist
  type: stepList
  default:
  - script: echo step one
  - script: echo step two

trigger: none

jobs:
- job: stepList
  steps: ${{ parameters.mySteplist }}
- job: myStep
  steps:
  - ${{ parameters.myStep }}
Classic release and artifacts variables
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
Classic release and artifacts variables are a convenient way to exchange and transport data throughout your
pipeline. Each variable is stored as a string and its value can change between runs of your pipeline.
Variables are different from Runtime parameters which are only available at template parsing time.
NOTE
This is a reference article that covers the classic release and artifacts variables. To understand variables in YAML pipelines, see
user-defined variables.
As you compose the tasks for deploying your application into each stage in your DevOps CI/CD processes, variables
will help you to:
Define a more generic deployment pipeline once, and then customize it easily for each stage. For example, a
variable can be used to represent the connection string for web deployment, and the value of this variable
can be changed from one stage to another. These are custom variables .
Use information about the context of the particular release, stage, artifacts, or agent in which the deployment
pipeline is being run. For example, your script may need access to the location of the build to download it, or
to the working directory on the agent to create temporary files. These are default variables .
TIP
You can view the current values of all variables for a release, and use a default variable to run a release in debug mode.
Default variables
Information about the execution context is made available to running tasks through default variables. Your tasks
and scripts can use these variables to find information about the system, release, stage, or agent they are running
in. With the exception of System.Debug , these variables are read-only and their values are automatically set by the
system. Some of the most significant variables are described in the following tables. To view the full list, see View
the current values of all variables.
Example: https://fabrikam.vsrm.visualstudio.com/
Example: https://dev.azure.com/fabrikam/
Example: 6c6f3423-1c84-4625-995a-f7f143a1e43d
Example: 1
System.TeamProject The name of the project to which this build or release belongs.
Example: Fabrikam
Example: 79f5c12e-3337-4151-be41-a268d2c73344
Example: C:\agent\_work\r1\a
Example: C:\agent\_work\r1\a
System.WorkFolder The working directory for this agent, where subfolders are
created for every build or release. Same as
Agent.RootDirectory and Agent.WorkFolder.
Example: C:\agent\_work
System.Debug This is the only system variable that can be set by the users.
Set this to true to run the release in debug mode to assist in
fault-finding.
Example: true
Release.AttemptNumber The number of times this release is deployed in this stage. Not
available in TFS 2015.
Example: 1
Example: 1
Example: 1
Release.DefinitionName The name of the release pipeline to which the current release
belongs.
Example: fabrikam-cd
Release.Deployment.RequestedFor The display name of the identity that triggered (started) the
deployment currently in progress. Not available in TFS 2015.
Example: 2f435d07-769f-4e46-849d-10d1ab9ba6ab
Example: 254
Example: 127
Example: 276
Example: Dev
Example: vstfs://ReleaseManagement/Environment/276
Example: InProgress
Example: fabrikam\_web
Example: 118
Example: Release-47
Example: vstfs://ReleaseManagement/Release/118
Example:
https://dev.azure.com/fabrikam/f3325c6c/_release?
releaseId=392&_a=release-summary
Example: [email protected]
Example: 2f435d07-769f-4e46-849d-10d1ab9ba6ab
Example: FALSE
Release.TriggeringArtifact.Alias The alias of the artifact which triggered the release. This is
empty when the release was scheduled or triggered manually.
Example: fabrikam\_app
Example: NotStarted
Agent.Name The name of the agent as registered with the agent pool. This
is likely to be different from the computer name.
Example: fabrikam-agent
Example: fabrikam-agent
Example: 2.109.1
Agent.JobName The name of the job that is running, such as Release or Build.
Example: Release
Agent.HomeDirectory The folder where the agent is installed. This folder contains the
code and resources for the agent.
Example: C:\agent
Example: C:\agent\_work\r1\a
Agent.RootDirectory The working directory for this agent, where subfolders are
created for every build or release. Same as Agent.WorkFolder
and System.WorkFolder.
Example: C:\agent\_work
Agent.WorkFolder The working directory for this agent, where subfolders are
created for every build or release. Same as
Agent.RootDirectory and System.WorkFolder.
Example: C:\agent\_work
Example: 1
Release.Artifacts.{alias}.SourceBranch The full path and name of the branch from which the source
was built.
Release.Artifacts.{alias}.SourceBranchName The name only of the branch from which the source was built.
Release.Artifacts.{alias}.Repository.Provider The type of repository from which the source was built.
Release.Artifacts.{alias}.PullRequest.TargetBranch The full path and name of the branch that is the target of a
pull request. This variable is initialized only if the release is
triggered by a pull request flow.
Release.Artifacts.{alias}.PullRequest.TargetBranchName The name only of the branch that is the target of a pull
request. This variable is initialized only if the release is
triggered by a pull request flow.
To use a default variable in your script, you must first replace the . in the default variable names with _ . For
example, to print the value of artifact variable Release.Artifacts.{Artifact alias}.DefinitionName for the artifact
source whose alias is ASPNET4.CI in a PowerShell script, you would use
$env:RELEASE_ARTIFACTS_ASPNET4_CI_DEFINITIONNAME .
Note that the original name of the artifact source alias, ASPNET4.CI , is replaced by ASPNET4_CI .
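For example, a minimal PowerShell sketch (using the same ASPNET4.CI alias as above):
# Print the DefinitionName of the artifact source whose alias is ASPNET4.CI.
# Dots in the alias and variable name become underscores in the environment variable name.
Write-Host "Definition name: $env:RELEASE_ARTIFACTS_ASPNET4_CI_DEFINITIONNAME"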
View the current values of all variables
1. Open the pipelines view of the summary for the release, and choose the stage you are interested in. In the
list of steps, choose Initialize job .
2. This opens the log for this step. Scroll down to see the values used by the agent for this job.
Run a release in debug mode
Show additional information as a release executes and in the log files by running the entire release, or just the tasks
in an individual release stage, in debug mode. This can help you resolve issues and failures.
To initiate debug mode for an entire release, add a variable named System.Debug with the value true to the
Variables tab of a release pipeline.
To initiate debug mode for a single stage, open the Configure stage dialog from the shortcut menu of the
stage and add a variable named System.Debug with the value true to the Variables tab.
Alternatively, create a variable group containing a variable named System.Debug with the value true and
link this variable group to a release pipeline.
TIP
If you get an error related to an Azure RM service connection, see How to: Troubleshoot Azure Resource Manager service
connections.
Custom variables
Custom variables can be defined at various scopes.
Share values across all of the definitions in a project by using variable groups. Choose a variable group when
you need to use the same values across all the definitions, stages, and tasks in a project, and you want to be
able to change the values in a single place. You define and manage variable groups in the Library tab.
Share values across all of the stages by using release pipeline variables . Choose a release pipeline
variable when you need to use the same value across all the stages and tasks in the release pipeline, and you
want to be able to change the value in a single place. You define and manage these variables in the
Variables tab in a release pipeline. In the Pipeline Variables page, open the Scope drop-down list and select
"Release". By default, when you add a variable, it is set to Release scope.
Share values across all of the tasks within one specific stage by using stage variables . Use a stage-level
variable for values that vary from stage to stage (and are the same for all the tasks in a stage). You define
and manage these variables in the Variables tab of a release pipeline. In the Pipeline Variables page, open
the Scope drop-down list and select the required stage. When you add a variable, set the Scope to the
appropriate environment.
Using custom variables at project, release pipeline, and stage scope helps you to:
Avoid duplication of values, making it easier to update all occurrences as one operation.
Store sensitive values in a way that they cannot be seen or changed by users of the release pipelines.
Designate a configuration property to be a secure (secret) variable by selecting the (padlock) icon next to
the variable.
IMPORTANT
The values of the hidden (secret) variables are securely stored on the server and cannot be viewed by users after they
are saved. During a deployment, the Azure Pipelines release service decrypts these values when referenced by the
tasks and passes them to the agent over a secure HTTPS channel.
NOTE
Creating custom variables can overwrite standard variables. For example, the PowerShell Path environment variable. If you
create a custom Path variable on a Windows agent, it will overwrite the $env:Path variable and PowerShell won't be able
to run.
NOTE
At present, variables in different groups that are linked to a pipeline in the same scope (e.g., job or stage) will collide and the
result may be unpredictable. Ensure that you use different names for variables across all your variable groups.
You can use custom variables to prompt for values during the execution of a release. For more information, see
Approvals.
Define and modify your variables in a script
To define or modify a variable from a script, use the task.setvariable logging command. Note that the updated
variable value is scoped to the job being executed, and does not flow across jobs or stages. Variable names are
transformed to uppercase, and the characters "." and " " are replaced by "_".
For example, Agent.WorkFolder becomes AGENT_WORKFOLDER . On Windows, you access this as
%AGENT_WORKFOLDER% or $env:AGENT_WORKFOLDER . On Linux and macOS, you use $AGENT_WORKFOLDER .
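For example, a script step can create or update a variable by writing the task.setvariable logging command to
standard output; a minimal sketch, assuming a variable named sauce as used in the Batch script example below:
echo ##vso[task.setvariable variable=sauce]crushed tomatoes
Subsequent steps can then read the value as $(sauce) or through the SAUCE environment variable.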
TIP
You can run a script on a:
Windows agent using either a Batch script task or PowerShell script task.
macOS or Linux agent using a Shell script task.
Batch script
Arguments: "$(sauce)" "$(secret.Sauce)"
Script
@echo off
set sauceArgument=%~1
set secretSauceArgument=%~2
@echo No problem reading %sauceArgument% or %SAUCE%
@echo But I cannot read %SECRET_SAUCE%
@echo But I can read %secretSauceArgument% (but the log is redacted so I do not spoil
the secret)
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019
NOTE
This tutorial will guide you through working with Azure key vault in your pipeline. Another way of working with secrets is
using Secret variables in your Azure Pipeline or referencing secrets in a variable group.
Azure Key Vault helps teams to securely store and manage sensitive information such as API keys, passwords,
certificates, etc.
In this tutorial, you will learn about:
Creating an Azure Key Vault using the Azure CLI
Adding a secret and configuring access to Azure key vault
Using secrets in your pipeline
Prerequisites
An Azure DevOps organization. If you don't have one, you can create one for free.
2. Run the following command to set a default Azure region for your subscription. You can use
az account list-locations to generate a list of available regions.
5. Run the following command to create a new secret in your key vault. Secrets are stored as a key value pair. In
the example below, Password is the key and mysecretpassword is the value.
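A minimal sketch of these two commands (the region and key vault name here are placeholders; substitute your
own values):
az configure --defaults location=eastus2
az keyvault secret set --vault-name "<your-keyvault-name>" --name "Password" --value "mysecretpassword"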
Create a project
Sign in to Azure Pipelines. Your browser will then navigate to https://dev.azure.com/your-organization-name and
display your Azure DevOps dashboard.
If you don't have any projects in your organization yet, select Create a project to get started to create a new
project. Otherwise, select the New project button in the upper-right corner of the dashboard.
Create a repo
We will use YAML to create our pipeline but first we need to create a new repo.
1. Sign in to your Azure DevOps organization and navigate to your project.
2. Go to Repos , and then select Initialize to initialize a new repo with a README.
3. Select the repo you created earlier. It should have the same name as your Azure DevOps project.
4. Select Starter pipeline .
5. The default pipeline will include a few scripts that run echo commands. Those are not needed so we can
delete them. Your new YAML file will now look like this:
trigger:
- main
pool:
vmImage: 'ubuntu-latest'
steps:
6. Select Show assistant to expand the assistant panel. This panel provides a convenient and searchable list of
pipeline tasks.
7. Search for vault and select the Azure Key Vault task.
8. Select and authorize the Azure subscription you used to create your Azure key vault earlier. Select the key
vault and select Add to insert the task at the end of the pipeline. This task allows the pipeline to connect to
your Azure Key Vault and retrieve secrets to use as pipeline variables.
NOTE
The Make secrets available to whole job feature is not currently supported in Azure DevOps Server 2019 and 2020.
9. This step is optional. To verify the retrieval and processing of our secret through the pipeline, add the script
below to your YAML to write the secret to a text file and publish it for review. This is not recommended and it
is for demonstration purposes only.
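A minimal sketch of such a step, assuming the Azure Key Vault task mapped the secret to a variable named
Password (matching the secret created earlier):
- task: CmdLine@2
  inputs:
    script: 'echo $(Password) > secret.txt'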
- publish: secret.txt
TIP
YAML is very particular about formatting and indentation. Make sure your YAML file is indented properly.
10. Do not save or run the pipeline yet. It will fail because the pipeline does not have permissions to access the
key vault yet. Keep this browser tab open; we will resume once we set up the key vault permissions.
TIP
You may need to minimize the Azure CLI panel to see the Select button.
NOTE
You may be asked to allow the pipeline to access Azure resources, if prompted select Allow . You will only have to
approve it once.
3. Select the CmdLine job to view the logs. Note that the actual secret is not part of the logs.
Clean up resources
Follow the steps below to delete the resources you created:
1. If you created a new organization to host your project, see how to delete your organization; otherwise, delete
your project.
2. All Azure resources created during this tutorial are hosted under a single resource group
PipelinesKeyVaultResourceGroup . Run the following command to delete the resource group and all of its
resources.
Next steps
Architect secure infrastructure in Azure Secure your cloud data
Release approvals and gates overview
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
A release pipeline specifies the end-to-end release pipeline for an app to be deployed across a range of stages.
Deployments to each stage are fully automated by using jobs and tasks.
Approvals and gates give you additional control over the start and completion of the deployment pipeline.
Each stage in a release pipeline can be configured with pre-deployment and post-deployment conditions that
can include waiting for users to manually approve or reject deployments, and checking with other automated
systems until specific conditions are verified. In addition, you can configure a manual intervention to pause the
deployment pipeline and prompt users to carry out manual tasks, then resume or reject the deployment.
The following diagram shows how these features are combined in a stage of a release pipeline.
By using approvals, gates, and manual intervention you can take full control of your releases to meet a wide
range of deployment requirements. Typical scenarios where approvals, gates, and manual intervention are
useful include the following.
Scenario and the feature(s) to use:
Some users must manually validate the change request and approve the deployment to a stage: pre-deployment
approvals.
Some users must manually sign out from the app after deployment before the release is promoted to other
stages: post-deployment approvals.
You want to ensure there are no active issues in the work item or problem management system before deploying
a build to a stage: pre-deployment gates.
You want to ensure there are no incidents from the monitoring or incident management system for the app after
it's been deployed, before promoting the release: post-deployment gates.
After deployment, you want to wait for a specified time before prompting some users for a manual sign out:
post-deployment gates and post-deployment approvals.
During the deployment pipeline, a user must manually follow specific instructions and then resume the
deployment: Manual Intervention.
During the deployment pipeline, you want to prompt the user to enter a value for a parameter used by the
deployment tasks, or allow the user to edit the details of this release: Manual Intervention.
You can combine all three techniques within a release pipeline to fully achieve your own deployment
requirements.
In addition, you can install an extension that integrates with ServiceNow to help you control and manage your
deployments through Service Management methodologies such as ITIL. For more information, see Release
deployment control using ServiceNow.
NOTE
The time delay before the pre-deployment gates are executed is capped at 48 hours. If you need to delay the overall
launch of your gates instead, it is recommended to use a delay task in your release pipeline.
# Delay
# Delay further execution of a workflow by a fixed time
jobs:
- job: RunsOnServer
  pool: Server
  steps:
  - task: Delay@1
    inputs:
      delayForMinutes: '0'
Related articles
Approvals
Gates
Manual intervention
ServiceNow release and deployment control
Stages
Triggers
Release pipelines and releases
Additional resources
Video: Deploy quicker and safer with gates in Azure Pipelines
Configure your release pipelines for safe deployments
Azure Pipelines
A pipeline is made up of stages. A pipeline author can control whether a stage should run by defining conditions
on the stage. Another way to control if and when a stage should run is through approvals and checks .
Pipelines rely on resources such as environments, service connections, agent pools, variable groups, and secure
files. Checks enable the resource owner to control if and when a stage in any pipeline can consume a resource. As
an owner of a resource, you can define checks that must be satisfied before a stage consuming that resource can
start. For example, a manual approval check on an environment would ensure that deployment to that
environment only happens after the designated user(s) has reviewed the changes being deployed.
A stage can consist of many jobs, and each job can consume several resources. Before the execution of a stage can
begin, all checks on all the resources used in that stage must be satisfied. Azure Pipelines pauses the execution of
a pipeline prior to each stage, and waits for all pending checks to be completed. Checks are re-evaluated based
on the retry interval specified in each check. If all checks are not successful before the specified timeout, that
stage is not executed. If any of the checks fails terminally (for example, if you reject an approval on one of the
resources), then that stage is not executed.
Approvals and other checks are not defined in the YAML file. Users modifying the pipeline YAML file cannot modify
the checks performed before the start of a stage. Administrators of resources manage checks using the web
interface of Azure Pipelines.
IMPORTANT
Checks can be configured on environments, service connections and agent pools.
Approvals
You can manually control when a stage should run using approval checks. This is commonly used to control
deployments to production environments.
1. In your Azure DevOps project, go to the resource (for example, an environment) that needs to be protected.
2. Navigate to Approvals and Checks for the resource.
3. Select Create , provide the approvers and an optional message, and select Create again to complete the
addition of the manual approval check.
You can add multiple approvers to an environment. These approvers can be individual users or groups of users.
When a group is specified as an approver, only one of the users in that group needs to approve for the run to
move forward.
Using the advanced options, you can configure the minimum number of approvers needed to complete the
approval. A group is considered as one approver.
You can also restrict the user who requested (initiated or created) the run from completing the approval. This
option is commonly used for segregation of roles amongst the users.
When you run a pipeline, the execution of that run pauses before entering a stage that uses the environment.
Users configured as approvers must review and approve or reject the deployment. If you have multiple runs
executing simultaneously, you must approve or reject each of them independently. If all required approvals are
not complete within the Timeout specified for the approval and all other checks succeed, the stage is marked
skipped.
Branch control
Using the branch control check, you can ensure all the resources linked with the pipeline are built from the
allowed branches and that those branches have protection enabled. This helps control the release readiness and
quality of deployments. If multiple resources are linked with the pipeline, the source for all the resources is
verified. If you have linked another pipeline, then the branch of the specific run being deployed is verified for
protection.
To define the branch control check:
1. In your Azure DevOps project, go to the resource (for example, an environment) that needs to be protected.
2. Navigate to Approvals and Checks for the resource.
3. Choose the Branch control check and provide a comma separated list of allowed branches. You can
mandate that the branch should have protection enabled and the behavior of the check in case protection
status for one of the branches is not known.
At run time, the check validates branches for all linked resources in the run against the allowed list. If any
one of the branches does not match the criteria, the check fails and the stage is marked failed.
NOTE
The check requires the branch names to be fully qualified. Make sure the format for branch name is
refs/heads/<branch name>
Business hours
If you want all deployments to your environment to happen only in a specific time window, the business
hours check is the ideal solution. When you run a pipeline, the execution of the stage that uses the resource waits
for business hours. If you have multiple runs executing simultaneously, each of them is independently verified. At
the start of the business hours, the check is marked successful for all the runs.
If execution of the stage has not started at the end of business hours (held up by some other check), the
business hours approval is automatically withdrawn and a re-evaluation is scheduled for the next day. The check
fails if execution of the stage does not start within the Timeout period specified for the check, and the stage is
marked failed.
NOTE
User-defined pipeline variables are not accessible to the check. You can only access the pre-defined variables and variables
from the linked variable group in the request body.
Required template
With the required template check, you can enforce pipelines to use a specific YAML template. When this check is
in place, a pipeline will fail if it doesn't extend from the referenced template.
To define a required template approval:
1. In your Azure DevOps project, go to the service connection that you want to restrict.
2. Open Approvals and Checks in the menu next to Edit .
3. In the Add your first check menu, select Required template .
4. Enter details on how to get to your required template file.
Repository type : The location of your repository (GitHub, Azure, or Bitbucket).
Repository : The name of your repository that contains your template.
Ref : The branch or tag of the required template.
Path to required template : The name of your template.
You can have multiple required templates for the same service connection. In this example, the required template
is required.yml .
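For example, a pipeline that satisfies the check could be as small as the following sketch (assuming required.yml
is resolvable from the pipeline's repository):
trigger: none
extends:
  template: required.yml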
Evaluate artifact
You can evaluate artifact(s) to be deployed to an environment against custom policies.
NOTE
Currently, this works with container image artifacts only
To define a custom policy evaluation over the artifact(s), follow these steps.
1. In your Azure DevOps Services project, navigate to the environment that needs to be protected. Learn more
about creating an environment.
4. Paste the policy definition and click Save . See more about writing policy definitions.
When you run a pipeline, the execution of that run pauses before entering a stage that uses the environment. The
specified policy is evaluated against the available image metadata. The check passes when the policy is successful
and fails otherwise. The stage is marked failed if the check fails.
You can also see the complete logs of the policy checks from the pipeline view.
Exclusive lock
The exclusive lock check allows only a single run from the pipeline to proceed. All stages in all runs of that
pipeline which use the resource are paused. When the stage using the lock completes, then another stage can
proceed to use the resource. Also, only one stage will be allowed to continue. Any other stages which tried to take
the lock will be cancelled.
FAQ
The checks defined did not start. What happened?
The evaluation of checks starts once the stage conditions are satisfied. You should confirm that the run of the
stage started after the checks were added to the resource and that the resource is consumed in the stage.
How can I use checks for scheduling a stage?
Using the business hours check, you can control the time for start of stage execution. You can achieve the same
behavior as predefined schedule on a stage in designer releases.
How can I give approvals in advance for a stage scheduled to run in the future?
This scenario can be enabled as follows:
1. The business hours check enables all stages deploying to a resource to be scheduled for execution within the
time window.
2. When approvals are configured on the same resource, the stage waits for approvals before starting.
3. You can configure both checks on a resource. The stage waits on approvals and business hours, and starts in
the next scheduled window after approvals are complete.
Can I wait for completion of security scanning on the artifact being deployed?
In order to wait for completion of security scanning on the artifact being deployed, you would need to use an
external scanning service like AquaScan. The artifact being deployed would need to be uploaded at a location
accessible to the scanning service before the start of checks, and can be identified using pre-defined variables.
Using the Invoke REST API check, you can add a check to wait on the API in the security service and pass the
artifact identifier as an input.
How can I use output variables from previous stages in a check?
By default, only pre-defined variables are available to checks. You can use a linked variable group to access other
variables. The output variable from the previous stage can be written to the variable group and accessed in the
check.
Release deployment control using gates
Azure Pipelines
Gates allow automatic collection of health signals from external services, and then promote the release when all
the signals are successful at the same time or stop the deployment on timeout. Typically, gates are used in
connection with incident management, problem management, change management, monitoring, and external
approval systems.
The following diagram illustrates the flow of gate evaluation where, after the initial stabilization delay period, not
all gates have succeeded at each sampling interval. In this case, after the timeout period expires, the deployment
is rejected.
Video
Related articles
Approvals and gates overview
Manual intervention
Use approvals and gates to control your deployment
Security Compliance and Assessment task
Stages
Triggers
Additional resources
Video: Deploy quicker and safer with gates in Azure Pipelines
Configure your release pipelines for safe deployments
Tutorial: Use approvals and gates to control your deployment
Twitter sentiment as a release gate
GitHub issues as a release gate
Author custom gates. Library with examples
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
By using a combination of manual deployment approvals, gates, and manual intervention within a release pipeline
in Azure Pipelines and Team Foundation Server (TFS), you can quickly and easily configure a release pipeline with
all the control and auditing capabilities you require for your DevOps CI/CD processes.
In this tutorial, you learn about:
Extending the approval process with gates
Extending the approval process with manual intervention
Viewing and monitoring approvals and gates
Prerequisites
This tutorial extends the tutorial Define your multi-stage continuous deployment (CD) pipeline. You must have
completed that tutorial first.
You'll also need a work item query that returns some work items from Azure Pipelines or TFS. This query is used
in the gate you will configure. You can use one of the built-in queries, or create a new one just for this gate to use.
For more information, see Create managed queries with the query editor.
In the previous tutorial, you saw a simple use of manual approvals to allow an administrator to confirm that a
release is ready to deploy to the production stage. In this tutorial, you'll see some additional and more powerful
ways to configure approvals for releases and deployments by using manual intervention and gates. For more
information about the ways you can configure approvals for a release, see Approvals and gates overview.
Configure a gate
First, you will extend the approval process for the release by adding a gate. Gates allow you to configure
automated calls to external services, where the results are used to approve or reject a deployment. You can use
gates to ensure that the release meets a wide range of criteria, without requiring user intervention.
1. In the Releases tab of Azure Pipelines , select your release pipeline and choose Edit to open the pipeline
editor.
2. Choose the pre-deployment conditions icon for the Production stage to open the conditions panel. Enable
gates by using the switch control in the Gates section.
3. To allow gate functions to initialize and stabilize (it may take some time for them to begin returning
accurate results), you configure a delay before the results are evaluated and used to determine if the
deployment should be approved or rejected. For this example, so that you can see a result reasonably
quickly, set the delay to a short period such as one minute.
6. Open the Evaluation options section and specify the timeout and the sampling interval. For this example,
choose short periods so that you can see the results reasonably quickly. The minimum values you can
specify are 6 minutes timeout and 5 minutes sampling interval.
The sampling interval and timeout work together so that the gates will call their functions at suitable
intervals, and reject the deployment if they don't all succeed during the same sampling interval and
within the timeout period. For more details, see Gates.
For more information about using other types of approval gates, see Approvals and gates.
2. Choose the ellipses (...) in the QA deployment pipeline bar and then choose Add agentless job .
Several tasks, including the Manual Intervention task, can be used only in an agentless job.
3. Drag and drop the new agentless job to the start of the QA process, before the existing agent job. Then
choose + in the Agentless job bar and add a Manual Intervention task to the job.
4. Configure the task by entering a message (the Instructions ) to display when it executes and pauses the
release pipeline.
Notice that you can specify a list of users who will receive a notification that the deployment is waiting for
manual approval. You can also specify a timeout and the action (approve or reject) that will occur if there is
no user response within the timeout period. For more details, see Manual Intervention task.
5. Save the release pipeline and then start a new release.
3. You see the intervention message, and can choose to resume or reject the deployment. Enter some text
response to the intervention and choose Resume .
4. Go back to the pipeline view of the release. After deployment to the QA stage succeeds, you see the pre-
deployment approval pending message for the Production environment.
5. Enter your approval message and choose Approve to continue the deployment.
6. Go back to the pipeline view of the release. Now you see that the gates are being processed before the
release continues.
7. After the gate evaluation has successfully completed, the deployment occurs for the Production stage.
Choose the Production stage icon in the release summary to see more details of the approvals and gate
evaluations.
Altogether, by using a combination of manual approvals, approval gates, and the manual intervention task, you've
seen how you can configure a release pipeline with all the control and auditing capabilities you may require.
Next step
Integrate with ServiceNow change management
Release deployment control using approvals
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
When a release is created from a release pipeline that defines approvals, the deployment stops at each point
where approval is required until the specified approver grants approval or rejects the release (or re-assigns the
approval to another user). You can enable manual deployment approvals for each stage in a release pipeline.
The link in the email message opens the Summary page for the release where the user can approve or reject the
release.
Related articles
Approvals and gates overview
Manual intervention
Stages
Triggers
Runs represent one execution of a pipeline. During a run, the pipeline is processed, and agents process one or
more jobs. A pipeline run includes jobs, steps, and tasks. Runs power both continuous integration (CI) and
continuous delivery (CD) pipelines.
When you run a pipeline, a lot of things happen under the covers. While you often won't need to know about
them, once in a while it's useful to have the big picture. At a high level, Azure Pipelines will:
Process the pipeline
Request one or more agents to run jobs
Hand off jobs to agents and collect the results
On the agent side, for each job, an agent will:
Get ready for the job
Run each step in the job
Report results to Azure Pipelines
Jobs may succeed, fail, or be canceled. There are also situations where a job may not complete. Understanding
how this happens can help you troubleshoot issues.
Let's break down each action one by one.
To turn a pipeline into a run, Azure Pipelines goes through several steps in this order:
1. First, expand templates and evaluate template expressions.
2. Next, evaluate dependencies at the stage level to pick the first stage(s) to run.
3. For each stage selected to run, two things happen:
All resources used in all jobs are gathered up and validated for authorization to run.
Evaluate dependencies at the job level to pick the first job(s) to run.
4. For each job selected to run, expand multi-configs ( strategy: matrix or strategy: parallel in YAML) into
multiple runtime jobs (see the sketch below).
5. For each runtime job, evaluate conditions to decide whether that job is eligible to run.
6. Request an agent for each eligible runtime job.
As runtime jobs complete, Azure Pipelines will see if there are new jobs eligible to run. If so, steps 4 - 6 repeat with
the new jobs. Similarly, as stages complete, steps 2 - 6 will be repeated for any new stages.
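For example, a minimal matrix sketch (hypothetical job and variable names) that expands a single job definition
into two runtime jobs:
jobs:
- job: Build
  strategy:
    matrix:
      linux:
        imageName: 'ubuntu-latest'
      windows:
        imageName: 'windows-latest'
  pool:
    vmImage: $(imageName)
  steps:
  - script: echo Building on $(imageName)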
This ordering helps answer a common question: why can't I use certain variables in my template parameters? Step
1, template expansion, operates solely on the text of the YAML document. Runtime variables don't exist during that
step. After step 1, template parameters have been completely resolved and no longer exist.
It also answers another common issue: why can't I use variables to resolve service connection / environment
names? Resources are authorized before a stage can start running, so stage- and job-level variables aren't
available. Pipeline-level variables can be used, but only those explicitly included in the pipeline. Variable groups
are themselves a resource subject to authorization, so their data is likewise not available when checking resource
authorization.
Request an agent
Whenever Azure Pipelines needs to run a job, it will ask the pool for an agent. (Server jobs are an exception, since
they run on the Azure Pipelines server itself.) Microsoft-hosted and self-hosted agent pools work slightly
differently.
Microsoft-hosted agent pool requests
First, the service checks on your organization's parallel jobs. It adds up all running jobs on all Microsoft-hosted
agents and compares that with the number of parallel jobs purchased. If there are no available parallel slots, the
job has to wait on a slot to free up.
Once a parallel slot is available, the job is routed to the requested agent type. Conceptually, the Microsoft-hosted
pool is one giant, global pool of machines. (In reality, it's a number of different physical pools split by geography
and operating system type.) Based on the vmImage (in YAML) or pool name (in the classic editor) requested, an
agent is selected.
All agents in the Microsoft pool are fresh, new virtual machines which haven't run any pipelines before. When the
job completes, the agent VM will be discarded.
Self-hosted agent pool requests
Similar to the Microsoft-hosted pool, the service first checks on your organization's parallel jobs. It adds up all
running jobs on all self-hosted agents and compares that with the number of parallel jobs purchased. If there are
no available parallel slots, the job has to wait on a slot to free up.
Once a parallel slot is available, the self-hosted pool is examined for a compatible agent. Self-hosted agents offer
capabilities, which are strings indicating that particular software is installed or settings are configured. The
pipeline has demands, which are the capabilities required to run the job. If a free agent whose capabilities match
the pipeline's demands cannot be found, the job will continue waiting. If there are no agents in the pool whose
capabilities match the demands, the job will fail.
Self-hosted agents are typically re-used from run to run. This means that a pipeline job can have side effects:
warming up caches, having most commits already available in the local repo, and so on.
Steps are implemented by tasks. Tasks themselves are implemented as Node.js or PowerShell scripts. The task
system routes inputs and outputs to the backing scripts. It also provides some common services such as altering
the system path and creating new pipeline variables.
Each step runs in its own process, isolating it from the environment left by previous steps. Because of this process-
per-step model, environment variables are not preserved between steps. However, tasks and scripts have a
mechanism to communicate back to the agent: logging commands. When a task or script writes a logging
command to standard out, the agent will take whatever action is requested.
There is an agent command to create new pipeline variables. Pipeline variables will be automatically converted
into environment variables in the next step. In order to set a new variable myVar with a value of myValue , a script
can do this:
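A minimal sketch of that logging command, written to standard output by the script:
echo "##vso[task.setvariable variable=myVar]myValue"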
As steps run, the agent is constantly sending output lines to the service. That's why you can see a live feed of the
console. At the end of each step, the entire output from the step is also uploaded as a log file. Logs can be
downloaded once the pipeline has finished. Other items that the agent can upload include artifacts and test
results. These are also available after the pipeline completes.
Optional parameters
branch : Filter by builds for this branch.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
pipeline-ids : Space-separated IDs of definitions for which to list builds.
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .
query-order : Define the order in which pipeline runs are listed. Accepted values are FinishTimeAsc,
FinishTimeDesc, QueueTimeAsc, QueueTimeDesc, StartTimeAsc, and StartTimeDesc.
reason : Only list builds for this specified reason. Accepted values are batchedCI, buildCompletion,
checkInShelveset, individualCI, manual, pullRequest, schedule, triggered, userCreated, and validateShelveset.
requested-for : Limit to the builds requested for a specified user or group.
result : Limit to the builds with a specified result. Accepted values are canceled, failed, none, partiallySucceeded,
and succeeded.
status : Limit to the builds with a specified status. Accepted values are all, cancelling, completed, inProgress,
none, notStarted, and postponed.
tags : Limit to the builds with each of the specified tags. Space separated.
top : Maximum number of builds to list.
Example
The following command lists the first three pipeline runs which have a status of completed and a result of
succeeded , and returns the result in table format.
az pipelines runs list --status completed --result succeeded --top 3 --output table
Run ID Number Status Result Pipeline ID Pipeline Name Source Branch Queued
Time Reason
-------- ---------- --------- --------- ------------- -------------------------- --------------- ------
-------------------- ------
125 20200124.1 completed succeeded 12 Githubname.pipelines-java master 2020-
01-23 18:56:10.067588 manual
123 20200123.2 completed succeeded 12 Githubname.pipelines-java master 2020-
01-23 11:55:56.633450 manual
122 20200123.1 completed succeeded 12 Githubname.pipelines-java master 2020-
01-23 11:48:05.574742 manual
Parameters
id : Required. ID of the pipeline run.
open : Optional. Opens the build results page in your web browser.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .
Example
The following command shows details for the pipeline run with the ID 123 and returns the results in table format.
It also opens your web browser to the build results page.
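The command for this example (a sketch matching the parameters described above) is:
az pipelines runs show --id 123 --open --output table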
Run ID Number Status Result Pipeline ID Pipeline Name Source Branch Queued
Time Reason
-------- ---------- --------- --------- ------------- -------------------------- --------------- ------
-------------------- --------
123 20200123.2 completed succeeded 12 Githubname.pipelines-java master 2020-
01-23 11:55:56.633450 manual
Parameters
run-id : Required. ID of the pipeline run.
tags : Required. Tags to be added to the pipeline run (comma-separated values).
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .
Example
The following command adds the tag YAML to the pipeline run with the ID 123 and returns the result in JSON
format.
az pipelines runs tag add --run-id 123 --tags YAML --output json
[
"YAML"
]
Parameters
run-id : Required. ID of the pipeline run.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .
Example
The following command lists the tags for the pipeline run with the ID 123 and returns the result in table format.
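The command for this example (a sketch) is:
az pipelines runs tag list --run-id 123 --output table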
Tags
------
YAML
Parameters
run-id : Required. ID of the pipeline run.
tag : Required. Tag to be deleted from the pipeline run.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .
Example
The following command deletes the YAML tag from the pipeline run with ID 123 .
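A sketch of the corresponding command:
az pipelines runs tag delete --run-id 123 --tag YAML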
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
At run-time, each job in a pipeline may access other resources in Azure DevOps. For example, a job may:
Check out source code from a Git repository
Add a tag to the repository
Access a feed in Azure Artifacts
Upload logs from the agent to the service
Upload test results and other artifacts from the agent to the service
Update a work item
Azure Pipelines uses job access tokens to perform these tasks. A job access token is a security token that is
dynamically generated by Azure Pipelines for each job at run time. The agent on which the job is running uses the
job access token in order to access these resources in Azure DevOps. You can control which resources your
pipeline has access to by controlling how permissions are granted to job access tokens.
The token's permissions are derived from (a) job authorization scope and (b) the permissions you set on project or
collection build service account.
NOTE
In Azure DevOps Server 2020, Limit job authorization scope to current project applies only to YAML pipelines and
classic build pipelines. It does not apply to classic release pipelines. Classic release pipelines always run with project collection
scope.
NOTE
If the scope is set to project at the organization level, you cannot change the scope in each project.
IMPORTANT
If the scope is not restricted at either the organization level or project level, then every job in your YAML pipeline gets a
collection scoped job access token. In other words, your pipeline has access to any repository in any project of your
organization. If an adversary is able to gain access to a single pipeline in a single project, they will be able to gain access to
any repository in your organization. This is why it's recommended that you restrict the scope at the highest level
(organization settings) to contain an attack to a single project.
If you use Azure DevOps Server 2019, then all YAML jobs run with the job authorization scope set to collection . In
other words, these jobs have access to all repositories in your project collection. You cannot change this in Azure
DevOps Server 2019.
YAML pipelines are not available in TFS.
NOTE
If your pipeline is in a public project , then the job authorization scope is automatically restricted to project no matter
what you configure in any setting. Jobs in a public project can access resources such as build artifacts or test results only
within the project and not from other projects of the organization.
By default, the collection-scoped identity is used, unless configured otherwise as described in the previous Job
authorization scope section.
2. Choose the + icon, start to type in the name SpaceGameWeb , and select the SpaceGameWeb Build
Service account.
3. Configure the desired permissions for that user.
Example - Configure permissions to access other resources in the same project collection
In this example, the fabrikam-tailspin/SpaceGameWeb project-scoped build identity is granted permissions to access
other resources in the fabrikam-tailspin/FabrikamFiber project.
1. In the FabrikamFiber project, navigate to Project settings , Permissions .
2. Choose Users , start to type in the name SpaceGameWeb , and select the SpaceGameWeb Build
Service account. If you don't see any search results initially, select Expand search .
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019
Teams track their pipeline health and efficiency to ensure continuous delivery to their customers. You can gain
visibility into your team's pipeline(s) using Pipeline analytics. The source of information for pipeline analytics is the
set of runs for your pipeline. These analytics are accrued over a period of time, and form the basis of the rich
insights offered. Pipelines reports show you metrics, trends, and can help you identify insights to improve the
efficiency of your pipeline.
Prerequisites
Ensure that you have installed the Analytics Marketplace extension for Azure DevOps Server.
Failure trend : Shows the number of failures per day. This data is divided by stages if multiple stages are
applicable for the pipeline.
Top failing tasks & their failed runs : Lists the top failing tasks, their trend and provides pointers to their
failed runs. Analyze the failures in the build to fix your failing task and improve the pass rate of the pipeline.
Pipeline duration report
The Pipeline duration report shows how long your pipeline typically takes to complete successfully. You can
review the duration trend and analyze the top tasks by duration to optimize the duration of the pipeline.
Test failures report
The Test failures report provides a granular view of the top failing tests in the pipeline, along with the failure
details. For more information on this report, see Test failures.
Filters
Pipelines reports can be further filtered by date range or branch.
Date range : The default view shows data from the last 14 days. The filter helps change this range.
Branch filter : View the report for a particular branch or a set of branches.
Azure DevOps Services | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Widgets smartly format data to provide easily consumable information. You add widgets to your team
dashboards to gain visibility into the status and trends occurring as you develop your software project.
Each widget provides access to a chart, user-configurable information, or a set of links that open a feature or
function. You can add up to 200 charts or widgets in total to your dashboard, and you can add several
widgets at a time simply by selecting each one. See Manage dashboards to determine the permissions you need to
add and remove widgets from a dashboard.
Prerequisites
You must be a member of a project. If you don't have a project yet, create one.
If you haven't been added as a project member, get added now.
Anyone with access to a project, including stakeholders, can view dashboards.
To add, edit, or manage a team dashboard, you must have Basic access or greater and be a team admin, a
project admin, or have dashboard permissions. In general, you need to be a team member for the currently
selected team to edit dashboards.
You must be a member of a project. If you don't have a project yet, create one.
If you haven't been added as a project member, get added now.
Anyone with access to a project, including stakeholders, can view dashboards.
To add, edit, or manage a team dashboard, you must have Basic access or greater and be a team admin, a
project admin, or have dashboard permissions. In general, you need to be a team admin for the currently
selected team to edit dashboards. Request your current team or project admin to add you as a team admin.
You must be a member of a project. If you don't have a project yet, create one.
If you haven't been added as a project member, get added now.
Anyone with access to a project, including stakeholders, can view dashboards.
To add, edit, or manage a team dashboard, you must have Basic access or greater and be added to the team
administrator role for the team.
NOTE
Widgets specific to a service are disabled if the service they depend on has been disabled. For example, if Boards is disabled,
New Work item and all work tracking Analytics widgets are disabled and won't appear in the widget catalog. If Analytics is
disabled or not installed, then all Analytics widgets are disabled.
To re-enable a service, see Turn an Azure DevOps service on or off. For Analytics, see enable or install Analytics.
Select a dashboard
All dashboards are associated with a team. You need to be a team administrator, project administrator, or a team
member with permissions to modify a dashboard.
1. Open a web browser, connect to your project, and choose Over view > Dashboards . The dashboard
directory page opens.
If you need to switch to a different project, choose the Azure DevOps logo to browse all projects.
2. Choose the dashboard you want to modify.
Open a web browser, connect to your project, and choose Dashboards .
Select the team whose dashboards you want to view. To switch your team focus, see Switch project or team focus.
Choose the name of the dashboard to modify it.
For example, here we choose to view the Work in Progress dashboard.
If you need to switch to a different project, choose the Azure DevOps logo to browse all projects.
Add a widget
To add widgets to the dashboard, choose Edit .
The widget catalog will automatically open. Add all the widgets that you want and drag their tiles into the sequence
you want.
When you're finished with your additions, choose Done Editing to exit dashboard editing. This will dismiss the
widget catalog. You can then configure the widgets as needed.
TIP
When you're in dashboard edit mode, you can remove, rearrange, and configure widgets, as well as add new widgets. Once
you leave edit mode, the widget tiles remain locked, reducing the chances of accidentally moving a widget.
To remove a widget, choose the actions icon and select the Delete option from the menu.
Configure a widget
Most widgets support configuration, which may include specifying the title, setting the widget size, and other
widget-specific variables.
To configure a widget, add the widget to a dashboard, open the widget's actions menu, and select Configure .
To configure a widget, add the widget to a dashboard and then choose the configure icon.
Once you've configured the widget, you can edit it by opening the actions menu.
When you're finished with your changes, choose Done Editing to exit dashboard editing.
Choose Edit to modify your dashboard. You can then drag tiles to reorder their sequence on the dashboard.
To remove a widget, choose the actions icon and select the Delete option from the menu.
When you're finished with your changes, choose Done Editing to exit dashboard editing.
Copy a widget
You can copy a widget to the same dashboard or to another team dashboard. If you want to move widgets you
have configured to another dashboard, this is how you do it. Before you begin, add the dashboard you want to
copy or move the widget to. Once you've copied the widget, you can delete it from the current dashboard.
To copy a configured widget to another team dashboard, choose the actions icon and select Copy to
dashboard and then the dashboard to copy it to.
To copy a configured widget to another team dashboard, choose the actions icon and select Add to
dashboard and then the dashboard to copy it to.
Widget size
Some widgets are pre-sized and can't be changed. Others are configurable through their configuration dialog.
For example, the Chart for work items widget allows you to select an area size ranging from 2 x 2 to 4 x 4 (tiles).
Extensibility and Marketplace widgets
In addition to the widgets described in the Widget catalog, you can add widgets from the Marketplace, or create
your own widgets using the Widget REST APIs.
Disabled Marketplace widget
If your organization owner or project collection administrator disables a marketplace widget, the widget displays a
notification that it has been disabled.
To regain access to it, request your admin to reinstate or reinstall the widget.
Related articles
Analytics-based widgets
What is Analytics?
Burndown guidance
Cumulative flow & lead/cycle time guidance
Velocity guidance
Burndown guidance
Cumulative flow & lead/cycle time guidance
Velocity guidance
Widgets based on Analytics
Azure DevOps Services | Azure DevOps Server 2020 | Azure DevOps Server 2019
Analytics supports several dashboard widgets that take advantage of the power of the service. Using these widgets,
you and your team can gain valuable insights into the health and status of your work.
Analytics supports several dashboard widgets that take advantage of the power of the service. Once you enable or
install Analytics on a project collection, you can add these widgets to your dashboard. You must be an organization
owner or a member of the Project Collection Administrator group to add extensions or enable the service. Using
these widgets, you and your team can gain valuable insights into the health and status of your work.
You add an Analytics widget to a dashboard the same way you add any other type of widget. For details, see Add a
widget to your dashboard.
NOTE
If Boards is disabled, then Analytics views will also be disabled and all widgets associated with work item tracking won't
appear in the widget catalog and will become disabled. To re-enable a service, see Turn an Azure DevOps service on or off.
Burndown
The Burndown widget lets you display a trend of remaining work across multiple teams and multiple sprints. You
can use it to create a release burndown, a bug burndown, or a burndown on any scope of work over time. It will
help you answer questions like:
Will we complete the scope of work by the targeted completion date? If not, what is the projected completion
date?
What kind of scope creep does my project have?
What is the projected completion date for my project?
Burndown widget showing a release Burndown
To learn more, see Configure a Burndown or Burnup widget.
Burnup
The Burnup widget lets you display a trend of completed work across multiple teams and multiple sprints. You can
use it to create a release burnup, a bug burnup, or a burnup on any scope of work over time. When completed
work meets total scope, your project is done!
Burnup widget showing a release Burnup
Cycle Time
The Cycle time widget will help you analyze the time it takes for your team to complete work items once they begin
actively working on them. A lower cycle time is typically indicative of a healthier team process. Using the Cycle time
widget you will be able to answer questions like:
On average, how long does it take my team to build a feature or fix a bug?
Are bugs costing my team a lot of development time?
Cycle time widget showing 30 days of data
To learn more, see Cycle time and lead time control charts.
Lead Time
The Lead time widget will help you analyze the time it takes to deliver work from your backlog. Lead time
measures the total time elapsed from the creation of work items to their completion. Using the Lead time widget
you will be able to answer questions like:
How long does it take for work requested by a customer to be delivered?
Did work items take longer than usual to complete?
Lead time widget showing 60 days of data
To learn more, see Cycle time and lead time control charts.
Velocity
The Velocity widget will help you learn how much work your team can complete during a sprint. The widget shows
the team's velocity by Story Points, work item count, or any custom field. You can also compare the work delivered
against your plan and track work completed late. Using the Velocity widget, you will be able to answer questions
like:
On average, what is the velocity of my team?
Is my team consistently delivering what we planned?
How much work can we commit to deliver in upcoming sprints?
Velocity widget showing 8 sprints of data based on Story Points
To learn more, see Configure and view Velocity widgets.
Build and deploy your apps. Find guidance based on your language and platform.
Anaconda
Android
ASP.NET
Containers
Go
Java
PHP
Python
Ruby
UWP
Xamarin
Xcode
.NET Core
Android
ASP.NET
Containers
Go
Java
PHP
Python
Ruby
UWP
Xamarin
Xcode
Azure Stack
Azure SQL database
Linux VM
npm
NuGet
VMware
Windows VM
Build, test, and deploy .NET Core apps
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use a pipeline to automatically build and test your .NET Core projects. Learn how to:
Set up your build environment with Microsoft-hosted or self-hosted agents.
Restore dependencies, build your project, and test with the .NET Core CLI task or a script.
Use the publish code coverage task to publish code coverage results.
Package and deliver your code with the .NET Core CLI task and the publish build artifacts task.
Publish to a NuGet feed.
Deploy your web app to Azure.
NOTE
For help with .NET Framework projects, see Build ASP.NET apps with .NET Framework.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
NOTE
This guidance applies to TFS version 2017.3 and newer.
https://github.com/MicrosoftDocs/pipelines-dotnet-core
1. When your new pipeline appears, take a look at the YAML to see what it does. When you're ready, select
Save and run .
2. You're prompted to commit a new azure-pipelines.yml file to your repository. After you're happy with the
message, select Save and run again.
If you want to watch your pipeline in action, select the build job.
You just created and ran a pipeline that we automatically created for you, because your code
appeared to be a good match for the ASP.NET Core template.
You now have a working YAML pipeline ( azure-pipelines.yml ) in your repository that's ready for you to
customize!
3. When you're ready to make changes to your pipeline, select it in the Pipelines page, and then Edit the
azure-pipelines.yml file.
4. See the sections below to learn some of the more common ways to customize your pipeline.
YAML
1. Add an azure-pipelines.yml file in your repository. Customize this snippet for your build.
trigger:
- master
pool: Default
variables:
buildConfiguration: 'Release'
2. Create a pipeline (if you don't know how, see Create your first pipeline), and for the template select YAML .
3. Set the Agent pool and YAML file path for your pipeline.
4. Save the pipeline and queue a build. When the Build #nnnnnnnn.n has been queued message
appears, select the number link to see your pipeline in action.
5. When you're ready to make changes to your pipeline, Edit it.
6. See the sections below to learn some of the more common ways to customize your pipeline.
Classic
1. Create a pipeline (if you don't know how, see Create your first pipeline), and for the template select
Empty Pipeline .
2. In the task catalog, find and add the .NET Core task. This task will run dotnet build to build the code in
the sample repository.
3. Save the pipeline and queue a build. When the Build #nnnnnnnn.n has been queued message
appears, select the number link to see your pipeline in action.
You now have a working pipeline that's ready for you to customize!
4. When you're ready to make changes to your pipeline, Edit it.
5. See the sections below to learn some of the more common ways to customize your pipeline.
Build environment
You can use Azure Pipelines to build your .NET Core projects on Windows, Linux, or macOS without needing to
set up any infrastructure of your own. The Microsoft-hosted agents in Azure Pipelines have several released
versions of the .NET Core SDKs preinstalled.
Ubuntu 18.04 is set here in the YAML file.
pool:
vmImage: 'ubuntu-18.04' # examples of other options: 'macOS-10.15', 'windows-2019'
See Microsoft-hosted agents for a complete list of images and Pool for further examples.
The Microsoft-hosted agents don't include some of the older versions of the .NET Core SDK. They also don't
typically include prerelease versions. If you need these kinds of SDKs on Microsoft-hosted agents, add the
UseDotNet@2 task to your YAML file.
To install the preview version of the 5.0.x SDK for building and 3.0.x for running tests that target .NET Core 3.0.x,
add this snippet:
steps:
- task: UseDotNet@2
inputs:
version: '5.0.x'
includePreviewVersions: true # Required for preview versions
- task: UseDotNet@2
inputs:
version: '3.0.x'
packageType: runtime
If you are installing on a Windows agent, it will already have a .NET Core runtime on it. To install a newer SDK,
set performMultiLevelLookup to true in this snippet:
steps:
- task: UseDotNet@2
displayName: 'Install .NET Core SDK'
inputs:
version: 5.0.x
performMultiLevelLookup: true
includePreviewVersions: true # Required for preview versions
TIP
As an alternative, you can set up a self-hosted agent and save the cost of running the tool installer. See Linux, MacOS, or
Windows. You can also use self-hosted agents to save additional time if you have a large repository or you run
incremental builds. A self-hosted agent can also help you use preview or private SDKs that aren't officially
supported by Azure DevOps or that are available only in your corporate or on-premises environments.
You can build your .NET Core projects by using the .NET Core SDK and runtime on Windows, Linux, or macOS.
Your builds run on a self-hosted agent. Make sure that you have the necessary version of the .NET Core SDK and
runtime installed on the agent.
Restore dependencies
NuGet is a popular way to depend on code that you don't build. You can download NuGet packages and project-
specific tools that are specified in the project file by running the dotnet restore command either through the
.NET Core task or directly in a script in your pipeline.
You can download NuGet packages from Azure Artifacts, NuGet.org, or some other external or internal NuGet
repository. The .NET Core task is especially useful to restore packages from authenticated NuGet feeds.
This pipeline uses an artifact feed for dotnet restore in the .NET Core CLI task.
trigger:
- master
pool:
vmImage: 'windows-latest'
variables:
buildConfiguration: 'Release'
steps:
- task: DotNetCoreCLI@2
inputs:
command: 'restore'
feedsToUse: 'select'
vstsFeed: 'my-vsts-feed' # A series of numbers and letters
- task: DotNetCoreCLI@2
inputs:
command: 'build'
arguments: '--configuration $(buildConfiguration)'
displayName: 'dotnet build $(buildConfiguration)'
For more information about NuGet service connections, see publish to NuGet feeds.
1. Select Tasks in the pipeline. Select the job that runs your build tasks. Then select + to add a new task to
that job.
2. In the task catalog, find and add the .NET Core task.
3. Select the task and, for Command , select restore .
4. Specify any other options you need for this task. Then save the build.
NOTE
Make sure the custom feed is specified in your NuGet.config file and that credentials are specified in the NuGet service
connection.
steps:
- task: DotNetCoreCLI@2
displayName: Build
inputs:
command: build
projects: '**/*.csproj'
arguments: '--configuration $(buildConfiguration)' # Update this to match your need
You can run any custom dotnet command in your pipeline. The following example shows how to install and use
a .NET global tool, dotnetsay:
steps:
- task: DotNetCoreCLI@2
displayName: 'Install dotnetsay'
inputs:
command: custom
custom: tool
arguments: 'install -g dotnetsay'
Build
1. Select Tasks in the pipeline. Select the job that runs your build tasks. Then select + to add a new task to
that job.
2. In the task catalog, find and add the .NET Core task.
3. Select the task and, for Command , select build or publish .
4. Specify any other options you need for this task. Then save the build.
Install a tool
To install a .NET Core global tool like dotnetsay in your build running on Windows, take the following steps:
1. Add the .NET Core task and set the following properties:
Command : custom.
Path to projects : leave empty.
Custom command : tool.
Arguments : install -g dotnetsay .
2. Add a Command Line task and set the following properties:
Script: dotnetsay .
steps:
# ...
# do this after other tasks such as building
- task: DotNetCoreCLI@2
inputs:
command: test
projects: '**/*Tests/*.csproj'
arguments: '--configuration $(buildConfiguration)'
An alternative is to run the dotnet test command with a specific logger and then use the Publish Test
Results task:
steps:
# ...
# do this after your tests have run
- script: dotnet test <test-project> --logger trx
- task: PublishTestResults@2
condition: succeededOrFailed()
inputs:
testRunner: VSTest
testResultsFiles: '**/*.trx'
Use the .NET Core task with Command set to test . Path to projects should refer to the test projects in your
solution.
steps:
# ...
# do this after other tasks such as building
- task: DotNetCoreCLI@2
inputs:
command: test
projects: '**/*Tests/*.csproj'
arguments: '--configuration $(buildConfiguration) --collect "Code coverage"'
If you choose to run the dotnet test command, specify the test results logger and coverage options. Then use
the Publish Test Results task:
steps:
# ...
# do this after your tests have run
- script: dotnet test <test-project> --logger trx --collect "Code coverage"
- task: PublishTestResults@2
inputs:
testRunner: VSTest
testResultsFiles: '**/*.trx'
1. Add the .NET Core task to your build job and set the following properties:
Command : test.
Path to projects : Should refer to the test projects in your solution.
Arguments : --configuration $(BuildConfiguration) --collect "Code coverage" .
2. Ensure that the Publish test results option remains selected.
Collect code coverage metrics with Coverlet
If you're building on Linux or macOS, you can use Coverlet or a similar tool to collect code coverage metrics.
Code coverage results can be published to the server by using the Publish Code Coverage Results task. To
leverage this functionality, the coverage tool must be configured to generate results in Cobertura or JaCoCo
coverage format.
To run tests and publish code coverage with Coverlet:
Add a reference to the coverlet.msbuild NuGet package in your test project(s).
Add this snippet to your azure-pipelines.yml file:
- task: DotNetCoreCLI@2
displayName: 'dotnet test'
inputs:
command: 'test'
arguments: '--configuration $(buildConfiguration) /p:CollectCoverage=true /p:CoverletOutputFormat=cobertura /p:CoverletOutput=$(Build.SourcesDirectory)/TestResults/Coverage/'
publishTestResults: true
projects: '**/test-library/*.csproj' # update with your test project directory
- task: PublishCodeCoverageResults@1
displayName: 'Publish code coverage report'
inputs:
codeCoverageTool: 'Cobertura'
summaryFileLocation: '$(Build.SourcesDirectory)/**/coverage.cobertura.xml'
steps:
- task: DotNetCoreCLI@2
inputs:
command: publish
publishWebProjects: True
arguments: '--configuration $(BuildConfiguration) --output $(Build.ArtifactStagingDirectory)'
zipAfterPublish: True
# this code takes all the files in $(Build.ArtifactStagingDirectory) and uploads them as an artifact of your build.
- task: PublishBuildArtifacts@1
inputs:
pathtoPublish: '$(Build.ArtifactStagingDirectory)'
artifactName: 'myWebsiteName'
NOTE
The dotNetCoreCLI@2 task has a publishWebProjects input that is set to true by default. This publishes all web
projects in your repo by default. You can find more help and information in the open source task on GitHub.
To copy additional files to Build directory before publishing, use Utility: copy files.
Publish to a NuGet feed
To create and publish a NuGet package, add the following snippet:
steps:
# ...
# do this near the end of your pipeline in most cases
- script: dotnet pack /p:PackageVersion=$(version) # define version variable elsewhere in your pipeline
- task: NuGetAuthenticate@0
inputs:
nuGetServiceConnections: '<Name of the NuGet service connection>'
- task: NuGetCommand@2
inputs:
command: push
nuGetFeedType: external
publishFeedCredentials: '<Name of the NuGet service connection>'
versioningScheme: byEnvVar
versionEnvVar: version
For more information about versioning and publishing NuGet packages, see publish to NuGet feeds.
Deploy a web app
To create a .zip file archive that's ready for publishing to a web app, add the following snippet:
steps:
# ...
# do this after you've built your app, near the end of your pipeline in most cases
# for example, you do this before you deploy to an Azure web app on Windows
- task: DotNetCoreCLI@2
inputs:
command: publish
publishWebProjects: True
arguments: '--configuration $(BuildConfiguration) --output $(Build.ArtifactStagingDirectory)'
zipAfterPublish: True
To publish this archive to a web app, see Azure Web Apps deployment.
Publish artifacts to Azure Pipelines
To simply publish the output of your build to Azure Pipelines or TFS, use the Publish Artifacts task.
Publish to a NuGet feed
If you want to publish your code to a NuGet feed, take the following steps:
1. Use a .NET Core task with Command set to pack.
2. Publish your package to a NuGet feed.
Deploy a web app
1. Use a .NET Core task with Command set to publish.
2. Make sure you've selected the option to create a .zip file archive.
3. To publish this archive to a web app, see Azure Web Apps deployment.
Troubleshooting
If you're able to build your project on your development machine, but you're having trouble building it on Azure
Pipelines or TFS, explore the following potential causes and corrective actions:
We don't install prerelease versions of the .NET Core SDK on Microsoft-hosted agents. After a new version of
the .NET Core SDK is released, it can take a few weeks for us to roll it out to all the datacenters that Azure
Pipelines runs on. You don't have to wait for us to finish this rollout. You can use the .NET Core Tool
Installer , as explained in this guidance, to install the desired version of the .NET Core SDK on Microsoft-
hosted agents.
Check that the versions of the .NET Core SDK and runtime on your development machine match those on
the agent. You can include a command-line script dotnet --version in your pipeline to print the version
of the .NET Core SDK. Either use the .NET Core Tool Installer , as explained in this guidance, to deploy
the same version on the agent, or update your projects and development machine to the newer version
of the .NET Core SDK.
You might be using some logic in the Visual Studio IDE that isn't encoded in your pipeline. Azure Pipelines
or TFS runs each of the commands you specify in the tasks one after the other in a new process. Look at
the logs from the Azure Pipelines or TFS build to see the exact commands that ran as part of the build.
Repeat the same commands in the same order on your development machine to locate the problem.
If you have a mixed solution that includes some .NET Core projects and some .NET Framework projects,
you should also use the NuGet task to restore packages specified in packages.config files. Similarly, you
should add MSBuild or Visual Studio Build tasks to build the .NET Framework projects.
If your builds fail intermittently while restoring packages, either NuGet.org is having issues, or there are
networking problems between the Azure datacenter and NuGet.org. These aren't under our control, and
you might need to explore whether using Azure Artifacts with NuGet.org as an upstream source
improves the reliability of your builds.
Occasionally, when we roll out an update to the hosted images with a new version of the .NET Core SDK
or Visual Studio, something might break your build. This can happen, for example, if a newer version or
feature of the NuGet tool is shipped with the SDK. To isolate these problems, use the .NET Core Tool
Installer task to specify the version of the .NET Core SDK that's used in your build.
FAQ
Where can I learn more about Azure Artifacts and the TFS Package Management service?
Package Management in Azure Artifacts and TFS
Where can I learn more about .NET Core commands?
.NET Core CLI tools
Where can I learn more about running tests in my solution?
Unit testing in .NET Core projects
Where can I learn more about tasks?
Build and release tasks
Build ASP.NET apps with .NET Framework
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
NOTE
This article focuses on building .NET Framework projects with Azure Pipelines. For help with .NET Core projects, see .NET
Core.
NOTE
This guidance applies to TFS version 2017.3 and newer.
https://github.com/Microsoft/devops-project-samples.git
The sample repo includes several different projects, and the sample application for this article is located in the
following path:
https://github.com/Microsoft/devops-project-samples
You will use the code in /dotnet/aspnet/webapp/ . Your azure-pipelines.yml file needs to run from within the
dotnet/aspnet/webapp/Application folder for the build to complete successfully.
The sample app is a Visual Studio solution that has two projects:
An ASP.NET Web Application project that targets .NET Framework 4.5
A Unit Test project
Sign in to Azure Pipelines
Sign in to Azure Pipelines. After you sign in, your browser goes to https://dev.azure.com/my-organization-name
and displays your Azure DevOps dashboard.
Within your selected organization, create a project. If you don't have any projects in your organization, you see a
Create a project to get started screen. Otherwise, select the Create Project button in the upper-right corner
of the dashboard.
NOTE
This scenario works on TFS, but some of the following instructions might not exactly match the version of TFS that you are
using. Also, you'll need to set up a self-hosted agent, possibly also installing software. If you are a new user, you might have
a better learning experience by trying this procedure out first using a free Azure DevOps organization. Then change the
selector in the upper-left corner of this page from Team Foundation Server to Azure DevOps .
After you have the sample code in your own repository, create a pipeline using the instructions in Create
your first pipeline and select the ASP.NET template. This automatically adds the tasks required to build the
code in the sample repository.
Save the pipeline and queue a build to see it in action.
Build environment
You can use Azure Pipelines to build your .NET Framework projects without needing to set up any infrastructure
of your own. The Microsoft-hosted agents in Azure Pipelines have several released versions of Visual Studio pre-
installed to help you build your projects.
Use windows-2019 for Windows Server 2019 with Visual Studio 2019
Use vs2017-win2016 for Windows Server 2016 with Visual Studio 2017
You can also use a self-hosted agent to run your builds. This is particularly helpful if you have a large repository
and you want to avoid downloading the source code to a fresh machine for every build.
Your builds run on a self-hosted agent. Make sure that you have the necessary version of the Visual Studio
installed on the agent.
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use a pipeline to build and test JavaScript and Node.js apps, and then deploy or publish to targets. Learn how
to:
Set up your build environment with Microsoft-hosted or self-hosted agents.
Use the npm task or a script to download packages for your build.
Implement JavaScript frameworks: Angular, React, or Vue.
Run unit tests and publish them with the publish test results task.
Use the publish code coverage task to publish code coverage results.
Publish npm packages with Azure artifacts.
Create a .zip file archive that is ready for publishing to a web app with the Archive Files task and deploy to
Azure.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are
called phases.
NOTE
This guidance applies to Team Foundation Server (TFS) version 2017.3 and newer.
https://github.com/MicrosoftDocs/pipelines-javascript
TIP
To make changes to the YAML file as described in this topic, select the pipeline in the Pipelines page, and then Edit the
azure-pipelines.yml file.
YAML
1. The following code is a simple Node server implemented with the Express.js framework. Tests for the
app are written through the Mocha framework. To get started, fork this repo in GitHub.
https://github.com/MicrosoftDocs/pipelines-javascript
2. Add an azure-pipelines.yml file in your repository. This YAML assumes that you have Node.js with npm
installed on your server.
trigger:
- master
pool: Default
steps:
- script: |
npm install
npm run build
displayName: 'npm install and build'
3. Create a pipeline (if you don't know how, see Create your first pipeline), and for the template select
YAML .
4. Set the Agent pool and YAML file path for your pipeline.
5. Save the pipeline and queue a build. When the Build #nnnnnnnn.n has been queued message
appears, select the number link to see your pipeline in action.
6. When you're ready to make changes to your pipeline, Edit it.
7. See the sections below to learn some of the more common ways to customize your pipeline.
Classic
1. The following code is a simple Node server implemented with the Express.js framework. Tests for the
app are written through the Mocha framework. To get started, fork this repo in GitHub.
https://github.com/MicrosoftDocs/pipelines-javascript
2. After you have the sample code in your own repository, create a pipeline by using the instructions in
Create your first pipeline and select the Empty process template.
3. Select Process under the Tasks tab in the pipeline editor and change the properties as follows:
Agent queue: Hosted Ubuntu 1604
4. Add the following tasks to the pipeline in the specified order:
npm
Command: install
npm
Display name: npm test
Command: custom
Command and arguments: test
Publish Test Results
Leave all the default values for properties
Archive Files
Root folder or file to archive: $(System.DefaultWorkingDirectory)
Prepend root folder name to archive paths: Unchecked
Publish Build Artifacts
Leave all the default values for properties
5. Save the pipeline and queue a build to see it in action.
Learn some of the common ways to customize your JavaScript build process.
Build environment
You can use Azure Pipelines to build your JavaScript apps without needing to set up any infrastructure of your
own. You can use either Windows or Linux agents to run your builds.
Update the following snippet in your azure-pipelines.yml file to select the appropriate image.
pool:
vmImage: 'ubuntu-latest' # examples of other options: 'macOS-10.15', 'vs2017-win2016'
Tools that you commonly use to build, test, and run JavaScript apps - like npm, Node, Yarn, and Gulp - are pre-
installed on Microsoft-hosted agents in Azure Pipelines. For the exact version of Node.js and npm that is
preinstalled, refer to Microsoft-hosted agents. To install a specific version of these tools on Microsoft-hosted
agents, add the Node Tool Installer task to the beginning of your process.
You can also use a self-hosted agent.
Use a specific version of Node.js
If you need a version of Node.js and npm that is not already installed on the Microsoft-hosted agent, use the
Node tool installer task. Add the following snippet to your azure-pipelines.yml file.
NOTE
The hosted agents are regularly updated, and setting up this task can add significant time to each pipeline run while it
updates to a newer minor version. Use this task only when you need a specific Node version in your pipeline.
- task: NodeTool@0
inputs:
versionSpec: '12.x' # replace this value with the version that you need for your project
If you need a version of Node.js/npm that is not already installed on the agent:
1. In the pipeline, select Tasks , choose the phase that runs your build tasks, and then select + to add a new
task to that phase.
2. In the task catalog, find and add the Node Tool Installer task.
3. Select the task and specify the version of the Node.js runtime that you want to install.
To update just the npm tool, run the npm i -g npm@version-number command in your build process.
Use multiple node versions
You can build and test your app on multiple versions of Node by using a strategy and the Node tool installer
task.
pool:
vmImage: 'ubuntu-latest'
strategy:
matrix:
node_12_x:
node_version: 12.x
node_13_x:
node_version: 13.x
steps:
- task: NodeTool@0
inputs:
versionSpec: $(node_version)
- task: Npm@1
inputs:
command: 'install'
Run tools installed as dev dependencies by using npm's npx package runner, which looks for locally installed tools
first in its path resolution. The following example calls the mocha test runner, but the version installed as a dev
dependency is used before a globally installed (through npm install -g ) version.
To install tools that your project needs but that are not set as dev dependencies in package.json , call
npm install -g from a script stage in your pipeline.
The following example installs the latest version of the Angular CLI by using npm . The rest of the pipeline can
then use the ng tool from other script stages.
NOTE
On Microsoft-hosted Linux agents, preface the command with sudo , like sudo npm install -g .
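A minimal sketch of that script step, mirroring the Angular example later in this article (the display name is illustrative):
- script: npm install -g @angular/cli   # assumed: installs the Angular CLI globally on the agent
  displayName: 'Install Angular CLI'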
These tasks will run every time your pipeline runs, so be mindful of the impact that installing tools has on build
times. Consider configuring self-hosted agents with the version of the tools you need if overhead becomes a
serious impact to your build performance.
Use the npm or command line tasks in your pipeline to install tools on your build agent.
Dependency management
In your build, use npm, Yarn, or Azure Artifacts/TFS to download packages from the public npm registry or from a
private npm registry that you specify in the .npmrc file.
npm
You can use NPM in a few ways to download packages for your build:
Directly run npm install in your pipeline. This is the simplest way to download packages from a registry
that does not need any authentication. If your build doesn't need development dependencies on the agent to
run, you can speed up build times with the --only=prod option to npm install .
Use an npm task. This is useful when you're using an authenticated registry.
Use an npm Authenticate task. This is useful when you run npm install from inside your task runners -
Gulp, Grunt, or Maven.
If you want to specify an npm registry, put the URLs in an .npmrc file in your repository. If your feed is
authenticated, manage its credentials by creating an npm service connection on the Services tab under
Project Settings .
To install npm packages by using a script in your pipeline, add the following snippet to azure-pipelines.yml .
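A minimal sketch of that snippet, assuming the registry needs no authentication:
- script: npm install   # assumed: plain npm install against the registry configured for your project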
To use a private registry specified in your .npmrc file, add the following snippet to azure-pipelines.yml .
- task: Npm@1
inputs:
customEndpoint: <Name of npm service connection>
To pass registry credentials to npm commands via task runners such as Gulp, add the following task to
azure-pipelines.yml before you call the task runner.
- task: npmAuthenticate@0
inputs:
customEndpoint: <Name of npm service connection>
Use the npm or npm authenticate task in your pipeline to download and install packages.
If your builds occasionally fail because of connection issues when you're restoring packages from the npm
registry, you can use Azure Artifacts in conjunction with upstream sources, and cache the packages. The
credentials of the pipeline are automatically used when you're connecting to Azure Artifacts. These credentials
are typically derived from the Project Collection Build Service account.
If you're using Microsoft-hosted agents, you get a new machine every time you run a build - which means
restoring the dependencies every time.
This can take a significant amount of time. To mitigate this, you can use Azure Artifacts or a self-hosted agent.
You'll then get the benefit of using the package cache.
Yarn
Use a script stage to invoke Yarn to restore dependencies. Yarn is available preinstalled on some Microsoft-
hosted agents. You can install and configure it on self-hosted agents like any other tool.
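A minimal sketch of such a script step, assuming Yarn is already available on the agent:
- script: yarn install   # assumed: restore dependencies with Yarn
  displayName: 'yarn install'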
You can call compilers directly from the pipeline by using the script task. These commands will run from the
root of the cloned source-code repository.
- script: tsc --target ES6 --strict true --project tsconfigs/production.json
Use the npm task in your pipeline if you have a compile script defined in your project's package.json to build
the code. Use the Bash task to compile your code if you don't have a separate script defined in your project
configuration.
Test framework    Reporter package
mocha             mocha-junit-reporter, cypress-multi-reporters
jasmine           jasmine-reporters
jest              jest-junit, jest-junit-reporter
karma             karma-junit-reporter
Ava               tap-xunit
This example uses the mocha-junit-reporter and invokes mocha test directly by using a script. This produces
the JUnit XML output at the default location of ./test-results.xml .
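A sketch of that script step; the exact mocha invocation is an assumption about your project layout:
- script: npx mocha test --reporter mocha-junit-reporter   # assumed invocation; writes ./test-results.xml by default
  displayName: 'Run mocha tests'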
If you have defined a test script in your project's package.json file, you can invoke it by using npm test .
- task: PublishTestResults@2
condition: succeededOrFailed()
inputs:
testRunner: JUnit
testResultsFiles: '**/TEST-RESULTS.xml'
- task: PublishCodeCoverageResults@1
inputs:
codeCoverageTool: Cobertura # or JaCoCo
summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/*coverage.xml'
reportDirectory: '$(System.DefaultWorkingDirectory)/**/coverage'
Use the Publish Test Results and Publish Code Coverage Results tasks in your pipeline to publish test results
along with code coverage results by using Istanbul.
Set the Control Options for the Publish Test Results task to run the task even if a previous task has failed, unless
the deployment was canceled.
- script: webpack
The next example uses the npm task to call npm run build to call the build script object defined in the project
package.json. Using script objects in your project moves the logic for the build into the source code and out of
the pipeline.
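A sketch of that npm task invocation, assuming a build script is defined in package.json:
- task: Npm@1
  inputs:
    command: custom
    customCommand: 'run build'   # assumed: runs the "build" script from package.json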
Use the CLI or Bash task in your pipeline to invoke your packaging tool, such as webpack or Angular's
ng build .
JavaScript frameworks
Angular
For Angular apps, you can include Angular-specific commands such as ng test , ng build , and ng e2e . To use
Angular CLI commands in your pipeline, you need to install the angular/cli npm package on the build agent.
NOTE
On Microsoft-hosted Linux agents, preface the command with sudo , like sudo npm install -g .
- script: |
npm install -g @angular/cli
npm install
ng build --prod
For tests in your pipeline that require a browser to run (such as the ng test command in the starter app, which
runs Karma), you need to use a headless browser instead of a standard browser. In the Angular starter app:
1. Change the browsers entry in your karma.conf.js project file from browsers: ['Chrome'] to
browsers: ['ChromeHeadless'] .
2. Change the singleRun entry in your karma.conf.js project file from a value of false to true . This helps
make sure that the Karma process stops after it runs.
React and Vue
All the dependencies for your React and Vue apps are captured in your package.json file. Your azure-
pipelines.yml file contains the standard Node.js script:
- script: |
npm install
npm run build
displayName: 'npm install and build'
The build files are in a new folder, dist (for Vue) or build (for React). This snippet builds an artifact, www , that
is ready for release. It uses the Node Installer, Copy Files, and Publish Build Artifacts tasks.
trigger:
- master
pool:
vmImage: 'ubuntu-latest'
steps:
- task: NodeTool@0
inputs:
versionSpec: '10.x'
displayName: 'Install Node.js'
- script: |
npm install
npm run build
displayName: 'npm install and build'
- task: CopyFiles@2
inputs:
Contents: 'build/**' # Pull the build directory (React)
TargetFolder: '$(Build.ArtifactStagingDirectory)'
- task: PublishBuildArtifacts@1
inputs:
pathtoPublish: $(Build.ArtifactStagingDirectory) # dist or build files
ArtifactName: 'www' # output artifact named www
To release, point your release task to the dist or build artifact and use the Azure Web App Deploy task.
Webpack
You can use a webpack configuration file to specify a compiler (such as Babel or TypeScript) to transpile JSX or
TypeScript to plain JavaScript, and to bundle your app.
- script: |
npm install webpack webpack-cli --save-dev
npx webpack --config webpack.config.js
If the steps in your gulpfile.js file require authentication with an npm registry:
- task: npmAuthenticate@0
inputs:
customEndpoint: <Name of npm service connection>
Add the Publish Test Results task to publish JUnit or xUnit test results to the server.
- task: PublishTestResults@2
inputs:
testResultsFiles: '**/TEST-RESULTS.xml'
testRunTitle: 'Test results for JavaScript using gulp'
Add the Publish Code Coverage Results task to publish code coverage results to the server. You can find
coverage metrics in the build summary, and you can download HTML reports for further analysis.
- task: PublishCodeCoverageResults@1
inputs:
codeCoverageTool: Cobertura
summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/*coverage.xml'
reportDirectory: '$(System.DefaultWorkingDirectory)/**/coverage'
The simplest way to create a pipeline if your app uses Gulp is to use the Node.js with gulp build template
when creating the pipeline. This will automatically add various tasks to invoke Gulp commands and to publish
artifacts. In the task, select Enable Code Coverage to enable code coverage by using Istanbul.
Grunt
Grunt is preinstalled on Microsoft-hosted agents. To run the grunt command in the YAML file:
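A minimal sketch of that script step:
- script: grunt   # assumed: runs the default task defined in Gruntfile.js
  displayName: 'Run grunt'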
If the steps in your Gruntfile.js file require authentication with a npm registry:
- task: npmAuthenticate@0
inputs:
customEndpoint: <Name of npm service connection>
The simplest way to create a pipeline if your app uses Grunt is to use the Node.js with Grunt build template
when creating the pipeline. This will automatically add various tasks to invoke Grunt commands and to publish
artifacts. In the task, select the Publish to TFS/Team Services option to publish test results, and select
Enable Code Coverage to enable code coverage by using Istanbul.
To upload a subset of files, first copy the necessary files from the working directory to a staging directory with
the Copy Files task, and then use the Publish Build Artifacts task.
- task: CopyFiles@2
inputs:
SourceFolder: '$(System.DefaultWorkingDirectory)'
Contents: |
**\*.js
package.json
TargetFolder: '$(Build.ArtifactStagingDirectory)'
- task: PublishBuildArtifacts@1
The next example publishes to a custom registry defined in your repo's .npmrc file. You'll need to set up an
npm service connection to inject authentication credentials into the connection as the build runs.
- task: Npm@1
inputs:
command: publish
publishRegistry: useExternalRegistry
publishEndpoint: https://my.npmregistry.com
The final example publishes the module to an Azure DevOps Services package management feed.
- task: Npm@1
inputs:
command: publish
publishRegistry: useFeed
publishFeed: https://my.npmregistry.com
For more information about versioning and publishing npm packages, see Publish npm packages and How can
I version my npm packages as part of the build process?.
Deploy a web app
To create a .zip file archive that is ready for publishing to a web app, use the Archive Files task:
- task: ArchiveFiles@2
inputs:
rootFolderOrFile: '$(System.DefaultWorkingDirectory)'
includeRootFolder: false
To publish this archive to a web app, see Azure web app deployment.
Publish artifacts to Azure Pipelines
Use the Publish Build Artifacts task to publish files from your build to Azure Pipelines or TFS.
Publish to an npm registry
To create and publish an npm package, use the npm task. For more information about versioning and
publishing npm packages, see Publish npm packages.
Deploy a web app
To create a .zip file archive that is ready for publishing to a web app, use the Archive Files task. To publish this
archive to a web app, see Azure Web App deployment.
Troubleshooting
If you can build your project on your development machine but are having trouble building it on Azure
Pipelines or TFS, explore the following potential causes and corrective actions:
Check that the versions of Node.js and the task runner on your development machine match those on
the agent. You can include command-line scripts such as node --version in your pipeline to check what
is installed on the agent. Either use the Node Tool Installer (as explained in this guidance) to deploy
the same version on the agent, or run npm install commands to update the tools to desired versions.
If your builds fail intermittently while you're restoring packages, either the npm registry is having issues
or there are networking problems between the Azure datacenter and the registry. These factors are not
under our control, and you might need to explore whether using Azure Artifacts with an npm registry as
an upstream source improves the reliability of your builds.
If you're using nvm to manage different versions of Node.js, consider switching to the Node Tool
Installer task instead. ( nvm is installed for historical reasons on the macOS image.) nvm manages
multiple Node.js versions by adding shell aliases and altering PATH , which interacts poorly with the way
Azure Pipelines runs each task in a new process. The Node Tool Installer task handles this model
correctly. However, if your work requires the use of nvm , you can add the following script to the
beginning of each pipeline:
steps:
- bash: |
NODE_VERSION=12 # or whatever your preferred version is
npm config delete prefix # avoid a warning
. ${NVM_DIR}/nvm.sh
nvm use ${NODE_VERSION}
nvm alias default ${NODE_VERSION}
VERSION_PATH="$(nvm_version_path ${NODE_VERSION})"
echo "##vso[task.prependPath]$VERSION_PATH"
Then node and other command-line tools will work for the rest of the pipeline job. In each step where you
need to use the nvm command, you'll need to start the script with:
- bash: |
. ${NVM_DIR}/nvm.sh
nvm <command>
FAQ
Where can I learn more about Azure Artifacts and the Package Management service?
Package Management in Azure Artifacts and TFS
Where can I learn more about tasks?
Build, release, and test tasks
How can I version my npm packages as part of the build process?
One option is to use a combination of version control and npm version. At the end of a pipeline run, you can
update your repo with the new version. In this YAML, there is a GitHub repo and the package gets deployed to
npmjs. Note that your build will fail if there is a mismatch between your package version on npmjs and your
package.json file.
variables:
MAP_NPMTOKEN: $(NPMTOKEN) # Mapping secret var
trigger:
- none
pool:
vmImage: 'ubuntu-latest'
steps:
- task: npmAuthenticate@0
inputs:
workingFile: .npmrc
customEndpoint: 'my-npm-connection'
- task: NodeTool@0
inputs:
versionSpec: '12.x'
displayName: 'Install Node.js'
- script: |
npm install
displayName: 'npm install'
- script: |
npm pack
displayName: 'Package for release'
- task: CopyFiles@2
inputs:
contents: '*.tgz'
targetFolder: $(Build.ArtifactStagingDirectory)/npm
displayName: 'Copy archives to artifacts staging directory'
- task: CopyFiles@2
inputs:
sourceFolder: '$(Build.SourcesDirectory)'
contents: 'package.json'
targetFolder: $(Build.ArtifactStagingDirectory)/npm
displayName: 'Copy package.json'
- task: PublishBuildArtifacts@1
inputs:
pathtoPublish: '$(Build.ArtifactStagingDirectory)/npm'
artifactName: npm
displayName: 'Publish npm artifact'
Azure Pipelines
Use a pipeline to automatically build and test your Python apps or scripts. After those steps are done, you can
then deploy or publish your project.
If you want an end-to-end walkthrough, see Use CI/CD to deploy a Python web app to Azure App Service on
Linux.
To create and activate an Anaconda environment and install Anaconda packages with conda , see Run pipelines
with Anaconda environments.
https://github.com/Microsoft/python-sample-vscode-flask-tutorial
When the Configure tab appears, select Python package . This will create a Python package to test on
multiple Python versions.
7. When your new pipeline appears, take a look at the YAML to see what it does. When you're ready, select
Save and run .
8. You're prompted to commit a new azure-pipelines.yml file to your repository. After you're happy with the
message, select Save and run again.
If you want to watch your pipeline in action, select the build job.
You just created and ran a pipeline that we automatically created for you, because your code appeared
to be a good match for the Python package template.
You now have a working YAML pipeline ( azure-pipelines.yml ) in your repository that's ready for you to
customize!
9. When you're ready to make changes to your pipeline, select it in the Pipelines page, and then Edit the
azure-pipelines.yml file.
See the sections below to learn some of the more common ways to customize your pipeline.
YAML
1. Add an azure-pipelines.yml file in your repository. Customize this snippet for your build.
trigger:
- master
pool: Default
steps:
- script: python -m pip install --upgrade pip
displayName: 'Install dependencies'
2. Create a pipeline (if you don't know how, see Create your first pipeline), and for the template select YAML .
3. Set the Agent pool and YAML file path for your pipeline.
4. Save the pipeline and queue a build. When the Build #nnnnnnnn.n has been queued message
appears, select the number link to see your pipeline in action.
5. When you're ready to make changes to your pipeline, Edit it.
6. See the sections below to learn some of the more common ways to customize your pipeline.
Build environment
You don't have to set up anything for Azure Pipelines to build Python projects. Python is preinstalled on
Microsoft-hosted build agents for Linux, macOS, or Windows. To see which Python versions are preinstalled, see
Use a Microsoft-hosted agent.
Use a specific Python version
To use a specific version of Python in your pipeline, add the Use Python Version task to azure-pipelines.yml. This
snippet sets the pipeline to use Python 3.6:
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.6'
jobs:
- job: 'Test'
pool:
vmImage: 'ubuntu-16.04' # other options: 'macOS-10.14', 'vs2017-win2016'
strategy:
matrix:
Python27:
python.version: '2.7'
Python35:
python.version: '3.5'
Python36:
python.version: '3.6'
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '$(python.version)'
You can add tasks to run using each Python version in the matrix.
You can also run inline Python scripts with the Python Script task:
- task: PythonScript@0
inputs:
scriptSource: 'inline'
script: |
print('Hello world 1')
print('Hello world 2')
To parameterize script execution, use the PythonScript task with arguments values to pass arguments into the
executing process. You can use sys.argv or the more sophisticated argparse library to parse the arguments.
- task: PythonScript@0
inputs:
scriptSource: inline
script: |
import sys
print ('Executing script file is:', str(sys.argv[0]))
print ('The arguments are:', str(sys.argv))
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--world", help="Provide the name of the world to greet.")
args = parser.parse_args()
print ('Hello ', args.world)
arguments: --world Venus
Install dependencies
You can use scripts to install specific PyPI packages with pip . For example, this YAML installs or upgrades pip
and the setuptools and wheel packages.
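A step along these lines does that (the displayName is just illustrative):
- script: python -m pip install --upgrade pip setuptools wheel
  displayName: 'Install tools'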
Install requirements
After you update pip and friends, a typical next step is to install dependencies from requirements.txt:
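A typical step looks like this:
- script: pip install -r requirements.txt
  displayName: 'Install requirements'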
Run tests
You can use scripts to install and run various tests in your pipeline.
Run lint tests with flake8
To install or upgrade flake8 and use it to run lint tests, use this YAML:
- script: |
python -m pip install flake8
flake8 .
displayName: 'Run lint tests'
- script: |
pip install pytest
pip install pytest-cov
pytest tests --doctest-modules --junitxml=junit/test-results.xml --cov=. --cov-report=xml --cov-report=html
displayName: 'Test with pytest'
- job:
pool:
vmImage: 'ubuntu-16.04'
strategy:
matrix:
Python27:
python.version: '2.7'
Python35:
python.version: '3.5'
Python36:
python.version: '3.6'
Python37:
python.version: '3.7'
steps:
- task: UsePythonVersion@0
displayName: 'Use Python $(python.version)'
inputs:
versionSpec: '$(python.version)'
- script: tox -e py
displayName: 'Run Tox'
- task: PublishTestResults@2
condition: succeededOrFailed()
inputs:
testResultsFiles: '**/test-*.xml'
testRunTitle: 'Publish test results for Python $(python.version)'
- task: PublishCodeCoverageResults@1
inputs:
codeCoverageTool: Cobertura
summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/coverage.xml'
reportDirectory: '$(System.DefaultWorkingDirectory)/**/htmlcov'
- task: TwineAuthenticate@0
inputs:
artifactFeed: '<Azure Artifacts feed name>'
pythonUploadServiceConnection: '<twine service connection from external organization>'
Then, add a custom script that uses twine to publish your packages.
- script: |
twine upload -r "<feed or service connection name>" --config-file $(PYPIRC_PATH) <package path/files>
You can also use Azure Pipelines to build an image for your Python app and push it to a container registry.
Related extensions
PyLint Checker (Darren Fuller)
Python Test (Darren Fuller)
Azure DevOps plugin for PyCharm (IntelliJ) (Microsoft)
Python in Visual Studio Code (Microsoft)
Use CI/CD to deploy a Python web app to Azure
App Service on Linux
11/2/2020 • 17 minutes to read
Azure Pipelines
In this article, you use Azure Pipelines continuous integration and continuous delivery (CI/CD) to deploy a Python
web app to Azure App Service on Linux. You begin by running app code from a GitHub repository locally. You then
provision a target App Service through the Azure portal. Finally, you create an Azure Pipelines CI/CD pipeline that
automatically builds the code and deploys it to the App Service whenever there's a commit to the repository.
NOTE
If your app uses Django and a SQLite database, it won't work for this walkthrough. For more information, see considerations
for Django later in this article. If your Django app uses a separate database, you can use it with this walkthrough.
If you need an app to work with, you can fork and clone the repository at
https://github.com/Microsoft/python-sample-vscode-flask-tutorial. The code is from the tutorial Flask in Visual Studio Code.
To test the example app locally, from the folder containing the code, run the following appropriate commands for
your operating system:
# Mac/Linux
sudo apt-get install python3-venv # If needed
python3 -m venv .env
source .env/bin/activate
pip install -r requirements.txt
export FLASK_APP=hello_app.webapp
python3 -m flask run
# Windows
py -3 -m venv .env
.env\scripts\activate
pip install -r requirements.txt
$env:FLASK_APP = "hello_app.webapp"
python -m flask run
Open a browser and navigate to http://localhost:5000 to view the app. When you're finished, close the browser, and
stop the Flask server with Ctrl+C.
3. The Cloud Shell appears along the bottom of the browser. Select Bash from the dropdown:
4. In the Cloud Shell, clone your repository using git clone . For the example app, use:
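git clone https://github.com/<your-alias>/python-sample-vscode-flask-tutorial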
Replace <your-alias> with the name of the GitHub account you used to fork the repository.
TIP
To paste into the Cloud Shell, use Ctrl+Shift+V, or right-click and select Paste from the context menu.
NOTE
The Cloud Shell is backed by an Azure Storage account in a resource group called cloud-shell-storage-<your-region>.
That storage account contains an image of the Cloud Shell's file system, which stores the cloned repository. There is a
small cost for this storage. You can delete the storage account at the end of this article, along with other resources
you create.
5. In the Cloud Shell, change directories into the repository folder that has your Python app, so the
az webapp up command will recognize the app as Python.
cd python-sample-vscode-flask-tutorial
6. In the Cloud Shell, use az webapp up to create an App Service and initially deploy your app.
az webapp up -n <your-appservice>
Change <your-appservice> to a name for your app service that's unique across Azure. Typically, you use a
personal or company name along with an app identifier, such as <your-name>-flaskpipelines . The app URL
becomes <your-appservice>.azurewebsites.net.
When the command completes, it shows JSON output in the Cloud Shell.
TIP
If you encounter a "Permission denied" error with a .zip file, you may have tried to run the command from a folder
that doesn't contain a Python app. The az webapp up command then tries to create a Windows app service plan,
and fails.
7. If your app uses a custom startup command, set the az webapp config property. For example, the python-
sample-vscode-flask-tutorial app contains a file named startup.txt that contains its specific startup
command, so you set the az webapp config property to startup.txt .
a. From the first line of output from the previous az webapp up command, copy the name of your
resource group, which is similar to <your-name>_rg_Linux_<your-region> .
b. Enter the following command, using your resource group name, your app service name, and your
startup file or command:
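az webapp config set -g <your-resource-group> -n <your-appservice> --startup-file startup.txt  # for the sample app; substitute your own startup file or command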
Again, when the command completes, it shows JSON output in the Cloud Shell.
8. To see the running app, open a browser and go to http://<your-appservice>.azurewebsites.net. If you see a
generic page, wait a few seconds for the App Service to start, and refresh the page.
NOTE
For a detailed description of the specific tasks performed by the az webapp up command, see Provision an App
Service with single commands at the end of this article.
IMPORTANT
To simplify the service connection, use the same email address for Azure DevOps as you use for Azure.
2. Once you sign in, the browser displays your Azure DevOps dashboard, at the URL
https://dev.azure.com/<your-organization-name>. An Azure DevOps account can belong to one or more
organizations, which are listed on the left side of the Azure DevOps dashboard. If more than one
organization is listed, select the one you want to use for this walkthrough. By default, Azure DevOps creates
a new organization using the email alias you used to sign in.
A project is a grouping for boards, repositories, pipelines, and other aspects of Azure DevOps. If your
organization doesn't have any projects, enter the project name Flask Pipelines under Create a project to
get started, and then select Create project.
If your organization already has projects, select New project on the organization page. In the Create new
project dialog box, enter the project name Flask Pipelines, and select Create .
3. From the new project page, select Project settings from the left navigation.
4. On the Project Settings page, select Pipelines > Service connections, then select New service
connection, and then select Azure Resource Manager from the dropdown.
5. In the Add an Azure Resource Manager service connection dialog box:
a. Give the connection a name. Make note of the name to use later in the pipeline.
b. For Scope level , select Subscription .
c. Select the subscription for your App Service from the Subscription drop-down list.
d. Under Resource Group , select your resource group from the dropdown.
e. Make sure the option Allow all pipelines to use this connection is selected, and then select OK .
The new connection appears in the Service connections list, and is ready for Azure Pipelines to use from
the project.
NOTE
If you need to use an Azure subscription from a different email account, follow the instructions on Create an Azure
Resource Manager service connection with an existing service principal.
3. On the Where is your code screen, select GitHub . You may be prompted to sign into GitHub.
4. On the Select a repository screen, select the repository that contains your app, such as your fork of the
example app.
5. You may be prompted to enter your GitHub password again as a confirmation, and then GitHub prompts
you to install the Azure Pipelines extension:
On this screen, scroll down to the Repository access section, choose whether to install the extension on all
repositories or only selected ones, and then select Approve and install :
6. On the Configure your pipeline screen, select Python to Linux Web App on Azure .
Your new pipeline appears. When prompted, select the Azure subscription in which you created your Web
App.
Select the Web App
Select Validate and configure
Azure Pipelines creates an azure-pipelines.yml file that defines your CI/CD pipeline as a series of stages,
Jobs, and steps, where each step contains the details for different tasks and scripts. Take a look at the
pipeline to see what it does. Make sure all the default inputs are appropriate for your code.
YAML pipeline explained
The YAML file contains the following key elements:
The trigger at the top indicates the commits that trigger the pipeline, such as commits to the master
branch.
The variables that parameterize the YAML template
TIP
To avoid hard-coding specific variable values in your YAML file, you can define variables in the pipeline's web interface
instead. For more information, see Variables - Secrets.
The stages: a Build stage, which builds your project, and a Deploy stage, which deploys it to Azure as a
Linux web app. The Deploy stage also creates an Environment whose default name matches the Web App;
you can choose to modify the environment name.
Each stage has a pool element that specifies one or more virtual machines (VMs) in which the pipeline runs
the steps . By default, the pool element contains only a single entry for an Ubuntu VM. You can use a pool
to run tests in multiple environments as part of the build, such as using different Python versions for
creating a package.
The steps element can contain children like task , which runs a specific task as defined in the Azure
Pipelines task reference, and script , which runs an arbitrary set of commands.
The first task under the Build stage is UsePythonVersion, which specifies the version of Python to use on the
build agent. The @<n> suffix indicates the version of the task; @0 indicates a preview version. A script-based
step then creates a virtual environment and installs the dependencies listed in requirements.txt.
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '$(pythonVersion)'
displayName: 'Use Python $(pythonVersion)'
- script: |
python -m venv antenv
source antenv/bin/activate
python -m pip install --upgrade pip
pip install setuptools
pip install -r requirements.txt
workingDirectory: $(projectRoot)
displayName: "Install requirements"
The next step creates the .zip file that the Deploy stage of the pipeline deploys. To create the .zip file,
add an ArchiveFiles task to the end of the YAML file:
- task: ArchiveFiles@2
inputs:
rootFolderOrFile: '$(Build.SourcesDirectory)'
includeRootFolder: false
archiveType: 'zip'
archiveFile: '$(Build.ArtifactStagingDirectory)/Application$(Build.BuildId).zip'
replaceExistingArchive: true
verbose: # (no value); this input is optional
- publish: $(Build.ArtifactStagingDirectory)/Application$(Build.BuildId).zip
displayName: 'Upload package'
artifact: drop
You use $() in a parameter value to reference variables. The built-in Build.SourcesDirectory variable
contains the location on the build agent where the pipeline cloned the app code. The archiveFile
parameter indicates where to place the .zip file. In this case, the archiveFile parameter uses the built-in
variable Build.ArtifactStagingDirectory .
IMPORTANT
When deploying to Azure App Service, be sure to use includeRootFolder: false . Otherwise, the contents of the
.zip file are put in a folder named s, for "sources," which is replicated on the App Service. The App Service on Linux
container then can't find the app code.
jobs:
- deployment: DeploymentJob
pool:
vmImage: $(vmImageName)
environment: $(environmentName)
strategy:
runOnce:
deploy:
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '$(pythonVersion)'
displayName: 'Use Python version'
- task: AzureWebApp@1
displayName: 'Deploy Azure Web App : {{ webAppName }}'
inputs:
azureSubscription: $(azureServiceConnectionId)
appName: $(webAppName)
package: $(Pipeline.Workspace)/drop/$(Build.BuildId).zip
# The following parameter is specific to the Flask example code. You may
# or may not need a startup command for your app.
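startUpCommand: 'gunicorn --bind=0.0.0.0 --workers=4 startup:app' # example value assumed for the Flask sample; adjust or remove for your app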
The StartupCommand parameter shown here is specific to the python-vscode-flask-tutorial example code,
which defines the app in the startup.py file. By default, Azure App Service looks for the Flask app object in a
file named app.py or application.py. If your code doesn't follow this pattern, you need to customize the
startup command. Django apps may not need customization at all. For more information, see How to
configure Python on Azure App Service - Customize startup command.
Also, because the python-vscode-flask-tutorial repository contains the same startup command in a file
named startup.txt, you could specify that file in the StartupCommand parameter rather than the command, by
using StartupCommand: 'startup.txt' .
IMPORTANT
If your app fails because of a missing dependency, then your requirements.txt file was not processed during deployment. This
behavior happens if you created the web app directly on the portal rather than using the az webapp up command as
shown in this article.
The az webapp up command specifically sets the build action SCM_DO_BUILD_DURING_DEPLOYMENT to true . If you
provisioned the app service through the portal, however, this action is not automatically set.
The following steps set the action:
1. Open the Azure portal, select your App Service, then select Configuration .
2. Under the Application Settings tab, select New Application Setting .
3. In the popup that appears, set Name to SCM_DO_BUILD_DURING_DEPLOYMENT , set Value to true , and select OK .
4. Select Save at the top of the Configuration page.
5. Run the pipeline again. Your dependencies should be installed during deployment.
- script: |
# Put commands to run tests here
displayName: 'Run tests'
- script: |
echo Deleting .env
deactivate
rm -rf .env
displayName: 'Remove .env before zip'
You can also use a task like PublishTestResults@2 to make test results appear in the pipeline results screen. For
more information, see Build Python apps - Run tests.
Again, you see JSON output in the Cloud Shell when the command completes successfully.
3. Create an App Service instance in the plan.
Run the following command to create the App Service instance in the plan, replacing <your-appservice>
with a name that's unique across Azure. Typically, you use a personal or company name along with an app
identifier, such as <your-name>-flaskpipelines . The command fails if the name is already in use. By assigning
the App Service to the same resource group as the plan, it's easy to clean up all the resources at once.
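A sketch of the command (the runtime value is an example; pick the Python version you need):
az webapp create -g <your-resource-group> -p <your-appservice-plan> -n <your-appservice> --runtime "PYTHON|3.7"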
NOTE
If you want to deploy your code at the same time you create the app service, you can use the
--deployment-source-url and --deployment-source-branch arguments with the az webapp create command.
For more information, see az webapp create.
TIP
If you see the error message "The plan (name) doesn't exist", and you're sure that the plan name is correct, check that
the resource group specified with the -g argument is also correct, and the plan you identify is part of that resource
group. If you misspell the resource group name, the command doesn't find the plan in that nonexistent resource
group, and gives this particular error.
4. If your app requires a custom startup command, use the az webapp config set command, as described
earlier in Provision the target Azure App Service. For example, to customize the App Service with your
resource group, app name, and startup command, run:
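az webapp config set -g <your-resource-group> -n <your-appservice> --startup-file <your-startup-file-or-command>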
The App Service at this point contains only default app code. You can now use Azure Pipelines to deploy
your specific app code.
Clean up resources
To avoid incurring ongoing charges for any Azure resources you created in this walkthrough, such as a B1 App
Service Plan, delete the resource group that contains the App Service and the App Service Plan. To delete the
resource group from the Azure portal, select Resource groups in the left navigation. In the resource group list,
select the ... to the right of the resource group you want to delete, select Delete resource group , and follow the
prompts.
You can also use az group delete in the Cloud Shell to delete resource groups.
To delete the storage account that maintains the file system for Cloud Shell, which incurs a small monthly charge,
delete the resource group that begins with cloud-shell-storage- .
Next steps
Build Python apps
Learn about build agents
Configure Python app on App Service
Run pipelines with Anaconda environments
11/2/2020 • 3 minutes to read
Azure Pipelines
This guidance explains how to set up and use Anaconda environments in your pipelines.
Get started
Follow these instructions to set up a pipeline for a sample Python app with Anaconda environment.
1. The code in the following repository is a simple Python app. To get started, fork this repo to your GitHub
account.
https://github.com/MicrosoftDocs/pipelines-anaconda
TIP
To make changes to the YAML file as described in this topic, select the pipeline in the Pipelines page, and then Edit the
azure-pipelines.yml file.
Create an environment
From command-line arguments
The conda create command will create an environment with the arguments you pass it.
The exact snippet differs slightly depending on the hosted agent: Hosted Ubuntu 16.04, Hosted macOS, or Hosted VS2017.
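For example, on the Linux agent a step along these lines creates the environment (the name myEnvironment matches the snippet used later on this page):
- bash: conda create --yes --quiet --name myEnvironment
  displayName: Create Anaconda environment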
From YAML
You can check in an environment.yml file to your repo that defines the configuration for an Anaconda environment.
NOTE
If you are using a self-hosted agent and don't remove the environment at the end, you'll get an error on the next build since
the environment already exists. To resolve, use the --force argument:
conda env create --quiet --force --file environment.yml .
- bash: |
source activate myEnvironment
conda install --yes --quiet --name myEnvironment scipy
displayName: Install Anaconda packages
- task: PublishTestResults@2
inputs:
testResultsFiles: 'junit/*.xml'
condition: succeededOrFailed()
FAQs
Why am I getting a "Permission denied" error?
On Hosted macOS, the agent user doesn't have ownership of the directory where Miniconda is installed. For a fix,
see the "Hosted macOS" tab under Add conda to your system path.
Why does my build stop responding on a conda create or conda install step?
If you forget to pass --yes , conda will stop and wait for user interaction.
Why is my script on Windows stopping after it activates the environment?
On Windows, activate is a Batch script. You must use the call command to resume running your script after
activating. See examples of using call above.
How can I run my tests with multiple versions of Python?
See Build Python apps in Azure Pipelines.
Build C++ Windows apps
2/26/2020 • 2 minutes to read
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
This guidance explains how to automatically build C++ projects for Windows.
NOTE
This guidance applies to TFS version 2017.3 and newer.
Example
This example shows how to build a C++ project. To start, import (into Azure Repos or TFS) or fork (into GitHub)
this repo:
https://github.com/adventworks/cpp-sample
NOTE
This scenario works on TFS, but some of the following instructions might not exactly match the version of TFS that you are
using. Also, you'll need to set up a self-hosted agent, possibly also installing software. If you are a new user, you might have
a better learning experience by trying this procedure out first using a free Azure DevOps organization. Then change the
selector in the upper-left corner of this page from Team Foundation Server to Azure DevOps .
After you have the sample code in your own repository, create a pipeline using the instructions in Create
your first pipeline and select the .NET Desktop template. This automatically adds the tasks required to
build the code in the sample repository.
Save the pipeline and queue a build to see it in action.
2. Select Tasks and click on the agent job . From the Execution plan section, select Multi-configuration to
change the options for the job:
Specify Multipliers: BuildConfiguration, BuildPlatform
Copy output
To copy the results of the build to Azure Pipelines or TFS, perform these steps:
1. Click the Copy Files task. Specify the following arguments:
Contents: **\$(BuildConfiguration)\**\?(*.exe|*.dll|*.pdb)
Build Java apps
11/2/2020 • 6 minutes to read
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
NOTE
This guidance uses YAML-based pipelines available in Azure Pipelines. For TFS, use tasks that correspond to those used in
the YAML below.
This guidance explains how to automatically build Java projects. (If you're working on an Android project, see
Build, test, and deploy Android apps.)
https://github.com/MicrosoftDocs/pipelines-java
1. When your new pipeline appears, take a look at the YAML to see what it does. When you're ready, select
Save and run .
2. You're prompted to commit a new azure-pipelines.yml file to your repository. After you're happy with the
message, select Save and run again.
If you want to watch your pipeline in action, select the build job.
You just created and ran a pipeline that we automatically created for you, because your code
appeared to be a good match for the Maven template.
You now have a working YAML pipeline ( azure-pipelines.yml ) in your repository that's ready for you to
customize!
3. When you're ready to make changes to your pipeline, select it in the Pipelines page, and then Edit the
azure-pipelines.yml file.
4. See the sections below to learn some of the more common ways to customize your pipeline.
1. Create a pipeline (if you don't know how, see Create your first pipeline, and for the template select
Maven . This template automatically adds the tasks you need to build the code in the sample repository.
2. Save the pipeline and queue a build. When the Build #nnnnnnnn.n has been queued message
appears, select the number link to see your pipeline in action.
You now have a working pipeline that's ready for you to customize!
3. When you're ready to make changes to your pipeline, Edit it.
4. See the sections below to learn some of the more common ways to customize your pipeline.
Build environment
You can use Azure Pipelines to build Java apps without needing to set up any infrastructure of your own. You can
build on Windows, Linux, or MacOS images. The Microsoft-hosted agents in Azure Pipelines have modern JDKs
and other tools for Java pre-installed. To know which versions of Java are installed, see Microsoft-hosted agents.
Update the following snippet in your azure-pipelines.yml file to select the appropriate image.
pool:
vmImage: 'ubuntu-16.04' # other options: 'macOS-10.14', 'vs2017-win2016'
steps:
- task: Maven@3
inputs:
mavenPomFile: 'pom.xml'
mavenOptions: '-Xmx3072m'
javaHomeOption: 'JDKVersion'
jdkVersionOption: '1.11'
jdkArchitectureOption: 'x64'
publishJUnitResults: false
testResultsFiles: '**/TEST-*.xml'
goals: 'package'
steps:
- task: Gradle@2
inputs:
workingDirectory: ''
gradleWrapperFile: 'gradlew'
gradleOptions: '-Xmx3072m'
javaHomeOption: 'JDKVersion'
jdkVersionOption: '1.11'
jdkArchitectureOption: 'x64'
publishJUnitResults: false
testResultsFiles: '**/TEST-*.xml'
tasks: 'build'
Adjust the gradleWrapperFile value if your gradlew file isn't in the root of the repository. The file path value
should be relative to the root of the repository, such as IdentityService/gradlew or
$(system.defaultWorkingDirectory)/IdentityService/gradlew .
Adjust Gradle tasks
Adjust the tasks value for the tasks that Gradle should execute, such as build or check .
For details about common Java Plugin tasks for Gradle, see Gradle's documentation.
Ant
To build with Ant, add the following snippet to your azure-pipelines.yml file. Change values, such as the path to
your build.xml file, to match your project configuration. See the Ant task for more about these options.
steps:
- task: Ant@1
inputs:
workingDirectory: ''
buildFile: 'build.xml'
javaHomeOption: 'JDKVersion'
jdkVersionOption: '1.11'
jdkArchitectureOption: 'x64'
publishJUnitResults: false
testResultsFiles: '**/TEST-*.xml'
Script
To build with a command line or script, add one of the following snippets to your azure-pipelines.yml file.
Inline script
The script: step runs an inline script using Bash on Linux and macOS and Command Prompt on Windows. For
details, see the Bash or Command line task.
steps:
- script: |
echo Starting the build
mvn package
displayName: 'Build with Maven'
Script file
This snippet runs a script file that is in your repository. For details, see the Shell Script, Batch script, or
PowerShell task.
steps:
- task: ShellScript@2
inputs:
scriptPath: 'build.sh'
Next Steps
After you've built and tested your app, you can upload the build output to Azure Pipelines or TFS, create and
publish a Maven package, or package the build output into a .war/jar file to be deployed to a web application.
Next we recommend that you learn more about creating a CI/CD pipeline for the deployment target you choose:
Build and deploy to a Java web app
Build and deploy Java to Azure Functions
Build and deploy Java to Azure Kubernetes service
Build and deploy to a Java web app
2/26/2020 • 4 minutes to read
Azure Pipelines
A web app is a lightweight way to host a web application. In this step-by-step guide you'll learn how to create a
pipeline that continuously builds and deploys your Java app. Your team can then automatically build each commit
in GitHub, and if you want, automatically deploy the change to an Azure App Service. You can use whatever
runtime you prefer: Tomcat or Java SE.
Prerequisites
A GitHub account, where you can create a repository. If you don't have one, you can create one for free.
An Azure DevOps organization. If you don't have one, you can create one for free. (An Azure DevOps
organization is different from your GitHub organization. Give them the same name if you want alignment
between them.)
If your team already has one, then make sure you're an administrator of the Azure DevOps project that you
want to use.
An Azure account. If you don't have one, you can create one for free.
TIP
If you're new at this, the easiest way to get started is to use the same email address as the owner of both the Azure
Pipelines organization and the Azure subscription.
https://github.com/spring-petclinic/spring-framework-petclinic
# Create an App Service from the plan with Tomcat and JRE 8 as the runtime
az webapp create -g myapp-rg -p myapp-service-plan -n my-app-name --runtime "TOMCAT|8.5-jre8"
You can also explore the deployment history for the app by navigating to its environment. From the pipeline summary:
1. Select the Environments tab.
2. Select View environment .
Clean up resources
Whenever you're done with the resources you created above, you can use the following command to delete them:
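az group delete --name myapp-rg  # the resource group used in the examples above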
Azure Pipelines
You can use Azure Functions to run small pieces of code in the cloud without the overhead of running a server. In
this step-by-step guide you'll learn how to create a pipeline that continuously builds and deploys your Java
function app. Your team can then automatically build each commit in GitHub, and if you want, automatically deploy
the change to Azure Functions.
Prerequisites
A GitHub account, where you can create a repository. If you don't have one, you can create one for free.
An Azure DevOps organization. If you don't have one, you can create one for free. (An Azure DevOps
organization is different from your GitHub organization. Give them the same name if you want alignment
between them.)
If your team already has one, then make sure you're an administrator of the Azure DevOps project that you
want to use.
An Azure account. If you don't have one, you can create one for free.
TIP
If you're new at this, the easiest way to get started is to use the same email address as the owner of both the Azure
Pipelines organization and the Azure subscription.
https://github.com/MicrosoftDocs/pipelines-java-function
You just created and ran a pipeline that we automatically created for you, because your code appeared to be a
good match for the Maven Azure Pipelines template.
# ...
# ...
# add these as the last steps
# to deploy to your app service
- task: CopyFiles@2
displayName: Copy Files
inputs:
SourceFolder: $(system.defaultworkingdirectory)/target/azure-functions/
Contents: '**'
TargetFolder: $(build.artifactstagingdirectory)
- task: PublishBuildArtifacts@1
displayName: Publish Artifact
inputs:
PathtoPublish: $(build.artifactstagingdirectory)
- task: AzureFunctionApp@1
displayName: Azure Function App deploy
inputs:
azureSubscription: $(serviceConnectionToAzure)
appType: functionApp
appName: $(appName)
package: $(build.artifactstagingdirectory)
Clean up resources
Whenever you're done with the resources you created above, you can use the following command to delete them:
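az group delete --name <your-resource-group>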
This guidance explains how to automatically build, test, and deploy Android apps.
Get started
Follow these instructions to set up a pipeline for a sample Android app.
1. The code in the following repository is a simple Android app. To get started, fork this repo to your GitHub
account.
https://github.com/MicrosoftDocs/pipelines-android
TIP
To make changes to the YAML file as described in this topic, select the pipeline in the Pipelines page, and then Edit the
azure-pipelines.yml file.
Gradle
Gradle is a common build tool used for building Android projects. See the Gradle task for more about these
options.
# https://docs.microsoft.com/azure/devops/pipelines/ecosystems/android
pool:
vmImage: 'macOS-10.14'
steps:
- task: Gradle@2
inputs:
workingDirectory: ''
gradleWrapperFile: 'gradlew'
gradleOptions: '-Xmx3072m'
publishJUnitResults: false
testResultsFiles: '**/TEST-*.xml'
tasks: 'assembleDebug'
Adjust the gradleWrapperFile value if your gradlew file isn't in the root of the repository. The file path value
should be relative to the root of the repository, such as AndroidApps/MyApp/gradlew or
$(system.defaultWorkingDirectory)/AndroidApps/MyApp/gradlew .
Important: We recommend storing each of the following passwords in a secret variable.
- task: AndroidSigning@2
inputs:
apkFiles: '**/*.apk'
jarsign: true
jarsignerKeystoreFile: 'pathToYourKeystoreFile'
jarsignerKeystorePassword: '$(jarsignerKeystorePassword)'
jarsignerKeystoreAlias: 'yourKeystoreAlias'
jarsignerKeyPassword: '$(jarsignerKeyPassword)'
zipalign: true
Create a Bash task and copy and paste the code below to install and run the emulator. Adjust the emulator
parameters to fit your testing environment. The emulator starts as a background process and is available in
subsequent tasks.
#!/usr/bin/env bash
# Create emulator
echo "no" | $ANDROID_HOME/tools/bin/avdmanager create avd -n xamarin_android_emulator -k 'system-
images;android-27;google_apis;x86' --force
$ANDROID_HOME/emulator/emulator -list-avds
$ANDROID_HOME/platform-tools/adb devices
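# The lines above only create the emulator image. A sketch of starting it in the background
# and waiting for it to boot (adjust the AVD name and options for your environment):
nohup $ANDROID_HOME/emulator/emulator -avd xamarin_android_emulator -no-snapshot > /dev/null 2>&1 &
$ANDROID_HOME/platform-tools/adb wait-for-device shell 'while [[ -z $(getprop sys.boot_completed) ]]; do sleep 1; done;'
echo "Emulator started"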
- task: CopyFiles@2
inputs:
contents: '**/*.apk'
targetFolder: '$(build.artifactStagingDirectory)'
- task: PublishBuildArtifacts@1
Deploy
App Center
Add the App Center Distribute task to distribute an app to a group of testers or beta users, or promote the app to
Intune or Google Play. A free App Center account is required (no payment is necessary).
# App Center distribute
# Distribute app builds to testers and users via Visual Studio App Center
- task: AppCenterDistribute@1
inputs:
serverEndpoint:
appSlug:
appFile:
#symbolsOption: 'Apple' # Optional. Options: apple
#symbolsPath: # Optional
#symbolsPdbFiles: '**/*.pdb' # Optional
#symbolsDsymFiles: # Optional
#symbolsMappingTxtFile: # Optional
#symbolsIncludeParentDirectory: # Optional
#releaseNotesOption: 'input' # Options: input, file
#releaseNotesInput: # Required when releaseNotesOption == Input
#releaseNotesFile: # Required when releaseNotesOption == File
#isMandatory: false # Optional
#distributionGroupId: # Optional
Google Play
Install the Google Play extension and use the following tasks to automate interaction with Google Play. By default,
these tasks authenticate to Google Play using a service connection that you configure.
Release
Add the Google Play Release task to release a new Android app version to the Google Play store.
- task: GooglePlayRelease@2
inputs:
apkFile: '**/*.apk'
serviceEndpoint: 'yourGooglePlayServiceConnectionName'
track: 'internal'
Promote
Add the Google Play Promote task to promote a previously-released Android app update from one track to
another, such as alpha → beta .
- task: GooglePlayPromote@2
inputs:
packageName: 'com.yourCompany.appPackageName'
serviceEndpoint: 'yourGooglePlayServiceConnectionName'
sourceTrack: 'internal'
destinationTrack: 'alpha'
Increase rollout
Add the Google Play Increase Rollout task to increase the rollout percentage of an app that was previously
released to the rollout track.
- task: GooglePlayIncreaseRollout@1
inputs:
packageName: 'com.yourCompany.appPackageName'
serviceEndpoint: 'yourGooglePlayServiceConnectionName'
userFraction: '0.5' # 0.0 to 1.0 (0% to 100%)
Related extensions
Codified Security (Codified Security)
Google Play (Microsoft)
Mobile App Tasks for iOS and Android (James Montemagno)
Mobile Testing Lab (Perfecto Mobile)
React Native (Microsoft)
Build and test Go projects
11/2/2020 • 5 minutes to read
Azure Pipelines
Use a pipeline to automatically build and test your Go projects.
https://github.com/MicrosoftDocs/pipelines-go
7. When your new pipeline appears, take a look at the YAML to see what it does. When you're ready, select
Save and run .
8. You're prompted to commit a new azure-pipelines.yml file to your repository. After you're happy with the
message, select Save and run again.
If you want to watch your pipeline in action, select the build job.
You just created and ran a pipeline that we automatically created for you, because your code appeared
to be a good match for the Go template.
You now have a working YAML pipeline ( azure-pipelines.yml ) in your repository that's ready for you to
customize!
9. When you're ready to make changes to your pipeline, select it in the Pipelines page, and then Edit the
azure-pipelines.yml file.
See the sections below to learn some of the more common ways to customize your pipeline.
TIP
To make changes to the YAML file as described in this topic, select the pipeline in Pipelines page, and then select Edit to
open an editor for the azure-pipelines.yml file.
Build environment
You can use Azure Pipelines to build your Go projects without needing to set up any infrastructure of your own.
You can use Linux, macOS, or Windows agents to run your builds.
Update the following snippet in your azure-pipelines.yml file to select the appropriate image.
pool:
vmImage: 'ubuntu-latest'
Modern versions of Go are pre-installed on Microsoft-hosted agents in Azure Pipelines. For the exact versions of
Go that are pre-installed, refer to Microsoft-hosted agents.
Set up Go
Go 1.11+
Go < 1.11
Starting with Go 1.11, you no longer need to define a $GOPATH environment, set up a workspace layout, or use
the dep module. Dependency management is now built-in.
This YAML uses the go get command to download Go packages and their dependencies. It then uses
go build to generate the content that is published with the PublishBuildArtifacts@1 task.
trigger:
- master
pool:
vmImage: 'ubuntu-latest'
steps:
- task: GoTool@0
inputs:
version: '1.13.5'
- task: Go@0
inputs:
command: 'get'
arguments: '-d'
workingDirectory: '$(System.DefaultWorkingDirectory)'
- task: Go@0
inputs:
command: 'build'
workingDirectory: '$(System.DefaultWorkingDirectory)'
- task: CopyFiles@2
inputs:
TargetFolder: '$(Build.ArtifactStagingDirectory)'
- task: PublishBuildArtifacts@1
inputs:
artifactName: drop
Build
Use go build to build your Go project. Add the following snippet to your azure-pipelines.yml file:
- task: Go@0
inputs:
command: 'build'
workingDirectory: '$(System.DefaultWorkingDirectory)'
Test
Use go test to test your Go module and its subdirectories ( ./... ). Add the following snippet to your
azure-pipelines.yml file:
- task: Go@0
inputs:
command: 'test'
arguments: '-v'
workingDirectory: '$(modulePath)'
Related extensions
Go extension for Visual Studio Code (Microsoft)
Build and test PHP apps
11/2/2020 • 2 minutes to read
Azure Pipelines
Use a pipeline to automatically build and test your PHP projects.
https://github.com/MicrosoftDocs/pipelines-php
The sample code includes an azure-pipelines.yml file at the root of the repository. You can use this file to build the
project.
Follow all the instructions in Create your first pipeline to create a build pipeline for the sample project.
See the sections below to learn some of the more common ways to customize your pipeline.
Build environment
You can use Azure Pipelines to build your PHP projects without needing to set up any infrastructure of your own.
PHP is preinstalled on Microsoft-hosted agents in Azure Pipelines, along with many common libraries per PHP
version. You can use Linux, macOS, or Windows agents to run your builds.
For the exact versions of PHP that are preinstalled, refer to Microsoft-hosted agents.
Use a specific PHP version
On the Microsoft-hosted Ubuntu agent, multiple versions of PHP are installed. A symlink at /usr/bin/php points to
the currently set PHP version, so that when you run php , the set version executes. To use a PHP version other than
the default, the symlink can be pointed to that version using the update-alternatives tool. Set the PHP version
that you prefer by adding the following snippet to your azure-pipelines.yml file and changing the value of the
phpVersion variable accordingly.
# https://docs.microsoft.com/azure/devops/pipelines/ecosystems/php
pool:
vmImage: 'ubuntu-16.04'
variables:
phpVersion: 7.2
steps:
- script: |
sudo update-alternatives --set php /usr/bin/php$(phpVersion)
sudo update-alternatives --set phar /usr/bin/phar$(phpVersion)
sudo update-alternatives --set phpdbg /usr/bin/phpdbg$(phpVersion)
sudo update-alternatives --set php-cgi /usr/bin/php-cgi$(phpVersion)
sudo update-alternatives --set phar.phar /usr/bin/phar.phar$(phpVersion)
php -version
displayName: 'Use PHP version $(phpVersion)'
Install dependencies
To use Composer to install dependencies, add the following snippet to your azure-pipelines.yml file.
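A step along these lines is typical (the flags shown are common, not required):
- script: composer install --no-interaction --prefer-dist
  displayName: 'composer install'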
- script: ./phpunit
displayName: 'Run tests with phpunit'
- task: ArchiveFiles@2
inputs:
rootFolderOrFile: '$(system.defaultWorkingDirectory)'
includeRootFolder: false
- task: PublishBuildArtifacts@1
You can also specify the absolute path, using the built-in system variables:
composer install --no-interaction --working-dir='$(system.defaultWorkingDirectory)/pkgs'
Azure Pipelines
A web app is a lightweight way to host a web application. In this step-by-step guide you'll learn how to create a
pipeline that continuously builds and deploys your PHP app. Your team can then automatically build each commit
in GitHub, and if you want, automatically deploy the change to an Azure App Service. You can use whichever
runtime you prefer: PHP|5.6 or PHP|7.0.
Prerequisites
A GitHub account, where you can create a repository. If you don't have one, you can create one for free.
An Azure DevOps organization. If you don't have one, you can create one for free. (An Azure DevOps
organization is different from your GitHub organization. Give them the same name if you want alignment
between them.)
If your team already has one, then make sure you're an administrator of the Azure DevOps project that you
want to use.
An Azure account. If you don't have one, you can create one for free.
TIP
If you're new at this, the easiest way to get started is to use the same email address as the owner of both the Azure
Pipelines organization and the Azure subscription.
https://github.com/Azure-Samples/php-docs-hello-world
# Create an App Service from the plan with PHP as the runtime
az webapp create -g myapp-rg -p myapp-service-plan -n my-app-name --runtime "PHP|7.0"
Sign in to Azure Pipelines and connect to Azure
Sign in to Azure Pipelines. After you sign in, your browser goes to https://dev.azure.com/my-organization-name and
displays your Azure DevOps dashboard.
Within your selected organization, create a project. If you don't have any projects in your organization, you see a
Create a project to get started screen. Otherwise, select the Create Project button in the upper-right corner of
the dashboard.
Now create the service connection:
1. From your project dashboard, select Project settings on the bottom left.
2. On the settings page, select Pipelines > Service connections, select New service connection, and then
select Azure Resource Manager .
3. The Add an Azure Resource Manager service connection dialog box appears.
Name : Type a name, and then copy and paste it into a text file so you can use it later.
Scope : Select Subscription.
Subscription : Select the subscription in which you created the App Service.
Resource Group : Select the resource group you created earlier.
Select Allow all pipelines to use this connection .
TIP
If you need to create a connection to an Azure subscription that's owned by someone else, see Create an Azure Resource
Manager service connection with an existing service principal.
You just created and ran a pipeline that we automatically created for you, because your code appeared to be a
good match for the PHP Azure Pipelines template.
# ...
- task: PublishBuildArtifacts@1
displayName: Publish Artifact
inputs:
PathtoPublish: $(build.artifactstagingdirectory)
- task: AzureWebApp@1
displayName: Azure Web App Deploy
inputs:
azureSubscription: $(serviceConnectionToAzure)
appType: webAppLinux
appName: $(appName)
package: $(build.artifactstagingdirectory)/**/*.zip
Clean up resources
Whenever you're done with the resources you created above, you can use the following command to delete them:
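az group delete --name myapp-rg  # the resource group used in the examples above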
Azure Pipelines
This guidance explains how to automatically build Ruby projects.
Get started
Follow these instructions to set up a pipeline for a Ruby app.
1. The code in the following repository is a simple Ruby app. To get started, fork this repo to your GitHub
account.
https://github.com/MicrosoftDocs/pipelines-ruby
TIP
To make changes to the YAML file as described in this topic, select the pipeline in the Pipelines page, and then Edit the
azure-pipelines.yml file.
Build environment
You can use Azure Pipelines to build your Ruby projects without needing to set up any infrastructure of your own.
Ruby is preinstalled on Microsoft-hosted agents in Azure Pipelines. You can use Linux, macOS, or Windows agents
to run your builds.
For the exact versions of Ruby that are preinstalled, refer to Microsoft-hosted agents. To install a specific version of
Ruby on Microsoft-hosted agents, add the Use Ruby Version task to the beginning of your pipeline.
Use a specific Ruby version
Add the Use Ruby Version task to set the version of Ruby used in your pipeline. This snippet adds Ruby 2.4 or later
to the path and sets subsequent pipeline tasks to use it.
# https://docs.microsoft.com/azure/devops/pipelines/ecosystems/ruby
pool:
vmImage: 'ubuntu-16.04' # other options: 'macOS-10.14', 'vs2017-win2016'
steps:
- task: UseRubyVersion@0
inputs:
versionSpec: '>= 2.4'
addToPath: true
Install Rails
To install Rails, add the following snippet to your azure-pipelines.yml file.
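A minimal sketch of such a step (the version check simply confirms the install):
- script: gem install rails && rails -v
  displayName: 'gem install rails'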
Install dependencies
To use Bundler to install dependencies, add the following snippet to your azure-pipelines.yml file.
- script: |
CALL gem install bundler
bundle install --retry=3 --jobs=4
displayName: 'bundle install'
Run Rake
To execute Rake in the context of the current bundle (as defined in your Gemfile), add the following snippet to your
azure-pipelines.yml file.
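A minimal sketch of such a step:
- script: bundle exec rake
  displayName: 'bundle exec rake'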
- task: PublishTestResults@2
condition: succeededOrFailed()
inputs:
testResultsFiles: '**/test-*.xml'
testRunTitle: 'Ruby tests'
Azure Pipelines
Get started with Xamarin and Azure Pipelines by building a pipeline to deploy a Xamarin app. You can
deploy Android and iOS apps in the same or separate pipelines.
Prerequisites
Before you begin, you need:
An Azure account with an active subscription. Create an account for free.
An active Azure DevOps organization. Sign up for Azure Pipelines.
Get code
Fork this repo in GitHub:
https://github.com/MicrosoftDocs/pipelines-xamarin
When the Configure tab appears, select Xamarin.Android to build an Android project or Xamarin.iOS to
build an iOS project.
7. When your new pipeline appears, take a look at the YAML to see what it does. When you're ready, select
Save and run .
8. You're prompted to commit a new azure-pipelines.yml file to your repository. After you're happy with the
message, select Save and run again.
If you want to watch your pipeline in action, select the build job. You now have a working YAML pipeline (
azure-pipelines.yml ) in your repository that's ready for you to customize!
9. When you're ready to make changes to your pipeline, select it in the Pipelines page, and then Edit the
azure-pipelines.yml file.
10. See the sections below to learn some of the more common ways to customize your pipeline.
# https://docs.microsoft.com/azure/devops/pipelines/ecosystems/xamarin
pool:
vmImage: 'macOS-10.15' # For Windows, use 'windows-2019'
steps:
- task: NuGetToolInstaller@0
- task: NuGetCommand@2
inputs:
restoreSolution: '**/*.sln'
- task: XamarinAndroid@1
inputs:
projectFile: '**/*Droid*.csproj'
outputDirectory: '$(outputDirectory)'
configuration: '$(buildConfiguration)'
variables:
buildConfiguration: 'Release'
steps:
- task: XamariniOS@2
inputs:
solutionFile: '**/*iOS.csproj'
configuration: '$(buildConfiguration)'
packageApp: false
buildForSimulator: true
- task: XamariniOS@2
inputs:
solutionFile: '**/*iOS.csproj'
configuration: 'AppStore'
packageApp: true
TIP
The Xamarin.iOS build task only generates an .ipa package if the agent running the job has the appropriate provisioning
profile and Apple certificate installed. If you enable the packageApp option and the agent doesn't have the appropriate
Apple provisioning profile (.mobileprovision) and Apple certificate (.p12), the build may report success but no .ipa
will be generated.
For Microsoft Hosted agents the .ipa package is by default located under path:
{iOS.csproj root}/bin/{Configuration}/{iPhone/iPhoneSimulator}/
You can configure the output path by adding an argument to the Xamarin.iOS task:
YAML
Classic
- task: XamariniOS@2
inputs:
solutionFile: '**/*iOS.csproj'
configuration: 'AppStore'
packageApp: true
args: /p:IpaPackageDir="/Users/vsts/agent/2.153.2/work/1/a"
This example locates the .ipa in the Build Artifact Staging Directory, ready to be pushed into Azure DevOps as an
artifact for each build run. To push it into Azure DevOps, add a Publish Artifact task to the end of your
pipeline.
For more information about signing and provisioning your iOS app, see Sign your mobile iOS app during CI.
Set the Xamarin SDK version on macOS
To set a specific Xamarin SDK version to use on the Microsoft-hosted macOS agent pool, add the following snippet
before the XamariniOS task in your azure-pipelines.yml file. For details on properly formatting the version
number (shown as 5_4_1 below), see How can I manually select versions of tools on the Hosted macOS agent?.
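A sketch of such a step, assuming the select-xamarin-sdk.sh helper script that Microsoft-hosted macOS agents provide (verify the path on the image you use):
- script: sudo $AGENT_HOMEDIRECTORY/scripts/select-xamarin-sdk.sh 5_4_1
  displayName: 'Select Xamarin SDK version'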
- job: iOS
pool:
vmImage: 'macOS-10.14'
variables:
buildConfiguration: 'Release'
steps:
- task: NuGetToolInstaller@0
- task: NuGetCommand@2
inputs:
restoreSolution: '**/*.sln'
- task: XamariniOS@2
inputs:
solutionFile: '**/*iOS.csproj'
configuration: '$(buildConfiguration)'
buildForSimulator: true
packageApp: false
Clean up resources
If you don't need the example code, delete your GitHub repository and Azure Pipelines project.
Next steps
Learn more about using Xcode in pipelines
Learn more about using Android in pipelines
Build, test, and deploy Xcode apps
2/26/2020 • 6 minutes to read
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
This guidance explains how to automatically build Xcode projects.
Example
For a working example of how to build an app with Xcode, import (into Azure Repos or TFS) or fork (into GitHub)
this repo:
https://github.com/MicrosoftDocs/pipelines-xcode
The sample code includes an azure-pipelines.yml file at the root of the repository. You can use this file to build
the app.
Follow all the instructions in Create your first pipeline to create a build pipeline for the sample app.
Build environment
You can use Azure Pipelines to build your apps with Xcode without needing to set up any infrastructure of your
own. Xcode is preinstalled on Microsoft-hosted macOS agents in Azure Pipelines. You can use the macOS agents
to run your builds.
For the exact versions of Xcode that are preinstalled, refer to Microsoft-hosted agents.
Create a file named azure-pipelines.yml in the root of your repository. Then, add the following snippet to your
azure-pipelines.yml file to select the appropriate agent pool:
# https://docs.microsoft.com/azure/devops/pipelines/ecosystems/xcode
pool:
vmImage: 'macOS-10.14'
steps:
- task: Xcode@5
inputs:
sdk: '$(sdk)'
scheme: '$(scheme)'
configuration: '$(configuration)'
xcodeVersion: 'default' # Options: default, 10, 9, 8, specifyPath
exportPath: '$(agent.buildDirectory)/output/$(sdk)/$(configuration)'
packageApp: false
# The `certSecureFile` and `provProfileSecureFile` files are uploaded to the Azure Pipelines secure files
# library, where they are encrypted.
# The `P12Password` variable is set in the Azure Pipelines pipeline editor and marked 'secret' to be
# encrypted.
steps:
- task: InstallAppleCertificate@2
inputs:
certSecureFile: 'chrisid_iOSDev_Nov2018.p12'
certPwd: $(P12Password)
- task: InstallAppleProvisioningProfile@1
inputs:
provProfileSecureFile: '6ffac825-ed27-47d0-8134-95fcf37a666c.mobileprovision'
- task: Xcode@5
inputs:
actions: 'build'
scheme: ''
sdk: 'iphoneos'
configuration: 'Release'
xcWorkspacePath: '**/*.xcodeproj/project.xcworkspace'
xcodeVersion: 'default' # Options: 8, 9, 10, default, specifyPath
signingOption: 'default' # Options: nosign, default, manual, auto
useXcpretty: 'false' # Makes it easier to diagnose build failures
CocoaPods
If your project uses CocoaPods, you can run CocoaPods commands in your pipeline using a script, or with the
CocoaPods task. The task optionally runs pod repo update , then runs pod install , and allows you to set a custom
project directory. Following are common examples of using both.
- script: /usr/local/bin/pod install
displayName: 'pod install using a script'
- task: CocoaPods@0
displayName: 'pod install using the CocoaPods task with defaults'
- task: CocoaPods@0
inputs:
forceRepoUpdate: true
projectDirectory: '$(system.defaultWorkingDirectory)'
displayName: 'pod install using the CocoaPods task with a forced repo update and a custom project directory'
Carthage
If your project uses Carthage with a private Carthage repository, you can set up authentication by setting an
environment variable named GITHUB_ACCESS_TOKEN with a value of a token that has access to the repository.
Carthage will automatically detect and use this environment variable.
Do not add the secret token directly to your pipeline YAML. Instead, create a new pipeline variable with its lock
enabled on the Variables pane to encrypt this value. See secret variables.
Here is an example that uses a secret variable named myGitHubAccessToken for the value of the
GITHUB_ACCESS_TOKEN environment variable.
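A sketch of such a step (the carthage command shown is illustrative; use whichever command your project needs):
- script: carthage update --platform iOS
  env:
    GITHUB_ACCESS_TOKEN: $(myGitHubAccessToken)
  displayName: 'carthage update with a GitHub access token'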
- task: CopyFiles@2
inputs:
contents: '**/*.ipa'
targetFolder: '$(build.artifactStagingDirectory)'
- task: PublishBuildArtifacts@1
Deploy
App Center
Add the App Center Distribute task to distribute an app to a group of testers or beta users, or promote the app to
Intune or the Apple App Store. A free App Center account is required (no payment is necessary).
# App Center distribute
# Distribute app builds to testers and users via Visual Studio App Center
- task: AppCenterDistribute@1
inputs:
serverEndpoint:
appSlug:
appFile:
#symbolsOption: 'Apple' # Optional. Options: apple
#symbolsPath: # Optional
#symbolsPdbFiles: '**/*.pdb' # Optional
#symbolsDsymFiles: # Optional
#symbolsMappingTxtFile: # Optional
#symbolsIncludeParentDirectory: # Optional
#releaseNotesOption: 'input' # Options: input, file
#releaseNotesInput: # Required when releaseNotesOption == Input
#releaseNotesFile: # Required when releaseNotesOption == File
#isMandatory: false # Optional
#distributionGroupId: # Optional
- task: AppStoreRelease@1
displayName: 'Publish to the App Store TestFlight track'
inputs:
serviceEndpoint: 'My Apple App Store service connection' # This service connection must be added by you
appIdentifier: com.yourorganization.testapplication.etc
ipaPath: '$(build.artifactstagingdirectory)/**/*.ipa'
shouldSkipWaitingForProcessing: true
shouldSkipSubmission: true
Promote
Add the App Store Promote task to automate the promotion of a previously submitted app from iTunes Connect to
the App Store.
- task: AppStorePromote@1
displayName: 'Submit to the App Store for review'
inputs:
serviceEndpoint: 'My Apple App Store service connection' # This service connection must be added by you
appIdentifier: com.yourorganization.testapplication.etc
shouldAutoRelease: false
Related extensions
Apple App Store (Microsoft)
Codified Security (Codified Security)
MacinCloud (Moboware Inc.)
Mobile App Tasks for iOS and Android (James Montemagno)
Mobile Testing Lab (Perfecto Mobile)
Raygun (Raygun)
React Native (Microsoft)
Version Setter (Tom Gilder)
Quickstart: trigger a pipeline run from GitHub Actions
Prerequisites
A working Azure pipeline. Create your first pipeline.
A GitHub account with a repository. Join GitHub and create a repository.
An Azure DevOps personal access token (PAT) to use with your GitHub action. Create a PAT.
on:
push:
branches: [ master ]
pull_request:
branches: [ master ]
5. Copy this workflow and replace the contents of your GitHub Actions workflow file. Customize the
azure-devops-project-url and azure-pipeline-name values. Your complete workflow should look like this.
name: CI
on:
push:
branches: [ master ]
pull_request:
branches: [ master ]
jobs:
build:
name: Call Azure Pipeline
runs-on: ubuntu-latest
steps:
- name: Azure Pipelines Action
uses: Azure/pipelines@v1
with:
azure-devops-project-url: https://dev.azure.com/organization/project-name
azure-pipeline-name: 'My Pipeline'
azure-devops-token: ${{ secrets.AZURE_DEVOPS_TOKEN }}
6. On the Actions page, verify that your workflow ran. Select the workflow title to see more information about
the run. You should see a green check mark for the Azure Pipelines Action. Open the Action to see a direct
link to the pipeline run.
Clean up resources
If you're not going to continue to use the GitHub Action, delete the workflow with the following steps:
1. Open .github/workflows in your GitHub repository.
2. Open the workflow you created and select Delete .
Next steps
Learn how to connect to the Azure environment and deploy to Azure with GitHub.
Deploy to Azure using GitHub Actions
Build multiple branches
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
You can build every commit and pull request to your Git repository using Azure Pipelines or TFS. In this tutorial, we
will discuss additional considerations when building multiple branches in your Git repository. You will learn how to:
Set up a CI trigger for topic branches
Automatically build a change in a topic branch
Exclude or include tasks for builds based on the branch being built
Keep code quality high by building pull requests
Use retention policies to clean up completed builds
Prerequisites
You need a Git repository in Azure Pipelines, TFS, or GitHub with your app. If you do not have one, we
recommend importing the sample .NET Core app into your Azure Pipelines or TFS project, or forking it into
your GitHub repository. Note that you must use Azure Pipelines to build a GitHub repository. You cannot use
TFS.
You also need a working build for your repository.
To set up a CI trigger for topic branches, edit the azure-pipelines.yml file in your master branch and include the branch patterns in the trigger. For example:
trigger:
- master
- feature/*
Exclude or include tasks for builds based on the branch being built
The master branch typically produces deployable artifacts such as binaries. You do not need to spend time creating
and storing those artifacts for short-lived feature branches. You can implement custom conditions in Azure Pipelines or
TFS so that certain tasks execute only on your master branch during a build run. You can use a single build with
multiple branches and skip or perform certain tasks based on conditions.
YAML
Classic
Edit the azure-pipelines.yml file in your master branch, locate a task in your YAML file, and add a condition to it.
For example, the following snippet adds a condition to the publish build artifacts task.
- task: PublishBuildArtifacts@1
condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
Next steps
In this tutorial, you learned how to manage CI for multiple branches in your Git repositories using Azure Pipelines
or TFS.
You learned how to:
Set up a CI trigger for topic branches
Automatically build a change in a topic branch
Exclude or include tasks for builds based on the branch being built
Keep code quality high by building pull requests
Use retention policies to clean up completed builds
Create a multi-platform pipeline
Azure Pipelines
This is a step-by-step guide to using Azure Pipelines to build on macOS, Linux, and Windows.
Prerequisites
A GitHub account, where you can create a repository. If you don't have one, you can create one for free.
An Azure DevOps organization. If you don't have one, you can create one for free. (An Azure DevOps
organization is different from your GitHub organization. Give them the same name if you want alignment
between them.)
If your team already has one, then make sure you're an administrator of the Azure DevOps project that you
want to use.
Add a pipeline
In the sample repo, there's no pipeline yet. You're going to add jobs that run on three platforms.
1. Go to your fork of the sample code on GitHub.
2. Choose 'Create new file'. Name the file azure-pipelines.yml , and give it the contents below.
# Build NodeJS Express app using Azure Pipelines
# https://docs.microsoft.com/azure/devops/pipelines/ecosystems/javascript?view=azure-devops
strategy:
matrix:
linux:
imageName: 'ubuntu-16.04'
mac:
imageName: 'macos-10.14'
windows:
imageName: 'vs2017-win2016'
pool:
vmImage: $(imageName)
steps:
- task: NodeTool@0
inputs:
versionSpec: '8.x'
- script: |
npm install
npm test
- task: PublishTestResults@2
inputs:
testResultsFiles: '**/TEST-RESULTS.xml'
testRunTitle: 'Test results for JavaScript'
- task: PublishCodeCoverageResults@1
inputs:
codeCoverageTool: Cobertura
summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/*coverage.xml'
reportDirectory: '$(System.DefaultWorkingDirectory)/**/coverage'
- task: ArchiveFiles@2
inputs:
rootFolderOrFile: '$(System.DefaultWorkingDirectory)'
includeRootFolder: false
- task: PublishBuildArtifacts@1
FAQ
Can I build my multi-platform pipeline on both self-hosted and Microsoft-hosted agents?
You can, but you need to specify both a vmImage and a poolName variable, as in the following example. For the hosted
agent, specify Azure Pipelines as the pool name, and for self-hosted agents, leave the vmImage blank. The blank
vmImage for the self-hosted agent may result in some unusual entries in the logs, but they won't affect the pipeline.
strategy:
matrix:
microsofthosted:
poolName: Azure Pipelines
vmImage: ubuntu-latest
selfhosted:
poolName: FabrikamPool
vmImage:
pool:
name: $(poolName)
vmImage: $(vmImage)
steps:
- checkout: none
- script: echo test
Next steps
You've just learned the basics of using multiple platforms with Azure Pipelines. From here, you can learn more
about:
Jobs
Cross-platform scripting
Templates to remove the duplication
Building Node.js apps
Building .NET Core, Go, Java, or Python apps
For details about building GitHub repositories, see Build GitHub repositories.
Service containers
Azure Pipelines
If your pipeline requires the support of one or more services, in many cases you'll want to create, connect to, and
clean up each service on a per-job basis. For instance, a pipeline may run integration tests that require access to a
database and a memory cache. The database and memory cache need to be freshly created for each job in the
pipeline.
A container provides a simple and portable way to run a service that your pipeline depends on. A service container
enables you to automatically create, network, and manage the lifecycle of your containerized service. Each service
container is accessible by only the job that requires it. Service containers work with any kind of job, but they're
most commonly used with container jobs.
Requirements
Service containers must define a CMD or ENTRYPOINT . The pipeline will docker run the provided container without
additional arguments.
Azure Pipelines can run Linux or Windows Containers. Use either hosted Ubuntu for Linux containers, or the
Hosted Windows Container pool for Windows containers. (The Hosted macOS pool does not support running
containers.)
YAML
Classic
resources:
containers:
- container: my_container
image: ubuntu:16.04
- container: nginx
image: nginx
- container: redis
image: redis
pool:
vmImage: 'ubuntu-16.04'
container: my_container
services:
nginx: nginx
redis: redis
steps:
- script: |
apt install -y curl
curl nginx
apt install -y redis-tools
redis-cli -h redis ping
This pipeline fetches the latest nginx and redis containers from Docker Hub and then starts the containers. The
containers are networked together so that they can reach each other by their service names. The pipeline then
runs the apt , curl and redis-cli commands inside the ubuntu:16.04 container. From inside this job container,
the nginx and redis host names resolve to the correct services using Docker networking. All containers on the
network automatically expose all ports to each other.
Single job
You can also use service containers without a job container. A simple example:
resources:
containers:
- container: nginx
image: nginx
ports:
- 8080:80
env:
NGINX_PORT: 80
- container: redis
image: redis
ports:
- 6379
pool:
vmImage: 'ubuntu-16.04'
services:
nginx: nginx
redis: redis
steps:
- script: |
curl localhost:8080
redis-cli -p "${AGENT_SERVICES_REDIS_PORTS_6379}" ping
This pipeline starts the latest nginx and redis containers, and then publishes the specified ports to the host.
Since the job is not running in a container, there's no automatic name resolution. This example shows how you can
instead reach services by using localhost . In the above example we provide the port explicitly (for example,
8080:80 ).
An alternative approach is to let a random port get assigned dynamically at runtime. You can then access these
dynamic ports by using variables. In a Bash script, you can access a variable by using the process environment.
These variables take the form: agent.services.<serviceName>.ports.<port> . In the above example, redis is
assigned a random available port on the host. The agent.services.redis.ports.6379 variable contains the port
number.
Multiple jobs
Service containers are also useful for running the same steps against multiple versions of the same service. In the
following example, the same steps run against multiple versions of PostgreSQL.
resources:
containers:
- container: my_container
image: ubuntu:16.04
- container: pg11
image: postgres:11
- container: pg10
image: postgres:10
pool:
vmImage: 'ubuntu-16.04'
strategy:
matrix:
postgres11:
postgresService: pg11
postgres10:
postgresService: pg10
container: my_container
services:
postgres: $[ variables['postgresService'] ]
steps:
- script: |
apt install -y postgresql-client
psql --host=postgres --username=postgres --command="SELECT 1;"
Ports
When specifying a container resource or an inline container, you can specify an array of ports to expose on the
container.
resources:
containers:
- container: my_service
image: my_service:latest
ports:
- 8080:80
- 5432
services:
redis:
image: redis
ports:
- 6379/tcp
Specifying ports is not required if your job is running in a container because containers on the same Docker
network automatically expose all ports to each other by default.
If your job is running on the host, then ports are required to access the service. A port takes the form
<hostPort>:<containerPort> or just <containerPort> , with an optional /<protocol> at the end, for example
6379/tcp to expose tcp over port 6379 , bound to a random port on the host machine.
For ports bound to a random port on the host machine, the pipeline creates a variable of the form
agent.services.<serviceName>.ports.<port> so that it can be accessed by the job. For example,
agent.services.redis.ports.6379 resolves to the randomly assigned port on the host machine.
Volumes
Volumes are useful for sharing data between services, or for persisting data between multiple runs of a job.
You can specify volume mounts as an array of volumes . Volumes can either be named Docker volumes,
anonymous Docker volumes, or bind mounts on the host.
services:
my_service:
image: myservice:latest
volumes:
- mydockervolume:/data/dir
- /data/dir
- /src/dir:/dst/dir
Volumes take the form <source>:<destinationPath> , where <source> can be a named volume or an absolute path
on the host machine, and <destinationPath> is an absolute path in the container.
NOTE
If you use our hosted pools, then your volumes will not be persisted between jobs because the host machine is cleaned up
after the job is completed.
Other options
Service containers share the same container resources as container jobs. This means that you can use the same
additional options.
Healthcheck
Optionally, if any service container specifies a HEALTHCHECK, the agent waits until the container is healthy before
running the job.
Run cross-platform scripts
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
With Azure Pipelines and Team Foundation Server (TFS), you can run your builds on macOS, Linux, and Windows.
If you develop on cross-platform technologies such as Node.js and Python, these capabilities bring benefits, and
also some challenges. For example, most pipelines include one or more scripts that you want to run during the
build process. But scripts often don't run the same way on different platforms. Below are some tips on how to
handle this kind of challenge.
steps:
- script: |
npm install
npm test
steps:
- script: echo This is pipeline $(System.DefinitionId)
variables:
Example: 'myValue'
steps:
- script: echo The value passed in is $(Example)
Consider Bash or pwsh
If you have more complex scripting needs than the examples shown above, then consider writing them in Bash.
Most macOS and Linux agents have Bash as an available shell, and Windows agents include Git Bash or Windows
Subsystem for Linux Bash.
For Azure Pipelines, the Microsoft-hosted agents always have Bash available.
For example, if you need to make a decision based on whether this is a pull request build:
YAML
Classic
trigger:
batch: true
branches:
include:
- master
steps:
- bash: |
echo "Hello world from $AGENT_NAME running on $AGENT_OS"
case $BUILD_REASON in
"Manual") echo "$BUILD_REQUESTEDFOR manually queued the build." ;;
"IndividualCI") echo "This is a CI build for $BUILD_REQUESTEDFOR." ;;
"BatchedCI") echo "This is a batched CI build for $BUILD_REQUESTEDFOR." ;;
*) echo "$BUILD_REASON" ;;
esac
displayName: Hello world
PowerShell Core ( pwsh ) is also an option. It requires each agent to have PowerShell Core installed.
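For example, a minimal pwsh step (a sketch, not tied to any particular project) looks like this:
steps:
- pwsh: |
    # Runs in PowerShell Core on Windows, macOS, or Linux agents
    Write-Host "Running on $env:AGENT_OS with PowerShell $($PSVersionTable.PSVersion)"
  displayName: 'Hello from PowerShell Core'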
Azure Pipelines | TFS 2018 | TFS 2017 | TFS 2015 | Previous versions (XAML builds)
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
When you are ready to move beyond the basics of compiling and testing your code, use a PowerShell script to add
your team's business logic to your build pipeline.
You can run Windows PowerShell on a Windows build agent. PowerShell Core runs on any platform.
1. Push your script into your repo.
2. Add a pwsh or powershell step:
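For example, a step that runs a checked-in script could look like the following sketch (the script path is hypothetical; use the path of the script you pushed):
- task: PowerShell@2
  inputs:
    filePath: './scripts/my-script.ps1'   # hypothetical path to your checked-in script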
- task: PowerShell@2
inputs:
targetType: inline
script: |
$url = "$($env:SYSTEM_TEAMFOUNDATIONCOLLECTIONURI)$env:SYSTEM_TEAMPROJECTID/_apis/build/definitions/$($env:SYSTEM_DEFINITIONID)?api-version=5.0"
Write-Host "URL: $url"
$pipeline = Invoke-RestMethod -Uri $url -Headers @{
Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN"
}
Write-Host "Pipeline = $($pipeline | ConvertTo-Json -Depth 100)"
FAQ
What variables are available for me to use in my scripts?
Use variables
How do I set a variable so that it can be read by subsequent scripts and tasks?
Define and modify your build variables in a script
Define and modify your release variables in a script
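As a minimal sketch of the approach those articles describe, a script can set a variable with the task.setvariable logging command, and later steps can then read it:
steps:
- bash: echo "##vso[task.setvariable variable=myVar]myValue"
  displayName: 'Set a variable from a script'
- script: echo myVar is now $(myVar)
  displayName: 'Read the variable in a later step'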
Which branch of the script does the build run?
The build runs the script from the same branch of the code you are building.
What kinds of parameters can I use?
You can use named parameters. Other kinds of parameters, such as switch parameters, are not yet supported and
will cause errors.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Run Git commands in a script
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
For some workflows you need your build pipeline to run Git commands. For example, after a CI build on a feature
branch is done, the team might want to merge the branch to master.
Git is available on Microsoft-hosted agents and on on-premises agents.
If you see this page, select the repo, and then click the link:
On the Version Control tab, select the repository in which you want to run Git commands, and then select Project
Collection Build Service . By default, this identity can read from the repo but cannot push any changes back to it.
Grant permissions needed for the Git commands you want to run. Typically you'll want to grant:
Create branch: Allow
Contribute: Allow
Read: Allow
Create tag: Allow
When you're done granting the permissions, make sure to click Save changes .
Enable your pipeline to run command-line Git
On the Variables tab, set this variable:
NAME VALUE
system.prefergit true
steps:
- checkout: self
persistCredentials: true
steps:
- checkout: self
clean: true
Examples
List the files in your repo
Make sure to follow the above steps to enable Git.
On the build tab add this task:
TASK ARGUMENTS
Tool: git
@echo off
ECHO SOURCE BRANCH IS %BUILD_SOURCEBRANCH%
IF %BUILD_SOURCEBRANCH% == refs/heads/master (
ECHO Building master branch so no merge is needed.
EXIT
)
SET sourceBranch=origin/%BUILD_SOURCEBRANCH:refs/heads/=%
ECHO GIT CHECKOUT MASTER
git checkout master
ECHO GIT STATUS
git status
ECHO GIT MERGE
git merge %sourceBranch% -m "Merge to master"
ECHO GIT STATUS
git status
ECHO GIT PUSH
git push origin
ECHO GIT STATUS
git status
TASK ARGUMENTS
Path : merge.bat
FAQ
Can I run Git commands if my remote repo is in GitHub or another Git service such as Bitbucket Cloud?
Yes
Which tasks can I use to run Git commands?
Batch Script
Command Line
PowerShell
Shell Script
How do I avoid triggering a CI build when the script pushes?
Add ***NO_CI*** to your commit message. Here are examples:
git commit -m "This is a commit message ***NO_CI***"
git merge origin/features/hello-world -m "Merge to master ***NO_CI***"
Add [skip ci] to your commit message or description. Here are examples:
git commit -m "This is a commit message [skip ci]"
git merge origin/features/hello-world -m "Merge to master [skip ci]"
You can also use any of the variations below. This is supported for commits to Azure Repos Git, Bitbucket Cloud,
GitHub, and GitHub Enterprise Server.
[skip ci] or [ci skip]
skip-checks: true or skip-checks:true
[skip azurepipelines] or [azurepipelines skip]
[skip azpipelines] or [azpipelines skip]
[skip azp] or [azp skip]
***NO_CI***
How does enabling scripts to run Git commands affect how the build pipeline gets build sources?
When you set system.prefergit to true , the build pipeline uses command-line Git instead of LibGit2Sharp to
clone or fetch the source files.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Pipeline caching
Pipeline caching can help reduce build time by allowing the outputs or downloaded dependencies from one run to
be reused in later runs, thereby reducing or avoiding the cost to recreate or redownload the same files again.
Caching is especially useful in scenarios where the same dependencies are downloaded over and over at the start
of each run. This is often a time consuming process involving hundreds or thousands of network calls.
Caching can be effective at improving build time provided the time to restore and save the cache is less than the
time to produce the output again from scratch. Because of this, caching may not be effective in all scenarios and
may actually have a negative impact on build time.
Caching is currently supported in CI and deployment jobs, but not classic release jobs.
When to use artifacts versus caching
Pipeline caching and pipeline artifacts perform similar functions but are designed for different scenarios and
should not be used interchangeably. In general:
Use pipeline ar tifacts when you need to take specific files produced in one job and share them with other
jobs (and these other jobs will likely fail without them).
Use pipeline caching when you want to improve build time by reusing files from previous runs (and not
having these files will not impact the job's ability to run).
NOTE
Caches are immutable, meaning that once a cache is created, its contents cannot be changed. See Can I clear a cache? in the
FAQ section for additional details.
TIP
To avoid a path-like string segment from being treated like a file path, wrap it with double quotes, for example:
"my.key" | $(Agent.OS) | key.file
File patterns :
comma-separated list of glob-style wildcard patterns that must match at least one file. For example:
**/yarn.lock : all yarn.lock files under the sources directory
*/asset.json, !bin/** : all asset.json files located in a directory under the sources directory, except under
the bin directory
The contents of any file identified by a file path or file pattern are hashed to produce a dynamic cache key. This is
useful when your project has file(s) that uniquely identify what is being cached. For example, files like
package-lock.json , yarn.lock , Gemfile.lock , or Pipfile.lock are commonly referenced in a cache key since they
all represent a unique set of dependencies.
Relative file paths or file patterns are resolved against $(System.DefaultWorkingDirectory) .
Example :
Here is an example showing how to cache dependencies installed by Yarn:
variables:
YARN_CACHE_FOLDER: $(Pipeline.Workspace)/.yarn
steps:
- task: Cache@2
inputs:
key: 'yarn | "$(Agent.OS)" | yarn.lock'
restoreKeys: |
yarn | "$(Agent.OS)"
yarn
path: $(YARN_CACHE_FOLDER)
displayName: Cache Yarn packages
In this example, the cache key contains three parts: a static string ("yarn"), the OS the job is running on since this
cache is unique per operating system, and the hash of the yarn.lock file that uniquely identifies the set of
dependencies in the cache.
On the first run after the task is added, the cache step will report a "cache miss" since the cache identified by this
key does not exist. After the last step, a cache will be created from the files in $(Pipeline.Workspace)/.yarn and
uploaded. On the next run, the cache step will report a "cache hit" and the contents of the cache will be downloaded
and restored.
Restore keys
restoreKeys can be used if you want to query against multiple exact keys or key prefixes. This is used to fall back
to another key in the case that a key does not yield a hit. A restore key searches for a key by prefix and yields the
latest created cache entry as a result. This is useful if the pipeline is unable to find an exact match but wants to use
a partial cache hit instead. To insert multiple restore keys, delimit them with a new line per restore key
(see the example for more details). Restore keys are tried in order from top to bottom.
Required software on self-hosted agent
ARCHIVE SOFTWARE / PLATFORM: WINDOWS | LINUX | MAC
7-Zip: Recommended | No | No
The above executables need to be in a folder listed in the PATH environment variable. Note that the hosted
agents come with this software included; this requirement applies only to self-hosted agents.
Example :
Here is an example on how to use restore keys by Yarn:
variables:
YARN_CACHE_FOLDER: $(Pipeline.Workspace)/.yarn
steps:
- task: Cache@2
inputs:
key: yarn | $(Agent.OS) | yarn.lock
path: $(YARN_CACHE_FOLDER)
restoreKeys: |
yarn | $(Agent.OS)
yarn
displayName: Cache Yarn packages
In this example, the cache task will attempt to find if the key exists in the cache. If the key does not exist in the
cache, it will try to use the first restore key yarn | $(Agent.OS) . This will attempt to search for all keys that either
exactly match that key or have that key as a prefix. A prefix hit can happen if there was a different yarn.lock hash
segment. For example, if the following key yarn | $(Agent.OS) | old-yarn.lock was in the cache where the old
yarn.lock yielded a different hash than yarn.lock , the restore key will yield a partial hit. If there is a miss on the
first restore key, it will then use the next restore key yarn which will try to find any key that starts with yarn . For
prefix hits, the result will yield the most recently created cache key as the result.
NOTE
A pipeline can have one or more caching task(s). There is no limit on the caching storage capacity, and jobs and tasks from
the same pipeline can access and share the same cache.
TIP
Because caches are already scoped to a project, pipeline, and branch, there is no need to include any project, pipeline, or
branch identifiers in the cache key.
If you set the cacheHitVar input on the Cache task, the named variable is set to 'true' when the cache is restored, and you can use it in conditions to skip steps that are only needed on a cache miss. For example:
- script: install-deps.sh
  condition: ne(variables.CACHE_RESTORED, 'true')
- script: build.sh
Bundler
For Ruby projects using Bundler, override the BUNDLE_PATH environment variable used by Bundler to set the path
Bundler will look for Gems in.
Example :
variables:
BUNDLE_PATH: $(Pipeline.Workspace)/.bundle
steps:
- task: Cache@2
inputs:
key: 'gems | "$(Agent.OS)" | my.gemspec'
restoreKeys: |
gems | "$(Agent.OS)"
gems
path: $(BUNDLE_PATH)
displayName: Cache gems
ccache (C/C++)
ccache is a compiler cache for C/C++. To use ccache in your pipeline make sure ccache is installed, and optionally
added to your PATH (see ccache run modes). Set the CCACHE_DIR environment variable to a path under
$(Pipeline.Workspace) and cache this directory.
Example :
variables:
CCACHE_DIR: $(Pipeline.Workspace)/ccache
steps:
- bash: |
sudo apt-get install ccache -y
echo "##vso[task.prependpath]/usr/lib/ccache"
displayName: Install ccache and update PATH to use linked versions of gcc, cc, etc
- task: Cache@2
inputs:
key: 'ccache | "$(Agent.OS)"'
path: $(CCACHE_DIR)
displayName: ccache
NOTE
In this example, the key is a fixed value (the OS name) and because caches are immutable, once a cache with this key is
created for a particular scope (branch), the cache cannot be updated. This means subsequent builds for the same branch will
not be able to update the cache even if the cache's contents have changed. This problem will be addressed in an upcoming
feature: 10842: Enable fallback keys in Pipeline Caching
See ccache configuration settings for more options, including settings to control compression level.
Gradle
Using Gradle's built-in caching support can have a significant impact on build time. To enable, set the
GRADLE_USER_HOME environment variable to a path under $(Pipeline.Workspace) and either pass --build-cache on
the command line or set org.gradle.caching=true in your gradle.properties file.
Example :
variables:
GRADLE_USER_HOME: $(Pipeline.Workspace)/.gradle
steps:
- task: Cache@2
inputs:
key: 'gradle | "$(Agent.OS)"'
restoreKeys: gradle
path: $(GRADLE_USER_HOME)
displayName: Gradle build cache
- script: |
./gradlew --build-cache build
# stop the Gradle daemon to ensure no files are left open (impacting the save cache operation later)
./gradlew --stop
displayName: Build
NOTE
In this example, the key is a fixed value (the OS name) and because caches are immutable, once a cache with this key is
created for a particular scope (branch), the cache cannot be updated. This means subsequent builds for the same branch will
not be able to update the cache even if the cache's contents have changed. This problem will be addressed in an upcoming
feature: 10842: Enable fallback keys in Pipeline Caching.
Maven
Maven has a local repository where it stores downloads and built artifacts. To enable, set the maven.repo.local
option to a path under $(Pipeline.Workspace) and cache this folder.
Example :
variables:
MAVEN_CACHE_FOLDER: $(Pipeline.Workspace)/.m2/repository
MAVEN_OPTS: '-Dmaven.repo.local=$(MAVEN_CACHE_FOLDER)'
steps:
- task: Cache@2
inputs:
key: 'maven | "$(Agent.OS)" | **/pom.xml'
restoreKeys: |
maven | "$(Agent.OS)"
maven
path: $(MAVEN_CACHE_FOLDER)
displayName: Cache Maven local repo
If you are using a Maven task, make sure to also pass the MAVEN_OPTS variable because it gets overwritten
otherwise:
- task: Maven@3
inputs:
mavenPomFile: 'pom.xml'
mavenOptions: '-Xmx3072m $(MAVEN_OPTS)'
.NET/NuGet
If you use PackageReferences to manage NuGet dependencies directly within your project file and have
packages.lock.json file(s), you can enable caching by setting the NUGET_PACKAGES environment variable to a path
under $(Pipeline.Workspace) and caching this directory.
Example :
variables:
NUGET_PACKAGES: $(Pipeline.Workspace)/.nuget/packages
steps:
- task: Cache@2
inputs:
key: 'nuget | "$(Agent.OS)" | **/packages.lock.json,!**/bin/**'
restoreKeys: |
nuget | "$(Agent.OS)"
path: $(NUGET_PACKAGES)
displayName: Cache NuGet packages
TIP
Environment variables always override any settings in the NuGet.Config file. If your pipeline fails with the error
Information, There is a cache miss. , create a pipeline variable for NUGET_PACKAGES that points to the new local
path on the agent (for example, d:\a\1). Your pipeline should then pick up the change and continue the task successfully.
Node.js/npm
There are different ways to enable caching in a Node.js project, but the recommended way is to cache npm's shared
cache directory. This directory is managed by npm and contains a cached version of all downloaded modules.
During install, npm checks this directory first (by default) for modules which can reduce or eliminate network calls
to the public npm registry or to a private registry.
Because the default path to npm's shared cache directory is not the same across all platforms, it is recommended
to override the npm_config_cache environment variable to a path under $(Pipeline.Workspace) . This also ensures
the cache is accessible from container and non-container jobs.
Example :
variables:
npm_config_cache: $(Pipeline.Workspace)/.npm
steps:
- task: Cache@2
inputs:
key: 'npm | "$(Agent.OS)" | package-lock.json'
restoreKeys: |
npm | "$(Agent.OS)"
path: $(npm_config_cache)
displayName: Cache npm
- script: npm ci
If your project does not have a package-lock.json file, reference the package.json file in the cache key input
instead.
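For example, the key input would then become (a sketch of the substitution described above):
key: 'npm | "$(Agent.OS)" | package.json'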
TIP
Because npm ci deletes the node_modules folder to ensure that a consistent, repeatable set of modules is used, you
should avoid caching node_modules when calling npm ci .
Node.js/Yarn
Like with npm, there are different ways to cache packages installed with Yarn. The recommended way is to cache
Yarn's shared cache folder. This directory is managed by Yarn and contains a cached version of all downloaded
packages. During install, Yarn checks this directory first (by default) for modules, which can reduce or eliminate
network calls to public or private registries.
Example :
variables:
YARN_CACHE_FOLDER: $(Pipeline.Workspace)/.yarn
steps:
- task: Cache@2
inputs:
key: 'yarn | "$(Agent.OS)" | yarn.lock'
restoreKeys: |
yarn | "$(Agent.OS)"
path: $(YARN_CACHE_FOLDER)
displayName: Cache Yarn packages
Python/pip
For Python projects that use pip or Poetry, override the PIP_CACHE_DIR environment variable. If you use Poetry, in
the key field, replace requirements.txt with poetry.lock .
Example
variables:
PIP_CACHE_DIR: $(Pipeline.Workspace)/.pip
steps:
- task: Cache@2
inputs:
key: 'python | "$(Agent.OS)" | requirements.txt'
restoreKeys: |
python | "$(Agent.OS)"
python
path: $(PIP_CACHE_DIR)
displayName: Cache pip packages
Python/Pipenv
For Python projects that use Pipenv, override the PIPENV_CACHE_DIR environment variable.
Example
variables:
PIPENV_CACHE_DIR: $(Pipeline.Workspace)/.pipenv
steps:
- task: Cache@2
inputs:
key: 'python | "$(Agent.OS)" | Pipfile.lock'
restoreKeys: |
python | "$(Agent.OS)"
python
path: $(PIPENV_CACHE_DIR)
displayName: Cache pipenv packages
PHP/Composer
For PHP projects using Composer, override the COMPOSER_CACHE_DIR environment variable used by Composer.
Example :
variables:
COMPOSER_CACHE_DIR: $(Pipeline.Workspace)/.composer
steps:
- task: Cache@2
inputs:
key: 'composer | "$(Agent.OS)" | composer.lock'
restoreKeys: |
composer | "$(Agent.OS)"
composer
path: $(COMPOSER_CACHE_DIR)
displayName: Cache composer
Docker images
pool:
vmImage: ubuntu-16.04
steps:
- task: Cache@2
inputs:
key: 'docker | "$(Agent.OS)" | caching-docker.yml'
path: $(Pipeline.Workspace)/docker
cacheHitVar: DOCKER_CACHE_RESTORED
displayName: Caching Docker image
- script: |
docker load $(Pipeline.Workspace)/docker/cache.tar
condition: and(not(canceled()), eq(variables.DOCKER_CACHE_RESTORED, 'true'))
- script: |
mkdir -p $(Pipeline.Workspace)/docker
docker pull ubuntu
docker save ubuntu > $(Pipeline.Workspace)/docker/cache.tar
condition: and(not(canceled()), or(failed(), ne(variables.DOCKER_CACHE_RESTORED, 'true')))
FAQ
Can I clear a cache?
Clearing a cache is currently not supported. However, you can add a string literal (such as version2 ) to your existing
cache key to change the key in a way that avoids any hits on existing caches. For example, change a cache key like this:
key: 'yarn | "$(Agent.OS)" | yarn.lock'
to this:
key: 'version2 | yarn | "$(Agent.OS)" | yarn.lock'
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
You can customize how your pipeline runs are numbered. The default value for run number is
$(Date:yyyyMMdd).$(Rev:r) .
YAML
Classic
In YAML, this property is called name and is at the root level of a pipeline. If not specified, your run is given a
unique integer as its name. You can give runs much more useful names that are meaningful to your team. You can
use a combination of tokens, variables, and underscore characters.
name: $(TeamProject)_$(Build.DefinitionName)_$(SourceBranchName)_$(Date:yyyyMMdd)$(Rev:.r)
steps:
- script: echo $(Build.BuildNumber) # outputs customized build number like project_def_master_20200828.1
Example
At the time a run is started:
Project name: Fabrikam
Pipeline name: CIBuild
Branch: master
Build ID/Run ID: 752
Date: May 5, 2019.
Time: 9:07:03 PM.
One run completed earlier today.
If you specify this build number format:
$(TeamProject)_$(Build.DefinitionName)_$(SourceBranchName)_$(Date:yyyyMMdd)$(Rev:.r)
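Then, using the example values listed above, the second run of the day would be numbered:
Fabrikam_CIBuild_master_20190505.2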
Tokens
The following table shows how each token is resolved based on the previous example. You can use these tokens
only to define a run number; they don't work anywhere else in your pipeline.
TOKEN EXAMPLE REPLACEMENT VALUE
$(Build.DefinitionName) CIBuild
$(BuildID) 752
$(DayOfMonth) 5
$(DayOfYear) 217
$(Hours) 21
$(Minutes) 7
$(Month) 8
$(Rev:r) 2 — If you want to show prefix zeros in the number, you can add
additional 'r' characters. For example, specify $(Rev:rr) if you
want the Rev number to begin with 01, 02, and so on.
$(Date:yyyyMMdd) 20090824
$(Seconds) 3
$(SourceBranchName) master
$(TeamProject) Fabrikam
$(Year:yy) 09
$(Year:yyyy) 2009
Variables
You can also use user-defined and predefined variables that have a scope of "All" in your number. For example, if
you've defined My.Variable , you could specify the following number format:
$(Build.DefinitionName)_$(Build.DefinitionVersion)_$(Build.RequestedFor)_$(Build.BuildId)_$(My.Variable)
The first four variables are predefined. My.Variable is defined by you on the variables tab.
FAQ
How large can a run number be?
Run numbers may be up to 255 characters.
In what time zone are the build number time values expressed?
For Azure Pipelines, the time zone is UTC.
For Azure DevOps Server and TFS, the time zone is the same as the time zone of the operating system of the machine
where you are running your application tier server.
How can you reference the run number variable within a script?
The run number variable can be called with $(Build.BuildNumber) . You can define a new variable that includes the
run number or call the run number directly. In this example, $(MyRunNumber) is a new variable that includes the run
number.
# Set MyRunNumber
variables:
MyRunNumber: '1.0.0-CI-$(Build.BuildNumber)'
steps:
- script: echo $(MyRunNumber) # display MyRunNumber
- script: echo $(Build.BuildNumber) #display Run Number
Build options
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
Q: What other work item fields can I set? A: Work item field index
Select the pool that contains the agents you want to run this pipeline.
TIP
If your code is in Azure Pipelines and you run your builds on Windows, in many cases the simplest option is to use the
Hosted pool.
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
This article describes commonly used terms used in pipeline test reports and test analytics.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
TERM DEFINITION
Flaky test: A test with non-deterministic behavior. For example, the test may result in different outcomes for the same configuration, code, or inputs.
Filter: Mechanism to search for the test results within the result set, using the available attributes. Learn more.
Pass percentage: Measure of the success of test outcome for a single instance of execution or over a period of time.
Test case: Uniquely identifies a single test within the specified branch.
Test files: Group tests based on the way they are packaged, such as files, DLLs, or other formats.
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
NOTE
For TFS, this topic applies to only TFS 2017 Update 1 and later.
Running tests to validate changes to code is key to maintaining quality. For continuous integration practice to be
successful, it is essential you have a good test suite that is run with every build. However, as the codebase grows,
the regression test suite tends to grow as well and running a full regression test can take a long time. Sometimes,
tests themselves may be long running - this is typically the case if you write end-to-end tests. This reduces the
speed with which customer value can be delivered as pipelines cannot process builds quickly enough.
Running tests in parallel is a great way to improve the efficiency of CI/CD pipelines. This can be done easily by
employing the additional capacity offered by the cloud. This article discusses how you can configure the Visual
Studio Test task to run tests in parallel by using multiple agents.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
Pre-requisite
Familiarize yourself with the concepts of agents and jobs. To run multiple jobs in parallel, you must configure
multiple agents. You also need sufficient parallel jobs.
Test slicing
The Visual Studio Test task (version 2) is designed to work seamlessly with parallel job settings. When a pipeline job
that contains the Visual Studio Test task (referred to as the "VSTest task" for simplicity) is configured to run on
multiple agents in parallel, it automatically detects that multiple agents are involved and creates test slices that can
be run in parallel across these agents.
The task can be configured to create test slices to suit different requirements such as batching based on the
number of tests and agents, the previous test running times, or the location of tests in assemblies.
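In YAML, this can look like the following sketch (assuming the VSTest@2 task's distributionBatchType input; adjust the assembly pattern for your own projects):
- task: VSTest@2
  inputs:
    testAssemblyVer2: '**\*Tests.dll'
    distributionBatchType: 'basedOnExecutionTime'   # create slices based on past running times of tests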
NOTE
To use the multi-agent capability in build pipelines with on-premises TFS server, you must use TFS 2018 Update 2 or a later
version.
1. Build job using a single agent . Build Visual Studio projects and publish build artifacts using the tasks
shown in the following image. This uses the default job settings (single agent, no parallel jobs).
2. Run tests in parallel using multiple agents :
Add an agent job
Configure the job to use multiple agents in parallel . The example here uses three agents.
TIP
For massively parallel testing, you can specify as many as 99 agents.
Add a Download Build Artifacts task to the job. This step is the link between the build job and the
test job, and is necessary to ensure that the binaries generated in the build job are available on the
agents used by the test job to run tests. Ensure that the task is set to download artifacts produced by
the 'Current build' and the artifact name is the same as the artifact name used in the Publish Build
Artifacts task in the build job.
Add the Visual Studio Test task and configure it to use the required slicing strategy.
jobs:
- job: ParallelTesting
strategy:
parallel: 2
NOTE
To use the multi-agent capability in release pipelines with on-premises TFS server, you must use TFS 2017 Update 1 or a later
version.
1. Deploy app using a single agent . Use the tasks shown in the image below to deploy a web app to Azure
App Services. This uses the default job settings (single agent, no parallel jobs).
Configure the job to use multiple agents in parallel . The example here uses three agents.
TIP
For massively parallel testing, you can specify as many as 99 agents.
Add any additional tasks that must run before the Visual Studio test task is run. For example, run a
PowerShell script to set up any data required by your tests.
TIP
Jobs in release pipelines download all artifacts linked to the release pipeline by default. To save time, you can
configure the job to download only the test artifacts required by the job. For example, web app binaries are
not required to run Selenium tests and downloading these can be skipped if the app and test artifacts are
published separately by your build pipeline.
Add the Visual Studio Test task and configure it to use the required slicing strategy.
TIP
If the test machines do not have Visual Studio installed, you can use the Visual Studio Test Platform Installer
task to acquire the required version of the test platform.
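In YAML, adding the installer ahead of the test task might look like this sketch (input values shown are common defaults, not requirements):
- task: VisualStudioTestPlatformInstaller@1
  inputs:
    versionSelector: 'latestStable'   # acquire the latest stable test platform from NuGet.org
- task: VSTest@2
  inputs:
    vsTestVersion: 'toolsInstaller'   # use the test platform acquired by the installer task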
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019
Running tests to validate changes to code is key to maintaining quality. For continuous integration practice to be
successful, it is essential you have a good test suite that is run with every build. However, as the codebase grows,
the regression test suite tends to grow as well and running a full regression test can take a long time. Sometimes,
tests themselves may be long running - this is typically the case if you write end-to-end tests. This reduces the
speed with which customer value can be delivered as pipelines cannot process builds quickly enough.
Running tests in parallel is a great way to improve the efficiency of CI/CD pipelines. This can be done easily by
employing the additional capacity offered by the cloud. This article discusses how you can parallelize tests by using
multiple agents to process jobs.
Pre-requisite
Familiarize yourself with the concepts of agents and jobs. Each agent can run only one job at a time. To run multiple
jobs in parallel, you must configure multiple agents. You also need sufficient parallel jobs.
jobs:
- job: ParallelTesting
strategy:
parallel: 2
TIP
You can specify as many as 99 agents to scale up testing for large test suites.
System.TotalJobsInPhase indicates the total number of slices (you can think of this as "totalSlices")
System.JobPositionInPhase identifies a particular slice (you can think of this as "sliceNum")
If you represent all test files as a single dimensional array, each job runs the test files at indexes sliceNum, sliceNum +
totalSlices, sliceNum + 2*totalSlices, and so on, until all the test files are run. For example, if you have six test files and two parallel jobs, the first job
(slice0) will run test files numbered 0, 2, and 4, and second job (slice1) will run test files numbered 1, 3, and 5.
If you use three parallel jobs instead, the first job (slice0) will run test files numbered 0 and 3, the second job
(slice1) will run test files numbered 1 and 4, and the third job (slice2) will run test files numbered 2 and 5.
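The linked samples below implement this slicing in their own languages. As a rough illustration only (not taken from those repositories), a Bash step could select its slice of test files like this:
- bash: |
    # Collect test files in a stable order so every parallel job builds the same array.
    tests=( $(ls tests/test_*.py | sort) )         # hypothetical test file layout
    total=$SYSTEM_TOTALJOBSINPHASE                 # "totalSlices"
    index=$(( SYSTEM_JOBPOSITIONINPHASE - 1 ))     # "sliceNum"; job positions start at 1
    selected=""
    for (( i=index; i<${#tests[@]}; i+=total )); do
      selected="$selected ${tests[$i]}"
    done
    echo "This slice runs:$selected"
    # python -m pytest $selected --junitxml=TEST-results.xml   # hypothetical runner command
  displayName: 'Run my slice of tests'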
Sample code
This .NET Core sample uses --list-tests and --filter parameters of dotnet test to slice the tests. The tests are
run using NUnit. Test results created by DotNetCoreCLI@2 test task are then published to the server. Import (into
Azure Repos or Azure DevOps Server) or fork (into GitHub) this repo:
https://github.com/idubnori/ParallelTestingSample-dotnet-core
This Python sample uses a PowerShell script to slice the tests. The tests are run using pytest. JUnit-style test results
created by pytest are then published to the server. Import (into Azure Repos or Azure DevOps Server) or fork (into
GitHub) this repo:
https://github.com/PBoraMSFT/ParallelTestingSample-Python
This JavaScript sample uses a bash script to slice the tests. The tests are run using the mocha runner. JUnit-style test
results created by mocha are then published to the server. Import (into Azure Repos or Azure DevOps Server) or
fork (into GitHub) this repo:
https://github.com/PBoraMSFT/ParallelTestingSample-Mocha
The sample code includes a file azure-pipelines.yml at the root of the repository that you can use to create a
pipeline. Follow all the instructions in Create your first pipeline to create a pipeline and see test slicing in action.
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015 |
Visual Studio 2017 | Visual Studio 2015
NOTE
Applies only to TFS 2017 Update 1 and later, and Visual Studio 2015 Update 3 and later.
Continuous Integration (CI) is a key practice in the industry. Integrations are frequent, and verified with an
automated build that runs regression tests to detect integration errors as soon as possible. However, as the
codebase grows and matures, its regression test suite tends to grow as well - to the extent that running a full
regression test might require hours. This slows down the frequency of integrations, and ultimately defeats the
purpose of continuous integration. In order to have a CI pipeline that completes quickly, some teams defer the
execution of their longer running tests to a separate stage in the pipeline. However, this only serves to further
defeat continuous integration.
Instead, enable Test Impact Analysis (TIA) when using the Visual Studio Test task in a build pipeline. TIA performs
incremental validation by automatic test selection. It will automatically select only the subset of tests required to
validate the code being committed. For a given code commit entering the CI/CD pipeline, TIA will select and run
only the relevant tests required to validate that commit. Therefore, that test run will complete more quickly, if there
is a failure you will get to know about it sooner, and because it is all scoped by relevance, analysis will be faster as
well.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
Use the minimatch pattern when setting variables, and separate multiple items with a semicolon.
TestMethod1
  dependency1
  dependency2
TestMethod2
  dependency1
  dependency3
TIA can generate such a dependency map for managed code execution. Where such dependencies reside in .cs
and .vb files, TIA can automatically watch for commits into such files and then run tests that had these source files
in their list of dependencies.
You can extend the scope of TIA by explicitly providing the dependencies map as an XML file. For example, you
might want to support code in other languages such as JavaScript or C++, or support the scenario where tests and
product code are running on different machines. The mapping can even be approximate, and the set of tests you
want to run can be specified in terms of a test case filter such as you would typically provide in the VSTest task
parameters.
The XML file should be checked into your repository, typically at the root level. Then set the build variable
TIA.UserMapFile to point to it. For example, if the file is named TIAmap.xml , set the variable to
$(System.DefaultWorkingDirectory)/TIAmap.xml .
For an example of the XML file format, see TIA custom dependency mapping.
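In YAML, setting the variable could look like this sketch (the file name follows the example above):
variables:
  TIA.UserMapFile: '$(System.DefaultWorkingDirectory)/TIAmap.xml'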
See Also
TIA overview and VSTS integration
TIA scope and applications
TIA advanced configuration
TIA custom dependency mapping
Azure Pipelines
Productivity for developers relies on the ability of tests to find real problems with the code under development or
update in a timely and reliable fashion. Flaky tests present a barrier to finding real problems, since the failures often
don't relate to the changes being tested. A flaky test is a test that provides different outcomes, such as pass or fail,
even when there are no changes in the source code or execution environment. Flaky tests also impact the quality of
shipped code.
NOTE
This feature is only available on Azure DevOps Services. Typically, new features are introduced in the cloud service first, and
then made available on-premises in the next major version or update of Azure DevOps Server. To learn more, see Azure
DevOps Feature Timeline.
The goal of bringing flaky test management in-product is to reduce the developer pain caused by flaky tests and cater to
the whole workflow. Flaky test management provides the following benefits.
Detection - Auto detection of flaky tests with rerun, or extensibility to plug in your own custom detection
method
Management of flakiness - Once a test is marked as flaky, the data is available for all pipelines for that
branch
Report on flaky tests - Ability to choose if you want to prevent build failures caused by flaky tests, or use
the flaky tag only for troubleshooting
Resolution - Manual bug-creation or manual marking and unmarking test as flaky based on your analysis
Close the loop - Reset flaky test as a result of bug resolution / manual input
Enable flaky test management
To configure flaky test management, choose Project settings , and select Test management in the Pipelines
section.
Slide the On/Off button to On .
The default setting for all projects is to use flaky tests for troubleshooting.
Flaky test detection
Flaky test management supports system and custom detection.
System detection : The in-product flaky detection uses test rerun data. Detection happens via the VSTest task's
capability to rerun failed tests, or via retry of a stage in the pipeline. You can select specific pipelines in the
project for which you would like to detect flaky tests.
NOTE
Once a test is marked as flaky, the data is available for all pipelines for that branch to assist with troubleshooting in
every pipeline.
Custom detection : You can integrate your own flaky detection mechanism with Azure Pipelines and use
the reporting capability. With custom detection, you need to update the test results metadata for flaky tests.
For details, see Test Results, Result Meta Data - Update REST API.
Flaky test options
The Flaky test options specify how flaky tests are available in test reporting as well as resolution capabilities, as
described in the following sections.
When a test is marked flaky or unflaky in a pipeline, no changes are made in the current pipeline. Only on future
executions of that test is the changed flaky setting evaluated. Tests marked as flaky have the Marked flaky tag in the
user interface.
Help and support
See our troubleshooting page
Get advice on Stack Overflow, and get support via our Support page
Related articles
Review test results
Visual Studio Test task
Publish Test Results task
Test Results, Result Meta Data - Update REST API
UI testing considerations
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
When running automated tests in the CI/CD pipeline, you may need a special configuration in order to run UI tests
such as Selenium, Appium or Coded UI tests. This topic describes the typical considerations for running UI tests.
NOTE
Applies only to TFS 2017 Update 1 and later.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
Prerequisites
Familiarize yourself with agents and deploying an agent on Windows.
NOTE
Microsoft Edge browser currently cannot be run in the headless mode.
2. Visible UI mode . In this mode, the browser runs normally and the UI components are visible. When
running tests in this mode on Windows, special configuration of the agents is required.
If you are running UI tests for a desktop application, such as Appium tests using WinAppDriver or Coded UI tests, a
special configuration of the agents is required.
TIP
End-to-end UI tests generally tend to be long-running. When using the visible UI mode, depending on the test framework,
you may not be able to run tests in parallel on the same machine because the app must be in focus to receive keyboard and
mouse events. In this scenario, you can speed up testing cycles by running tests in parallel on different machines. See run
tests in parallel for any test runner and run tests in parallel using Visual Studio Test task.
In this example, the number '1' is the ID of the remote desktop session. This number may change between remote
sessions, but can be viewed in Task Manager. Alternatively, to automate finding the current session ID, create a
batch file containing the following code:
Save the batch file and create a desktop shortcut to it, then change the shortcut properties to 'Run as
administrator'. Running the batch file from this shortcut disconnects from the remote desktop but preserves the UI
session and allows UI tests to run.
NOTE
The screen resolution utility task runs on the unified build/release/test agent, and cannot be used with the deprecated Run
Functional Tests task.
Add the screenshot file using TestContext.AddResultFile(fileName); //Where fileName is the name of the file.
If you use the Publish Test Results task to publish results, test result attachments can only be published if you are
using the VSTest (TRX) results format or the NUnit 3.0 results format.
Result attachments cannot be published if you use JUnit or xUnit test results. This is because these test result
formats do not have a formal definition for attachments in the results schema. You can use one of the below
approaches to publish test attachments instead.
If you are running tests in the build (CI) pipeline, you can use the Copy and Publish Build Artifacts task to
publish any additional files created in your tests. These will appear in the Artifacts page of your build
summary.
Use the REST APIs to publish the necessary attachments. Code samples can be found in this GitHub
repository.
Capture video
If you use the Visual Studio test task to run tests, video of the test can be captured and is automatically available as
an attachment to the test result. For this, you must configure the video data collector in a .runsettings file and this
file must be specified in the task settings.
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015 |
Visual Studio 2017 | Visual Studio 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
Performing user interface (UI) testing as part of the release pipeline is a great way of detecting unexpected
changes, and need not be difficult. This topic describes using Selenium to test your website during a continuous
deployment release and test automation. Special considerations that apply when running UI tests are discussed in
UI testing considerations.
Typically you will run unit tests in your build workflow, and functional (UI) tests in your release workflow after
your app is deployed (usually to a QA environment).
using System;
using System.Text;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.IE;
namespace SeleniumBingTests
{
    /// <summary>
    /// Summary description for MySeleniumTests
    /// </summary>
    [TestClass]
    public class MySeleniumTests
    {
        private TestContext testContextInstance;
        private IWebDriver driver;
        private string appURL;

        public MySeleniumTests()
        {
        }

        [TestMethod]
        [TestCategory("Chrome")]
        public void TheBingSearchTest()
        {
            driver.Navigate().GoToUrl(appURL + "/");
            driver.FindElement(By.Id("sb_form_q")).SendKeys("Azure Pipelines");
            driver.FindElement(By.Id("sb_form_go")).Click();
            driver.FindElement(By.XPath("//ol[@id='b_results']/li/h2/a/strong[3]")).Click();
            Assert.IsTrue(driver.Title.Contains("Azure Pipelines"), "Verified title of the page");
        }

        /// <summary>
        /// Gets or sets the test context which provides
        /// information about and functionality for the current test run.
        /// </summary>
        public TestContext TestContext
        {
            get
            {
                return testContextInstance;
            }
            set
            {
                testContextInstance = value;
            }
        }

        [TestInitialize()]
        public void SetupTest()
        {
            appURL = "http://www.bing.com/";

            // Create the browser driver before each test. Chrome is used here to match the
            // [TestCategory("Chrome")] attribute; swap in FirefoxDriver or InternetExplorerDriver as needed.
            driver = new ChromeDriver();
        }

        [TestCleanup()]
        public void MyTestCleanup()
        {
            driver.Quit();
        }
    }
}
4. Run the Selenium test locally using Test Explorer and check that it works.
Select the Azure App Service Deployment template and choose Apply .
In the Artifacts section of the Pipeline tab, choose + Add . Select your build artifacts and choose
Add .
Choose the Continuous deployment trigger icon in the Artifacts section of the Pipeline tab. In
the Continuous deployment trigger pane, enable the trigger so that a new release is created from
every build. Add a filter for the default branch.
Open the Tasks tab, select the Stage 1 section, and enter your subscription information and the
name of the web app where you want to deploy the app and tests. These settings are applied to the
Deploy Azure App Service task.
2. If you are deploying your app and tests to environments where the target machines that host the agents do
not have Visual Studio installed:
In the Tasks tab of the release pipeline, choose the + icon in the Run on agent section. Select the
Visual Studio Test Platform Installer task and choose Add . Leave all the settings at the default
values.
You can find a task more easily by using the search textbox.
3. In the Tasks tab of the release pipeline, choose the + icon in the Run on agent section. Select the Visual
Studio Test task and choose Add .
4. If you added the Visual Studio Test Platform Installer task to your pipeline, change the Test platform
version setting in the Execution options section of the Visual Studio Test task to Installed by Tools
Installer .
6. To view the test results, open the release summary from the Releases page and choose the Tests link.
Next steps
Review your test results
Requirements traceability
11/2/2020 • 6 minutes to read
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Requirements traceability is the ability to relate and document two or more phases of a development process,
which can then be traced both forward and backward from its origin. Requirements traceability helps teams to get
insights into indicators such as quality of requirements or readiness to ship the requirement. A
fundamental aspect of requirements traceability is the association of requirements to test cases, bugs, and code
changes.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
The list shows only work items belonging to the Requirements category.
3. After the requirements have been linked to the test results you can view the test results grouped by
requirement. Requirement is one of the many "Group by" options provided to make it easy to navigate the
test results.
4. Teams often want to pin the summarized view of requirements traceability to a dashboard. Use the
Requirements quality widget for this.
5. Configure the Requirements quality widget with the required options and save it.
Requirements query : Select a work item query that captures the requirements, such as the user stories
in the current iteration.
Quality data : Specify the stage of the pipeline for which the requirements quality should be traced.
6. View the widget in the team's dashboard. It lists all the Requirements in scope, along with the Pass Rate
for the tests and count of Failed tests. Selecting a Failed test count opens the Tests tab for the selected build
or release. The widget also helps to track the requirements without any associated test(s).
To ensure user requirements meet the quality goals, the requirements in a project can be linked to test results,
which can then be viewed on the team's dashboard. This enables end-to-end traceability with a simple way to
monitor test results. To link automated tests with requirements, visit test report in build or release.
1. In the results section under Tests tab of a build or release summary, select the test(s) to be linked to
requirements and choose Link .
2. Choose a work item to be linked to the selected test(s) in one of the specified ways:
Choose an applicable work item from the list of suggested work items. The list is based on the most
recently viewed and updated work items.
Specify a work item ID.
Search for a work item based on the title text.
The list shows only work items belonging to the Requirements category.
3. Teams often want to pin the summarized view of requirements traceability to a dashboard. Use the
Requirements quality widget for this.
4. Configure the Requirements quality widget with the required options and save it.
Requirements query : Select a work item query that captures the requirements, such as the user stories
in the current iteration.
Quality data : Specify the stage of the pipeline for which the requirements quality should be traced.
5. View the widget in the team's dashboard. It lists all the Requirements in scope, along with the Pass Rate
for the tests and count of Failed tests. Selecting a Failed test count opens the Tests tab for the selected build
or release. The widget also helps to track the requirements without any associated test(s).
Bug traceability
Testing gives a measure of confidence to ship a change to users. A test failure signals an issue with the change.
Failures can happen for many reasons such as errors in the source under test, bad test code, environmental issues,
flaky tests, and more. Bugs provide a robust way to track test failures and drive accountability in the team to take
the required remedial actions. To associate bugs with test results, visit test report in build or release.
1. In the results section of the Tests tab select the tests against which the bug should be created and choose
Bug . Multiple test results can be mapped to a single bug. This is typically done when the reason for the
failures is attributable to a single cause such as the unavailability of a dependent service, a database
connection failure, or similar issues.
2. Open the work item to see the bug. It captures the complete context of the test results including key
information such as the error message, stack trace, comments, and more.
3. View the bug with the test result, directly in context, within the Tests tab. The Work Items tab also lists any
linked requirements for the test result.
4. From a work item, navigate directly to the associated test results. Both the test case and the specific test
result are linked to the bug.
5. In the work item, select Test case or Test result to go directly to the Tests page for the selected build or
release. You can troubleshoot the failure, update your analysis in the bug, and make the changes required to
fix the issue as applicable. While both links take you to the Tests tab , the default sections shown are
History and Debug , respectively.
Source traceability
When troubleshooting test failures that occur consistently over a period of time, it is important to trace back to the
initial set of changes - where the failure originated. This can help significantly to narrow down the scope for
identifying the problematic test or source under test. To discover the first instance of test failures and trace it back
to the associated code changes, visit Tests tab in build or release.
1. In the Tests tab, select a test failure to be analyzed. Based on whether it's a build or release, choose the
Failing build or Failing release column for the test.
2. This opens another instance of the Tests tab in a new window, showing the first instance of consecutive
failures for the test.
3. Based on the build or release pipeline, you can choose the timeline or pipeline view to see what code
changes were committed. You can analyze the code changes to identify the potential root cause of the test
failure.
Traditional teams using planned testing
Teams that are moving from manual testing to continuous (automated) testing, and have a subset of tests already
automated, can execute them as part of the pipeline or on demand (see test report). Referred to as Planned
testing , automated tests can be associated to the test cases in a test plan and executed from Azure Test Plans .
Once associated, these tests contribute towards the quality metrics of the corresponding requirements.
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Automated tests can be configured to run as part of a build or release for various languages. Test reports
provide an effective and consistent way to view the tests results executed using different test frameworks, in
order to measure pipeline quality, review traceability, troubleshoot failures and drive failure ownership. In
addition, it provides many advanced reporting capabilities explored in the following sections.
You can also perform deeper analysis of test results by using the Analytics Service. For an example of using this
with your build and deploy pipelines, see Analyze test results.
Read the glossary to understand test report terminology.
NOTE
Test report is available in TFS 2015 and above, however the new experience described in this topic is currently available
only in Azure Pipelines.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are
called phases.
Published test results can be viewed in the Tests tab in a build or release summary.
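Results are typically published by a test task such as Visual Studio Test, or explicitly with the Publish Test Results task. A minimal YAML sketch is shown below; the results format and file pattern are illustrative and depend on your test runner.
- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '**/TEST-*.xml'
    mergeTestResults: true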
NOTE
This inferred test report is a limited experience. Some features available in fully-formed test reports are
not present here (more details). We recommend that you publish a fully-formed test report to get the full
Test and Insights experience in Pipelines.
TIP
If you use the Visual Studio Test task to run tests, diagnostic output logged from tests (using any of the Console.WriteLine,
Trace.WriteLine, or TestContext.WriteLine methods) will appear as an attachment for a failed test.
The following capabilities of the Tests tab help to improve productivity and troubleshooting experience.
Filter large test results
Over time, tests accrue and, for large applications, can easily grow to tens of thousands of tests. For these
applications with very many tests, it can be hard to navigate through the results to identify test failures,
associate root causes, or get ownership of issues. Filters make it easy to quickly navigate to the test results of
your interest. You can filter on Test Name , Outcome (failed, passed, and more), Test Files (files holding tests)
and Owner (for test files). All of the filter criteria are cumulative in nature.
Additionally, with multiple Grouping options such as Test run , Test file , Priority , Requirement , and more,
you can organize the Results view exactly as you require.
Test debt management with bugs
To manage your test debt for failing or long-running tests, you can create a bug or add data to an existing bug, and
view all associated work items in the Work Items tab.
Immersive troubleshooting experience
Error messages and stack traces are lengthy in nature and need enough real estate to view the details during
troubleshooting. To provide an immersive troubleshooting experience, the Details view can be expanded to full
page view while still being able to perform the required operations in context, such as bug creation or
requirement association for the selected test result.
The view below shows the in-progress test summary in a release, reporting the total test count and the
number of test failures at a given point in time. The test failures are available for troubleshooting, creating
bug(s), or to take any other appropriate action.
Data driven tests : Similar to the rerun of failed tests, all iterations of data driven tests are reported
under that test. The summarized result view for data driven tests depends on the behavior of the test
framework. If the framework produces a hierarchy of results (for example, MSTest v1 and v2) they will be
reported in a summarized view. If the framework produces individual results for each iteration (for
example, xUnit) they will not be grouped together. The summarized view is also available for ordered
tests (.orderedtest in Visual Studio).
NOTE
Metrics in the test summary section, such as the total number of tests, passed, failed, or other are computed using the
root level of the summarized test result.
See the list of runners for which test results are automatically inferred.
As only limited test metadata is present in such inferred reports, they are limited in features and capabilities.
The following features are not available for inferred test reports:
Group the test results by test file, owner, priority, and other fields
Search and filter the test results
Check details of passed tests
Preview any attachments generated during the tests within the web UI itself
Associate a test failure with a new bug, or see list of associated work items for this failure
See build-on-build analytics for testing in Pipelines
NOTE
Some runners such as Mocha have multiple built-in console reporters such as dot-matrix and progress-bar. If you have
configured a non-default console output for your test runner, or you are using a custom reporter, Azure DevOps will not
be able to infer the test results. It can only infer the results from the default reporter.
Related articles
Analyze test results
Trace test requirements
Review code coverage results
Azure Pipelines
Tracking test quality over time and improving test collateral is key to maintaining a healthy DevOps pipeline. Test
analytics provides near real-time visibility into your test data for builds and releases. It helps improve the
efficiency of your pipeline by identifying repetitive, high impact quality issues.
NOTE
Test analytics is currently available only with Azure Pipelines.
Failing tests: Provides a distinct count of tests that failed during the specified period. In the example
above, 986 test failures originated from 124 tests.
Chart view: A trend of the total test failures and average pass rate on each day of the specified
period.
Results : List of top failed tests based on the total number of failures. Helps to identify problematic tests
and lets you drill into a detailed summary of results.
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Code coverage helps you determine the proportion of your project's code that is actually being tested by tests such
as unit tests. To increase your confidence of the code changes, and guard effectively against bugs, your tests should
exercise - or cover - a large proportion of your code.
Reviewing the code coverage result helps to identify code path(s) that are not covered by the tests. This
information is important to improve the test collateral over time by reducing the test debt.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
Example
To view an example of publishing code coverage results for your choice of language, see the Ecosystems section
of the Pipelines topics. For example, collect and publish code coverage for JavaScript using Istanbul.
View results
The code coverage summary can be viewed in the build timeline view. The summary shows the overall percentage
of line coverage.
NOTE
Merging code coverage results from multiple test runs is limited to .NET and .NET Core at present. This will be supported for
other formats in a future release.
The code coverage summary can be viewed on the Summary tab on the pipeline run summary.
The results can be viewed and downloaded on the Code coverage tab.
Artifacts
The code coverage artifacts published during the build can be viewed under the Build ar tifacts published
milestone in the timeline view.
The code coverage artifacts published during the build can be viewed under the Summary tab on the pipeline run
summary.
If you use the Visual Studio Test task to collect coverage for .NET and .NET Core apps, the artifact contains
.coverage files that can be downloaded and used for further analysis in Visual Studio.
If you publish code coverage using Cobertura or JaCoCo coverage formats, the code coverage artifact
contains an HTML file that can be viewed offline for further analysis.
NOTE
For .NET and .NET Core, the link to download the artifact is available by choosing the code coverage milestone in the build
summary.
Tasks
Publish Code Coverage Results publishes code coverage results to Azure Pipelines or TFS, which were produced
by a build in Cobertura or JaCoCo format.
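For example, a minimal YAML sketch of this task is shown below; the Cobertura summary file path is illustrative and depends on the coverage tool you use.
- task: PublishCodeCoverageResults@1
  inputs:
    codeCoverageTool: 'Cobertura'
    summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/coverage.cobertura.xml'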
Built-in tasks such as Visual Studio Test, .NET Core, Ant, Maven, Gulp, Grunt, and Gradle provide the option to
publish code coverage data to the pipeline.
Azure Pipelines
Code coverage is an important quality metric and helps you measure the percentage of your project's code that is
being tested. To ensure that quality for your project improves over time (or at the least, does not regress), it is
essential that new code being brought into the system is well tested. This means that when developers raise pull
requests, knowing whether their changes are covered by tests would help plug any testing holes before the
changes are merged into the target branch. Repo owners may also want to set policies to prevent merging large
untested changes.
Full coverage, diff coverage
Typically, coverage gets measured for the entire codebase of a project. This is full coverage . However, in the
context of pull requests, developers are focused on the changes they are making and want to know whether the
specific lines of code they have added or changed are covered. This is diff coverage .
Prerequisites
In order to get coverage metrics for a pull request, first configure a pipeline that validates pull requests. In this
pipeline, configure the test tool you are using to collect code coverage metrics. Coverage results must then be
published to the server for reporting.
To learn more about collecting and publishing code coverage results for the language of your choice, see the
Ecosystems section. For example, collect and publish code coverage for .NET core apps.
NOTE
While you can collect and publish code coverage results for many different languages using Azure Pipelines, the code
coverage for pull requests feature discussed in this document is currently available only for .NET and .NET core projects
using the Visual Studio code coverage results format (file extension .coverage). Support for other languages and coverage
formats will be added in future milestones.
In the changed files view of a pull request, lines that are changed are also annotated with coverage indicators to
show whether those lines are covered.
NOTE
While you can build code from a wide variety of version control systems that Azure Pipelines supports, the code coverage
for pull requests feature discussed in this document is currently available only for Azure Repos.
Sample YAML files for different coverage settings can be found in the code coverage YAML samples repo.
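For illustration, a coverage settings file (commonly named azurepipelines-coverage.yml at the root of the repository) might look like the sketch below; treat the exact keys as an assumption and use the samples repo as the authoritative reference.
coverage:
  status:
    comments: on
    diff:
      target: 70%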
NOTE
Coverage indicators light up in the changed files view regardless of whether the pull request comment details are turned on.
TIP
The coverage settings YAML is different from a YAML pipeline. This is because the coverage settings apply to your repo and
will be used regardless of which pipeline builds your code. This separation also means that if you are using the classic
designer-based build pipelines, you will get the code coverage status check for pull requests.
TIP
Code coverage status posted from a pipeline follows the naming convention {name-of-your-pipeline/codecoverage} .
NOTE
Branch policies in Azure Repos (even optional policies) prevent pull requests from completing automatically if they fail. This
behavior is not specific to code coverage policy.
FAQ
Which coverage tools and result formats can be used for validating code coverage in pull requests?
Code coverage for pull requests capability is currently only available for Visual Studio code coverage (.coverage)
formats. This can be used if you publish code coverage using the Visual Studio Test task, the test verb of dotnet core
task and the TRX option of the publish test results task. Support for other coverage tools and result formats will be
added in future milestones.
If multiple pipelines are triggered when a pull request is raised, will coverage be merged across the pipelines?
If multiple pipelines are triggered when a pull request is raised, code coverage will not be merged. The capability is
currently designed for a single pipeline that collects and publishes code coverage for pull requests. If you need the
ability to merge coverage data across pipelines, please file a feature request on developer community.
Resources
While an environment at its core is a grouping of resources, the resources themselves represent actual deployment
targets. The Kubernetes resource and virtual machine resource types are currently supported.
Create an environment
1. Sign in to your Azure DevOps organization and navigate to your project.
2. In your project, navigate to the Pipelines page. Then choose Environments and click on Create
Environment .
3. After adding the name of an environment (required) and the description (optional), you can create an
environment. Resources can be added to an existing environment later as well.
TIP
It is possible to create an empty environment and reference it from deployment jobs. This will let you record the
deployment history against the environment.
NOTE
You can use a pipeline to create, and deploy to, environments as well. To learn more, see the how-to guide.
- stage: deploy
  jobs:
  - deployment: DeployWeb
    displayName: deploy Web App
    pool:
      vmImage: 'Ubuntu-latest'
    # creates an environment if it doesn't exist
    environment: 'smarthotel-dev'
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo Hello world
NOTE
If the specified environment doesn't already exist, an empty environment is created using the environment name
provided.
environment: 'smarthotel-dev.bookings'
strategy:
  runOnce:
    deploy:
      steps:
      - task: KubernetesManifest@0
        displayName: Deploy to Kubernetes cluster
        inputs:
          action: deploy
          namespace: $(k8sNamespace)
          manifests: $(System.ArtifactsDirectory)/manifests/*
          imagePullSecrets: $(imagePullSecret)
          containers: $(containerRegistry)/$(imageRepository):$(tag)
          # value for kubernetesServiceConnection input automatically passed down to task by environment.resource input
Environment in run details
All environments targeted by deployment jobs of a specific run of a pipeline can be found under the
Environments tab of pipeline run details.
If you're using an AKS private cluster, the Workloads tab isn't available.
Approvals
You can manually control when a stage should run using approval checks. You can use approval checks to
control deployments to production environments. Checks are a mechanism available to the resource owner to
control when a stage in a pipeline consumes a resource. As the owner of a resource, such as an environment, you
can define approvals and checks that must be satisfied before a stage consuming that resource starts.
Currently, manual approval checks are supported on environments. For more information, see Approvals.
2. Drilling down into the job details reveals the listing of commits and work items that were newly deployed to
the environment.
Security
User permissions
You can control who can create, view, use, and manage the environments with user permissions. There are four
roles - Creator (scope: all environments), Reader, User, and Administrator. In the specific environment's user
permissions panel, you can set the permissions that are inherited and you can override the roles for each
environment.
Navigate to the specific Environment that you would like to authorize.
Click on the overflow menu button located at the top-right part of the page next to "Add resource" and choose
Security to view the settings.
In the User permissions blade, click on +Add to add a User or group and select a suitable Role .
NOTE
If you create an environment within a YAML pipeline, contributors and project administrators will be granted the Administrator
role. This is typically useful when provisioning Dev/Test environments.
If you create an environment through the UI, only the creator will be granted the Administrator role. You should use
the UI to create protected environments, such as a production environment.
Pipeline permissions
Pipeline permissions can be used to authorize all or selected pipelines for deployment to the environment.
To remove Open access on the environment or resource, click the Restrict permission in Pipeline
permissions .
To allow specific pipelines to deploy to an environment or a specific resource, click + and choose from the list
of pipelines.
Environment - Kubernetes resource
11/3/2020 • 3 minutes to read
Azure Pipelines
Kubernetes resource view within environments provides a glimpse of the status of objects within the namespace
mapped to the resource. It also overlays pipeline traceability on top of these objects so that one can trace back
from a Kubernetes object to the pipeline and then back to the commit.
Overview
The advantages of using Kubernetes resource views within environments include -
Pipeline traceability - The Kubernetes manifest task used for deployments adds additional annotations to
portray pipeline traceability in resource views. This can help in identifying the originating Azure DevOps
organization, project and pipeline responsible for updates made to an object within the namespace.
Diagnose resource health - Workload status can be useful in quickly debugging potential mistakes or
regressions that could have been introduced by a new deployment. For example, in the case of
unconfigured imagePullSecrets resulting in ImagePullBackOff errors, pod status information can help
identify the root cause for this issue.
Review App - Review app works by deploying every pull request from Git repository to a dynamic
Kubernetes resource under the environment. Reviewers can see how those changes look as well as work
with other dependent services before they're merged into the target branch and deployed to production.
Kubernetes resource creation
Azure Kubernetes Service
A ServiceAccount is created in the chosen cluster and namespace. For an RBAC enabled cluster, RoleBinding is
created as well to limit the scope of the created service account to the chosen namespace. For an RBAC disabled
cluster, the ServiceAccount created has cluster-wide privileges (across namespaces).
1. In the environment details page, click on Add resource and choose Kubernetes .
2. Select Azure Kubernetes Service in the Provider dropdown.
3. Choose the Azure subscription, cluster and namespace (new/existing).
4. Click on Validate and create to create the Kubernetes resource.
Using existing service account
While the Azure provider option creates a new ServiceAccount, the generic provider lets you use an existing
ServiceAccount to map a Kubernetes resource within an environment to a namespace.
TIP
Generic provider (existing service account) is useful for mapping a Kubernetes resource to a namespace from a non-AKS
cluster.
1. In the environment details page, click on Add resource and choose Kubernetes .
2. Select Generic provider (existing service account) in the Provider dropdown.
3. Input cluster name and namespace values.
4. For fetching Server URL, execute the following command on your shell:
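A typical command for this (assuming kubectl's current context points at the target cluster) is:
kubectl config view --minify -o 'jsonpath={.clusters[0].cluster.server}'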
5. For fetching the Secret object required to connect and authenticate with the cluster, the following sequence of
commands needs to be run:
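A representative first command (substitute the service account and namespace mapped to your resource) is:
kubectl get serviceaccounts <service-account-name> -n <namespace> -o 'jsonpath={.secrets[*].name}'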
The above command fetches the name of the secret associated with a ServiceAccount. The output of the
above command is then substituted in the following command to fetch the Secret object:
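A representative command for this step is:
kubectl get secret <service-account-secret-name> -n <namespace> -o json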
Copy and paste the Secret object fetched in JSON form into the Secret text-field.
6. Click on Validate and create to create the Kubernetes resource.
jobs:
- deployment: Deploy
  condition: and(succeeded(), not(startsWith(variables['Build.SourceBranch'], 'refs/pull/')))
  displayName: Deploy
  pool:
    vmImage: $(vmImageName)
  environment: $(envName).$(resourceName)
  strategy:
    runOnce:
      deploy:
        steps:
        - task: KubernetesManifest@0
          displayName: Create imagePullSecret
          inputs:
            action: createSecret
            secretName: $(imagePullSecret)
            dockerRegistryEndpoint: $(dockerRegistryServiceConnection)
        - task: KubernetesManifest@0
          displayName: Deploy to Kubernetes cluster
          inputs:
            action: deploy
            manifests: |
              $(Pipeline.Workspace)/manifests/deployment.yml
              $(Pipeline.Workspace)/manifests/service.yml
            imagePullSecrets: |
              $(imagePullSecret)
            containers: |
              $(containerRegistry)/$(imageRepository):$(tag)
- deployment: DeployPullRequest
  displayName: Deploy Pull request
  condition: and(succeeded(), startsWith(variables['Build.SourceBranch'], 'refs/pull/'))
  pool:
    vmImage: $(vmImageName)
  environment: '$(envName).$(k8sNamespaceForPR)'
  strategy:
    runOnce:
      deploy:
        steps:
        - reviewApp: $(resourceName)
        - task: Kubernetes@1
          displayName: 'Create a new namespace for the pull request'
          inputs:
            command: apply
            useConfigurationFile: true
            inline: '{ "kind": "Namespace", "apiVersion": "v1", "metadata": { "name": "$(k8sNamespaceForPR)" }}'
        - task: KubernetesManifest@0
          displayName: Create imagePullSecret
          inputs:
            action: createSecret
            secretName: $(imagePullSecret)
            namespace: $(k8sNamespaceForPR)
            dockerRegistryEndpoint: $(dockerRegistryServiceConnection)
        - task: KubernetesManifest@0
          displayName: Deploy to the new namespace in the Kubernetes cluster
          inputs:
            action: deploy
            namespace: $(k8sNamespaceForPR)
            manifests: |
              $(Pipeline.Workspace)/manifests/deployment.yml
              $(Pipeline.Workspace)/manifests/service.yml
            imagePullSecrets: |
              $(imagePullSecret)
            containers: |
              $(containerRegistry)/$(imageRepository):$(tag)
To use this job in an existing pipeline, the service connection backing the regular Kubernetes environment
resource needs to be modified to "Use cluster admin credentials". Alternatively, role bindings need to be created
for the underlying service account to the review app namespace.
To set up review apps without authoring the above YAML from scratch or creating explicit role bindings
manually, check out the new pipeline creation experience using the Deploy to Azure Kubernetes Services template.
Environment - virtual machine resource
11/2/2020 • 2 minutes to read
NOTE
The Personal Access Token (PAT) of the logged in user is included in the script. The PAT expires on the day you generate
the script.
If your VM already has any other agent running on it, provide a unique name for the agent to register with the environment.
7. Once your VM is registered, it will start appearing as an environment resource under the Resources tab of the
environment.
8. To add more VMs, copy the script again by clicking Add resource and selecting Virtual Machines . This script
remains the same for all the VMs added to the environment.
9. Each machine interacts with Azure Pipelines to coordinate deployment of your app.
jobs:
- deployment: VMDeploy
  displayName: web
  environment:
    name: VMenv
    resourceType: VirtualMachine
    tags: web1
  strategy:
You can select specific sets of virtual machines from the environment to receive the deployment by specifying the
tags that you have defined. Here is the complete YAML schema for a deployment job.
To remove a VM from the environment, run the following command from the agent folder on that machine:
./config.cmd remove
Known limitations
When you retry a stage, it will rerun the deployment on all VMs and not just failed targets.
Next steps
Learn more about deployment jobs and environments.
To learn what else you can do in YAML pipelines, see the YAML schema reference.
Deploy to a Linux Virtual Machine
11/2/2020 • 6 minutes to read
Azure Pipelines provides a complete, fully featured set of CI/CD automation tools for deployments to virtual
machines.
You can use continuous integration (CI) and continuous deployment (CD) to build, release, and deploy your code.
Learn how to set up a CI/CD pipeline for multi-machine deployments.
This article covers how to set up continuous deployment of your app to a web server running on Ubuntu. You can
use these steps for any app that publishes a web deployment package.
https://github.com/spring-projects/spring-petclinic
NOTE
Petclinic is a Spring Boot application built using Maven.
NOTE
The Personal Access Token (PAT) of the logged in user is pre-inserted in the script and expires after three hours.
If your VM already has any agent running on it, provide a unique name to register with the environment.
7. To add more VMs, copy the script again. Click Add resource and choose Virtual Machines . This script is
the same for all the VMs you want to add to the same environment.
8. Each machine interacts with Azure Pipelines to coordinate deployment of your app.
9. You can add or remove tags for the VM. Click on the dots at the end of each VM resource in Resources . The
tags you assign allow you to limit deployment to specific VMs when the environment is used in a
deployment job. Tags are each limited to 256 characters, but there is no limit to the number of tags you can
create.
Define your CI build pipeline
You'll need a continuous integration (CI) build pipeline that publishes your web application and a deployment script
that can be run locally on the Ubuntu server. Set up a CI build pipeline based on the runtime you want to use.
1. Sign in to your Azure DevOps organization and navigate to your project.
2. In your project, navigate to the Pipelines page. Then choose the action to create a new pipeline.
3. Walk through the steps of the wizard by first selecting GitHub as the location of your source code.
4. You may be redirected to GitHub to sign in. If so, enter your GitHub credentials.
5. When the list of repositories appears, select your desired sample app repository.
6. Azure Pipelines will analyze your repository and recommend a suitable pipeline template.
Java
JavaScript
Select the starter template and copy this YAML snippet to build your Java project and run tests with Apache
Maven:
- job: Build
  displayName: Build Maven Project
  steps:
  - task: Maven@3
    displayName: 'Maven Package'
    inputs:
      mavenPomFile: 'pom.xml'
  - task: CopyFiles@2
    displayName: 'Copy Files to artifact staging directory'
    inputs:
      SourceFolder: '$(System.DefaultWorkingDirectory)'
      Contents: '**/target/*.?(war|jar)'
      TargetFolder: $(Build.ArtifactStagingDirectory)
  - upload: $(Build.ArtifactStagingDirectory)
    artifact: drop
For more guidance, follow the steps mentioned in Build your Java app with Maven for creating a build.
Define CD steps to deploy to the Linux VM
1. Edit your pipeline and include a deployment job by referencing the environment and the VM resources you
created earlier:
jobs:
- deployment: VMDeploy
  displayName: web
  environment:
    name: <environment name>
    resourceType: VirtualMachine
    tags: web1
  strategy:
2. You can select specific sets of virtual machines from the environment to receive the deployment by
specifying the tags that you have defined for each virtual machine in the environment. Here is the complete
YAML schema for Deployment job.
3. You can specify either runOnce or rolling as a deployment strategy.
runOnce is the simplest deployment strategy. All the life-cycle hooks, namely preDeploy , deploy ,
routeTraffic , and postRouteTraffic , are executed once. Then, either on: success or on: failure is
executed.
Below is an example YAML snippet for runOnce :
jobs:
- deployment: VMDeploy
  displayName: web
  pool:
    vmImage: 'Ubuntu-16.04'
  environment:
    name: <environment name>
    resourceType: VirtualMachine
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo my first deployment
4. Below is an example YAML snippet for the rolling strategy. You can update up to five targets in each
iteration. maxParallel determines the number of targets that can be deployed to in parallel. The selection
accounts for the absolute number or percentage of targets that must remain available at any time, excluding the
targets that are being deployed to. It is also used to determine the success and failure conditions during
deployment.
jobs:
- deployment: VMDeploy
  displayName: web
  environment:
    name: <environment name>
    resourceType: VirtualMachine
  strategy:
    rolling:
      maxParallel: 2  #for percentages, mention as x%
      preDeploy:
        steps:
        - download: current
          artifact: drop
        - script: echo initialize, cleanup, backup, install certs
      deploy:
        steps:
        - task: Bash@3
          inputs:
            targetType: 'inline'
            script: |
              # Modify deployment script based on the app type
              echo "Starting deployment script run"
              sudo java -jar '$(Pipeline.Workspace)/drop/**/target/*.jar'
      routeTraffic:
        steps:
        - script: echo routing traffic
      postRouteTraffic:
        steps:
        - script: echo health check post-route traffic
      on:
        failure:
          steps:
          - script: echo Restore from backup! This is on failure
        success:
          steps:
          - script: echo Notify! This is on success
With each run of this job, deployment history is recorded against the <environment name> environment that
you created and in which you registered the VMs.
Get started with Azure Resource Manager templates (ARM templates) by deploying a Linux web app with MySQL.
ARM templates give you a way to save your configuration in code. Using an ARM template is an example of
infrastructure as code and a good DevOps practice.
An ARM template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for
your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to
write the sequence of programming commands to create it.
Prerequisites
Before you begin, you need:
An Azure account with an active subscription. Create an account for free.
An active Azure DevOps organization. Sign up for Azure Pipelines.
Create a project
If you signed up for Azure DevOps with a newly created Microsoft account (MSA), your project is automatically
created and named based on your sign-in.
If you signed up for Azure DevOps with an existing MSA or GitHub identity, you're automatically prompted to create
a project. You can create either a public or private project. To learn more about public projects, see What is a public
project?.
1. Enter information into the form provided, which includes a project name, description, visibility selection,
initial source control type, and work item process.
See choosing the right version control for your project and choose a process for guidance.
2. When your project is complete, the welcome page appears.
https://github.com/Azure/azure-quickstart-templates/
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"contentVersion": "1.0.0.0",
"parameters": {
"siteName": {
"type": "string",
"defaultValue": "[concat('MySQL-', uniqueString(resourceGroup().name))]",
"metadata": {
"description": "The unique name of your Web Site."
}
},
"administratorLogin": {
"type": "string",
"minLength": 1,
"metadata": {
"description": "Database administrator login name"
}
},
"administratorLoginPassword": {
"type": "securestring",
"minLength": 8,
"metadata": {
"description": "Database administrator password"
}
},
"dbSkucapacity": {
"type": "int",
"defaultValue": 2,
"allowedValues": [
2,
4,
8,
16,
32
],
"metadata": {
"description": "Azure database for mySQL compute capacity in vCores (2,4,8,16,32)"
}
},
"dbSkuName": {
"type": "string",
"defaultValue": "GP_Gen5_2",
"allowedValues": [
"GP_Gen5_2",
"GP_Gen5_4",
"GP_Gen5_8",
"GP_Gen5_16",
"GP_Gen5_32",
"MO_Gen5_2",
"MO_Gen5_4",
"MO_Gen5_8",
"MO_Gen5_16",
"MO_Gen5_32"
],
"metadata": {
"description": "Azure database for mySQL sku name "
}
},
"dbSkuSizeMB": {
"type": "int",
"defaultValue": 51200,
"allowedValues": [
102400,
51200
],
"metadata": {
"description": "Azure database for mySQL Sku Size "
}
},
"dbSkuTier": {
"type": "string",
"defaultValue": "GeneralPurpose",
"defaultValue": "GeneralPurpose",
"allowedValues": [
"GeneralPurpose",
"MemoryOptimized"
],
"metadata": {
"description": "Azure database for mySQL pricing tier"
}
},
"mysqlVersion": {
"type": "string",
"defaultValue": "5.7",
"allowedValues": [
"5.6",
"5.7"
],
"metadata": {
"description": "MySQL version"
}
},
"location": {
"type": "string",
"defaultValue": "[resourceGroup().location]",
"metadata": {
"description": "Location for all resources."
}
},
"databaseskuFamily": {
"type": "string",
"defaultValue": "Gen5",
"metadata": {
"description": "Azure database for mySQL sku family"
}
}
},
"variables": {
"databaseName": "[concat('database', uniqueString(resourceGroup().id))]",
"serverName": "[concat('mysql-', uniqueString(resourceGroup().id))]",
"hostingPlanName": "[concat('hpn-', uniqueString(resourceGroup().id))]"
},
"resources": [
{
"type": "Microsoft.Web/serverfarms",
"apiVersion": "2020-06-01",
"name": "[variables('hostingPlanName')]",
"location": "[parameters('location')]",
"sku": {
"Tier": "Standard",
"Name": "S1"
},
"kind": "linux",
"properties": {
"name": "[variables('hostingPlanName')]",
"workerSizeId": "1",
"reserved": true,
"numberOfWorkers": "1"
}
},
{
"type": "Microsoft.Web/sites",
"apiVersion": "2020-06-01",
"name": "[parameters('siteName')]",
"location": "[parameters('location')]",
"dependsOn": [
"[variables('hostingPlanName')]"
],
"properties": {
"siteConfig": {
"linuxFxVersion": "php|7.0",
"connectionStrings": [
{
"name": "defaultConnection",
"ConnectionString": "[concat('Database=', variables('databaseName'), ';Data Source=',
reference(resourceId('Microsoft.DBforMySQL/servers',variables('serverName'))).fullyQualifiedDomainName,';User
Id=',parameters('administratorLogin'),'@',variables('serverName')
,';Password=',parameters('administratorLoginPassword'))]",
"type": "MySql"
}
]
},
"name": "[parameters('siteName')]",
"serverFarmId": "[variables('hostingPlanName')]"
}
},
{
"type": "Microsoft.DBforMySQL/servers",
"apiVersion": "2017-12-01",
"name": "[variables('serverName')]",
"location": "[parameters('location')]",
"sku": {
"name": "[parameters('dbSkuName')]",
"tier": "[parameters('dbSkuTier')]",
"capacity": "[parameters('dbSkucapacity')]",
"size": "[parameters('dbSkuSizeMB')]",
"family": "[parameters('databaseSkuFamily')]"
},
"properties": {
"createMode": "Default",
"version": "[parameters('mysqlVersion')]",
"administratorLogin": "[parameters('administratorLogin')]",
"administratorLoginPassword": "[parameters('administratorLoginPassword')]",
"storageProfile": {
"storageMB": "[parameters('dbSkuSizeMB')]",
"backupRetentionDays": 7,
"geoRedundantBackup": "Disabled"
},
"sslEnforcement": "Disabled"
},
"resources": [
{
"type": "firewallrules",
"apiVersion": "2017-12-01",
"name": "AllowAzureIPs",
"location": "[parameters('location')]",
"dependsOn": [
"[resourceId('Microsoft.DBforMySQL/servers/databases', variables('serverName'),
variables('databaseName'))]",
"[resourceId('Microsoft.DBforMySQL/servers/', variables('serverName'))]"
],
"properties": {
"startIpAddress": "0.0.0.0",
"endIpAddress": "255.255.255.255"
}
},
{
"type": "databases",
"apiVersion": "2017-12-01",
"name": "[variables('databaseName')]",
"dependsOn": [
"[resourceId('Microsoft.DBforMySQL/servers/', variables('serverName'))]"
],
"properties": {
"charset": "utf8",
"collation": "utf8_general_ci"
}
}
]
}
}
]
}
NOTE
You may be redirected to GitHub to sign in. If so, enter your GitHub credentials.
NOTE
You may be redirected to GitHub to install the Azure Pipelines app. If so, select Approve and install.
trigger:
- none
pool:
  vmImage: 'ubuntu-latest'
VARIABLE | VALUE | SECRET?
siteName | mytestsite | No
adminUser | fabrikam | No
8. Map the secret variable $(adminPass) so that it is available in your Azure Resource Group Deployment task.
At the top of your YAML file, map $(adminPass) to $(ARM_PASS) .
variables:
  ARM_PASS: $(adminPass)
trigger:
- none
pool:
  vmImage: 'ubuntu-latest'
9. Add the Copy Files task to the YAML file. You will use the 101-webapp-linux-managed-mysql project. For more
information, see the Build a Web app on Linux with Azure database for MySQL repo.
variables:
  ARM_PASS: $(adminPass)
trigger:
- none
pool:
  vmImage: 'ubuntu-latest'
steps:
- task: CopyFiles@2
  inputs:
    SourceFolder: '101-webapp-linux-managed-mysql'
    Contents: '**'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
10. Add and configure the Azure Resource Group Deployment task.
The task references both the artifact you built with the Copy Files task and your pipeline variables. Set these
values when configuring your task.
Deployment scope (deploymentScope) : Set the deployment scope to Resource Group . You can target
your deployment to a management group, an Azure subscription, or a resource group.
Azure Resource Manager connection (azureResourceManagerConnection) : Select your Azure
Resource Manager service connection. To configure a new service connection, select the Azure subscription
from the list and click Authorize . See Connect to Microsoft Azure for more details.
Subscription (subscriptionId) : Select the subscription where the deployment should go.
Action (action) : Set to Create or update resource group to create a new resource group or to update an
existing one.
Resource group : Set to ARMPipelinesLAMP-rg to name your new resource group. If this is an existing
resource group, it will be updated.
Location (location) : Location for deploying the resource group. Set to your closest location (for example,
West US). If the resource group already exists in your subscription, this value will be ignored.
Template location (templateLocation) : Set to Linked artifact . This is the location of your template and
the parameters files.
Template (csmFile) : Set to $(Build.ArtifactStagingDirectory)/azuredeploy.json . This is the path to the
ARM template.
Template parameters (csmParametersFile) : Set to
$(Build.ArtifactStagingDirectory)/azuredeploy.parameters.json . This is the path to the parameters file for
your ARM template.
Override template parameters (overrideParameters) : Set to
-siteName $(siteName) -administratorLogin $(adminUser) -administratorLoginPassword $(ARM_PASS) to use
the variables you created earlier. These values will replace the parameters set in your template
parameters file.
Deployment mode (deploymentMode) : The way resources should be deployed. Set to Incremental .
Incremental keeps resources that are not in the ARM template and is faster than Complete . Validate
mode lets you find problems with the template before deploying.
variables:
  ARM_PASS: $(adminPass)
trigger:
- none
pool:
  vmImage: 'ubuntu-latest'
steps:
- task: CopyFiles@2
  inputs:
    SourceFolder: '101-webapp-linux-managed-mysql'
    Contents: '**'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: '<your-resource-manager-connection>'
    subscriptionId: '<your-subscription-id>'
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'ARMPipelinesLAMP-rg'
    location: '<your-closest-location>'
    templateLocation: 'Linked artifact'
    csmFile: '$(Build.ArtifactStagingDirectory)/azuredeploy.json'
    csmParametersFile: '$(Build.ArtifactStagingDirectory)/azuredeploy.parameters.json'
    overrideParameters: '-siteName $(siteName) -administratorLogin $(adminUser) -administratorLoginPassword $(ARM_PASS)'
    deploymentMode: 'Incremental'
11. Click Save and run to deploy your template. The pipeline job will be launched and, after a few minutes,
depending on your agent, the job status should indicate Success .
2. Go to your new site. If you set siteName to armpipelinetestsite , the site is located at
https://armpipelinetestsite.azurewebsites.net/ .
Clean up resources
You can also use an ARM template to delete resources. Change the action value in your Azure Resource Group
Deployment task to DeleteRG . You can also remove the inputs for templateLocation , csmFile ,
csmParametersFile , overrideParameters , and deploymentMode .
variables:
  ARM_PASS: $(adminPass)
trigger:
- none
pool:
  vmImage: 'ubuntu-latest'
steps:
- task: CopyFiles@2
  inputs:
    SourceFolder: '101-webapp-linux-managed-mysql'
    Contents: '**'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: '<your-resource-manager-connection>'
    subscriptionId: '<your-subscription-id>'
    action: 'DeleteRG'
    resourceGroupName: 'ARMPipelinesLAMP-rg'
    location: '<your-closest-location>'
Next steps
Create your first ARM template
Why data pipelines?
11/2/2020 • 2 minutes to read
Data pipelines in the enterprise can evolve into more complicated scenarios with multiple source systems and
supporting various downstream applications.
Data pipelines provide:
Consistency: Data pipelines transform data into a consistent format for users to consume
Error reduction: Automated data pipelines eliminate human errors when manipulating data
Efficiency: Data professionals save time spent on data processing and transformation. Saving time allows them to
focus on their core job function - getting insight out of the data and helping the business make better decisions
What is CI/CD?
Continuous integration and continuous delivery (CI/CD) is a software development approach where all developers
work together on a shared repository of code – and as changes are made, there are automated build processes for
detecting code issues. The outcome is a faster development life cycle and a lower error rate.
What is a CI/CD data pipeline and why does it matter for data science?
The building of machine learning models is similar to traditional software development in the sense that the data
scientist needs to write code to train and score machine learning models.
Unlike traditional software development where the product is based on code, data science machine learning
models are based on both the code (algorithm, hyper parameters) and the data used to train the model. That’s why
most data scientists will tell you that they spend 80% of the time doing data preparation, cleaning and feature
engineering.
To complicate the matter even further – to ensure the quality of the machine learning models, techniques such as
A/B testing are used – where there could be multiple machine learning models being used concurrently. There is
usually one control model and one or more treatment models for comparison – so that the model performance can
be compared and maintained. Having multiple models adds another layer of complexity for the CI/CD of machine
learning models.
Having a CI/CD data pipeline is crucial for the data science team to deliver the machine learning models to the
business in a timely and quality manner.
Next steps
Build a data pipeline with Azure
Build a data pipeline with DevOps, Azure Data
Factory, and machine learning
11/2/2020 • 8 minutes to read
Get started with data pipelines by building a data pipeline with data ingestion, data transformation, and model
training.
Learn how to grab data from a CSV and save to blob storage, and then transform the data and save it to a staging
area. Then train a machine learning model using the transformed data and output the model as a pickle file to blob
storage.
Prerequisites
Before you begin, you need:
An Azure account with an active subscription. Create an account for free.
An active Azure DevOps organization. Sign up for Azure Pipelines.
Downloaded data (sample.csv)
Access to the data pipeline solution in GitHub
DevOps for Azure Databricks
NOTE
Cloud Shell requires an Azure storage resource to persist any files that you create in Cloud Shell. When you first open
Cloud Shell, you're prompted to create a resource group, storage account, and Azure Files share. This setup is
automatically used for all future Cloud Shell sessions.
az account list-locations \
--query "[].{Name: name, DisplayName: displayName}" \
--output table
2. From the Name column in the output, choose a region that's close to you. For example, choose eastasia or
westus2 .
3. Run az configure to set your default region. Replace <REGION> with the name of the region you chose.
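For example (one representative form of the command; it sets the default location used by subsequent az commands):
az configure --defaults location=<REGION>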
resourceSuffix=$RANDOM
2. Create globally unique names for your storage account and key vault. These commands use double quotes,
which instruct Bash to interpolate the variables using the inline syntax.
storageName="datacicd${resourceSuffix}"
keyVault="keyvault${resourceSuffix}"
3. Create one more Bash variable to store the names of your resource group.
rgName='data-pipeline-cicd-rg'
4. Create variable names for your Azure Data Factory and Azure Databricks instances.
datafactorydev='data-factory-cicd-dev'
datafactorytest='data-factory-cicd-test'
databricksname='databricks-cicd-ws'
2. Run the following az storage account create command to create a new storage account.
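A minimal form of the command, reusing the variables defined above, is sketched here; additional options such as SKU and kind can be supplied as needed.
az storage account create \
  --name $storageName \
  --resource-group $rgName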
a. Run the following az storage container create command to create two containers, rawdata and
prepareddata .
az storage container create -n rawdata --account-name $storageName
az storage container create -n prepareddata --account-name $storageName
3. Run the following az keyvault create command to create a new key vault.
az keyvault create \
--name $keyVault \
--resource-group $rgName
4. Create a new Azure Data Factory within the portal UI or using Azure CLI.
Name: data-factory-cicd-dev
Version: V2
Resource Group: data-pipeline-cicd-rg
Location: your closest location
Uncheck Enable GIT
a. Add the Azure Data Factory extension.
b. Run the following az datafactory factory create command to create a new data factory.
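Representative commands are sketched below. They assume the Azure CLI datafactory extension exposes the command in this form, and they reuse the variables defined earlier; add a --location value if your CLI version requires one.
# Step a: add the Data Factory extension to the Azure CLI
az extension add --name datafactory
# Step b: create the development data factory
az datafactory factory create \
  --resource-group $rgName \
  --factory-name $datafactorydev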
4. Create a second variable group named keys-vg that pulls data variables from Key Vault.
5. Check Link secrets from an Azure key vault as variables . Learn how to link secrets from an Azure key
vault.
6. Authorize the Azure subscription.
7. Choose all of the available secrets to add as variables ( databricks-token , StorageConnectString , StorageKey ).
Clean up resources
If you're not going to continue to use this application, delete your data pipeline with the following steps:
1. Delete the data-pipeline-cicd-rg resource group.
2. Delete your Azure DevOps project.
Next steps
Learn more about data in Azure Data Factory
Deploy apps to Azure Government Cloud
2/26/2020 • 2 minutes to read • Edit Online
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
Azure Government Clouds provide private and semi-isolated locations for specific Government and other services,
separate from the normal Azure services. The highest levels of privacy have been adopted for these clouds, including
restricted data access policies.
Azure Pipelines is not available in Azure Government Clouds, so there are some special considerations when you
want to deploy apps to Government Clouds because artifact storage, build, and deployment orchestration must
execute outside the Government Cloud.
To enable connection to an Azure Government Cloud, you specify it as the Environment parameter when you
create an Azure Resource Manager service connection. You must use the full version of the service connection
dialog to manually define the connection. Before you configure a service connection, you should also ensure you
meet all relevant compliance requirements for your application.
You can then use the service connection in your build and release pipeline tasks.
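For example, a hedged sketch of a YAML deployment step that uses such a connection; the connection name and app name are placeholders, not values from this article.
- task: AzureWebApp@1
  inputs:
    # Azure Resource Manager service connection created with the
    # Azure Government environment selected
    azureSubscription: '<Government cloud service connection>'
    appName: '<Name of web app>'
    package: '$(System.DefaultWorkingDirectory)/**/*.zip'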
Next
Deploy an Azure Web App
Troubleshoot Azure Resource Manager service connections
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are
called phases.
To deploy your app to an Azure resource (to an app service or to a virtual machine), you need an Azure
Resource Manager service connection.
For other types of connection, and general information about creating and using connections, see Service
connections for builds and releases.
Connection Name: Required. The name you will use to refer to this service connection in task properties. This is not the name of your Azure subscription.
Management Group: If you selected Management Group for the scope, select an existing Azure management group. See Create management groups.
To refresh a service connection, edit the connection and select Verify. Once you save, the service connection will be valid for two years.
4. Switch from the simplified version of the dialog to the full version using the link in the dialog.
5. Enter a user-friendly Connection name to use when referring to this service connection.
6. Select the Environment name (such as Azure Cloud, Azure Stack, or an Azure Government Cloud).
7. If you do not select Azure Cloud , enter the Environment URL. For Azure Stack, this will be something
like https://management.local.azurestack.external
8. Select the Scope level you require:
If you choose Subscription, select an existing Azure subscription. If you don't see any Azure subscriptions or instances, see Troubleshoot Azure Resource Manager service connections.
If you choose Management Group, select an existing Azure management group. See Create management groups.
9. Enter the information about your service principal into the Azure subscription dialog textboxes:
Subscription ID
Subscription name
Service principal ID
Either the service principal client key or, if you have selected Certificate, the contents of both the certificate and private key sections of the *.pem file.
Tenant ID
If you don't have this information to hand, you can obtain it by downloading and running this PowerShell script in an Azure PowerShell window. When prompted, enter your subscription name, password, role (optional), and the type of cloud such as Azure Cloud (the default), Azure Stack, or an Azure Government Cloud.
10. Choose Verify connection to validate the settings you entered.
11. After the new service connection is created:
If you are using it in the UI, select the connection name you assigned in the Azure subscription
setting of your pipeline.
If you are using it in YAML, copy the connection name into your code as the azureSubscription
value.
12. If required, modify the service principal to expose the appropriate permissions. For more details, see
Use Role-Based Access Control to manage access to your Azure subscription resources. This blog post
also contains more information about using service principal authentication.
See also: Troubleshoot Azure Resource Manager service connections.
You can configure Azure Virtual Machines (VM)-based agents with an Azure Managed Service Identity in
Azure Active Directory (Azure AD). This lets you use the system assigned identity (Service Principal) to grant
the Azure VM-based agents access to any Azure resource that supports Azure AD, such as Key Vault, instead of
persisting credentials in Azure DevOps for the connection.
1. In Azure DevOps, open the Service connections page from the project settings page. In TFS, open the
Services page from the "settings" icon in the top menu bar.
2. Choose + New service connection and select Azure Resource Manager.
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
You can automatically deploy your database updates to Azure SQL database after every successful build.
DACPAC
The simplest way to deploy a database is to create a data-tier application package, or DACPAC. DACPACs can be used to package
and deploy schema changes as well as data. You can create a DACPAC using the SQL database project in Visual
Studio.
YAML
Classic
To deploy a DACPAC to an Azure SQL database, add the following snippet to your azure-pipelines.yml file.
- task: SqlAzureDacpacDeployment@1
displayName: 'Execute Azure SQL : DacpacTask'
inputs:
azureSubscription: '<Azure service connection>'
ServerName: '<Database server name>'
DatabaseName: '<Database name>'
SqlUsername: '<SQL user name>'
SqlPassword: '<SQL user password>'
DacpacFile: '<Location of Dacpac file in $(Build.SourcesDirectory) after compilation>'
SQL scripts
Instead of using a DACPAC, you can also use SQL scripts to deploy your database. Here is a simple example of a
SQL script that creates an empty database.
USE [master]
GO
IF NOT EXISTS (SELECT name FROM master.sys.databases WHERE name = N'DatabaseExample')
CREATE DATABASE [DatabaseExample]
GO
To run SQL scripts as part of a pipeline, you will need Azure PowerShell scripts to create and remove firewall rules
in Azure. Without the firewall rules, the Azure Pipelines agent cannot communicate with Azure SQL Database.
The following PowerShell script creates firewall rules. You can check in this script as SetAzureFirewallRule.ps1 in
your repository.
ARM
[CmdletBinding(DefaultParameterSetName = 'None')]
param
(
[String] [Parameter(Mandatory = $true)] $ServerName,
[String] [Parameter(Mandatory = $true)] $ResourceGroup,
[String] $AzureFirewallName = "AzureWebAppFirewall"
)
$agentIP = (New-Object net.webclient).downloadstring("http://checkip.dyndns.com") -replace "[^\d\.]"
New-AzureRmSqlServerFirewallRule -ResourceGroupName $ResourceGroup -ServerName $ServerName -FirewallRuleName $AzureFirewallName -StartIPAddress $agentIP -EndIPAddress $agentIP
Classic
[CmdletBinding(DefaultParameterSetName = 'None')]
param
(
[String] [Parameter(Mandatory = $true)] $ServerName,
[String] [Parameter(Mandatory = $true)] $ResourceGroupName,
[String] $AzureFirewallName = "AzureWebAppFirewall"
)
$ErrorActionPreference = 'Stop'
function New-AzureSQLServerFirewallRule {
    $agentIP = (New-Object net.webclient).downloadstring("http://checkip.dyndns.com") -replace "[^\d\.]"
    New-AzureSqlDatabaseServerFirewallRule -StartIPAddress $agentIP -EndIPAddress $agentIP -FirewallRuleName $AzureFirewallName -ServerName $ServerName -ResourceGroupName $ResourceGroupName
}
function Update-AzureSQLServerFirewallRule {
    $agentIP = (New-Object net.webclient).downloadstring("http://checkip.dyndns.com") -replace "[^\d\.]"
    Set-AzureSqlDatabaseServerFirewallRule -StartIPAddress $agentIP -EndIPAddress $agentIP -FirewallRuleName $AzureFirewallName -ServerName $ServerName -ResourceGroupName $ResourceGroupName
}
The following PowerShell script removes firewall rules. You can check in this script as RemoveAzureFirewallRule.ps1 in
your repository.
ARM
[CmdletBinding(DefaultParameterSetName = 'None')]
param
(
[String] [Parameter(Mandatory = $true)] $ServerName,
[String] [Parameter(Mandatory = $true)] $ResourceGroup,
[String] $AzureFirewallName = "AzureWebAppFirewall"
)
Remove-AzureRmSqlServerFirewallRule -ServerName $ServerName -FirewallRuleName $AzureFirewallName -ResourceGroupName $ResourceGroup
Classic
[CmdletBinding(DefaultParameterSetName = 'None')]
param
(
[String] [Parameter(Mandatory = $true)] $ServerName,
[String] [Parameter(Mandatory = $true)] $ResourceGroupName,
[String] $AzureFirewallName = "AzureWebAppFirewall"
)
$ErrorActionPreference = 'Stop'
YAML
Classic
Add the following to your azure-pipelines.yml file to run a SQL script.
variables:
AzureSubscription: '<Azure service connection>'
ServerName: '<Database server name>'
DatabaseName: '<Database name>'
AdminUser: '<SQL user name>'
AdminPassword: '<SQL user password>'
SQLFile: '<Location of SQL file in $(Build.SourcesDirectory)>'
steps:
- task: AzurePowerShell@2
displayName: 'Azure PowerShell script: FilePath'
inputs:
azureSubscription: '$(AzureSubscription)'
ScriptPath: '$(Build.SourcesDirectory)\scripts\SetAzureFirewallRule.ps1'
ScriptArguments: '$(ServerName)'
azurePowerShellVersion: LatestVersion
- task: CmdLine@1
displayName: Run Sqlcmd
inputs:
filename: Sqlcmd
arguments: '-S $(ServerName) -U $(AdminUser) -P $(AdminPassword) -d $(DatabaseName) -i $(SQLFile)'
- task: AzurePowerShell@2
displayName: 'Azure PowerShell script: FilePath'
inputs:
azureSubscription: '$(AzureSubscription)'
ScriptPath: '$(Build.SourcesDirectory)\scripts\RemoveAzureFirewallRule.ps1'
ScriptArguments: '$(ServerName)'
azurePowerShellVersion: LatestVersion
Deploying conditionally
You may choose to deploy only certain builds to your Azure database.
YAML
Classic
To do this in YAML, you can use one of these techniques:
Isolate the deployment steps into a separate job, and add a condition to that job.
Add a condition to the step.
The following example shows how to use step conditions to deploy only those builds that originate from the main
branch.
- task: SqlAzureDacpacDeployment@1
condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
inputs:
azureSubscription: '<Azure service connection>'
ServerName: '<Database server name>'
DatabaseName: '<Database name>'
SqlUsername: '<SQL user name>'
SqlPassword: '<SQL user password>'
DacpacFile: '<Location of Dacpac file in $(Build.SourcesDirectory) after compilation>'
NOTE
If you execute SQLPackage from the folder where it is installed, you must prefix the path with & and wrap it in double-
quotes.
Basic Syntax
<Path of SQLPackage.exe> <Arguments to SQLPackage.exe>
You can use any of the following SqlPackage.exe actions, depending on the task you want to perform.
Extract
Creates a database snapshot (.dacpac) file from a live SQL server or Microsoft Azure SQL Database.
Command Syntax:
SqlPackage.exe /TargetFile:"<Target location of dacpac file>" /Action:Extract
/SourceServerName:"<ServerName>.database.windows.net"
/SourceDatabaseName:"<DatabaseName>" /SourceUser:"<Username>" /SourcePassword:"<Password>"
or
Example:
Help:
sqlpackage.exe /Action:Extract /?
Publish
Incrementally updates a database schema to match the schema of a source .dacpac file. If the database does not
exist on the server, the publish operation will create it. Otherwise, an existing database will be updated.
Command Syntax:
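A hedged sketch of the Publish syntax, modeled on the Extract example above (all values are placeholders):
SqlPackage.exe /SourceFile:"<Dacpac file location>" /Action:Publish /TargetServerName:"<ServerName>.database.windows.net"
/TargetDatabaseName:"<DatabaseName>" /TargetUser:"<Username>" /TargetPassword:"<Password>"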
Example:
Help:
sqlpackage.exe /Action:Publish /?
Export
Exports a live database, including database schema and user data, from SQL Server or Microsoft Azure SQL
Database to a BACPAC package (.bacpac file).
Command Syntax:
Example:
SqlPackage.exe /TargetFile:"C:\temp\test.bacpac" /Action:Export
/SourceServerName:"DemoSqlServer.database.windows.net"
/SourceDatabaseName:"Testdb" /SourceUser:"ajay" /SourcePassword:"SQLPassword"
Help:
sqlpackage.exe /Action:Export /?
Import
Imports the schema and table data from a BACPAC package into a new user database in an instance of SQL Server
or Microsoft Azure SQL Database.
Command Syntax:
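A hedged sketch of the Import syntax, following the same pattern as the other actions (all values are placeholders):
SqlPackage.exe /SourceFile:"<Bacpac file location>" /Action:Import /TargetServerName:"<ServerName>.database.windows.net"
/TargetDatabaseName:"<DatabaseName>" /TargetUser:"<Username>" /TargetPassword:"<Password>"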
Example:
Help:
sqlpackage.exe /Action:Import /?
DeployReport
Creates an XML report of the changes that would be made by a publish action.
Command Syntax:
Example:
Help:
sqlpackage.exe /Action:DeployReport /?
DriftReport
Creates an XML report of the changes that have been made to a registered database since it was last registered.
Command Syntax:
Example:
Help:
sqlpackage.exe /Action:DriftReport /?
Script
Creates a Transact-SQL incremental update script that updates the schema of a target to match the schema of a
source.
Command Syntax:
Example:
Help:
sqlpackage.exe /Action:Script /?
Deploy to Azure App Service using Visual Studio
Code
11/2/2020 • 6 minutes to read • Edit Online
This tutorial walks you through setting up a CI/CD pipeline for deploying a Node.js application to Azure App Service
using the Deploy to Azure extension.
Prerequisites
An Azure account. If you don't have one, you can create one for free.
Visual Studio Code installed, along with Node.js and npm (the Node.js package manager) and the following extensions:
The Azure Account extension and the Deploy to Azure extension
A GitHub account, where you can create a repository. If you don't have one, you can create one for free.
IMPORTANT
Ensure that you have all the prerequisites installed and configured. In VS Code, you should see your Azure email address in
the Status Bar.
TIP
If you have already completed the Node.js tutorial, you can skip ahead to Setup CI/CD Pipeline.
TIP
To test that you've got npm correctly installed on your computer, type npm --help from a terminal and you should see the
usage documentation.
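If you have not installed the Express Generator yet, you can install it with npm (a standard command, shown here for convenience):
npm install -g express-generator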
The -g switch installs the Express Generator globally on your machine so you can run it from anywhere.
Scaffold a new application
We can now scaffold a new Express application called myExpressApp by running:
express myExpressApp --view pug --git
This creates a new folder called myExpressApp with the contents of your application. The --view pug parameter
tells the generator to use the pug template engine (formerly known as jade).
To install all of the application's dependencies (again shipped as npm modules), go to the new folder and execute
npm install :
cd myExpressApp
npm install
At this point, we should test that our application runs. The generated Express application has a package.json file
that includes a start script to run node ./bin/www . This will start the Node.js application.
Run the application
1. From a terminal in the Express application folder, run:
npm start
The Node.js web server will start and you can browse to http://localhost:3000 to see the running application.
2. Follow this link to push this project to GitHub using the command line.
3. Open your application folder in VS Code and get ready to deploy to Azure.
3. After the installation is complete, the extension appears in the Enabled extensions list.
Setup CI/CD Pipeline
Now you can deploy to Azure App Service, Azure Functions, and AKS using VS Code. This VS Code extension
helps you set up continuous build and deployment for Azure App Services without leaving VS Code.
To use this service, you need to install the extension on VS Code. You can browse and install extensions from within
VS Code.
Combination of workflows
We support GitHub Actions for repositories hosted on GitHub and Azure Pipelines for repositories hosted in Azure Repos. You can also
create Azure Pipelines if you manage the code in GitHub.
GitHub + GitHub Actions
1. To set up a pipeline, choose Deploy to Azure: Configure CI/CD Pipeline from the command palette
(Ctrl/Cmd + Shift + P) or right-click on the file explorer.
NOTE
If the code is not opened in the workspace, you will be asked for the folder location. Similarly, if the workspace contains
more than one folder, you will be asked which folder to use.
2. Select a pipeline template you want to create from the list. Since we're targeting Node.js , select
Node.js with npm to App Service.
3. Select the target Azure Subscription to deploy your application.
TIP
If the code is in Azure Repos, you need different permissions.
6. The GitHub workflow or Azure Pipeline is configured based on the extension setting. The guided
workflow generates a starter YAML file that defines the build and deploy process (a rough sketch of such a
generated workflow is shown after these steps). Commit and push the YAML file to proceed with the deployment.
TIP
You can customize the pipeline using all the features offered by Azure Pipelines and GitHub Actions.
8. Navigate to your site running in Azure using the Web App URL http://{web_app_name}.azurewebsites.net ,
and verify its contents.
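The exact file the guided workflow generates depends on the template, but a generated GitHub Actions workflow for Node.js to App Service can look roughly like the sketch below; the web app name and the publish-profile secret name are placeholder assumptions:
on: [push]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
    # Get the source and set up the Node.js version the app targets
    - uses: actions/checkout@v2
    - uses: actions/setup-node@v1
      with:
        node-version: '10.x'
    # Install dependencies and run the build script if one is defined
    - run: |
        npm install
        npm run build --if-present
    # Deploy to the App Service instance using a publish profile secret
    - uses: azure/webapps-deploy@v2
      with:
        app-name: '<Name of web app>'
        publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
        package: .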
GitHub + Azure Pipelines
IMPORTANT
To set up CI/CD in Azure Pipelines for a GitHub repository, you need to enable Use Azure Pipelines for GitHub in the
extension settings.
To open your user and workspace settings, use the following VS Code menu command:
On Windows/Linux - File > Preferences > Settings
On macOS - Code > Preferences > Settings
You can also open the Settings editor from the Command Palette ( Ctrl+Shift+P ) with Preferences: Open Settings
or use the keyboard shortcut ( Ctrl+, ).
When you open the Settings editor, you can search for the setting you're looking for. Search for
deployToAzure.UseAzurePipelinesForGithub and enable it as shown below.
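For example, if you edit your settings.json directly, the entry looks roughly like this (an illustrative sketch):
{
  "deployToAzure.UseAzurePipelinesForGithub": true
}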
1. To set up a pipeline, choose Deploy to Azure: Configure CI/CD Pipeline from the command palette
(Ctrl/Cmd + Shift + P) or right-click on the file explorer.
NOTE
If the code is not opened in the workspace, you will be asked for the folder location. Similarly, if the workspace contains
more than one folder, you will be asked which folder to use.
2. Select a pipeline template you want to create from the list. Since we're targeting Node.js , select
Node.js with npm to App Service.
3. Select the target Azure Subscription to deploy your application.
TIP
You can customize the pipeline using all the features offered by Azure Pipelines and GitHub Actions.
NOTE
If the code is not opened in the workspace, you will be asked for the folder location. Similarly, if the workspace contains
more than one folder, you will be asked which folder to use.
2. Select a pipeline template you want to create from the list. Since we're targeting Node.js , select
Node.js with npm to App Service.
3. Select the target Azure Subscription to deploy your application.
TIP
You can customize the pipeline using all the features offered by Azure Pipelines and GitHub Actions.
Next steps
Try the workflow with a Docker file in a repo.
Azure Pipelines
Azure Stack is an extension of Azure that enables the agility and fast-paced innovation of cloud computing through
a hybrid cloud and on-premises environment.
In addition to supporting Azure AD, Azure DevOps Server 2019 can be used to deploy to Azure Stack with
Active Directory Federation Services (AD FS) using a service principal with a certificate.
Prerequisites
To deploy to Azure Stack using Azure Pipelines, ensure the following:
Azure Stack requirements:
Use an Azure Stack integrated system or deploy the Azure Stack Development Kit (ASDK)
Use the ConfigASDK.ps1 PowerShell script to automate ASDK post-deployment steps.
Create a tenant subscription in Azure Stack.
Deploy a Windows Server 2012 Virtual Machine in the tenant subscription. You'll use this server as your build
server and to run Azure DevOps Services.
Provide a Windows Server 2016 image with .NET 3.5 for a virtual machine (VM). This VM will be built on your
Azure Stack as a private build agent.
Azure Pipelines agent requirements:
Create a new service principal name (SPN) or use an existing one.
Validate the Azure Stack subscription via Role-Based Access Control (RBAC) to allow the Service Principal Name
(SPN) to be part of the Contributor's role. Azure DevOps Services must have the Contributor role to provision
resources in an Azure Stack subscription.
Create a new Service connection in Azure DevOps Services using the Azure Stack endpoints and SPN
information. Specify Azure Stack in the Environment parameter when you create an Azure Resource Manager
service connection. You must use the full version of the service connection dialog to manually define the
connection.
You can then use the service connection in your build and release pipeline tasks.
For more details, refer to Tutorial: Deploy apps to Azure and Azure Stack
Next
Deploy an Azure Web App
Troubleshoot Azure Resource Manager service connections
Azure Stack Operator Documentation
FAQ
Are all the Azure tasks supported?
The following Azure tasks are validated with Azure Stack:
Azure PowerShell
Azure File Copy
Azure Resource Group Deployment
Azure App Service Deploy
Azure App Service Manage
Azure SQL Database Deployment
How do I resolve SSL errors during deployment?
To ignore SSL errors, set a variable named VSTS_ARM_REST_IGNORE_SSL_ERRORS to the value true in the build or
release pipeline.
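For example, in a YAML pipeline this can be set as a pipeline variable (a minimal sketch; you can equally define the variable in the classic editor):
variables:
  VSTS_ARM_REST_IGNORE_SSL_ERRORS: true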
You can automatically deploy your functions to Azure Function App for Linux Container after every successful build.
https://github.com/azooinmyluggage/GHFunctionAppContainer
variables:
## Add this under variables section in the pipeline
azureSubscription: <Name of the Azure subscription>
appName: <Name of the function App>
containerRegistry: <Name of the Azure container registry>
variables:
# Container registry service connection established during pipeline creation
dockerRegistryServiceConnection: <Docker registry service connection>
imageRepository: <Name of your image repository>
containerRegistry: <Name of the Azure container registry>
dockerfilePath: '$(Build.SourcesDirectory)/Dockerfile'
tag: '$(Build.BuildId)'
The snippet assumes that the build steps in your YAML file build and push the Docker image to your Azure
container registry. The Azure Function App on Container Deploy task will pull the Docker image that
corresponds to the BuildId from the specified repository, and then deploy the image to the Azure Function App
Container.
YAML pipelines aren't available on TFS.
Deploy to a slot
YAML
Classic
You can configure the Azure Function App container to have multiple slots. Slots allow you to safely deploy your
app and test it before making it available to your customers.
The following YAML snippet shows how to deploy to a staging slot, and then swap to a production slot:
- task: AzureFunctionAppContainer@1
inputs:
azureSubscription: <Azure service connection>
appName: <Name of the function app>
imageName: $(containerRegistry)/$(imageRepository):$(tag)
deployToSlotOrASE: true
resourceGroupName: <Name of the resource group>
slotName: staging
- task: AzureAppServiceManage@0
inputs:
azureSubscription: <Azure service connection>
WebAppName: <name of the function app>
ResourceGroupName: <name of resource group>
SourceSlot: staging
SwapWithProduction: true
You can automatically deploy your Azure Function after every successful build.
https://github.com/microsoft/devops-project-samples/tree/master/dotnet/aspnetcore/functionApp
The snippet assumes that the build steps in your YAML file build and publish the source as an artifact. The Azure
Function App Deploy task will pull the artifact that corresponds to the BuildId from the specified source type, and
then deploy the artifact to the Azure Function App Service.
YAML pipelines aren't available on TFS.
trigger:
- main
variables:
# Azure service connection established during pipeline creation
azureSubscription: <Name of your Azure subscription>
appName: <Name of the Function app>
# Agent VM image name
vmImageName: 'ubuntu-latest'
The snippet assumes that the build steps in your YAML file produce the zip archive in the
$(System.ArtifactsDirectory) folder on your agent.
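A hedged sketch of the corresponding deployment step (the input values mirror the variables above; adjust them to match your pipeline):
- task: AzureFunctionApp@1
  inputs:
    azureSubscription: $(azureSubscription)
    appType: functionAppLinux
    appName: $(appName)
    package: $(System.ArtifactsDirectory)/**/*.zip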
Deploy to a slot
YAML
Classic
You can configure the Azure Function App to have multiple slots. Slots allow you to safely deploy your app and test
it before making it available to your customers.
The following YAML snippet shows how to deploy to a staging slot, and then swap to a production slot:
- task: AzureFunctionApp@1
inputs:
azureSubscription: <Azure service connection>
appType: functionAppLinux
appName: <Name of the Function app>
package: $(System.ArtifactsDirectory)/**/*.zip
deployToSlotOrASE: true
resourceGroupName: <Name of the resource group>
slotName: staging
- task: AzureAppServiceManage@0
inputs:
azureSubscription: <Azure service connection>
WebAppName: <name of the Function app>
ResourceGroupName: <name of resource group>
SourceSlot: staging
SwapWithProduction: true
You can automatically deploy your Azure Function after every successful build.
https://github.com/microsoft/devops-project-samples/tree/master/dotnet/aspnetcore/functionApp
The snippet assumes that the build steps in your YAML file build and publish the source as an artifact. The Azure
Function App Deploy task will pull the artifact that corresponds to the BuildId from the specified source type, and
then deploy the artifact to the Azure Function App Service.
YAML pipelines aren't available on TFS.
trigger:
- main
variables:
# Azure service connection established during pipeline creation
azureSubscription: <Name of your Azure subscription>
appName: <Name of the Function app>
# Agent VM image name
vmImageName: 'ubuntu-latest'
The snippet assumes that the build steps in your YAML file produce the zip archive in the
$(System.ArtifactsDirectory) folder on your agent.
Deploy to a slot
YAML
Classic
You can configure the Azure Function App to have multiple slots. Slots allow you to safely deploy your app and test
it before making it available to your customers.
The following YAML snippet shows how to deploy to a staging slot, and then swap to a production slot:
- task: AzureFunctionApp@1
inputs:
azureSubscription: <Azure service connection>
appType: functionApp
appName: <Name of the Function app>
package: $(System.ArtifactsDirectory)/**/*.zip
deployToSlotOrASE: true
resourceGroupName: <Name of the resource group>
slotName: staging
- task: AzureAppServiceManage@0
inputs:
azureSubscription: <Azure service connection>
WebAppName: <name of the Function app>
ResourceGroupName: <name of resource group>
SourceSlot: staging
SwapWithProduction: true
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
You can automatically deploy your web app to Azure App Service on Linux after every successful build.
NOTE
https://github.com/MicrosoftDocs/pipelines-dotnet-core
variables:
## Add this under variables section in the pipeline
azureSubscription: <Name of the Azure subscription>
appName: <Name of the Web App>
trigger:
- main
variables:
# Azure service connection established during pipeline creation
azureSubscription: <Name of your Azure subscription>
appName: <Name of the web app>
# Agent VM image name
vmImageName: 'ubuntu-latest'
The snippet assumes that the build steps in your YAML file build and publish the source as an artifact. The Azure
Web App Deploy task will pull the artifact that corresponds to the BuildId from the specified source type, and then
deploy the artifact to the Linux App Service.
YAML pipelines aren't available on TFS.
Deploy to a slot
YAML
Classic
You can configure the Azure Web App to have multiple slots. Slots allow you to safely deploy your app and test it
before making it available to your customers.
The following YAML snippet shows how to deploy to a staging slot, and then swap to a production slot:
- task: AzureWebApp@1
inputs:
azureSubscription: '<Azure service connection>'
appType: webAppLinux
appName: '<name of web app>'
deployToSlotOrASE: true
resourceGroupName: '<name of resource group>'
slotName: staging
- task: AzureAppServiceManage@0
inputs:
azureSubscription: '<Azure service connection>'
WebAppName: '<name of web app>'
ResourceGroupName: '<name of resource group>'
SourceSlot: staging
SwapWithProduction: true
You can automatically deploy your web app to an Azure Web App for Linux Containers after every successful build.
https://github.com/MicrosoftDocs/pipelines-dotnet-core-docker
variables:
## Add this under variables section in the pipeline
azureSubscription: <Name of the Azure subscription>
appName: <Name of the Web App>
containerRegistry: <Name of the Azure container registry>
trigger:
- main
variables:
# Container registry service connection established during pipeline creation
imageRepository: <Name of your image repository>
containerRegistry: <Name of the Azure container registry>
dockerfilePath: '$(Build.SourcesDirectory)/Dockerfile'
tag: '$(Build.BuildId)'
The snippet assumes that the build steps in your YAML file build and push the Docker image to your Azure
container registry. The Azure Web App on Container task will pull the Docker image that corresponds
to the BuildId from the specified repository, and then deploy the image to the Linux App Service.
YAML pipelines aren't available on TFS.
Deploy to a slot
YAML
Classic
You can configure the Azure Web App container to have multiple slots. Slots allow you to safely deploy your app
and test it before making it available to your customers.
The following YAML snippet shows how to deploy to a staging slot, and then swap to a production slot:
- task: AzureWebAppContainer@1
inputs:
azureSubscription: '<Azure service connection>'
appName: '<Name of the web app>'
containers: $(containerRegistry)/$(imageRepository):$(tag)
deployToSlotOrASE: true
resourceGroupName: '<Name of the resource group>'
slotName: staging
- task: AzureAppServiceManage@0
inputs:
azureSubscription: '<Azure service connection>'
WebAppName: '<name of web app>'
ResourceGroupName: '<name of resource group>'
SourceSlot: staging
SwapWithProduction: true
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
You can automatically deploy your web app to an Azure App Service web app after every successful build.
NOTE
This guidance applies to Team Foundation Server (TFS) version 2017.3 and later.
- task: AzureWebApp@1
inputs:
azureSubscription: '<Azure service connection>'
appName: '<Name of web app>'
package: $(System.DefaultWorkingDirectory)/**/*.zip
azureSubscription: your Azure subscription.
appName: the name of your existing app service.
package: the file path to the package or a folder containing your app service contents. Wildcards are
supported.
The snippet assumes that the build steps in your YAML file produce the zip archive in the
$(System.DefaultWorkingDirectory) folder on your agent.
- task: AzureWebApp@1
inputs:
azureSubscription: '<Azure service connection>'
appType: webAppLinux
appName: '<Name of web app>'
package: '$(System.DefaultWorkingDirectory)/**/*.war'
- task: AzureWebApp@1
inputs:
azureSubscription: '<Azure service connection>'
appName: '<Name of web app>'
package: '$(System.DefaultWorkingDirectory)'
customWebConfig: '-Handler iisnode -NodeStartFile server.js -appType node'
- task: AzureRmWebAppDeployment@4
inputs:
VirtualApplication: '<name of virtual application>'
VirtualApplication: the name of the Virtual Application that has been configured in the Azure portal. See
Configure an App Service app in the Azure portal for more details.
YAML pipelines aren't available on TFS.
Deploy to a slot
YAML
Classic
You can configure the Azure Web App to have multiple slots. Slots allow you to safely deploy your app and test it
before making it available to your customers.
The following example shows how to deploy to a staging slot, and then swap to a production slot:
- task: AzureWebApp@1
inputs:
azureSubscription: '<Azure service connection>'
appName: '<name of web app>'
slotName: staging
- task: AzureAppServiceManage@0
inputs:
azureSubscription: '<Azure service connection>'
WebAppName: '<name of web app>'
ResourceGroupName: '<name of resource group>'
SourceSlot: staging
jobs:
- job: buildandtest
pool:
vmImage: 'ubuntu-16.04'
steps:
# publish an artifact called drop
- task: PublishBuildArtifacts@1
inputs:
artifactName: drop
- job: deploy
pool:
vmImage: 'ubuntu-16.04'
dependsOn: buildandtest
condition: succeeded()
steps:
Configuration changes
For most language stacks, app settings and connection strings can be set as environment variables at runtime.
App settings can also be resolved from Key Vault using Key Vault references.
For ASP.NET and ASP.NET Core developers, setting app settings in App Service is like setting them in
Web.config. You might want to apply a specific configuration for your web app target before deploying to it. This
is useful when you deploy the same build to multiple web apps in a pipeline. For example, if your Web.config file
contains a connection string named connectionString , you can change its value before deploying to each web
app. You can do this either by applying a Web.config transformation or by substituting variables in your
Web.config file.
The Azure App Service Deploy task allows users to modify configuration settings in configuration files (*.config
files) inside web packages and XML parameters files (parameters.xml), based on the stage name specified.
NOTE
File transforms and variable substitution are also supported by the separate File Transform task for use in Azure Pipelines.
You can use the File Transform task to apply file transformations and variable substitutions on any configuration and
parameters files.
YAML
Classic
The following snippet shows an example of variable substitution:
jobs:
- job: test
variables:
connectionString: <test-stage connection string>
steps:
- task: AzureRmWebAppDeployment@4
inputs:
azureSubscription: '<Test stage Azure service connection>'
WebAppName: '<name of test stage web app>'
enableXmlVariableSubstitution: true
- job: prod
dependsOn: test
variables:
connectionString: <prod-stage connection string>
steps:
- task: AzureRmWebAppDeployment@4
inputs:
azureSubscription: '<Prod stage Azure service connection>'
WebAppName: '<name of prod stage web app>'
enableXmlVariableSubstitution: true
Deploying conditionally
You can choose to deploy only certain builds to your Azure Web App.
YAML
Classic
To do this in YAML, you can use one of these techniques:
Isolate the deployment steps into a separate job, and add a condition to that job.
Add a condition to the step.
The following example shows how to use step conditions to deploy only builds that originate from the main
branch:
- task: AzureWebApp@1
condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
inputs:
azureSubscription: '<Azure service connection>'
appName: '<name of web app>'
Deployment mechanisms
The preceding examples rely on the built-in Azure Web App task, which provides simplified integration with
Azure.
If you use a Windows agent, this task uses Web Deploy technology to interact with the Azure Web App. Web
Deploy provides several convenient deployment options, such as renaming locked files and excluding files from
the App_Data folder during deployment.
If you use the Linux agent, the task relies on the Kudu REST APIs.
One thing worth checking before deploying is the Azure App Service access restrictions list. This list can include
IP addresses or Azure Virtual Network subnets. When the list contains one or more entries, an implicit
"deny all" exists at the end of the list. To modify the access restriction rules to your app, see Adding and
editing access restriction rules in Azure portal. You can also modify/restrict access to your source control
management (scm) site.
The Azure App Service Manage task is another task that's useful for deployment. You can use this task to start,
stop, or restart the web app before or after deployment. You can also use this task to swap slots, install site
extensions, or enable monitoring of the web app.
You can use the File Transform task to apply file transformations and variable substitutions on any configuration
and parameters files.
If the built-in tasks don't meet your needs, you can use other methods to script your deployment. View the YAML
snippets in each of the following tasks for some examples:
Azure PowerShell task
Azure CLI task
FTP task
Release pipelines
11/2/2020 • 10 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
NOTE
This topic covers classic release pipelines. If you want to use YAML to author CI/CD pipelines, then see Create your first
pipeline.
Release pipelines in Azure Pipelines and Team Foundation Server (TFS 2015.2 and later) help your team
continuously deliver software to your customers at a faster pace and with lower risk. You can fully automate
the testing and delivery of your software in multiple stages all the way to production, or set up semi-automated
processes with approvals and on-demand deployments .
See Releases in Azure Pipelines to understand releases and deployments and watch the below video to see
release pipelines in action.
In this example, a release of a website is created by collecting specific versions of two builds (artifacts), each from
a different build pipeline. The release is first deployed to a Dev stage and then forked to two QA stages in parallel.
If the deployment succeeds in both the QA stages, the release is deployed to Prod ring 1 and then to Prod ring 2.
Each production ring represents multiple instances of the same website deployed at various locations around the
globe.
An example of how deployment automation can be modeled within a stage is shown below:
In this example, a job is used to deploy the app to websites across the globe in parallel within production ring 1.
After all those deployments are successful, a second job is used to switch traffic from the previous version to the
newer version.
NOTE
TFS 2015 : Jobs and fork/join deployments are not available in TFS 2015.
Next:
Check out the following articles to learn how to:
Create your first pipeline.
Set up a multi-stage managed release pipeline.
Manage deployments by using approvals and gates.
After you finish editing the draft release, choose Start from the draft release toolbar.
NOTE
If your source is not an Azure Repos Git repository, you cannot use Azure Pipelines or TFS to automatically publish the
deployment status to your repository. However, you can still use the Enable the Deployment status badge option
described below to show deployment status within your version control system.
When should I edit a release instead of the pipeline that defines it?
You can edit the approvals, tasks, and variables of a previously deployed release, instead of editing these values
in the pipeline from which the release was created. However, these edits apply to only the release generated
when you redeploy the artifacts. If you want your edits to apply to all future releases and deployments, choose the
option to edit the release pipeline instead.
You cannot abandon a release while a deployment is in progress; you must cancel the deployment first.
Date / Date:MMddyy: The current date, with the default format MMddyy. Any combinations of M/MM/MMM/MMMM, d/dd/ddd/dddd, y/yy/yyyy/yyyy, h/hh/H/HH, m/mm, s/ss are supported.
Release.DefinitionName: The name of the release pipeline to which the current release belongs.
Artifact.ArtifactType: The type of the artifact source linked with the release. For example, this can be Azure Pipelines or Jenkins.
Build.SourceBranch: The branch of the primary artifact source. For Git, this is of the form main if the branch is refs/heads/main. For Team Foundation Version Control, this is of the form branch if the root server path for the workspace is $/teamproject/branch. This variable is not set for Jenkins or other artifact sources.
Related topics
Deploy pull request builds using Azure Pipelines
Stage templates in Azure Pipelines
Deploy from multiple branches using Azure Pipelines
11/2/2020 • 2 minutes to read • Edit Online
Prerequisites
You'll need:
A working build for your repository
Build multiple branches
Two separate targets where you will deploy the app. These could be virtual machines, web servers, on-
premises physical deployment groups, or other types of deployment target. You will have to choose names
that are unique, but it's a good idea to include "Dev" in the name of one, and "Prod" in the name of the other
so that you can easily identify them.
3. Add a stage with a name Dev . This stage will be triggered when a build artifact is published from the dev
branch.
4. Choose the Pre-deployment conditions icon in the Stages section to open up the pre-deployment
conditions panel. Under select trigger select After release . This means that a deployment will be initiated
automatically when a new release is created from this release pipeline.
5. Enable the Artifact filters. Select Add and specify your artifact. In Build branch, select the dev branch,
then select Save.
6. Add another stage and name it Prod. This stage will be triggered when a build artifact is published from the
main branch. Repeat steps 4-5, replacing the Build branch with main.
Azure DevOps Services | Azure DevOps Server 2020 | Azure DevOps Server 2019
Pull requests provide an effective way to have code reviewed before it is merged to the codebase. However, certain
issues can be tricky to find until the code is built and deployed to an environment. Before the introduction of pull
request release triggers, when a PR was raised, you could trigger a build, but not a deployment. Pull request
triggers enable you to set up a set of criteria that must be met before deploying your code. You can use pull request
triggers with code hosted on Azure Repos or GitHub.
Configuring pull request based releases has two parts:
1. Setting up a pull request trigger.
2. Setting up a branch policy (in Azure Repos) or status checks (in GitHub) for your release pipeline.
Once a pull request release is configured, anytime a pull request is raised for the protected branch a release is
triggered automatically, deployed to the specified environments, and the status of the deployment is displayed in
the PR page. Pull request deployments may help you catch deployment issues early in the cycle, maintain better
code quality, and release with higher confidence.
This article shows how you can set up a pull request based release for code hosted in Azure Repos and in GitHub.
4. To deploy to a specific stage, you need to explicitly opt in that stage. The Stages section shows the stages
that are enabled for pull request deployments.
To opt in a stage for PR deployment, select the Pre-deployment conditions icon for that specific stage
and, under the Triggers section, select Pull request deployment to set it to Enabled.
IMPORTANT
For critical stages like production, Pull request deployment should not be turned on.
2. Select the context menu (...) for the appropriate branch and select Branch policies.
3. Select Add status policy and select a status policy from the status to check dropdown menu. The
dropdown contains a list of recent statuses. The release definition should have run at least once with the PR
trigger switched on in order to get the status. Select the status corresponding to your release definition and
save the policy.
You can further customize the policy for this status, like making the policy required or optional. For more
information, see Configure a branch policy for an external service.
4. You should now be able to see your new status policy in the list. Users won't be able to merge any changes
to the target branch until "succeeded" status is posted to the pull request.
5. You can view the status of your policies in the pull request Overview page. Depending on your policy
settings, you can view the posted release status under the Required , Optional , or Status sections. The
release status gets updated every time the pipeline is triggered.
You can view your status checks in your pull request under the Conversation tab.
Related articles
Release triggers
Supported build source repositories
Additional resources
Azure Repos
Branch policies
Configure branch policy for an external service
Define your multi-stage continuous deployment (CD)
pipeline
11/2/2020 • 6 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
Azure Pipelines provides a highly configurable and manageable pipeline for releases to multiple stages such as
development, staging, QA, and production. It also offers the opportunity to implement gates and approvals at
each specific stage.
In this tutorial, you will learn about:
Continuous deployment triggers
Adding stages
Adding pre-deployment approvals
Creating releases and monitoring deployments
Prerequisites
You'll need:
A release pipeline that contains at least one stage. If you don't already have one, you can create it by
working through any of the following quickstarts and tutorials:
Deploy to an Azure Web App
Azure DevOps Project
Deploy to IIS web server on Windows
Two separate targets where you will deploy the app. These could be virtual machines, web servers, on-
premises physical deployment groups, or other types of deployment target. In this example, we are using
Azure App Services website instances. If you decide to do the same, you will have to choose names that are
unique, but it's a good idea to include "QA" in the name of one, and "Production" in the name of the other
so that you can easily identify them. Use the Azure portal to create a new web app.
2. Select the Continuous deployment trigger icon in the Artifacts section to open the trigger panel. Make
sure this is enabled so that a new release is created after every successful build completes.
3. Select the Pre-deployment conditions icon in the Stages section to open the conditions panel. Make
sure that the trigger for deployment to this stage is set to After release . This means that a deployment
will be initiated automatically when a new release is created from this release pipeline.
You can also set up Release triggers, Stage triggers or schedule deployments.
Add stages
In this section, we will add two new stages to our release pipeline: QA and production (Two Azure App Services
websites in this example). This is a typical scenario where you would deploy initially to a test or staging server, and
then to a live or production server. Each stage represents one deployment target.
1. Select the Pipeline tab in your release pipeline and select the existing stage. Change the name of your
stage to Production .
2. Select the + Add drop-down list and choose Clone stage (the clone option is available only when an
existing stage is selected).
Typically, you want to use the same deployment methods with a test and a production stage so that you
can be sure your deployed apps will behave the same way. Cloning an existing stage is a good way to
ensure you have the same settings for both. You then just need to change the deployment targets.
3. Your cloned stage will have the name Copy of Production . Select it and change the name to QA .
4. To reorganize the stages in the pipeline, select the Pre-deployment conditions icon in your QA stage
and set the trigger to After release . The pipeline diagram will then show the two stages in parallel.
5. Select the Pre-deployment conditions icon in your Production stage and set the trigger to After
stage , then select QA in the Stages drop-down list. The pipeline diagram will now indicate that the two
stages will execute in the correct order.
NOTE
You can set up your deployment to start when a deployment to the previous stage is partially successful. This
means that the deployment will continue even if a specific non-critical task has failed. This is typically used in fork
and join deployments that deploy to different stages in parallel.
7. Depending on the tasks that you are using, change the settings so that this stage deploys to your "QA"
target. In our example, we will be using Deploy Azure App Ser vice task as shown below.
2. In the Approvers text box, enter the user(s) that will be responsible for approving the deployment. It is
also recommended to uncheck the The user requesting a release or deployment should not
approve it check box.
You can add as many approvers as you need, both individual users and organization groups. It's also
possible to set up post-deployment approvals by selecting the "user" icon at the right side of the stage in
the pipeline diagram. For more information, see Releases gates and approvals.
3. Select Save .
Create a release
Now that the release pipeline setup is complete, it's time to start the deployment. To do this, we will manually
create a new release. Usually a release is created automatically when a new build artifact is available. However, in
this scenario we will create it manually.
1. Select the Release drop-down list and choose Create release .
2. Enter a description for your release, check that the correct artifacts are selected, and then select Create .
3. A banner will appear indicating that a new release has been created. Select the release link to see more
details.
4. The release summary page will show the status of the deployment to each stage.
Other views, such as the list of releases, also display an icon that indicates approval is pending. The icon
shows a pop-up containing the stage name and more details when you point to it. This makes it easy for an
administrator to see which releases are awaiting approval, as well as the overall progress of all releases.
5. Select the pending_approval icon to open the approval window panel. Enter a brief comment, and select
Approve .
NOTE
You can schedule deployment at a later date, for example during non-peak hours. You can also reassign approval to a
different user. Release administrators can access and override all approval decisions.
During deployment, you can still access the logs page to see the live logs of every task.
2. Select any task to see the logs for that specific task. This makes it easier to trace and debug deployment
issues. You can also download individual task logs, or a zip of all the log files.
3. If you need additional information to debug your deployment, you can run the release in debug mode.
Next step
Use approvals and gates to control your deployment
Building a Continuous Integration and Continuous
Deployment pipeline with DSC
11/2/2020 • 11 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
This example demonstrates how to build a Continuous Integration/Continuous Deployment (CI/CD) pipeline by
using PowerShell, DSC, and Pester.
After the pipeline is built and configured, you can use it to fully deploy, configure and test a DNS server and
associated host records. This process simulates the first part of a pipeline that would be used in a development
environment.
An automated CI/CD pipeline helps you update software faster and more reliably, ensuring that all code is tested,
and that a current build of your code is available at all times.
Prerequisites
To use this example, you should be familiar with the following:
CI-CD concepts. A good reference can be found at The Release Pipeline Model.
Git source control
The Pester testing framework
Desired State Configuration(DSC)
Where <YourTFSRepoURL> is the clone URL to the TFS repository you created in the previous step.
If you don't know where to find this URL, see Clone an existing Git repo.
4. Push the code from your local repository to your TFS repository with the following command:
git push tfs --all
Where <YourDevOpsRepoURL> is the clone URL to the Azure DevOps repository you created in the previous
step.
If you don't know where to find this URL, see Clone an existing Git repo.
4. Push the code from your local repository to your Azure DevOps repository with the following command:
git push devops --all
5. The Azure DevOps repository will be populated with the Demo_CI code.
NOTE
This example uses the code in the ci-cd-example branch of the Git repo. Be sure to specify this branch as the default
branch in your project, and for the CI/CD triggers you create.
configuration DNSServer
{
Import-DscResource -module 'xDnsServer','xNetworking', 'PSDesiredStateConfiguration'
xDnsServerPrimaryZone $Node.zone
{
Ensure = 'Present'
Name = $Node.Zone
DependsOn = '[WindowsFeature]DNS'
}
This finds any nodes that were defined as having a role of DNSServer in the configuration data, which is created by
the DevEnv.ps1 script.
You can read more about the Where method in about_arrays
Using configuration data to define nodes is important when doing CI because node information will likely change
between environments, and using configuration data allows you to easily make changes to node information
without changing the configuration code.
In the first resource block, the configuration calls the WindowsFeature to ensure that the DNS feature is enabled.
The resource blocks that follow call resources from the xDnsServer module to configure the primary zone and DNS
records.
Notice that the two xDnsRecord blocks are wrapped in foreach loops that iterate through arrays in the
configuration data. Again, the configuration data is created by the DevEnv.ps1 script, which we'll look at next.
Configuration data
The DevEnv.ps1 file (from the root of the local Demo_CI repository, ./InfraDNS/DevEnv.ps1 ) specifies the
environment-specific configuration data in a hashtable, and then passes that hashtable to a call to the
New-DscConfigurationDataDocument function, which is defined in DscPipelineTools.psm1 (
./Assets/DscPipelineTools/DscPipelineTools.psm1 ).
param(
[parameter(Mandatory=$true)]
[string]
$OutputPath
)
The Default task has no implementation itself, but has a dependency on the CompileConfigs task. The resulting
chain of task dependencies ensures that all tasks in the build script are run.
In this example, the psake script is invoked by a call to Invoke-PSake in the Initiate.ps1 file (located at the root of
the Demo_CI repository):
param(
[parameter()]
[ValidateSet('Build','Deploy')]
[string]
$fileName
)
#$Error.Clear()
Invoke-PSake $PSScriptRoot\InfraDNS\$fileName.ps1
<#if($Error.count)
{
Throw "$fileName script failed. Check logs for failure details."
}
#>
When we create the build definition for our example, we will supply our psake script file as the fileName parameter
for this script.
The build script defines the following tasks:
GenerateEnvironmentFiles
Runs DevEnv.ps1 , which generates the configuration data file.
InstallModules
Installs the modules required by the configuration DNSServer.ps1 .
ScriptAnalysis
Calls the PSScriptAnalyzer.
UnitTests
Runs the Pester unit tests.
CompileConfigs
Compiles the configuration ( DNSServer.ps1 ) into a MOF file, using the configuration data generated by the
GenerateEnvironmentFiles task.
Clean
Creates the folders used for the example, and removes any test results, configuration data files, and modules from
previous runs.
The psake deploy script
The psake deployment script defined in Deploy.ps1 (from the root of the Demo_CI repository,
./InfraDNS/Deploy.ps1 ) defines tasks that deploy and run the configuration.
Deploy.ps1 defines the following tasks:
DeployModules
Starts a PowerShell session on TestAgent1 and installs the modules containing the DSC resources required for the
configuration.
DeployConfigs
Calls the Start-DscConfiguration cmdlet to run the configuration on TestAgent1 .
IntegrationTests
Runs the Pester integration tests.
AcceptanceTests
Runs the Pester acceptance tests.
Clean
Removes any modules installed in previous runs, and ensures that the test result folder exists.
Test scripts
Acceptance, Integration, and Unit tests are defined in scripts in the Tests folder (from the root of the Demo_CI
repository, ./InfraDNS/Tests ), each in files named DNSServer.tests.ps1 in their respective folders.
The test scripts use Pester and PoshSpec syntax.
Unit tests
The unit tests test the DSC configurations themselves to ensure that the configurations will do what is expected
when they run. The unit test script uses Pester.
Integration tests
The integration tests test the configuration of the system to ensure that when integrated with other components,
the system is configured as expected. These tests run on the target node after it has been configured with DSC. The
integration test script uses a mixture of Pester and PoshSpec syntax.
Acceptance tests
Acceptance tests test the system to ensure that it behaves as expected. For example, it tests to ensure a web page
returns the right information when queried. These tests run remotely from the target node in order to test real
world scenarios. The acceptance test script uses a mixture of Pester and PoshSpec syntax.
This build step runs the initiate.ps1 file, which calls the psake build script.
Publish Test Results
1. Set TestResultsFormat to NUnit
2. Set TestResultsFiles to InfraDNS/Tests/Results/*.xml
3. Set TestRunTitle to Unit .
4. Make sure Control Options Enabled and Always run are both selected.
This build step runs the unit tests in the Pester script we looked at earlier, and stores the results as XML files in the
InfraDNS/Tests/Results folder.
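If you define the build in YAML rather than the classic editor, a roughly equivalent step is sketched below using the
PublishTestResults@2 task; the format, file pattern, and run title mirror the settings above.
steps:
# Publish the NUnit-format Pester results produced by the build script
- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'NUnit'
    testResultsFiles: 'InfraDNS/Tests/Results/*.xml'
    testRunTitle: 'Unit'
  condition: always()  # mirrors the "Always run" control option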
Copy Files
1. Add each of the following lines to Contents :
initiate.ps1
**\deploy.ps1
**\Acceptance\**
**\Integration\**
This step copies the build and test scripts to the staging directory so that they can be published as build artifacts by
the next step.
Publish Artifact
1. Set TargetPath to $(Build.ArtifactStagingDirectory)\
2. Set ArtifactName to Deploy
3. Set Enabled to true .
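For reference, the same copy and publish steps can be sketched in YAML with the CopyFiles@2 and
PublishBuildArtifacts@1 tasks; the patterns and artifact name below mirror the classic settings above.
steps:
# Copy the build and test scripts to the artifact staging directory
- task: CopyFiles@2
  inputs:
    Contents: |
      initiate.ps1
      **\deploy.ps1
      **\Acceptance\**
      **\Integration\**
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
# Publish the staged files as the Deploy artifact
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'Deploy'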
Next steps
This example configures the DNS server TestAgent1 so that the URL www.contoso.com resolves to TestAgent2 , but
it does not actually deploy a website. The skeleton for doing so is provided in the repo under the WebApp folder. You
can use the stubs provided to create psake scripts, Pester tests, and DSC configurations to deploy your own website.
Stage templates in Azure Pipelines
11/2/2020 • 2 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
When you start a new release pipeline, or when you add a stage to an existing release pipeline, you can choose
from a list of templates for each stage. These templates pre-populate the stage with the appropriate tasks and
settings, which can considerably reduce the time and effort required to create a release pipeline for your DevOps
CI/CD processes.
A set of pre-defined stage templates is available in Azure Pipelines and in each version of TFS. You can use these
templates when you create a new release pipeline or add a new stage to a pipeline. You can also create your own
custom stage templates from a stage you have populated and configured.
NOTE
Templates do not have any additional security capability. There is no way to restrict the use of a template to specific users. All
templates, pre-defined and custom, are available for use by all users who have permission to create release pipelines.
When a stage is created from a template, the tasks in the template are copied over to the stage. Any further
updates to the template have no impact on existing stages. If you want a way to easily insert a number of stages
into release pipelines (perhaps to keep the definitions consistent) and to enable these stages to all be updated in
one operation, use task groups instead of stage templates.
FAQ
Can I export templates or share them with other subscriptions, enterprises, or projects?
Custom templates that you create are scoped to the project that you created them in. Templates cannot be exported
or shared with another project, collection, server, or organization. You can, however, export a release pipeline and
import it into another project, collection, server, or subscription. Then you can re-create the template for use in that
location.
How do I delete a custom stage template?
You can delete an existing custom template from the list of templates that is displayed when you add a new stage
to your pipeline.
How do I update a custom stage template?
To update a stage template, delete the existing template in a release pipeline and then save the stage as a template
with the same name.
Prerequisites
Before you begin, you'll need a CI build that publishes your Web Deploy package. To set up CI for your specific type
of app, see:
Build your ASP.NET 4 app
Build your ASP.NET Core app
Build your Node.js app with gulp
You'll also need an Azure Web App where you will deploy the app.
The only difference between these templates is that the Node.js template configures the task to generate a
web.config file containing a parameter that starts the iisnode service.
3. If you created your new release pipeline from a build summary, check that the build pipeline and artifact is
shown in the Artifacts section on the Pipeline tab. If you created a new release pipeline from the
Releases tab, choose the + Add link and select your build artifact.
4. Choose the Continuous deployment icon in the Artifacts section, check that the continuous deployment
trigger is enabled, and add a filter to include the master branch.
Continuous deployment is not enabled by default when you create a new release pipeline from the
Releases tab.
5. Open the Tasks tab and, with Stage 1 selected, configure the task property variables as follows:
Azure Subscription: Select a connection from the list under Available Azure Service
Connections or create a more restricted permissions connection to your Azure subscription. If you
are using Azure Pipelines and if you see an Authorize button next to the input, click on it to
authorize Azure Pipelines to connect to your Azure subscription. If you are using TFS or if you do not
see the desired Azure subscription in the list of subscriptions, see Azure Resource Manager service
connection to manually set up the connection.
App Service Name : Select the name of the web app from your subscription.
NOTE
Some settings for the tasks may have been automatically defined as stage variables when you created a release
pipeline from a template. These settings cannot be modified in the task settings; instead you must select the parent
stage item in order to edit these settings.
Next step
Customize web app deployment
Deploy to an Azure Web App for Containers
11/2/2020 • 5 minutes to read • Edit Online
Azure Pipelines
We'll show you how to set up continuous deployment of your Docker-enabled app to an Azure Web App using
Azure Pipelines.
For example, you can continuously deliver your app to a Windows VM hosted in Azure.
After you commit and push a code change, it is automatically built and then deployed. The results will
automatically show up on your site.
https://github.com/spring-guides/gs-spring-boot-docker.git
Prerequisites
You'll need an Azure subscription. You can get one free through Visual Studio Dev Essentials.
Why use a separate release pipeline instead of the automatic deployment feature available in Web
App for Containers?
You can configure Web App for Containers to automatically configure deployment as part of the CI/CD pipeline so
that the web app is automatically updated when a new image is pushed to the container registry (this feature uses
a webhook). However, by using a separate release pipeline in Azure Pipelines or TFS you gain extra flexibility and
traceability. You can:
Specify an appropriate tag that is used to select the deployment target for multi-stage deployments.
Use separate container registries for different stages.
Use parameterized start-up commands to, for example, set the values of variables based on the target stage.
Avoid using the same tag for all the deployments. The default CD pipeline for Web App for Containers uses the
same tag for every deployment. While this may be appropriate for a tag such as latest , you can achieve end-to-
end traceability from code to deployment by using a build-specific tag for each deployment. For example, the
Docker build tasks let you tag your images with the Build.ID for each deployment.
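As an illustration, a minimal YAML sketch of tagging each image with the build ID might look like the following; the
service connection and repository names are placeholders.
steps:
# Build and push an image tagged with the build ID for end-to-end traceability
- task: Docker@2
  displayName: Build and push image
  inputs:
    containerRegistry: 'myRegistryServiceConnection'  # placeholder service connection
    repository: 'myapp'                               # placeholder repository name
    command: buildAndPush
    Dockerfile: '**/Dockerfile'
    tags: |
      $(Build.BuildId)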
Next steps
Set up multi-stage release
Deploy a Docker container app to Azure Kubernetes
Service
11/2/2020 • 7 minutes to read • Edit Online
Azure Pipelines
We'll show you how to set up continuous deployment of your containerized application to an Azure Kubernetes
Service (AKS) using Azure Pipelines.
After you commit and push a code change, it will be automatically built and deployed to the target Kubernetes
cluster.
https://github.com/spring-guides/gs-spring-boot-docker.git
Prerequisites
You'll need an Azure subscription. You can get one free through Visual Studio Dev Essentials.
Configure authentication
When you use Azure Container Registry (ACR) with Azure Kubernetes Service (AKS), you must establish an
authentication mechanism. This can be achieved in two ways:
1. Grant AKS access to ACR. See Authenticate with Azure Container Registry from Azure Kubernetes Service (a CLI sketch of this option follows this list).
2. Use a Kubernetes image pull secret. An image pull secret can be created by using the Kubernetes
deployment task.
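For the first option, a minimal sketch using the Azure CLI task is shown below; the service connection, cluster,
resource group, and registry names are placeholders.
steps:
# Grant the AKS cluster pull access to the container registry
- task: AzureCLI@2
  inputs:
    azureSubscription: 'myAzureServiceConnection'  # placeholder ARM service connection
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # placeholder cluster, resource group, and registry names
      az aks update --name myAKSCluster --resource-group myResourceGroup --attach-acr myRegistry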
7. Choose + in the Agent job and add another Package and deploy Helm charts task. Configure the
settings for this task as follows:
Kubernetes cluster : Enter or select the AKS cluster you created.
Namespace : Enter your Kubernetes cluster namespace where you want to deploy your application.
Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual
clusters are called namespaces. You can use namespaces to create different environments such as dev,
test, and staging in the same cluster.
Command : Select upgrade as the Helm command. You can run any Helm command using this task
and pass in command options as arguments. When you select upgrade, the task shows some
additional fields:
Chart Type : Select File Path . Alternatively, you can specify Chart Name if you want to
specify a URL or a chart name. For example, if the chart name is stable/mysql , the task will
execute helm upgrade stable/mysql
Chart Path : This can be a path to a packaged chart or a path to an unpacked chart directory. In
this example you are publishing the chart using a CI build, so select the file package using file
picker or enter $(System.DefaultWorkingDirectory)/**/*.tgz
Release Name : Enter a name for your release; for example azuredevops
Recreate Pods : Tick this checkbox if there is a configuration change during the release and
you want to replace a running pod with the new configuration.
Reset Values : Tick this checkbox if you want the values built into the chart to override all
values provided by the task.
Force : Tick this checkbox if, should conflicts occur, you want to upgrade and rollback to delete,
recreate the resource, and reinstall the full release. This is useful in scenarios where applying
patches can fail (for example, for services because the cluster IP address is immutable).
Arguments : Enter the Helm command arguments and their values; for this example
--set image.repository=$(imageRepoName) --set image.tag=$(Build.BuildId) See this section for
a description of why we are using these arguments.
Enable TLS : Tick this checkbox to enable strong TLS-based connections between Helm and
Tiller.
CA certificate : Specify a CA certificate to be uploaded and used to issue certificates for Tiller
and Helm client.
Certificate : Specify the Tiller certificate or Helm client certificate.
Key : Specify the Tiller key or Helm client key.
8. In the Variables page of the pipeline, add a variable named imageRepoName and set the value to the
name of your Helm image repository. Typically, this is in the format name.azurecr.io/coderepository
9. Save the release pipeline.
Arguments used in the Helm upgrade task
In the build pipeline, the container image is tagged with $(Build.BuildId) and this is pushed to an Azure Container
Registry. In a Helm chart you can parameterize the container image details such as the name and tag because the
same chart can be used to deploy to different environments. These values can also be specified in the values.yaml
file or be overridden by a user-supplied values file, which can in turn be overridden by --set parameters during
the Helm install or upgrade.
In this example, we pass the following arguments:
--set image.repository=$(imageRepoName) --set image.tag=$(Build.BuildId)
The value of $(imageRepoName) was set in the Variables page (or the variables section of your YAML file).
Alternatively, you can directly replace it with your image repository name in the --set arguments value or
values.yaml file. For example:
image:
repository: VALUE_TO_BE_OVERRIDDEN
tag: latest
Another alternative is to set the Set Values option of the task to specify the argument values as comma separated
key-value pairs.
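In a YAML pipeline, the same upgrade can be sketched with the Package and deploy Helm charts task
(HelmDeploy@0); the service connection, resource group, and cluster names below are placeholders, and the
override values correspond to the arguments discussed above.
steps:
# Helm upgrade with build-specific image details passed as overrides
- task: HelmDeploy@0
  displayName: Helm upgrade
  inputs:
    connectionType: 'Azure Resource Manager'
    azureSubscription: 'myAzureServiceConnection'  # placeholder service connection
    azureResourceGroup: 'myResourceGroup'          # placeholder resource group
    kubernetesCluster: 'myAKSCluster'              # placeholder AKS cluster
    namespace: 'default'
    command: upgrade
    chartType: FilePath
    chartPath: '$(System.DefaultWorkingDirectory)/**/*.tgz'
    releaseName: azuredevops
    overrideValues: 'image.repository=$(imageRepoName),image.tag=$(Build.BuildId)'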
Next steps
Set up multi-stage release
Automatically deploy to IoT edge devices
11/2/2020 • 7 minutes to read • Edit Online
Azure Pipelines
In this tutorial, you'll learn how to build an Azure Internet of Things (IoT) solution, push the created module images
to your Azure Container Registry (ACR), create a deployment manifest, and then deploy the modules to targeted IoT
edge devices.
Prerequisites
1. Visual Studio (VS) Code to create an IoT Edge module. You can download it from here.
2. Azure DevOps Ser vices organization . If you don't yet have one, you can get one for free.
3. Microsoft Azure Account . If you don't yet have one, you can create one for free.
4. Azure IoT tools for VS Code.
5. Docker CE.
6. Create an Azure Container Registry.
FIELD  VALUES
Provide a solution name Enter a descriptive name for your solution or accept the
default EdgeSolution
Provide Docker image repository for the module An image repository includes the name of your container
registry and the name of your container image. Your
container image is prepopulated from the name that you
have provided in the last step. Replace localhost:5000
with the login server value from your Azure container
registry. You can retrieve the login server from the
Overview page of your container registry in the Azure
portal.
The VS Code window loads your IoT Edge solution workspace. The solution workspace contains five top-level
components.
1. modules - contains C# code for your module as well as Dockerfiles for building your module as a container
image
2. .env - file stores your container registry credentials
3. deployment.template.json - file contains the information that the IoT Edge runtime uses to deploy the
modules on a device
4. deployment.debug.template.json - file contains the debug version of modules
5. .vscode and .gitignore - do not edit
If you didn't specify a container registry when creating your solution, but accepted the default localhost:5000 value,
you won't have a .env file.
4. From the browser, navigate to the repo. You should see the code.
FIELD  VALUES
Template location (Required) Set the template location to the URL of the file
NOTE
Save the pipeline and queue the build. The above step will create an Azure Container Registry. This is required to push
the IoT module images.
8. Edit the pipeline, and select + , and search for the Azure IoT Edge task. Select add . This step will build the
module images.
9. Select + and search for the Azure IoT Edge task. Select add . Configure the task as shown below -
FIELD  VALUES
Container registry type Select the Container registry type Azure Container
Registry
Azure subscription Select the Azure Resource Manager subscription for the
deployment
Azure Container Registry Select an Azure Container Registry from the dropdown
which was created in the step 5
10. Select + and search for Publish Build Artifacts task. Select add . Set the path to publish to
$(Build.ArtifactStagingDirectory)/deployment.amd64.json .
11. Save the pipeline and queue the build.
Create a release pipeline
The build pipeline has already built a Docker image and pushed it to an Azure Container Registry. In the release
pipeline we will create an IoT hub, IoT Edge device in that hub, deploy the sample module from the build pipeline,
and provision a virtual machine to run as your IoT Edge device.
1. Navigate to the Pipelines | Releases .
2. From the New drop-down menu, select New release pipeline to create a new release pipeline.
3. Select Empty job to create the pipeline.
4. Select + and search for Azure Resource Group Deployment task. Select add . Configure the task as
shown below.
FIELD  VALUES
Template location (Required) Set the template location to the URL of the file
5. Select + and search for Azure CLI task. Select add and configure the task as shown below.
Azure subscription : Select the Azure Resource Manager subscription for the deployment
Script Location : Set the type to Inline script and copy paste the below script
(az extension add --name azure-cli-iot-ext && az iot hub device-identity show --device-id
YOUR_DEVICE_ID --hub-name YOUR_HUB_NAME) || (az iot hub device-identity create --hub-name
YOUR_HUB_NAME --device-id YOUR_DEVICE_ID --edge-enabled && TMP_OUTPUT="$(az iot hub device-
identity show-connection-string --device-id YOUR_DEVICE_ID --hub-name YOUR_HUB_NAME)" &&
RE="\"cs\":\s?\"(.*)\"" && if [[ $TMP_OUTPUT =~ $RE ]]; then CS_OUTPUT=${BASH_REMATCH[1]}; fi &&
echo "##vso[task.setvariable variable=CS_OUTPUT]${CS_OUTPUT}")
NOTE
Save the pipeline and queue the release. The above 2 steps will create an IoT Hub.
6. Edit the pipeline and select + and search for the Azure IoT Edge task. Select add . This step will deploy the
module images to IoT Edge devices. Configure the task as shown below.
FIELD  VALUES
Azure subscription contains IoT Hub Select an Azure subscription that contains IoT Hub
7. Select + and search for Azure Resource Group Deployment task. Select add . Configure the task as
shown below.
FIELD  VALUES
Template location (Required) Set the template location to the URL of the file
9. Once the release is complete, go to IoT hub in the Azure portal to view more information.
How-To: CI/CD with App Service and Azure Cosmos
DB
11/2/2020 • 4 minutes to read • Edit Online
Azure Pipelines
Create a continuous integration (CI) and continuous delivery (CD) pipeline for Azure Cosmos DB backed Azure App
Service Web App. Azure Cosmos DB is Microsoft's globally distributed, multi-model database. Cosmos DB enables
you to elastically and independently scale throughput and storage across any number of Azure's geographic
regions.
You will:
Clone a sample Cosmos DB and Azure Web App to your repository
Create a Cosmos DB collection and database
Set up CI for your app
Set up CD to Azure for your app
Review the CI/CD pipeline
Prerequisites
An Azure subscription. You can get one free through Visual Studio Dev Essentials.
An Azure DevOps organization. If you don't have one, you can create one for free. If your team already has one,
then make sure you are an administrator of the project you want to use.
A SQL API based Cosmos DB instance. If you don't have one, you can follow the initial steps in this tutorial to
create a Cosmos DB instance and collection.
5. Select Triggers , and then select the checkbox for Enable continuous integration . This setting ensures
every commit to the repository executes a build.
6. Select Save & Queue , and then choose Save and Queue to execute a new build.
7. Select the build hyperlink to examine the running build. In a few minutes the build completes. The build
produces artifacts which can be used to deploy to Azure.
10. Select + Add to create a new variable named endpoint . Select + Add to create a second variable named
authKey .
11. Select the padlock icon to make the authKey variable secret.
12. Select the Pipeline menu.
13. In the Artifacts section, choose the Continuous deployment trigger icon. On the right side of the
screen, ensure Enabled is on.
14. Select Save to save changes for the release definition.
Next steps
You can optionally modify these build and release definitions to meet the needs of your team. You can also use this
CI/CD pattern as a template for your other projects. You learned how to:
Clone a sample Cosmos DB and Azure Web App to your repository
Create a Cosmos DB collection and database
Set up CI for your app
Set up CD to Azure for your app
Review the CI/CD pipeline
To learn more about Azure Pipelines, see this tutorial:
ASP.NET MVC and Cosmos DB
Check policy compliance with gates
11/2/2020 • 2 minutes to read • Edit Online
Azure Pipelines
Azure Policy helps you manage and prevent IT issues by using policy definitions that enforce rules and effects for
your resources. When you use Azure Policy, resources stay compliant with your corporate standards and service
level agreements. Policies can be applied to an entire subscription, a management group, or a resource group.
This tutorial guides you in enforcing compliance policies on your resources before and after deployment during the
release process through Azure Pipelines.
For more information, see What is Azure Policy? and Create and manage policies to enforce compliance.
Prepare
1. Create an Azure Policy in the Azure portal. There are several pre-defined sample policies that can be applied
to a management group, subscription, and resource group.
2. In Azure DevOps create a release pipeline that contains at least one stage, or open an existing release
pipeline.
3. Add a pre- or post-deployment condition that includes the Security and compliance assessment task as
a gate. More details.
5. An error message is written to the logs and displayed in the stage status panel in the releases page of Azure
Pipelines.
6. When the policy compliance gate passes the release, a Succeeded status is displayed.
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
We'll show you how to set up continuous deployment of your app to an nginx web server running on Ubuntu
using Azure Pipelines or Team Foundation Server (TFS) 2018 and higher. You can use the steps in this quickstart for
any app as long as your continuous integration pipeline publishes a web deployment package.
After you commit and push a code change, it is automatically built and then deployed. The results will
automatically show up on your site.
https://github.com/spring-guides/gs-spring-boot-docker.git
Follow additional steps mentioned in Build your Java app with Maven for creating a build to deploy to Linux.
2. Initiate the session by typing the following command, substituting the IP address of your VM:
ssh <publicIpAddress>
The libraries this command installs are prerequisites for installing the build and release agent onto an Ubuntu
16.04 VM. Prerequisites for other versions of Linux can be found here.
4. Open the Azure Pipelines web portal, navigate to Azure Pipelines , and choose Deployment groups .
5. Choose Add Deployment group (or New if you have existing deployment groups).
6. Enter a name for the group such as myNginx and choose Create .
7. In the Register machine section, make sure that Ubuntu 16.04+ is selected and that Use a personal
access token in the script for authentication is also checked. Choose Copy script to clipboard .
The script you've copied to your clipboard will download and configure an agent on the VM so that it can
receive new web deployment packages and apply them to the web server.
8. Back in the SSH session to your VM, paste and run the script.
9. When you're prompted to configure tags for the agent, press Enter (you don't need any tags).
10. Wait for the script to finish and display the message Started Azure Pipelines Agent. Type "q" to exit the file
editor and return to the shell prompt.
11. Back in Azure Pipelines or TFS, on the Deployment groups page, open the myNginx deployment group.
On the Targets tab, verify that your VM is listed.
4. Choose the Continuous deployment icon in the Artifacts section, check that the continuous deployment
trigger is enabled, and add a filter that includes the master branch.
Continuous deployment is not enabled by default when you create a new release pipeline from the
Releases tab.
5. Open the Tasks tab, select the Agent job , and choose Remove to remove this job.
6. Choose ... next to the Stage 1 deployment pipeline and select Add deployment group job .
7. For the Deployment Group , select the deployment group you created earlier such as myNginx .
The tasks you add to this job will run on each of the machines in the deployment group you specified.
8. Choose + next to the Deployment group job and, in the task catalog, search for and add a Bash task.
9. In the properties of the Bash task, use the Browse button for the Script Path to select the path to the
deploy.sh script in the build artifact. For example, when you use the nodejs-sample repository to build
your app, the location of the script is
$(System.DefaultWorkingDirectory)/nodejs-sample/drop/deploy/deploy.sh
10. Save the release pipeline.
Next steps
Dynamically create and remove a deployment group
Apply stage-specific configurations
Perform a safe rolling deployment
Deploy a database with your app
Deploy to a Windows Virtual Machine
2/26/2020 • 4 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
We'll show you how to set up continuous deployment of your ASP.NET or Node.js app to an IIS web server
running on Windows using Azure Pipelines. You can use the steps in this quickstart as long as your continuous
integration pipeline publishes a web deployment package.
After you commit and push a code change, it is automatically built and then deployed. The results will
automatically show up on your site.
Prerequisites
IIS configuration
The configuration varies depending on the type of app you are deploying.
ASP.NET app
On your VM, open an Administrator: Windows PowerShell console. Install IIS:
# Install IIS
Install-WindowsFeature Web-Server,Web-Asp-Net45,NET-Framework-Features
# Restart the web server so that system PATH updates take effect
Stop-Service was -Force
Start-Service w3svc
Node.js app
Follow the instructions in this topic to install and configure IISnode on IIS servers.
The account under which the agent runs needs Manage permissions for the
C:\Windows\system32\inetsrv\ directory. Adding non-admin users to this directory is not
recommended. In addition, if you have a custom user identity for the application pools, the identity
needs permission to read the crypto-keys. Local service accounts and user accounts must be given
read access for this. For more details, see Keyset does not exist error message.
8. When the script is done, it displays the message Service vstsagent.account.computername started
successfully.
9. On the Deployment groups page in Azure Pipelines, open the myIIS deployment group. On the Targets
tab, verify that your VM is listed.
Next steps
Dynamically create and remove a deployment group
Apply stage-specific configurations
Perform a safe rolling deployment
Deploy a database with your app
Deploy your Web Deploy package to IIS servers
using WinRM
11/2/2020 • 7 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
A simpler way to deploy web applications to IIS servers is by using deployment groups instead of WinRM.
However, deployment groups are not available in versions of TFS earlier than TFS 2018.
Continuous deployment means starting an automated deployment pipeline whenever a new successful build is
available. Here we'll show you how to set up continuous deployment of your ASP.NET or Node.js app to one or
more IIS servers using Azure Pipelines. A task running on the Build and Release agent opens a WinRM connection
to each IIS server to run PowerShell scripts remotely in order to deploy the Web Deploy package.
Get set up
Begin with a CI build
Before you begin, you'll need a CI build that publishes your Web Deploy package. To set up CI for your specific type
of app, see:
Build your ASP.NET 4 app
Build your ASP.NET Core app
Build your Node.js app with gulp
WinRM configuration
Windows Remote Management (WinRM) requires target servers to be:
Domain-joined or workgroup-joined
Able to communicate using the HTTP or HTTPS protocol
Addressed by using a fully-qualified domain name (FQDN) or an IP address
This table shows the supported scenarios for WinRM.
JOINED TO A  PROTOCOL  ADDRESSING MODE
Ensure that your IIS servers are set up in one of these configurations. For example, do not use WinRM over HTTP to
communicate with a Workgroup machine. Similarly, do not use an IP address to access the target server(s) when
you use HTTP. Instead, in both scenarios, use HTTPS.
If you need to deploy to a server that is not in the same workgroup or domain, add it to trusted hosts in your
WinRM configuration.
2. Check your PowerShell version. You need PowerShell version 4.0 or above installed on every target machine.
To display the current PowerShell version, execute the following command in the PowerShell console:
$PSVersionTable.PSVersion
3. Check your .NET Framework version. You need version 4.5 or higher installed on every target machine. See
How to: Determine Which .NET Framework Versions Are Installed.
4. Download from GitHub this PowerShell script for Windows 10 and Windows Server 2016, or this
PowerShell script for previous versions of Windows. Copy them to every target machine. You will use them
to configure WinRM in the following steps.
5. Decide if you want to use HTTP or HTTPS to communicate with the target machine(s).
If you choose HTTP, execute the following in a Command window with Administrative permissions:
ConfigureWinRM.ps1 {FQDN} http
This command creates an HTTP WinRM listener and opens port 5985 inbound for WinRM over
HTTP.
If you choose HTTPS, you can use either a FQDN or an IP address to access the target machine(s). To
use a FQDN to access the target machine(s), execute the following in the PowerShell console with
Administrative permissions:
ConfigureWinRM.ps1 {FQDN} https
To use an IP address to access the target machine(s), execute the following in the PowerShell console
with Administrative permissions:
ConfigureWinRM.ps1 {ipaddress} https
These commands create a test certificate by using MakeCert.exe , use the certificate to create an
HTTPS WinRM listener, and open port 5986 inbound for WinRM over HTTPS. The script also
increases the WinRM MaxEnvelopeSizekb setting. By default on Windows Server this is 500 KB,
which can result in a "Request size exceeded the configured MaxEnvelopeSize quota" error.
IIS configuration
If you are deploying an ASP.NET app, make sure that you have ASP.NET 4.5 or ASP.NET 4.6 installed on each of your
IIS target servers. For more information, see this topic.
If you are deploying an ASP.NET Core application to IIS target servers, follow the additional instructions in this topic
to install .NET Core Windows Server Hosting Bundle.
If you are deploying a Node.js application to IIS target servers, follow the instructions in this topic to install and
configure IISnode on IIS servers.
In this example, we will deploy to the Default Web Site on each of the servers. If you need to deploy to another
website, make sure you configure this as well.
IIS WinRM extension
Install the IIS Web App Deployment Using WinRM extension from Visual Studio Marketplace in Azure Pipelines or
TFS.
5. On the Variables tab of the stage in the release pipeline, configure a variable named WebServers with the list
of IIS servers as its value; for example machine1,machine2,machine3 .
6. Configure the following tasks in the stage:
Deploy: Windows Machine File Copy - Copy the Web Deploy package to the IIS servers.
Source : Select the Web deploy package (zip file) from the artifact source.
Machines : $(WebServers)
Admin Login : Enter the administrator credentials for the target servers. For workgroup-joined
computers, use the format .\username . For domain-joined computers, use the format
domain\username .
Admin Login : Enter the administrator credentials for target servers. For workgroup-joined
computers, use the format .\username . For domain-joined computers, use the format
domain\username .
FAQ
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
You can quickly and easily deploy your ASP.NET or Node.js app to an IIS Deployment Group using Azure Pipelines
or Team Foundation Server (TFS), as demonstrated in this example. In addition, you can extend your deployment in
a range of ways depending on your scenario and requirements. This topic shows you how to:
Dynamically create and remove a deployment group
Apply stage-specific configurations
Perform a safe rolling deployment
Deploy a database with your app
Prerequisites
You should have worked through the example CD to an IIS Deployment Group before you attempt any of these
steps. This ensures that you have the release pipeline, build artifacts, and websites required.
2. In the IIS Web App Deploy task, select the checkbox for XML variable substitution under File
Transforms and Variable Substitution Options .
If you prefer to manage stage configuration settings in your own database or Azure KeyVault, add a task
to the stage to read and emit those values using
##vso[task.setvariable variable=connectionString;issecret=true]<value> .
2. Add two machine group jobs to stages in the release pipeline, and a task in each job as follows:
First Run on deployment group job for configuration of web servers.
Deployment group : Select the deployment group you created in the previous example.
Required tags : web
Then add a SQL Server Database Deploy task to this job.
Deploy with System Center Virtual Machine Manager
11/2/2020 • 8 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
You can automatically provision new virtual machines in System Center Virtual Machine Manager (SCVMM) and
deploy to those virtual machines after every successful build.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
SCVMM connection
You need to first configure how Azure Pipelines connects to SCVMM. You cannot use Microsoft-hosted agents to
run SCVMM tasks since the VMM Console is not installed on hosted agents. You must set up a self-hosted build
and release agent on the same network as your SCVMM server.
You need to first configure how TFS connects to SCVMM. You must have a build and release agent that can
communicate with the SCVMM server.
1. Install the Virtual Machine Manager (VMM) console on the agent machine by following these
instructions. Supported version: System Center 2012 R2 Virtual Machine Manager.
2. Install the System Center Virtual Machine Manager (SCVMM) extension from Visual Studio
Marketplace into TFS or Azure Pipelines:
If you are using Azure Pipelines , install the extension from this location in Visual Studio Marketplace.
If you are using Team Foundation Server , download the extension from this location in Visual Studio
Marketplace, upload it to your Team Foundation Server, and install it.
3. Create an SCVMM service connection in your project:
In your Azure Pipelines or TFS project in your web browser, navigate to the project settings and select
Service connections .
In the Service connections tab, choose New service connection , and select SCVMM .
In the Add new SCVMM Connection dialog, enter the values required to connect to the SCVMM
Server:
Connection Name : Enter a user-friendly name for the service connection such as
MySCVMMServer .
SCVMM Server Name : Enter the fully qualified domain name and port number of the SCVMM
server, in the form machine.domain.com:port .
Username and Password : Enter the credentials required to connect to the SCVMM server.
Username formats such as username , domain\username , machine-name\username , and
.\username are supported. UPN formats such as [email protected] and built-in system
accounts such as NT Authority\System are not supported.
See also
Create a virtual network isolated environment for build-deploy-test scenarios
Deploy to VMware vCenter Server
2/26/2020 • 5 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
You can automatically provision virtual machines in a VMware environment and deploy to those virtual machines
after every successful build.
VMware connection
You need to first configure how Azure Pipelines connects to vCenter. You cannot use Microsoft-hosted agents to
run VMware tasks since the vSphere SDK is not installed on these machines. You have to set up a self-hosted
agent that can communicate with the vCenter server.
You need to first configure how Azure DevOps Server connects to vCenter. You have to set up a self-hosted agent
that can communicate with the vCenter server.
You need to first configure how TFS connects to vCenter. You have to set up a self-hosted agent that can
communicate with the vCenter server.
1. Install the VMware vSphere Management SDK to call VMware API functions that access vSphere web
services. To install and configure the SDK on the agent machine:
Download and install the latest version of the Java Runtime Environment from this location.
Go to this location and sign in with your existing credentials or register with the website. Then
download the vSphere 6.0 Management SDK .
Create a directory for the vSphere Management SDK such as C:\vSphereSDK . Do not include spaces
in the directory names to avoid issues with some of the batch and script files included in the SDK.
Unpack the vSphere Management SDK into the new folder you just created.
Add the full path and name of the precompiled VMware Java SDK file vim25.jar to the machine's
CLASSPATH environment variable. If you used the path and name C:\vSphereSDK for the SDK files,
as shown above, the full path will be:
C:\vSphereSDK\SDK\vsphere-ws\java\JAXWS\lib\vim25.jar
2. Install the VMware extension from Visual Studio Marketplace into TFS or Azure Pipelines.
3. Follow these steps to create a vCenter Server service connection in your project:
Open your Azure Pipelines or TFS project in your web browser. Choose the Settings icon in the
menu bar and select Services .
In the Services tab, choose New service connection , and select VMware vCenter Server .
In the Add new VMware vCenter Server Connection dialog, enter the values required to
connect to the vCenter Server:
Connection Name : Enter a user-friendly name for the service connection such as Fabrikam
vCenter .
vCenter Server URL : Enter the URL of the vCenter server, in the form
https://machine.domain.com/ . Note that only HTTPS connections are supported.
Username and Password : Enter the credentials required to connect to the vCenter Server.
Username formats such as username , domain\username , machine-name\username , and
.\username are supported. UPN formats such as [email protected] and built-in system
accounts such as NT Authority\System are not supported.
Managing VM snapshots
Use the VMware Resource Deployment task from the VMware extension and configure the properties as
follows to take snapshot of virtual machines, or to revert or delete them:
VMware Service Connection : Select the VMware vCenter Server connection you created earlier.
Action : Select one of the actions: Take Snapshot of Virtual Machines , Revert Snapshot of Virtual
Machines , or Delete Snapshot of Virtual Machines .
Virtual Machine Names : Enter the names of one or more virtual machines. Separate multiple names with
a comma; for example, VM1,VM2,VM3
Datacenter : Enter the name of the datacenter where the virtual machines will be created.
Snapshot Name : Enter the name of the snapshot. This snapshot must exist if you use the revert or delete
action.
Host Name : Depending on the option you selected for the compute resource type, enter the name of the
host, cluster, or resource pool.
Datastore : Enter the name of the datastore that will hold the virtual machines' configuration and disk files.
Description : Optional. Enter a description for the Take Snapshot of Virtual Machines action, such as
$(Build.DefinitionName).$(Build.BuildNumber) . This can be used to track the execution of the build or
release that created the snapshot.
Skip Certificate Authority Check : If the vCenter Server's certificate is self-signed, select this option to
skip the validation of the certificate by a trusted certificate authority.
To verify if a self-signed certificate is installed on the vCenter Server, open the VMware vSphere Web
Client in your browser and check for a certificate error page. The vSphere Web Client URL will be of the
form https://machine.domain/vsphere-client/ . Good practice guidance for vCenter Server certificates
can be found in the VMware Knowledge Base (article 2057223).
Template : The name of the template that will be used to create the virtual machines. The template must
exist in the location you enter for the Datacenter parameter.
Virtual Machine Names : Enter the names of one or more virtual machines. Separate multiple names with
a comma; for example, VM1,VM2,VM3
Datacenter : Enter the name of the datacenter where the virtual machines will be created.
Compute Resource Type : Select the type of hosting for the virtual machines: VMware ESXi Host , Cluster ,
or Resource Pool
Host Name : Depending on the option you selected for the compute resource type, enter the name of the
host, cluster, or resource pool.
Datastore : Enter the name of the datastore that will hold the virtual machines' configuration and disk files.
Description : Optional. Enter a description to identify the deployment.
Skip Certificate Authority Check : If the vCenter Server's certificate is self-signed, select this option to
skip the validation of the certificate by a trusted certificate authority. See the note for the previous step to
check for the presence of a self-signed certificate.
Azure Pipelines
Azure Pipelines can be used to build images for any repository containing a Dockerfile. Building of both Linux
and Windows containers is possible based on the agent platform used for the build.
Example
Get the code
Fork the following repository containing a sample application and a Dockerfile:
https://github.com/MicrosoftDocs/pipelines-javascript-docker
NOTE
You might be redirected to GitHub to sign in. If so, enter your GitHub credentials. You might be redirected to
GitHub to install the Azure Pipelines app. If so, select Approve and install.
4. Select Starter pipeline . In the Review tab, replace the contents of azure-pipelines.yml with the
following snippet -
trigger:
- main
pool:
vmImage: 'Ubuntu-16.04'
variables:
imageName: 'pipelines-javascript-docker'
steps:
- task: Docker@2
displayName: Build an image
inputs:
repository: $(imageName)
command: build
Dockerfile: app/Dockerfile
5. Select Save and run , after which you're prompted for a commit message as Azure Pipelines adds the
azure-pipelines.yml file to your repository. After editing the message, select Save and run again to see
the pipeline in action.
TIP
Learn more about how to push the image to Azure Container Registry or push it to other container registries such
as Google Container Registry or Docker Hub. Learn more about the Docker task used in the above sample.
Instead of using the recommended Docker task, it is also possible to invoke docker commands directly using a
command line task (script).
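For example, a script step that builds the same image by calling the docker CLI directly might look like this minimal
sketch; it reuses the imageName variable and Dockerfile path from the snippet above.
steps:
# Build the image with the docker CLI instead of the Docker task
- script: |
    docker build -f app/Dockerfile -t $(imageName):$(Build.BuildId) .
  displayName: Build image with docker CLI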
NOTE
Linux container images can be built using Microsoft hosted Ubuntu-16.04 agents or Linux platform based self-hosted
agents. Currently the Microsoft-hosted macOS agents can't be used to build container images as the Moby engine needed
for building the images is not pre-installed on these agents.
BuildKit
BuildKit introduces build improvements in the areas of performance, storage management, feature
functionality, and security. To enable BuildKit based docker builds, set the DOCKER_BUILDKIT variable as shown
in the following snippet:
variables:
imageName: 'pipelines-javascript-docker'
DOCKER_BUILDKIT: 1
steps:
- task: Docker@2
displayName: Build an image
inputs:
repository: $(imageName)
command: build
Dockerfile: app/Dockerfile
NOTE
BuildKit is not currently supported on Windows hosts.
Self-hosted agents
Docker needs to be installed on self-hosted agent machines prior to runs that try to build container images. To
address this issue, a step corresponding to Docker installer task can be placed in the pipeline definition prior to
the step related to Docker task.
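A minimal sketch of that ordering is shown below, with the Docker installer task (DockerInstaller@0) placed before
the Docker task; the version shown is only an example.
steps:
# Install the Docker CLI on the self-hosted agent before any Docker task runs
- task: DockerInstaller@0
  inputs:
    dockerVersion: '17.09.0-ce'  # example version
- task: Docker@2
  displayName: Build an image
  inputs:
    repository: $(imageName)
    command: build
    Dockerfile: app/Dockerfile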
Invoking the docker binary directly from a script produces an image equivalent in content to one built by using the
Docker task. The Docker task itself internally calls the docker binary on a script, but also stitches together a few more
commands to provide a few additional benefits, as described in the Docker task's documentation.
FAQ
Is reutilizing layer caching during builds possible on Azure Pipelines?
In the current design of Microsoft-hosted agents, every job is dispatched to a newly provisioned virtual
machine (based on the image generated from azure-pipelines-image-generation repository templates). These
virtual machines are cleaned up after the job reaches completion, not persisted and thus not reusable for
subsequent jobs. The ephemeral nature of virtual machines prevents the reuse of cached Docker layers.
However, Docker layer caching is possible using self-hosted agents as the ephemeral lifespan problem is not
applicable for these agents.
How to build Linux container images for architectures other than x64?
When you use Microsoft-hosted Linux agents, you create Linux container images for the x64 architecture. To
create images for other architectures (for example, x86, ARM, and so on), you can use a machine emulator
such as QEMU. The following steps illustrate how to create an ARM container image:
1. Author your Dockerfile so that an Intel binary of QEMU exists in the base image. For example, the raspbian
image already has this.
FROM balenalib/rpi-raspbian
2. Run the following script in your job before building the image:
# register QEMU binary - this can be done by running the following image
docker run --rm --privileged multiarch/qemu-user-static:register --reset
# build your image (the image name below is only an example)
docker build -t example/arm-sample:$(Build.BuildId) .
How to run tests and publish test results for containerized applications?
For different options on testing containerized applications and publishing the resulting test results, check out the
Publish Test Results task.
Push an image
2/26/2020 • 2 minutes to read • Edit Online
Azure Pipelines
Azure Pipelines can be used to push images to container registries such as Azure Container Registry (ACR),
Docker Hub, Google Container Registries, and others.
- task: Docker@2
displayName: Push image
inputs:
containerRegistry: |
$(dockerHub)
repository: $(imageName)
command: push
tags: |
test1
test2
Docker Hub
Choose the Docker Hub option under Docker registry service connection and provide the username and
password required for verifying and creating the service connection.
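Once the Docker Hub service connection exists, a build-and-push step can reference it by name, as in the sketch
below; the service connection and repository names are placeholders.
steps:
# Build and push the image to Docker Hub using the service connection
- task: Docker@2
  displayName: Build and push to Docker Hub
  inputs:
    containerRegistry: 'myDockerHubConnection'  # placeholder service connection
    repository: 'mydockerhubuser/myapp'         # placeholder repository
    command: buildAndPush
    Dockerfile: '**/Dockerfile'
    tags: |
      $(Build.BuildId)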
3. Replace [PROJECT_NAME] with the name of your GCP project and replace [ZONE] with the name of the
zone that you're going to use for creating resources. If you're unsure about which zone to pick, use
us-central1-a . For example:
Launch Code Editor by clicking the button in the upper-right corner of Cloud Shell:
8. Open the file named azure-pipelines-publisher-oneline.json . You'll need the content of this file in one of
the following steps:
9. In your Azure DevOps organization, select Project settings and then select Pipelines -> Service
connections .
10. Click New service connection and choose Docker Registry
11. In the dialog, enter values for the following fields:
Docker Registry: https://gcr.io/[PROJECT-ID] , where [PROJECT-ID] is the name of your GCP project.
Docker ID: _json_key
Docker Password: Paste the contents of azure-pipelines-publisher-oneline.json
Service connection name: gcrServiceConnection
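With the service connection in place, a push step in a YAML pipeline can reference it by the name given above; the
repository name below is only an illustration.
steps:
# Push an image through the Docker Registry service connection created above
- task: Docker@2
  displayName: Push image to Google Container Registry
  inputs:
    containerRegistry: 'gcrServiceConnection'
    repository: 'azure-pipelines-publisher'  # placeholder image name under gcr.io/[PROJECT-ID]
    command: push
    tags: |
      $(Build.BuildId)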
Azure Pipelines
Docker Content Trust (DCT) provides the ability to use digital signatures for data sent to and received from remote
Docker registries. These signatures allow client-side or runtime verification of the integrity and publisher of specific
image tags.
NOTE
A prerequisite for signing an image is a Docker Registry with a Notary server attached (such as Docker Hub or Azure
Container Registry).
variables:
system.debug: true
containerRegistryServiceConnection: serviceConnectionName
imageRepository: foobar/content-trust
tag: test
steps:
- task: Docker@2
inputs:
command: login
containerRegistry: $(containerRegistryServiceConnection)
- task: DownloadSecureFile@1
name: privateKey
inputs:
secureFile: cc8f3c6f998bee63fefaaabc5a2202eab06867b83f491813326481f56a95466f.key
- script: |
mkdir -p $(DOCKER_CONFIG)/trust/private
cp $(privateKey.secureFilePath) $(DOCKER_CONFIG)/trust/private
- task: Docker@2
inputs:
command: build
Dockerfile: '**/Dockerfile'
containerRegistry: $(containerRegistryServiceConnection)
repository: $(imageRepository)
tags: |
$(tag)
arguments: '--disable-content-trust=false'
- task: Docker@2
inputs:
command: push
containerRegistry: $(containerRegistryServiceConnection)
repository: $(imageRepository)
tags: |
$(tag)
arguments: '--disable-content-trust=false'
env:
DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE: $(DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE)
NOTE
In the above snippet, the variable DOCKER_CONFIG is set by the login action done by Docker task. It is recommended
to set up DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE as a secret variable for the pipeline, as the alternative
approach of using a pipeline variable in YAML would expose the passphrase in plaintext form.
Deploy to Kubernetes
2/26/2020 • 2 minutes to read • Edit Online
Azure Pipelines
Azure Pipelines can be used to deploy to Kubernetes clusters offered by multiple cloud providers. This document
contains the concepts associated with setting up deployments for any Kubernetes cluster.
While it is possible to use script for loading kubeconfig files onto the agent from a remote location or secure files
and then use kubectl for performing the deployments, the KubernetesManifest task and Kubernetes service
connection can be used to do this in a simpler and more secure way.
KubernetesManifest task
KubernetesManifest task has the added benefits of being able to check for object stability before marking a task as
success/failure, perform artifact substitution, add pipeline traceability-related annotations onto deployed objects,
simplify creation and referencing of imagePullSecrets, bake manifests using Helm or kustomization.yaml or Docker
compose files, and aid in deployment strategy rollouts.
Example
jobs:
- deployment:
displayName: Deploy to AKS
pool:
vmImage: ubuntu-latest
environment: contoso.aksnamespace
strategy:
runOnce:
deploy:
steps:
- checkout: self
- task: KubernetesManifest@0
displayName: Create secret
inputs:
action: createSecret
namespace: aksnamespace
secretType: dockerRegistry
secretName: foo-acr-secret
dockerRegistryEndpoint: fooACR
- task: KubernetesManifest@0
displayName: Create secret
inputs:
action: createSecret
namespace: aksnamespace
secretType: dockerRegistry
secretName: bar-acr-secret
dockerRegistryEndpoint: barACR
- task: KubernetesManifest@0
displayName: Deploy
inputs:
action: deploy
namespace: aksnamespace
manifests: manifests/deployment.yml|manifests/service.yml
containers: |
foo.azurecr.io/demo:$(tagVariable1)
bar.azurecr.io/demo:$(tagVariable2)
imagePullSecrets: |
foo-acr-secret
bar-acr-secret
Note that to allow image pull from private registries, prior to the deploy action, the createSecret action is used
along with instances of Docker registry service connection to create imagePullSecrets that are subsequently
referenced in the step corresponding to deploy action.
TIP
If setting up an end-to-end CI/CD pipeline from scratch for a repository containing a Dockerfile, check out the Deploy to
Azure Kubernetes template, which constructs an end-to-end YAML pipeline along with creating an environment and a
Kubernetes resource to help visualize these deployments.
While a YAML-based pipeline currently supports triggers on a single Git repository, if triggers are required for manifest files
stored in another Git repository, or if triggers are required for Azure Container Registry or Docker Hub, using release
pipelines instead of a YAML-based pipeline is recommended for Kubernetes deployments.
Alternatives
Instead of using the KubernetesManifest task for deployment, one can also use the following alternatives:
Kubectl task
kubectl invocation on script. For example: script: kubectl apply -f manifest.yml
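For instance, a script-based alternative might look like the following minimal sketch, assuming kubectl is installed on
the agent and a kubeconfig for the target cluster is already configured.
steps:
# Apply manifests directly with kubectl
- script: |
    kubectl apply -f manifests/deployment.yml
    kubectl apply -f manifests/service.yml
  displayName: Deploy with kubectl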
Bake manifests
2/26/2020 • 2 minutes to read • Edit Online
Azure Pipelines
The bake action of the Kubernetes manifest task is useful for turning templates into manifests with the help of a
template engine. The bake action of the Kubernetes manifest task is intended to provide visibility into the
transformation between the input templates and the end manifest files that are used in the deployments. Helm 2,
kustomize, and kompose are supported as templating options under the bake action.
The baked manifest files are intended to be consumed downstream (subsequent task) where these manifest files
are used as inputs for the deploy action of the Kubernetes manifest task.
Helm 2 example
- deployment:
displayName: Bake and deploy to AKS
pool:
vmImage: ubuntu-latest
environment: contoso.aksnamespace
strategy:
runOnce:
deploy:
steps:
- checkout: self
- task: KubernetesManifest@0
name: bake
displayName: Bake K8s manifests from Helm chart
inputs:
action: bake
renderType: helm2
helmChart: charts/sample
overrides: 'image.repository:nginx'
- task: KubernetesManifest@0
displayName: Deploy K8s manifests
inputs:
kubernetesServiceConnection: k8sSC1
manifests: $(bake.manifestsBundle)
containers: |
nginx: 1.7.9
NOTE
Instead of transforming the Helm charts into manifest files in the template as shown above, if one intends to use Helm for
directly managing releases and rollbacks, check out the Package and Deploy Helm Charts task.
Kustomize example
steps:
- task: KubernetesManifest@0
  name: bake
  displayName: Bake K8s manifests from kustomization path
  inputs:
    action: bake
    renderType: kustomize
    kustomizationPath: folderContainingKustomizationFile
- task: KubernetesManifest@0
  displayName: Deploy K8s manifests
  inputs:
    kubernetesServiceConnection: k8sSC1
    manifests: $(bake.manifestsBundle)
Kompose example
steps:
- task: KubernetesManifest@0
  name: bake
  displayName: Bake K8s manifests from Docker Compose
  inputs:
    action: bake
    renderType: kompose
    dockerComposeFile: docker-compose.yaml
- task: KubernetesManifest@0
  displayName: Deploy K8s manifests
  inputs:
    kubernetesServiceConnection: k8sSC1
    manifests: $(bake.manifestsBundle)
Multi-cloud Kubernetes deployments
Azure Pipelines
Because Kubernetes has a standard interface and runs the same way on all cloud providers, Azure Pipelines can
be used for deploying to Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), Amazon Elastic
Kubernetes Service (EKS), or clusters from any other cloud provider. This document describes how
to connect to each of these clusters, and how to perform parallel deployments to multiple clouds.
NOTE
Deployments to Kubernetes clusters are possible using regular jobs as well, but the benefits of pipeline traceability and the
ability to diagnose resource health are not available with that option.
To set up multi-cloud deployment, create an environment and then add Kubernetes resources associated
with namespaces of your Kubernetes clusters. Then follow the steps in the linked section for the cloud provider hosting
your Kubernetes cluster:
Azure Kubernetes Service
Generic provider using existing service account (For GKE/EKS/...)
TIP
The generic provider approach based on existing service account works with clusters from any cloud provider, including
Azure. The incremental benefit of using the Azure Kubernetes Service option instead is that it involves creation of new
ServiceAccount and RoleBinding objects (instead of reusing an existing ServiceAccount) so that the newly created RoleBinding
object limits the operations of the ServiceAccount to the chosen namespace only.
trigger:
- master

jobs:
- deployment:
  displayName: Deploy to AKS
  pool:
    vmImage: ubuntu-latest
  environment: contoso.aksnamespace
  strategy:
    runOnce:
      deploy:
        steps:
        - checkout: self
        - task: KubernetesManifest@0
          displayName: Deploy to Kubernetes cluster
          inputs:
            action: deploy
            namespace: aksnamespace
            manifests: manifests/*

- deployment:
  displayName: Deploy to GKE
  pool:
    vmImage: ubuntu-latest
  environment: contoso.gkenamespace
  strategy:
    runOnce:
      deploy:
        steps:
        - checkout: self
        - task: KubernetesManifest@0
          displayName: Deploy to Kubernetes cluster
          inputs:
            action: deploy
            namespace: gkenamespace
            manifests: manifests/*

- deployment:
  displayName: Deploy to EKS
  pool:
    vmImage: ubuntu-latest
  environment: contoso.eksnamespace
  strategy:
    runOnce:
      deploy:
        steps:
        - checkout: self
        - task: KubernetesManifest@0
          displayName: Deploy to Kubernetes cluster
          inputs:
            action: deploy
            namespace: eksnamespace
            manifests: manifests/*

- deployment:
  displayName: Deploy to OpenShift
  pool:
    vmImage: ubuntu-latest
  environment: contoso.openshiftnamespace
  strategy:
    runOnce:
      deploy:
        steps:
        - checkout: self
        - task: KubernetesManifest@0
          displayName: Deploy to Kubernetes cluster
          inputs:
            action: deploy
            namespace: openshiftnamespace
            manifests: manifests/*

- deployment:
  displayName: Deploy to DigitalOcean
  pool:
    vmImage: ubuntu-latest
  environment: contoso.digitaloceannamespace
  strategy:
    runOnce:
      deploy:
        steps:
        - checkout: self
        - task: KubernetesManifest@0
          displayName: Deploy to Kubernetes cluster
          inputs:
            action: deploy
            namespace: digitaloceannamespace
            manifests: manifests/*
NOTE
When using the service account option, ensure that a RoleBinding exists that grants permissions in the edit
ClusterRole to the desired service account. This is needed so that Azure Pipelines can use the service account to
create objects in the chosen namespace.
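For reference, a minimal sketch of such a RoleBinding; the binding name, service account name, and namespace below are placeholders for the account already used by your service connection:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pipelines-deployer-edit
  namespace: gkenamespace
subjects:
- kind: ServiceAccount
  name: pipelines-deployer    # existing service account referenced by the service connection
  namespace: gkenamespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit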
Deployment strategies for Kubernetes in Azure
Pipelines
Azure Pipelines
The Kubernetes manifest task currently supports the canary deployment strategy. This document explains guidelines
and best practices for using this task to set up canary deployments to Kubernetes.
End-to-end example
An end-to-end example of setting up build and release pipelines to perform canary deployments on Kubernetes
clusters for each change made to application code is available under the how-to guides. This example also
demonstrates the usage of Prometheus for comparing the baseline and canary metrics when the pipeline is paused
using a manual intervention task.
Build and push to Azure Container Registry
Azure Pipelines
In this step-by-step guide, you'll learn how to create a pipeline that continuously builds a repository that contains a
Dockerfile. Every time you change your code, the images are automatically pushed to Azure Container Registry.
Prerequisites
A GitHub account, where you can create a repository. If you don't have one, you can create one for free.
An Azure DevOps organization. If you don't have one, you can create one for free. (An Azure DevOps
organization is different from your GitHub organization. Give them the same name if you want alignment
between them.)
If your team already has one, then make sure you're an administrator of the Azure DevOps project that you
want to use.
An Azure account. If you don't have one, you can create one for free.
TIP
If you're new at this, the easiest way to get started is to use the same email address as the owner of both the Azure
Pipelines organization and the Azure subscription.
https://github.com/MicrosoftDocs/pipelines-javascript-docker
Clean up resources
Whenever you're done with the resources you created above, you can use the following command to delete them:
Learn more
We invite you to learn more about:
The services:
Azure Container Registry
The template used to create your pipeline: docker-container
The method your pipeline uses to connect to the service: Docker registry service connections
Some of the tasks used in your pipeline, and how you can customize them:
Docker task
Kubernetes manifest task
Some of the key concepts for this kind of pipeline:
Jobs
Docker registry service connections (the method your pipeline uses to connect to the service)
Build and deploy to Azure Kubernetes Service
Azure Pipelines
Azure Kubernetes Service manages your hosted Kubernetes environment, making it quicker and easier for you to
deploy and manage containerized applications. This service also eliminates the burden of ongoing operations and
maintenance by provisioning, upgrading, and scaling resources on demand, without taking your applications
offline.
In this step-by-step guide, you'll learn how to create a pipeline that continuously builds and deploys your app.
Every time you change your code in a repository that contains a Dockerfile, the images are pushed to your Azure
Container Registry, and the manifests are then deployed to your Azure Kubernetes Service cluster.
Prerequisites
To ensure that your Azure DevOps project has the authorization required to access your Azure subscription, create
an Azure Resource Manager service connection. The service connection is required when you create a pipeline in
the project to deploy to Azure Kubernetes Service. Otherwise, the drop-down lists for Cluster and Container
Registry are empty.
A GitHub account, where you can create a repository. If you don't have one, you can create one for free.
An Azure DevOps organization. If you don't have one, you can create one for free. (An Azure DevOps
organization is different from your GitHub organization. Give them the same name if you want alignment
between them.)
If your team already has one, then make sure you're an administrator of the Azure DevOps project that you
want to use.
An Azure account. If you don't have one, you can create one for free.
TIP
If you're new at this, the easiest way to get started is to use the same email address as the owner of both the Azure
Pipelines organization and the Azure subscription.
https://github.com/MicrosoftDocs/pipelines-javascript-docker
- task: PublishPipelineArtifact@1
  inputs:
    artifactName: 'manifests'
    path: 'manifests'
The deployment job uses the Kubernetes manifest task to create the imagePullSecret required by Kubernetes
cluster nodes to pull from the Azure Container Registry resource. Manifest files are then used by the Kubernetes
manifest task to deploy to the Kubernetes cluster.
- stage: Deploy
  displayName: Deploy stage
  dependsOn: Build
  jobs:
  - deployment: Deploy
    displayName: Deploy job
    pool:
      vmImage: $(vmImageName)
    environment: 'azooinmyluggagepipelinesjavascriptdocker.aksnamespace'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: DownloadPipelineArtifact@2
            inputs:
              artifactName: 'manifests'
              downloadPath: '$(System.ArtifactsDirectory)/manifests'
          - task: KubernetesManifest@0
            displayName: Create imagePullSecret
            inputs:
              action: createSecret
              secretName: $(imagePullSecret)
              namespace: $(k8sNamespace)
              dockerRegistryEndpoint: $(dockerRegistryServiceConnection)
          - task: KubernetesManifest@0
            displayName: Deploy to Kubernetes cluster
            inputs:
              action: deploy
              namespace: $(k8sNamespace)
              manifests: |
                $(System.ArtifactsDirectory)/manifests/deployment.yml
                $(System.ArtifactsDirectory)/manifests/service.yml
              imagePullSecrets: |
                $(imagePullSecret)
              containers: |
                $(containerRegistry)/$(imageRepository):$(tag)
Clean up resources
Whenever you're done with the resources you created above, you can use the following command to delete them:
Learn more
We invite you to learn more about:
The services:
Azure Kubernetes Service
Azure Container Registry
The template used to create your pipeline: Deploy to existing Kubernetes cluster template
Some of the tasks used in your pipeline, and how you can customize them:
Docker task
Kubernetes manifest task
Some of the key concepts for this kind of pipeline:
Environments
Deployment jobs
Stages
Docker registry service connections (the method your pipeline uses to connect to the service)
Canary deployment strategy for Kubernetes
deployments
Azure Pipelines
The canary deployment strategy involves deploying new versions of an application next to stable production versions
to see how the canary version compares against the baseline before promoting or rejecting the deployment. This
step-by-step guide covers how to use the Kubernetes manifest task's canary strategy support to set up canary
deployments for Kubernetes, along with the associated workflow: instrumenting code, using that instrumentation to
compare the baseline and canary, and then making a manual judgment on whether to promote or reject the canary.
Prerequisites
A repository in Azure Container Registry or Docker Hub (Azure Container Registry, Google Container Registry,
Docker Hub) with push privileges.
Any Kubernetes cluster (Azure Kubernetes Service, Google Kubernetes Engine, Amazon Elastic Kubernetes
Service).
Sample code
Fork the following repository on GitHub -
https://github.com/MicrosoftDocs/azure-pipelines-canary-k8s
Here's a brief overview of the files in the repository that are used during the course of this guide -
./app:
app.py - Simple Flask based web server instrumented using Prometheus instrumentation library for
Python applications. A custom counter is set up for the number of 'good' and 'bad' responses given out
based on the value of success_rate variable.
Dockerfile - Used for building the image with each change made to app.py. With each change made to
app.py, build pipeline (CI) is triggered and the image gets built and pushed to the container registry.
./manifests:
deployment.yml - Contains specification of the sampleapp Deployment workload corresponding to the
image published earlier. This manifest file is used not just for the stable version of Deployment object, but
for deriving the -baseline and -canary variants of the workloads as well.
service.yml - Creates sampleapp service for routing requests to the pods spun up by the Deployments
(stable, baseline, and canary) mentioned above.
./misc
service-monitor.yml - Used for setup of a ServiceMonitor object to set up Prometheus metric scraping.
fortio-deploy.yml - Used to set up the fortio deployment, which is subsequently used as a load-testing tool
to send a stream of requests to the sampleapp service deployed earlier. Because the sampleapp service's selector
applies to all three pods resulting from the Deployment objects created during the
course of this how-to guide ( sampleapp , sampleapp-baseline and sampleapp-canary ), the stream of
requests sent to sampleapp is routed to pods under all three deployments.
NOTE
While Prometheus is used for code instrumentation and monitoring in this how-to guide, any equivalent solution like Azure
Application Insights can be used as an alternative as well.
Install prometheus-operator
Use the following command from your development machine (with kubectl and Helm installed and context set to
the cluster you want to deploy against) to install Prometheus on your cluster. Grafana, which is used later in this
how-to guide for visualizing the baseline and canary metrics on dashboards, is installed as part of this Helm chart -
trigger:
- master

pool:
  vmImage: Ubuntu-16.04

variables:
  imageName: azure-pipelines-canary-k8s

steps:
- task: Docker@2
  displayName: Build and push image
  inputs:
    containerRegistry: dockerRegistryServiceConnectionName #replace with name of your Docker registry service connection
    repository: $(imageName)
    command: buildAndPush
    Dockerfile: app/Dockerfile
    tags: |
      $(Build.BuildId)
If the Docker registry service connection you created is associated with foobar.azurecr.io , then based on the above
configuration the image is pushed to foobar.azurecr.io/azure-pipelines-canary-k8s:$(Build.BuildId) .
pool:
  vmImage: Ubuntu-16.04

variables:
  imageName: azure-pipelines-canary-k8s
  dockerRegistryServiceConnection: dockerRegistryServiceConnectionName #replace with name of your Docker registry service connection
  imageRepository: 'azure-pipelines-canary-k8s'
  containerRegistry: containerRegistry #replace with the name of your container registry, should be in the format foobar.azurecr.io
  tag: '$(Build.BuildId)'

stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: Ubuntu-16.04
    steps:
    - task: Docker@2
      displayName: Build and push image
      inputs:
        containerRegistry: $(dockerRegistryServiceConnection)
        repository: $(imageName)
        command: buildAndPush
        Dockerfile: app/Dockerfile
        tags: |
          $(tag)
    - upload: manifests
      artifact: manifests
    - upload: misc
      artifact: misc
7. Add an additional stage at the bottom of your YAML file to deploy the canary version.
- stage: DeployCanary
  displayName: Deploy canary
  dependsOn: Build
  condition: succeeded()
  jobs:
  - deployment: Deploycanary
    displayName: Deploy canary
    pool:
      vmImage: Ubuntu-16.04
    environment: 'akscanary.canarydemo'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: KubernetesManifest@0
            displayName: Create imagePullSecret
            inputs:
              action: createSecret
              secretName: azure-pipelines-canary-k8s
              dockerRegistryEndpoint: azure-pipelines-canary-k8s
          - task: KubernetesManifest@0
            displayName: Deploy to Kubernetes cluster
            inputs:
              action: 'deploy'
              strategy: 'canary'
              percentage: '25'
              manifests: |
                $(Pipeline.Workspace)/manifests/deployment.yml
                $(Pipeline.Workspace)/manifests/service.yml
              containers: '$(containerRegistry)/$(imageRepository):$(tag)'
              imagePullSecrets: azure-pipelines-canary-k8s
          - task: KubernetesManifest@0
            displayName: Deploy Fortio and ServiceMonitor
            inputs:
              action: 'deploy'
              manifests: |
                $(Pipeline.Workspace)/misc/*
8. Save your pipeline by committing directly to the main branch. This commit should already run your pipeline
successfully.
Manual intervention for promoting or rejecting canary
YAML
Classic
1. Navigate to Pipelines -> Environments -> New environment
2. Configure the new environment as follows -
Name : akspromote
Resource : choose Kubernetes
3. Click on Next and now configure your Kubernetes resource as follows -
Provider : Azure Kubernetes Service
Azure subscription : Choose the subscription that holds your Kubernetes cluster
Cluster : Choose your cluster
Namespace : Choose the canarydemo namespace you created earlier
4. Click on Validate and Create
5. Select your new akspromote environment from the list of environments.
6. Click on the button with the three dots in the top right -> Approvals and checks -> Approvals
7. Configure your approval as follows -
Approvers : Add your own user account
Advanced : Make sure the Allow approvers to approve their own runs checkbox is checked.
8. Click on Create
9. Navigate to Pipelines -> Select the pipeline you just created -> Edit
10. Add an additional stage PromoteRejectCanary at the end of your YAML file to promote the changes.
- stage: PromoteRejectCanary
  displayName: Promote or Reject canary
  dependsOn: DeployCanary
  condition: succeeded()
  jobs:
  - deployment: PromoteCanary
    displayName: Promote Canary
    pool:
      vmImage: Ubuntu-16.04
    environment: 'akspromote.canarydemo'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: KubernetesManifest@0
            displayName: promote canary
            inputs:
              action: 'promote'
              strategy: 'canary'
              manifests: '$(Pipeline.Workspace)/manifests/*'
              containers: '$(containerRegistry)/$(imageRepository):$(tag)'
              imagePullSecrets: '$(imagePullSecret)'
11. Add an additional stage RejectCanary at the end of your YAML file to roll back the changes.
- stage: RejectCanary
  displayName: Reject canary
  dependsOn: PromoteRejectCanary
  condition: failed()
  jobs:
  - deployment: RejectCanary
    displayName: Reject Canary
    pool:
      vmImage: Ubuntu-16.04
    environment: 'akscanary.canarydemo'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: KubernetesManifest@0
            displayName: reject canary
            inputs:
              action: 'reject'
              strategy: 'canary'
              manifests: '$(Pipeline.Workspace)/manifests/*'
12. Save your YAML pipeline by clicking on Save and commit it directly to the main branch.
Deploy a stable version
YAML
Classic
Currently, for the first run of the pipeline, the stable version of the workloads and their baseline/canary versions do
not exist in the cluster. To deploy the stable version:
1. In app/app.py , change success_rate = 5 to success_rate = 10 . This change triggers the pipeline, leading to a
build and push of the image to the container registry. It will also trigger the DeployCanary stage.
2. Given that you have configured an approval on the akspromote environment, the release will wait before executing
that stage.
3. In the summary of the run, click Review and then click Approve in the subsequent fly-out. This results
in the stable version of the workloads ( sampleapp deployment in manifests/deployment.yml) being deployed to
the namespace.
The above change triggers the build pipeline, resulting in the build and push of the image to the container registry, which in
turn triggers the release pipeline and the start of the Deploy canary stage.
Simulate requests
On your development machine, run the following commands and keep them running to send a constant stream of
requests to the sampleapp service. The sampleapp service routes the requests to the pods spun up by the stable sampleapp
deployment and the pods spun up by the sampleapp-baseline and sampleapp-canary deployments, because the selector
specified for sampleapp applies to all of these pods.
http://localhost:3000/login
3. When prompted for login credentials, unless the adminPassword value was overridden during prometheus-
operator Helm chart installation, use the following values -
username: admin
password: prom-operator
4. In the left navigation menu, choose + -> Dashboard -> Graph
5. Click anywhere on the newly added panel and type e to edit the panel.
6. In the Metrics tab, enter the following query -
rate(requests_total{pod=~"sampleapp-.*", custom_status="good"}[1m])
7. In the General tab, change the name of this panel to All sampleapp pods
8. In the overview bar at the top of the page, change the duration range to Last 5 minutes or Last 15
minutes .
9. Click on the save icon in the overview bar to save this panel.
10. While the above panel visualizes success rate metrics from all the variants - stable (from sampleapp
deployment), baseline (from sampleapp-baseline deployment) and canary (from sampleapp-canary
deployment), you can visualize just the baseline and canary metrics by adding another panel with the
following configuration -
General tab -> Title : sampleapp baseline and canary
Metrics tab -> query to be used:
rate(requests_total{pod=~"sampleapp-baseline-.*|sampleapp-canary-.*", custom_status="good"}[1m])
NOTE
Note that the panel for baseline and canary metrics will only have metrics available for comparison when the Deploy
canary stage has successfully completed and the Promote/reject canary stage is waiting on manual intervention.
TIP
Set up annotations for Grafana dashboards to visually depict stage completion events for Deploy canary and
Promote/reject canary , so that you know when to start comparing the baseline with the canary and when the
promotion or rejection of the canary has completed.
Azure Pipelines
You can use a pipeline to automatically train and deploy machine learning models with the Azure Machine Learning
service. Here you'll learn how to build a machine learning model, and then deploy the model as a web service. You'll
end up with a pipeline that you can use to train your model.
Prerequisites
Before you read this topic, you should understand how the Azure Machine Learning service works.
Follow the steps in Azure Machine Learning quickstart: portal to create a workspace.
https://github.com/MicrosoftDocs/pipelines-azureml
Planning
Before using Azure Pipelines to automate model training and deployment, you must understand the files
needed by the model and what indicates a "good" trained model.
Machine learning files
In most cases, your data science team will provide the files and resources needed to train the machine learning
model. The following files in the example project would be provided by the data scientists:
Training script ( train.py ): The training script contains logic specific to the model that you are training.
Scoring file ( score.py ): When the model is deployed as a web service, the scoring file receives data from
clients and scores it against the model. The output is then returned to the client.
RunConfig settings ( sklearn.runconfig ): Defines how the training script is run on the compute target that is
used for training.
Training environment ( myenv.yml ): Defines the packages needed to run the training script.
Deployment environment ( deploymentConfig.yml ): Defines the resources and compute needed for the
deployment environment.
Deployment environment ( inferenceConfig.yml ): Defines the packages needed to run and score the model in
the deployment environment.
Some of these files are used directly when developing a model; for example, the train.py and score.py files.
However, the data scientist may create the run configuration and environment settings programmatically. If so,
they can create the .runconfig and training environment files by using RunConfiguration.save(). Alternatively,
default run configuration files are created for all compute targets already in the workspace when running the
following command.
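A sketch of one way to run that command (presumably az ml folder attach, per the CLI table later in this article) from a pipeline step with the Azure CLI task; the service connection, workspace, and resource group names are placeholders, and the azure-cli-ml extension is assumed:

- task: AzureCLI@2
  displayName: Attach folder to the Azure Machine Learning workspace
  inputs:
    azureSubscription: 'my-arm-service-connection'   # placeholder ARM service connection
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az extension add --name azure-cli-ml
      az ml folder attach -w myworkspace -g myresourcegroup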
The files created by this command are stored in the .azureml directory.
Determine the best model
The example pipeline deploys the trained model without doing any performance checks. In a production scenario,
you may want to log metrics so that you can determine the "best" model.
For example, you have a model that is already deployed and has an accuracy of 90. You train a new model based on
new check-ins to the repo, and the accuracy is only 80, so you don't want to deploy it. This is an example of a metric
that you can create automation logic around, as you can do a simple comparison to evaluate the model. In other
cases, you may have several metrics that are used to indicate the "best" model, and must be evaluated by a human
before deployment.
Depending on what "best" looks like for your scenario, you may need to create a release pipeline where someone
must inspect the metrics to determine if the model should be deployed.
You should work with your data scientists to understand what metrics are important for your model.
To log metrics during training, use the Run class.
az ml folder attach: Associates the files in the project with your Azure Machine Learning service workspace.
az ml computetarget create: Creates a compute target that is used to train the model.
For more information on these commands, see the CLI extension reference.
Next steps
Learn how you can further integrate machine learning into your pipelines with the Machine Learning extension.
For more examples of using Azure Pipelines with Azure Machine Learning service, see the following repos:
MLOps (CLI focused)
MLOps (Python focused)
Overview of artifacts in Azure Pipelines
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
You can publish and consume many different types of packages and artifacts with Azure Pipelines. Your
continuous integration/continuous deployment (CI/CD) pipeline can publish specific package types to their
respective package repositories (NuGet, npm, Python, and so on). Or you can use build artifacts and pipeline
artifacts to help store build outputs and intermediate files between build steps. You can then add onto, build, test,
or even deploy those artifacts.
NOTE
Aside from being published, Build and Release artifacts will be available as long as that Build or Release is retained unless
otherwise specified. For more information on retaining Build and Release artifacts, see the Retention Policy documentation.
Build artifacts: Build artifacts are the files that you want your build to produce. Build artifacts can be nearly anything
that your team needs to test or deploy your app. For example, you've got .dll and .exe executable files and a .PDB
symbols file of a .NET or C++ Windows app.
Pipeline artifacts: You can use pipeline artifacts to help store build outputs and move intermediate files between jobs
in your pipeline. Pipeline artifacts are tied to the pipeline that they're created in. You can use them within the pipeline
and download them from the build, as long as the build is retained. Pipeline artifacts are the new generation of build
artifacts. They take advantage of existing services to dramatically reduce the time it takes to store outputs in your
pipelines. Only available in Azure DevOps Services.
NOTE
Build and Release artifacts will be available as long as that Build or Release run is retained, unless you specify how long to
retain the artifacts. For more information on retaining Build and Release artifacts, see the Retention Policy documentation.
Azure Pipelines
Pipeline artifacts provide a way to share files between stages in a pipeline or between different pipelines. They
are typically the output of a build process that needs to be consumed by another job or be deployed. Artifacts
are associated with the run they were produced in and remain available after the run has completed.
NOTE
Both PublishPipelineArtifact@1 and DownloadPipelineArtifact@2 require a minimum agent version of 2.153.1
Publishing artifacts
NOTE
This feature is only available on Azure DevOps Services. Typically, new features are introduced in the cloud service first,
and then made available on-premises in the next major version or update of Azure DevOps Server. To learn more, see
Azure DevOps Feature Timeline.
To publish (upload) an artifact for the current run of a CI/CD or classic pipeline:
YAML
YAML (task)
Classic
Azure CLI
steps:
- publish: $(System.DefaultWorkingDirectory)/bin/WebApp
  artifact: WebApp
NOTE
The publish keyword is a shortcut for the Publish Pipeline Artifact task.
Keep in mind:
Although artifact name is optional, it is a good practice to specify a name that accurately reflects the
contents of the artifact.
The path of the file or folder to publish is required. It can be absolute or relative to
$(System.DefaultWorkingDirectory) .
If you plan to consume the artifact from a job running on a different operating system or file system, you
must ensure all file paths in the artifact are valid for the target environment. For example, a file name
containing a \ or * character will typically fail to download on Windows.
NOTE
You will not be billed by Azure Artifacts for storage of Pipeline Artifacts, Build Artifacts, and Pipeline Caching. For more
information, see Which artifacts count toward my total billed storage.
CAUTION
Deleting a build that published Artifacts to a file share will result in the deletion of all Artifacts in that UNC path.
Limiting which files are included
.artifactignore files use the identical file-globbing syntax of .gitignore (with very few limitations) to provide
a version-controlled way to specify which files should not be added to a pipeline artifact.
Using an .artifactignore file, it is possible to omit the path from the task configuration, if you want to create a
Pipeline Artifact containing everything in and under the working directory, minus all of the ignored files and
folders. For example, to include only files in the artifact with a .exe extension:
**/*
!*.exe
The above statement instructs the universal package task and the pipeline artifacts task to ignore all files except
the ones with .exe extension.
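For instance, with that .artifactignore placed at the root of the published path, a publish step over the working directory (the artifact name below is illustrative) uploads only the .exe files:

steps:
- publish: $(System.DefaultWorkingDirectory)
  artifact: ExecutablesOnly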
NOTE
.artifactignore follows the same syntax as .gitignore with some minor limitations. The plus sign character ( + ) is not
supported in URL paths, or in the semantic versioning metadata ( + suffix) used by some package types
such as Maven.
To learn more, see Use the .artifactignore file or the .gitignore documentation.
IMPORTANT
Deleting and/or overwriting Pipeline Artifacts is not currently supported. The recommended workflow if you want to re-
run a failed pipeline job is to include the job ID in the artifact name. $(system.JobId) is the appropriate variable for this
purpose. See System variables to learn more about predefined variables.
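For example, a sketch of a publish step that makes the artifact name unique per job by appending that variable (the path is illustrative):

steps:
- publish: $(System.DefaultWorkingDirectory)/bin/WebApp
  artifact: WebApp-$(System.JobId)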
Downloading artifacts
To download a specific artifact in CI/CD or classic pipelines:
YAML
YAML (task)
Classic
Azure CLI
steps:
- download: current
  artifact: WebApp
NOTE
The download keyword is a shortcut to the Download Pipeline Artifact task.
In this context, current means the current run of this pipeline (that is, artifacts published earlier in the run). For
release and deployment jobs, this also includes any source artifacts.
For additional configuration options, see the download keyword in the YAML schema.
Keep in mind:
The Download Pipeline Artifact task can download both build artifacts (published with the Publish
Build Artifacts task) and pipeline artifacts.
By default, files are downloaded to $(Pipeline.Workspace)/{artifact} , where artifact is the name of
the artifact. The folder structure of the artifact is always preserved.
File matching patterns can be used to limit which files from the artifact(s) are downloaded. For more
information on how pattern matching works, see artifact selection.
For advanced scenarios, including downloading artifacts from other pipelines, see the Download Pipeline
Artifact task.
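As a brief sketch, downloading an artifact produced by a different pipeline generally involves declaring a pipeline resource first; the alias and source name below are placeholders:

resources:
  pipelines:
  - pipeline: sourceBuild          # alias referenced by the download step
    source: 'My-Build-Pipeline'    # name of the pipeline that published the artifact

steps:
- download: sourceBuild
  artifact: WebApp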
Artifact selection
A single download step can download one or more artifacts. To download multiple artifacts, do not specify an
artifact name and optionally use file matching patterns to limit which artifacts and files are downloaded. The
default file matching pattern is ** , meaning all files in all artifacts.
Single artifact
When an artifact name is specified:
1. Only files for this artifact are downloaded. If this artifact does not exist, the task will fail.
2. Unless the specified download path is absolute, a folder with the same name as the artifact is created
under the download path, and the artifact's files are placed in it.
3. File matching patterns are evaluated relative to the root of the artifact. For example, the pattern *.jar
matches all files with a .jar extension at the root of the artifact.
steps:
- download: current
  artifact: WebApp
  patterns: '**/*.js'
Files (with the directory structure of the artifact preserved) are downloaded under
$(Pipeline.Workspace)/WebApp .
Multiple artifacts
When no artifact name is specified:
1. Files from multiple artifacts can be downloaded, and the task does not fail if no files are downloaded.
2. A folder is always created under the download path for each artifact with files being downloaded.
3. File matching patterns should assume the first segment of the pattern is (or matches) an artifact name.
For example, WebApp/** matches all files from the WebApp artifact. The pattern */*.dll matches all files
with a .dll extension at the root of each artifact.
For example, to download all .zip files from all source artifacts:
YAML
YAML (task)
Classic
Azure CLI
steps:
- download: current
  patterns: '**/*.zip'
NOTE
Artifacts are only downloaded automatically in deployment jobs. In a regular build job, you need to explicitly use the
download step keyword or Download Pipeline Artifact task.
To stop artifacts from being downloaded automatically, add a download step and set its value to none:
steps:
- download: none
FAQ
Can this task publish artifacts to a shared folder or network path?
Not currently, but this feature is planned.
What are build artifacts?
Build artifacts are the files generated by your build. See Build Artifacts to learn more about how to publish and
consume your build artifacts.
Artifacts in Azure Pipelines
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
NOTE
We recommend upgrading from build artifacts ( PublishBuildArtifacts@1 and DownloadBuildArtifacts@0 ) to
pipeline artifacts ( PublishPipelineArtifact@1 and DownloadPipelineArtifact@2 ) for faster output storage speeds.
Artifacts are the files that you want your build to produce. Artifacts can be anything that your team needs to test
or deploy your app.
- task: PublishBuildArtifacts@1
  inputs:
    pathToPublish: '$(Build.ArtifactStagingDirectory)'
    artifactName: drop
pathToPublish : the folder or file path to publish. It can be an absolute or a relative path, and wildcards are not
supported.
artifactName : the name of the artifact that you want to create.
NOTE
You cannot use Bin , App_Data and other folder names reserved by IIS as an artifact name because this content is not
served in response to Web requests. Please see ASP.NET Web Project Folder Structure for more details.
- task: PublishBuildArtifacts@1
  inputs:
    pathToPublish: '$(Build.ArtifactStagingDirectory)'
    artifactName: drop1
- task: PublishBuildArtifacts@1
  inputs:
    pathToPublish: '$(Build.ArtifactStagingDirectory)'
    artifactName: drop2
pathToPublish : the folder or file path to publish. It can be an absolute or a relative path, and wildcards are not
supported.
artifactName : the name of the artifact that you want to create.
NOTE
You cannot use Bin , App_Data and other folder names reserved by IIS as an artifact name because this content is not
served in response to Web requests. Please see ASP.NET Web Project Folder Structure for more details.
- task: CopyFiles@2
  inputs:
    sourceFolder: '$(Build.SourcesDirectory)'
    contents: '**/$(BuildConfiguration)/**/?(*.exe|*.dll|*.pdb)'
    targetFolder: '$(Build.ArtifactStagingDirectory)'
- task: PublishBuildArtifacts@1
  inputs:
    pathToPublish: '$(Build.ArtifactStagingDirectory)'
    artifactName: drop
sourceFolder : the folder that contains the files you want to copy. If you leave this value empty, copying will be
done from the root folder of your repo ( $(Build.SourcesDirectory) ).
contents : location(s) of the file(s) that will be copied to the destination folder.
targetFolder : destination folder.
pathToPublish : the folder or file path to publish. It can be an absolute or a relative path, and wildcards are not
supported.
artifactName : the name of the artifact that you want to create.
NOTE
You cannot use Bin , App_Data and other folder names reserved by IIS as an artifact name because this content is not
served in response to Web requests. Please see ASP.NET Web Project Folder Structure for more details.
- task: DownloadBuildArtifacts@0
  inputs:
    buildType: 'current'
    downloadType: 'single'
    artifactName: 'drop'
    downloadPath: '$(System.ArtifactsDirectory)'
buildType : specify which build artifacts will be downloaded: current (the default value) or from a specific
build.
downloadType : choose whether to download a single artifact or all artifacts of a specific build.
artifactName : the name of the artifact that will be downloaded.
downloadPath : path on the agent machine where the artifacts will be downloaded.
YAML is not supported in TFS.
NOTE
If you are using a deployment task, you can reference your build artifacts by using the $(Agent.BuildDirectory)
variable. See Agent variables for more information on how to use predefined variables.
Tips
Artifact publish location argument: Azure Pipelines/TFS (TFS 2018 RTM and older : Artifact type:
Server) is the best and simplest choice in most cases. This choice causes the artifacts to be stored in Azure
Pipelines or TFS. But if you're using a private Windows agent, you've got the option to drop to a UNC file
share.
Use forward slashes in file path arguments so that they work for all agents. Backslashes don't work for
macOS and Linux agents.
Build artifacts are stored on a Windows filesystem, which causes all UNIX permissions to be lost, including
the execution bit. You might need to restore the correct UNIX permissions after downloading your artifacts
from Azure Pipelines or TFS.
On Azure Pipelines and some versions of TFS, two different variables point to the staging directory:
Build.ArtifactStagingDirectory and Build.StagingDirectory . These are interchangeable.
Utility: Copy Files By copying files to $(Build.ArtifactStagingDirectory) , you can publish multiple files of
different types from different places specified by your matching patterns.
Utility: Delete Files You can prune unnecessary files that you copied to the staging directory.
Utility: Publish Build Artifacts
When the build is done, if you watched it run, select the name of the completed build and then select the
Artifacts tab to see your artifact.
From here, you can explore or download the artifacts.
You can also use Azure Pipelines to deploy your app by using the artifacts that you've published. See Artifacts in
Azure Pipelines releases.
NOTE
Use a Windows build agent. This option doesn't work for macOS and Linux agents.
Choose file share to copy the artifact to a file share. Common reasons to do this:
The size of your drop is large and consumes too much time and bandwidth to copy.
You need to run some custom scripts or other tools against the artifact.
If you use a file share, specify the UNC file path to the folder. You can control how the folder is created for each
build by using variables. For example: \\my\share\$(Build.DefinitionName)\$(Build.BuildNumber) .
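A sketch of what this can look like with the Publish Build Artifacts task, assuming a private Windows agent that can reach the share (the UNC path is the example above):

- task: PublishBuildArtifacts@1
  inputs:
    pathToPublish: '$(Build.ArtifactStagingDirectory)'
    artifactName: drop
    publishLocation: 'FilePath'
    targetPath: '\\my\share\$(Build.DefinitionName)\$(Build.BuildNumber)'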
Next steps
Publish and download artifacts in Azure Pipelines
Define your multi-stage classic pipeline
Releases in Azure Pipelines
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
This topic covers classic release pipelines. If you author your pipelines using YAML, see runs.
A release is the package or container that holds a versioned set of artifacts specified in a release pipeline in your
DevOps CI/CD processes. It includes a snapshot of all the information required to carry out all the tasks and
actions in the release pipeline, such as the stages, the tasks for each one, the values of task parameters and
variables, and the release policies such as triggers, approvers, and release queuing options. There can be multiple
releases from one release pipeline, and information about each one is stored and displayed in Azure Pipelines for
the specified retention period.
A deployment is the action of running the tasks for one stage, which results in the application artifacts being
deployed, tests being run, and whatever other actions are specified for that stage. Initiating a release starts each
deployment based on the settings and policies defined in the original release pipeline. There can be multiple
deployments of each release even for one stage. When a deployment of a release fails for a stage, you can
redeploy the same release to that stage. To redeploy a release, simply navigate to the release you want to deploy
and select deploy.
The following schematic shows the relationship between release pipelines, releases, and deployments.
Releases can be created from a release pipeline in several ways:
By a continuous deployment trigger that creates a release when a new version of the source build artifacts
is available.
By using the Release command in the UI to create a release manually from the Releases or the Builds
summary.
By sending a command over the network to the REST interface.
However, the action of creating a release does not mean it will automatically or immediately start a deployment.
For example:
There may be deployment triggers defined for a stage, which force the deployment to wait; this could be for
a manual deployment, until a scheduled day and time, or for successful deployment to another stage.
A deployment started manually from the [Deploy] command in the UI, or from a network command sent
to the REST interface, may specify a final target stage other than the last stage in a release pipeline. For
example, it may specify that the release is deployed only as far as the QA stage and not to the production
stage.
There may be queuing policies defined for a stage, which specify which of multiple deployments will occur,
or the order in which releases are deployed.
There may be pre-deployment approvers or gates defined for a stage, and the deployment will not occur
until all necessary approvals have been granted.
Approvers may defer the release to a stage until a specified date and time.
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
NOTE
This topic covers classic release pipelines. To understand artifacts in YAML pipelines, see artifacts.
A release is a collection of artifacts in your DevOps CI/CD processes. An artifact is a deployable component of
your application. Azure Pipelines can deploy artifacts that are produced by a wide range of artifact sources, and
stored in different types of artifact repositories.
When authoring a release pipeline , you link the appropriate artifact sources to your release pipeline. For
example, you might link an Azure Pipelines build pipeline or a Jenkins project to your release pipeline.
When creating a release , you specify the exact version of these artifact sources; for example, the number of a
build coming from Azure Pipelines, or the version of a build coming from a Jenkins project.
After a release is created, you cannot change these versions. A release is fundamentally defined by the versioned
artifacts that make up the release. As you deploy the release to various stages, you will be deploying and
validating the same artifacts in all stages.
A single release pipeline can be linked to multiple artifact sources , of which one is the primary source. In this
case, when you create a release, you specify individual versions for each of these sources.
Artifacts are central to a number of features in Azure Pipelines. Some of the features that depend on the linking of
artifacts to a release pipeline are:
Auto-trigger releases . You can configure new releases to be automatically created whenever a new
version of an artifact is produced. For more information, see Continuous deployment triggers. Note that
the ability to automatically create releases is available for only some artifact sources.
Trigger conditions . You can configure a release to be created automatically, or the deployment of a
release to a stage to be triggered automatically, when only specific conditions on the artifacts are met. For
example, you can configure releases to be automatically created only when a new build is produced from a
certain branch.
Artifact versions . You can configure a release to automatically use a specific version of the build artifacts,
to always use the latest version, or to allow you to specify the version when the release is created.
Artifact variables . Every artifact that is part of a release has metadata associated with it, exposed to tasks
through variables. This metadata includes the version number of the artifact, the branch of code from
which the artifact was produced (in the case of build or source code artifacts), the pipeline that produced
the artifact (in the case of build artifacts), and more. This information is accessible in the deployment tasks.
For more information, see Artifact variables.
Work items and commits . The work items or commits that are part of a release are computed from the
versions of artifacts. For example, each build in Azure Pipelines is associated with a set of work items and
commits. The work items or commits in a release are computed as the union of all work items and commits
of all builds between the current release and the previous release. Note that Azure Pipelines is currently
able to compute work items and commits for only certain artifact sources.
Artifact download . Whenever a release is deployed to a stage, by default Azure Pipelines automatically
downloads all the artifacts in that release to the agent where the deployment job runs. The procedure to
download artifacts depends on the type of artifact. For example, Azure Pipelines artifacts are downloaded
using an algorithm that downloads multiple files in parallel. Git artifacts are downloaded using Git library
functionality. For more information, see Artifact download.
Artifact sources
There are several types of tools you might use in your application lifecycle process to produce or store artifacts.
For example, you might use continuous integration systems such as Azure Pipelines, Jenkins, or TeamCity to
produce artifacts. You might also use version control systems such as Git or TFVC to store your artifacts. Or you
can use repositories such as Azure Artifacts or a NuGet repository to store your artifacts. You can configure Azure
Pipelines to deploy artifacts from all these sources.
By default, a release created from the release pipeline uses the latest version of the artifacts. At the time of
linking an artifact source to a release pipeline, you can change this behavior by selecting one of the other options:
use the latest build from a specific branch by specifying tags, use a specific version, or allow the user to specify the
version when the release is created from the pipeline.
If you link more than one set of artifacts, you can specify which is the primary (default).
IMPORTANT
The items in the Artifacts Default version drop-down list depend on the repository type of the linked build definition.
The following options are supported by all the repository types: Specify at the time of release creation ,
Specific version , and Latest .
The Latest from a specific branch with tags and Latest from the build pipeline default branch with tags
options are supported by the following repository types: TfsGit , GitHub , Bitbucket , and GitHubEnterprise .
Latest from the build pipeline default branch with tags is not supported by XAML build definitions.
The following sections describe how to work with the different types of artifact sources.
Azure Pipelines
TFVC, Git, and GitHub
Jenkins
Azure Container Registry, Docker, and Kubernetes
Azure Artifacts (NuGet, Maven, npm, Python, and Universal Packages)
External or on-premises TFS
TeamCity
Other sources
NOTE
You must include a Publish Artifacts task in your build pipeline. For XAML build pipelines, an artifact with the name drop
is published implicitly.
Some of the differences in capabilities between different versions of TFS and Azure Pipelines are:
TFS 2015 : You can link build pipelines only from the same project of your collection. You can link multiple
definitions, but you cannot specify default versions. You can set up a continuous deployment trigger on
only one of the definitions. When multiple build pipelines are linked, the latest builds of all the other
definitions are used, along with the build that triggered the release creation.
TFS 2017 and newer and Azure Pipelines : You can link build pipelines from any of the projects in Azure
Pipelines or TFS. You can link multiple build pipelines and specify default values for each of them. You can
set up continuous deployment triggers on multiple build sources. When any of the builds completes, it will
trigger the creation of a release.
The following features are available when using Azure Pipelines sources:
Auto-trigger releases: New releases can be created automatically when new builds (including XAML builds) are
produced. See Continuous Deployment for details. You do not need to configure anything within the build pipeline.
See the notes above for differences between versions of TFS.
Artifact variables: A number of artifact variables are supported for builds from Azure Pipelines.
Work items and commits: Azure Pipelines integrates with work items in TFS and Azure Pipelines. These work items
are also shown in the details of releases. Azure Pipelines integrates with a number of version control systems such
as TFVC and Git, GitHub, Subversion, and Other Git repositories. Azure Pipelines shows the commits only when the
build is produced from source code in TFVC or Git.
Artifact download: By default, build artifacts are downloaded to the agent. You can configure an option in the stage
to skip the download of artifacts.
Deployment section in build: The build summary includes a Deployment section, which lists all the stages to which
the build was deployed.
By default, releases execute with a collection-level job authorization scope. That means releases can access
resources in all projects in the organization (or collection for Azure DevOps Server). This is useful when linking
build artifacts from other projects. You can enable Limit job authorization scope to current project for
release pipelines in project settings to restrict access to artifacts for releases in a project.
To set job authorization scope for the organization:
Navigate to your organization settings page in the Azure DevOps user interface.
Select Settings under Pipelines.
Turn on the toggle Limit job authorization scope to current project for release pipelines to limit the scope to
current project. This is the recommended setting, as it enhances security for your pipelines.
To set job authorization scope for a specific project:
Navigate to your project settings page in the Azure DevOps user interface.
Select Settings under Pipelines.
Turn on the toggle Limit job authorization scope to current project to limit the scope to project. This is the
recommended setting, as it enhances security for your pipelines.
NOTE
If the scope is set to project at the organization level, you cannot change the scope in each project.
All jobs in releases run with the job authorization scope set to collection. In other words, these jobs have access to
resources in all projects in your project collection.
Work items and commits: Azure Pipelines cannot show work items or commits associated with releases when using
version control artifacts.
By default, releases execute with a collection-level job authorization scope. That means releases can access
all repositories in the organization (or collection for Azure DevOps Server). You can enable Limit job
authorization scope to current project for release pipelines in project settings to restrict access to artifacts
for releases in a project.
Artifact variables: A number of artifact variables are supported for builds from Jenkins.
Work items and commits: Azure Pipelines cannot show work items or commits for Jenkins builds.
Artifact download: By default, Jenkins builds are downloaded to the agent. You can configure an option in the stage
to skip the download of artifacts.
Artifacts generated by Jenkins builds are typically propagated to storage repositories for archiving and sharing.
Azure blob storage is one of the supported repositories, allowing you to consume Jenkins projects that publish to
Azure storage as artifact sources in a release pipeline. Deployments download the artifacts automatically from
Azure to the agents. In this configuration, connectivity between the agent and the Jenkins server is not required.
Microsoft-hosted agents can be used without exposing the Jenkins server to the internet.
NOTE
Azure Pipelines may not be able to contact your Jenkins server if, for example, it is within your enterprise network. In this
case you can integrate Azure Pipelines with Jenkins by setting up an on-premises agent that can access the Jenkins server.
You will not be able to see the name of your Jenkins projects when linking to a build, but you can type this into the link
dialog field.
For more information about Jenkins integration capabilities, see Azure Pipelines Integration with Jenkins Jobs,
Pipelines, and Artifacts.
Work items and commits: Azure Pipelines cannot show work items or commits.
Artifact download: By default, builds are downloaded to the agent. You can configure an option in the stage to skip
the download of artifacts.
NOTE
In the case of continuous deployment from multiple artifact sources (multiple registries/repositories), it isn't possible to map
artifact sources to trigger particular stages. A release will be created anytime there is a push to any of the artifact sources. If
you wish to map an artifact source to trigger a specific stage, the recommended way is to decompose the release pipeline
into multiple release pipelines.
Work items and commits: Azure Pipelines cannot show work items or commits.
Artifact download: By default, packages are downloaded to the agent. You can configure an option in the stage to
skip the download of artifacts.
# Remove all copies of the artifact except the one with the lexicographically highest value.
Get-Item "myApplication*.jar" | Sort-Object -Descending Name | Select-Object -SkipIndex 0 | Remove-Item
TIP
Using this mechanism, you can also deploy artifacts published in one Azure Pipelines subscription from another Azure
Pipelines subscription, or deploy artifacts published in one Team Foundation Server from another Team Foundation Server.
To enable these scenarios, you must install the TFS artifacts for Azure Pipelines extension from Visual Studio
Marketplace. Then create a service connection with credentials to connect to your TFS server (see service
connections for details).
You can then link a TFS build pipeline to your release pipeline. Choose External TFS Build in the Type list.
The following features are available when using external TFS sources:
Artifact variables: A number of artifact variables are supported for external TFS sources.
Work items and commits: Azure Pipelines cannot show work items or commits for external TFS sources.
NOTE
Azure Pipelines may not be able to contact an on-premises TFS server in case it's within your enterprise network. In that
case you can integrate Azure Pipelines with TFS by setting up an on-premises agent that can access the TFS server. You will
not be able to see the name of your TFS projects or build pipelines when linking to a build, but you can include those
variables in the link dialog fields. In addition, when you create a release, Azure Pipelines may not be able to query the TFS
server for the build numbers. Instead, enter the Build ID (not the build number) of the desired build in the appropriate
field, or select the Latest build.
FEATURE | BEHAVIOR WITH TEAMCITY SOURCES
Artifact variables: A number of artifact variables are supported for builds from TeamCity.
Work items and commits: Azure Pipelines cannot show work items or commits for TeamCity builds.
Artifact download: By default, TeamCity builds are downloaded to the agent. You can configure an option in the
stage to skip the download of artifacts.
NOTE
Azure Pipelines may not be able to contact your TeamCity server if, for example, it is within your enterprise network. In this
case you can integrate Azure Pipelines with TeamCity by setting up an on-premises agent that can access the TeamCity
server. You will not be able to see the name of your TeamCity projects when linking to a build, but you can type this into the
link dialog field.
This uniqueness also ensures that, if you later rename a linked artifact source in its original location (for example,
rename a build pipeline in Azure Pipelines or a project in Jenkins), you don't need to edit the task properties
because the download location defined in the agent does not change.
The source alias is, by default, the name of the source selected when you linked the artifact source, prefixed with
an underscore; depending on the type of the artifact source this will be the name of the build pipeline, job, project,
or repository. You can edit the source alias from the artifacts tab of a release pipeline; for example, when you
change the name of the build pipeline and you want to use a source alias that reflects the name of the build
pipeline.
Primary source
When you link multiple artifact sources to a release pipeline, one of them is designated as the primary artifact
source. The primary artifact source is used to set a number of pre-defined variables. It can also be used in naming
releases.
Artifact download
When you deploy a release to a stage, the versioned artifacts from each of the sources are, by default,
downloaded to the automation agent so that tasks running within that stage can deploy these artifacts. The
artifacts downloaded to the agent are not deleted when a release is completed. However, when you initiate the
next release, the downloaded artifacts are deleted and replaced with the new set of artifacts.
A new unique folder in the agent is created for every release pipeline when you initiate a release, and the artifacts
are downloaded into that folder. The $(System.DefaultWorkingDirectory) variable maps to this folder.
Azure Pipelines currently does not perform any optimization to avoid downloading the unchanged artifacts if the
same release is deployed again. In addition, because the previously downloaded contents are always deleted
when you initiate a new release, Azure Pipelines cannot perform incremental downloads to the agent.
You can, however, instruct Azure Pipelines to skip the automatic download of artifacts to the agent for a specific
job and stage of the deployment if you wish. Typically, you will do this when the tasks in that job do not require
any artifacts, or if you implement custom code in a task to download the artifacts you require.
In Azure Pipelines, you can, however, select which artifacts you want to download to the agent for a specific job
and stage of the deployment. Typically, you will do this to improve the efficiency of the deployment pipeline when
the tasks in that job do not require all or any of the artifacts, or if you implement custom code in a task to
download the artifacts you require.
Artifact variables
Azure Pipelines exposes a set of pre-defined variables that you can access and use in tasks and scripts; for
example, when executing PowerShell scripts in deployment jobs. When there are multiple artifact sources linked
to a release pipeline, you can access information about each of these. For a list of all pre-defined artifact variables,
see variables.
Additional information
Code repo sources in Azure Pipelines
Jenkins artifacts in Azure Pipelines
TeamCity extension for Continuous Integration
External TFS extension for Release Management
Related topics
Release pipelines
Stages
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
This guide covers the basics of using Azure Pipelines to work with Maven artifacts in Azure Artifacts feeds.
IMPORTANT
Do not commit this file into your repository.
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd">
  <servers>
    <!-- Paste the <server> snippet generated by Azure DevOps here -->
  </servers>
</settings>
7. Below the settings.xml snippet in the generated credentials dialog, there is a snippet to be added to the
<repositories> section of your project's pom.xml . Add that snippet. If you intend to use Maven to publish to
Artifacts, add the snippet to the <distributionManagement> section of the POM file as well. Commit and push
this change.
8. Upload settings.xml created in step 3 as a Secure File into the pipeline's library.
9. Add tasks to your pipeline to download the secure file and to copy it to the (~/.m2) directory. The latter can
be accomplished with the following PowerShell script, where settingsxml is the reference name of the
"Download secure file" task:
1. Navigate to Artifacts .
2. With your feed selected, select Connect to feed .
3. Select Maven .
<repository>
<id>[ORGANIZATION_NAME]</id>
<url>https://pkgs.dev.azure.com/[ORGANIZATION_NAME]/_packaging/[ORGANIZATION_NAME]/maven/v1</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
b. Add or edit the settings.xml file in ${user.home}/.m2. Replace the [ORGANIZATION_NAME] placeholder
with your own organization.
<server>
<id>[ORGANIZATION_NAME]</id>
<username>[ORGANIZATION_NAME]</username>
<password>[PERSONAL_ACCESS_TOKEN]</password>
</server>
c. Generate a Personal Access Token with Packaging read & write scopes and paste it into the
<password> tag.
IMPORTANT
In order to automatically authenticate Maven feeds from Azure Artifacts, you must have the mavenFeedAuthenticate
argument set to true in your Maven task. See Maven build task for more information.
mvn package
mvn deploy
Publish npm packages (YAML/Classic)
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
- task: Npm@1
inputs:
command: publish
publishRegistry: useFeed
publishFeed: projectName/feedName
useFeed : this option allows the use of an Azure Artifacts feed in the same organization as the build.
feedName : the name of the feed you want to publish to.
projectName : the name of your project.
NOTE
All new feeds that were created through the classic user interface are project scoped feeds. You must include the project
name in the publishFeed parameter: publishFeed: '<projectName>/<feedName>' . See Project-scoped feeds vs.
Organization-scoped feeds to learn about the difference between the two types.
To publish to an external npm registry, you must first create a service connection to point to that feed. You can do
this by going to Project settings , selecting Services , and then creating a New service connection . Select the
npm option for the service connection. Fill in the registry URL and the credentials to connect to the registry. See
Service connections to learn more about how to create, manage, secure, and use a service connection.
To publish a package to an npm registry, add the following snippet to your azure-pipelines.yml file.
- task: Npm@1
inputs:
command: publish
publishEndpoint: '<copy and paste the name of the service connection here>'
publishEndpoint : This argument is required when publishRegistry == UseExternalRegistry . Copy and paste
the name of the service connection you created earlier.
For a list of other options, see the npm task to install and publish your npm packages, or run an npm command.
YAML is not supported in TFS.
NOTE
Ensure that your working folder has an .npmrc file with a registry= line, as described in the Connect to feed screen
in your feed.
The build does not support using the publishConfig property to specify the registry to which you're publishing. The
build will fail, potentially with unrelated authentication errors, if you include the publishConfig property in your
package.json configuration file.
FAQ
Where can I learn about the Azure Pipelines and TFS Package Management service?
Check out the Azure Artifacts landing page for details about Artifacts in Azure Pipelines.
How do I publish packages to my feed from the command line?
See Publish your package to an npm feed using the CLI for more information.
How do I create a token that lasts longer than 90 days?
See Set up your client's npmrc for more information on how to set up authentication to Azure Artifacts
feeds.
Do you recommend using scopes or upstream sources?
We recommend using upstream sources because it gives you the most flexibility to use a combination of
scoped- and non-scoped packages in your feed, as well as scoped- and non-scoped packages from
npmjs.com.
See Use npm scopes and Use packages from npmjs.com for more details.
Publish to NuGet feeds (YAML/Classic)
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
You can publish NuGet packages from your build to NuGet feeds using the Pipeline tasks as well as the Classic
user interface. You can publish these packages to:
Azure Artifacts or the TFS Package Management service.
Other NuGet services such as NuGet.org.
Your internal NuGet repository.
- task: NuGetCommand@2
inputs:
command: pack
packagesToPack: '**/*.csproj'
The NuGet task supports a number of options. The following list describes some of the key ones. The task
documentation describes the rest.
packagesToPack : The path to the files that describe the package you want to create. If you don't have these,
see the NuGet documentation to get started.
configuration : The default is $(BuildConfiguration) unless you want to always build either Debug or
Release packages, or unless you have a custom build configuration.
packDestination : The default is $(Build.ArtifactStagingDirectory) . If you set this, make a note of the
location so you can use it in the publish task.
YAML is not supported in TFS.
Package versioning
In NuGet, a particular package is identified by its name and version number. A recommended approach to
versioning packages is to use Semantic Versioning. Semantic version numbers have three numeric components,
Major.Minor.Patch .
When you fix a bug, you increment the patch ( 1.0.0 to 1.0.1 ). When you release a new backward-compatible
feature, you increment the minor version and reset the patch version to 0 ( 1.4.17 to 1.5.0 ). When you make a
backward-incompatible change, you increment the major version and reset the minor and patch versions to 0 (
2.6.5 to 3.0.0 ).
In addition to Major.Minor.Patch , Semantic Versioning provides for a prerelease label. Prerelease labels are a
hyphen ( - ) followed by whatever letters and numbers you want. Version 1.0.0-alpha , 1.0.0-beta , and
1.0.0-foo12345 are all prerelease versions of 1.0.0 . Even better, Semantic Versioning specifies that when you
sort by version number, those prerelease versions fit exactly where you'd expect: 0.99.999 < 1.0.0-alpha <
1.0.0 < 1.0.1-beta .
When you create a package in continuous integration (CI), you can use Semantic Versioning with prerelease
labels. You can use the NuGet task for this purpose. It supports the following formats:
Use the same versioning scheme for your builds and packages, if that scheme has at least three parts
separated by periods. The following build pipeline formats are examples of versioning schemes that are
compatible with NuGet:
$(Major).$(Minor)$(Rev:.r) , where Major and Minor are two variables defined in the build pipeline.
This format will automatically increment the build number and the package version with a new patch
number. It will keep the major and minor versions constant, until you change them manually in the
build pipeline.
$(Major).$(Minor).$(Patch).$(date:yyyyMMdd) , where Major , Minor , and Patch are variables defined
in the build pipeline. This format will create a new prerelease label for the build and package while
keeping the major, minor, and patch versions constant.
Use a version that's different from the build number. You can customize the major, minor, and patch
versions for your packages in the NuGet task, and let the task generate a unique prerelease label based on
date and time.
Use a script in your build pipeline to generate the version.
YAML
Classic
This example shows how to use the date and time as the prerelease label.
variables:
Major: '1'
Minor: '0'
Patch: '0'
steps:
- task: NuGetCommand@2
inputs:
command: pack
versioningScheme: byPrereleaseNumber
majorVersion: '$(Major)'
minorVersion: '$(Minor)'
patchVersion: '$(Patch)'
For a list of other possible values for versioningScheme , see the NuGet task.
YAML is not supported in TFS.
Although Semantic Versioning with prerelease labels is a good solution for packages produced in CI builds,
including a prerelease label is not ideal when you want to release a package to your users. The challenge is that
after packages are produced, they're immutable. They can't be updated or replaced.
When you're producing a package in a build, you can't know whether it will be the version that you aim to release
to your users or just a step along the way toward that release. Although none of the following solutions are ideal,
you can use one of these depending on your preference:
After you validate a package and decide to release it, produce another package without the prerelease
label and publish it. The drawback of this approach is that you have to validate the new package again, and
it might uncover new issues.
Publish only packages that you want to release. In this case, you won't use a prerelease label for every
build. Instead, you'll reuse the same package version for all packages. Because you do not publish
packages from every build, you do not cause a conflict.
NOTE
Please note that DotNetCore and DotNetStandard packages should be packaged with the DotNetCoreCLI@2 task to
avoid System.InvalidCastExceptions. See the .NET Core CLI task for more details.
- task: DotNetCoreCLI@2
displayName: 'dotnet pack $(buildConfiguration)'
inputs:
command: pack
versioningScheme: byPrereleaseNumber
majorVersion: '$(Major)'
minorVersion: '$(Minor)'
patchVersion: '$(Patch)'
steps:
- task: NuGetAuthenticate@0
displayName: 'NuGet Authenticate'
- task: NuGetCommand@2
displayName: 'NuGet push'
inputs:
command: push
publishVstsFeed: '<projectName>/<feed>'
allowPackageConflicts: true
NOTE
Artifact feeds that were created through the classic user interface are project scoped feeds. You must include the project
name in the publishVstsFeed parameter: publishVstsFeed: '<projectName>/<feed>' . See Project-scoped feeds vs.
Organization-scoped feeds to learn about the difference between the two types.
To publish to an external NuGet feed, you must first create a service connection to point to that feed. You can do
this by going to Project settings , selecting Ser vice connections , and then creating a New ser vice
connection . Select the NuGet option for the service connection. To connect to the feed, fill in the feed URL and
the API key or token.
To publish a package to a NuGet feed, add the following snippet to your azure-pipelines.yml file.
- task: NuGetAuthenticate@0
inputs:
nuGetServiceConnections: '<Name of the NuGet service connection>'
- task: NuGetCommand@2
inputs:
command: push
nuGetFeedType: external
publishFeedCredentials: '<Name of the NuGet service connection>'
versioningScheme: byEnvVar
versionEnvVar: <VersionVariableName>
FAQ
Where can I learn more about Azure Artifacts and the TFS Package Management service?
Package Management in Azure Artifacts and TFS
Publish Python packages in Azure Pipelines
To publish Python packages produced by your build, you'll use twine, a widely used tool for publishing Python
packages. This guide covers how to do the following in your pipeline:
1. Install twine on your build agent
2. Authenticate twine with your Azure Artifacts feeds
3. Use a custom task that invokes twine to publish your Python packages
Install twine
First, you'll need to run pip install twine to ensure the build agent has twine installed.
YAML
Classic
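A minimal sketch of the install step:

steps:
- script: 'pip install twine'
  displayName: 'Install twine'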
Check out the script YAML task reference for the schema for this command.
- task: TwineAuthenticate@0
inputs:
artifactFeeds: 'feed_name1, feed_name2' # Azure Artifacts feeds within this organization
externalFeeds: 'feed_name1, feed_name2' # service connections for feeds or registries outside the organization
Check out the YAML schema reference for more details on the script keyword.
WARNING
We strongly recommend NOT checking any credentials or tokens into source control.
Publish symbols for debugging
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
NOTE
A symbol server is available with Azure Artifacts in Azure DevOps Services and works best with Visual Studio 2017
Update 4 or later . Team Foundation Server users and users without the Azure Artifacts extension can publish
symbols to a file share using a build task.
Symbol servers enable debuggers to automatically retrieve the correct symbol files without knowing product
names, build numbers, or package names. To learn more about symbols, read the concept page. To consume
symbols, see this page for Visual Studio or this page for WinDbg.
Publish symbols
To publish symbols to the symbol server in Azure Artifacts, include the Index Sources and Publish Symbols task in
your build pipeline. Configure the task as follows:
For Version , select 2.* .
For Version , select 1.* .
For Symbol Server Type , select Symbol Server in this organization/collection (requires Azure
Artifacts) .
Use the Path to symbols folder argument to specify the root directory that contains the .pdb files to be
published.
Use the Search pattern argument to specify search criteria to find the .pdb files in the folder that you specify
in Path to symbols folder . You can use a single-folder wildcard ( * ) and recursive wildcards ( ** ). For
example, **\bin\**\*.pdb searches for all .pdb files in all subdirectories named bin.
Publish symbols for NuGet packages
To publish symbols for NuGet packages, include the preceding task in the build pipeline that produces the NuGet
packages. Then the symbols will be available to all users in the Azure DevOps organization.
Portable PDBs
If you're using portable PDBs, you still need to use the Index Sources and Publish Symbols task to publish
symbols. For portable PDBs, the build does the indexing; however, you should use SourceLink to index the symbols
as part of the build. Note that Azure Artifacts doesn't presently support ingesting NuGet symbol packages and so
the task is used to publish the generated PDB files into the symbols service directly.
The preceding example contains two sections: the variables section and the source files section. The information in
the variables section can be overridden. The variables can use other variables, and can use information from the
source files section.
To override one or more of the variables while debugging with Visual Studio, create an .ini file
%LOCALAPPDATA%\SourceServer\srcsrv.ini . Set the content of the .ini file to override the variables. For example:
[variables]
TFS_COLLECTION=http://DIFFERENT_SERVER:8080/tfs/DifferentCollection
IMPORTANT
If you want to delete symbols that were published using the Index Sources & Publish Symbols task, you must first
remove the build that generated those symbols. This can be accomplished by using retention policies to clean up your build
or by manually deleting the run. For more information about debugging your app, see Debug with symbols in Visual Studio,
and Debug with symbols in WinDbg.
FAQ
Q: What's the retention policy for the symbols stored in the Azure Pipelines symbol server?
A: Symbols have the same retention as the build. When you delete a build, you also delete the symbols that the
build produced.
Q: Can I use source indexing on a portable .pdb file created from a .NET Core assembly?
A: No, source indexing is currently not enabled for portable .pdb files because SourceLink doesn't support
authenticated source repositories. The workaround at the moment is to configure the build to generate full .pdb
files.
Q: Is this available in TFS?
A: In TFS, you can bring your own file share and set it up as a symbol server, as described in this blog.
Publish and download Universal Packages in Azure
Pipelines
Azure Pipelines
When you want to publish a set of related files from a pipeline as a single package, you can use Universal Packages
hosted in Azure Artifacts feeds.
- task: UniversalPackages@0
displayName: Universal Publish
inputs:
command: publish
publishDirectory: '$(Build.ArtifactStagingDirectory)'
vstsFeedPublish: '<projectName>/<feedName>'
vstsFeedPackagePublish: '<Package name>'
packagePublishDescription: '<Package description>'
NOTE
See Task control options to learn about the available control options for your task.
To publish to an Azure Artifacts feed, set the Project Collection Build Service identity to be a Contributor on
the feed. To learn more about permissions to Package Management feeds, see Secure and share packages using
feed permissions.
To publish to an external Universal Packages feed, you must first create a service connection to point to that feed.
You can do this by going to Project settings , selecting Service connections , and then creating a New Service
Connection . Select the Team Foundation Server/Team Services option for the service connection. Fill in the
feed URL and a personal access token to connect to the feed.
Package versioning
In Universal Packages, a particular package is identified by its name and version number. Currently, Universal
Packages require Semantic Versioning. Semantic version numbers have three numeric components,
Major.Minor.Patch . When you fix a bug, you increment the patch ( 1.0.0 to 1.0.1 ). When you release a new
backward-compatible feature, you increment the minor version and reset the patch version to 0 ( 1.4.17 to 1.5.0
). When you make a backward-incompatible change, you increment the major version and reset the minor and
patch versions to 0 ( 2.6.5 to 3.0.0 ).
The Universal Packages task automatically selects the next major, minor, or patch version for you when you publish
a new package. Just set the appropriate option.
YAML
Classic
In the Universal Packages snippet that you added previously, add a versionOption . The options for publishing a
new package version are: major , minor , patch , or custom .
Selecting custom allows you to specify any SemVer2 compliant version number for your package. The other
options will get the latest version of the package from your feed and increment the chosen version segment by 1.
So if you have a testPackage v1.0.0, and you publish a new version of testPackage and select the major option, your
package version number will be 2.0.0. If you select the minor option, your package version will be 1.1.0, and if you
select the patch option, your package version will be 1.0.1.
One thing to keep in mind is that if you select the custom option, you must also provide a versionPublish .
- task: UniversalPackages@0
displayName: Universal Publish
inputs:
command: publish
publishDirectory: '$(Build.ArtifactStagingDirectory)'
vstsFeedPublish: '<projectName>/<feedName>'
vstsFeedPackagePublish: '<Package name>'
versionOption: custom
versionPublish: '<Package version>'
packagePublishDescription: '<Package description>'
NOTE
See Task control options to learn about the available control options for your task.
steps:
- task: UniversalPackages@0
displayName: 'Universal download'
inputs:
command: download
vstsFeed: '<projectName>/<feedName>'
vstsFeedPackage: '<packageName>'
vstsPackageVersion: 1.0.0
downloadDirectory: '$(Build.SourcesDirectory)\someFolder'
vstsFeed : The project and feed name that the package will be downloaded from.
NOTE
See Task control options to learn about the available control options for your task.
To download a Universal Package from an external source, use the following snippet:
steps:
- task: UniversalPackages@0
displayName: 'Universal download'
inputs:
command: download
feedsToUse: external
externalFeedCredentials: MSENG2
feedDownloadExternal: 'fabrikamFeedExternal'
packageDownloadExternal: 'fabrikam-package'
versionDownloadExternal: 1.0.0
NOTE
See Task control options to learn about the available control options for your task.
FAQ
Where can I learn more about Azure Artifacts and the TFS Package Management service?
Package Management in Azure Artifacts and TFS
In what versions of Azure DevOps/TFS are Universal Packages available?
Universal Packages are only available for Azure DevOps Services.
Restore NuGet packages in Azure Pipelines
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
NuGet package restore allows you to have all your project's dependencies available without having to store them in
source control. This allows for a cleaner development environment and smaller repository size. You can restore
your NuGet packages using the NuGet restore build task, the NuGet CLI, or the .NET Core CLI. This article will show
you how to restore your NuGet packages using both YAML and the classic Azure pipelines.
Prerequisites
Set up your solution to consume packages from an Azure Artifacts feed.
Create your first pipeline for your repository.
Set up the build identity permissions for your feed.
To restore your NuGet packages, run the following command in your project directory:
nuget.exe restore
command : The dotnet command to run. Options: build , push , pack , restore , run , test , and custom .
projects : The path to the csproj file(s) to use. You can use wildcards (e.g. **/*.csproj for all .csproj files in all
subfolders).
feedsToUse : You can either choose to select a feed or commit a NuGet.config file to your source code repository
and set its path using nugetConfigPath . Options: select , config .
vstsFeed : This argument is required when feedsToUse == Select . Value format: <projectName>/<feedName> .
includeNuGetOrg : Use packages from NuGet.org.
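Putting these arguments together, a minimal sketch of a restore step (the project pattern and feed name are placeholders):

steps:
- task: DotNetCoreCLI@2
  displayName: 'dotnet restore'
  inputs:
    command: restore
    projects: '**/*.csproj'
    feedsToUse: select
    vstsFeed: '<projectName>/<feedName>'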
3. Create your PAT with the Packaging (read) scope and keep it handy.
4. In the Azure DevOps organization that contains the build, edit the build's NuGet step and ensure you're using
version 2 or greater of the task, using the version selector.
5. In the Feeds and authentication section, ensure you've selected the Feeds in my NuGet.config radio
button.
6. Set the path to your NuGet.config in the Path to NuGet.config .
7. In Credentials for feeds outside this organization/collection , select the + New .
8. In the service connection dialog that appears, select the External Azure DevOps Ser ver option and enter
a connection name, the feed URL (make sure it matches what's in your NuGet.config) and the PAT you
created in step 3.
FAQ
Why can't my build restore NuGet packages?
NuGet restore can fail due to a variety of issues. One of the most common issues is the introduction of a new
project in your solution that requires a target framework that isn't understood by the version of NuGet your build is
using. This issue generally doesn't present itself on a developer machine because Visual Studio updates the NuGet
restore mechanism at the same time it adds new project types. We're looking into similar features for Azure
Artifacts. In the meantime though, the first thing to try when you can't restore packages is to update to the latest
version of NuGet.
How do I use the latest version of NuGet?
If you're using Azure Pipelines or TFS 2018, new template-based builds will work automatically thanks to a new
"NuGet Tool Installer" task that's been added to the beginning of all build templates that use the NuGet task. We
periodically update the default version that's selected for new builds around the same time we install Visual Studio
updates on the Hosted build agents.
For existing builds, just add or update a NuGet Tool Installer task to select the version of NuGet for all the
subsequent tasks. You can see all available versions of NuGet on nuget.org.
Related articles
Publish to NuGet feeds (YAML/Classic)
Publish and consume build artifacts
Use Jenkins to restore and publish packages
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
Azure Artifacts works with the continuous integration tools your team already uses. In this Jenkins walkthrough,
you'll create a NuGet package and publish it to an Azure Artifacts feed. If you need help on Jenkins setup, you can
learn more on the Jenkins wiki.
Setup
This walkthrough uses Jenkins 1.635 running on Windows 10. The walkthrough is simple, so any recent Jenkins
and Windows versions should work.
Ensure the following Jenkins plugins are enabled:
MSBuild 1.24
Git 2.4.0
Git Client 1.19.0
Credentials Binding plugin 1.6
Some of these plugins are enabled by default. Others you will need to install by using Jenkins's "Manage Plugins"
feature.
The example project
The sample project is a simple shared library written in C#.
To follow along with this walkthrough, create a new C# Class Library solution in Visual Studio 2015.
Name the solution "FabrikamLibrary" and uncheck the Create director y for solution checkbox.
On the FabrikamLibrary project's context menu, choose Proper ties , then choose Assembly Information .
Edit the description and company fields. Now generating a NuGet package is easier.
Check the new solution into a Git repo where your Jenkins server can access it later.
Under Source Code Management, set the build to use Git and select your Git repo.
Under Build Environment, select the Use secret text(s) or file(s) option.
Add a new Username and password (separated) binding.
Set the Username Variable to "FEEDUSER" and the Password Variable to "FEEDPASS". These are the
environment variables Jenkins will fill in with your credentials when the build runs.
Choose the Add button to create a new username and password credential in Jenkins.
Set the username to "token" and the password to the PAT you generated earlier. Choose Add to save
these credentials.
A resource is anything used by a pipeline that lives outside the pipeline. Resources are defined at one place and can
be consumed anywhere in your pipeline. Resources can be protected or open.
Resources include:
agent pools
variable groups
secure files
service connections
environments
repositories
artifacts
pipelines
containers
Resources in YAML
Azure Pipelines
A resource is anything used by a pipeline that lives outside the pipeline. Pipeline resources include:
CI/CD pipelines that produce artifacts (Azure Pipelines, Jenkins, etc.)
code repositories (Azure Repos Git repos, GitHub, GitHub Enterprise, Bitbucket Cloud)
container image registries (Azure Container Registry, Docker Hub, etc.)
package feeds (GitHub packages)
Why resources?
Resources are defined at one place and can be consumed anywhere in your pipeline. Resources provide you the full
traceability of the services consumed in your pipeline including the version, artifacts, associated commits, and
work-items. You can fully automate your DevOps workflow by subscribing to trigger events on your resources.
Resources in YAML represent sources of types pipelines, builds, repositories, containers, and packages.
Schema
resources:
pipelines: [ pipeline ]
builds: [ build ]
repositories: [ repository ]
containers: [ container ]
packages: [ package ]
Variables
When a resource triggers a pipeline, the following variables are set:
resources.triggeringAlias
resources.triggeringCategory
Resources: pipelines
If you have an Azure Pipeline that produces artifacts, you can consume the artifacts by defining a pipelines
resource. pipelines is a dedicated resource only for Azure Pipelines. You can also set triggers on pipeline resource
for your CD workflows.
In your resource definition, pipeline is a unique value that you can use to reference the pipeline resource later on.
source is the name of the pipeline that produces an artifact.
IMPORTANT
When you define a resource trigger, if its pipeline resource is from the same repo as the current pipeline, triggering follows
the same branch and commit on which the event is raised. But if the pipeline resource is from a different repo, the current
pipeline is triggered on the default branch.
resources:
pipelines:
- pipeline: MyCIAlias
project: Fabrikam
source: Fabrikam-CI
branch: master ### This branch input cannot have wildcards. It is used for evaluating the default
version when the pipeline is triggered manually or scheduled.
tags: ### These tags are used for resolving default version when the pipeline is triggered
manually or scheduled
- Production ### Tags are AND'ed
- PreProduction
In case your pipeline is triggered automatically, the CI pipeline version will be picked based on the trigger event.
The default version info provided is irrelevant.
If you provide branches, a new pipeline will be triggered whenever a CI run is successfully completed that
matches the included branches.
If you provide tags, a new pipeline will be triggered whenever a CI run is successfully completed that matches
all the tags mentioned.
If you provide stages, a new pipeline run will be triggered whenever a CI run has successfully completed all the
stages mentioned.
If you provide branches, tags, and stages together, a new pipeline run is triggered whenever a CI run matches all
the conditions.
If you don't provide anything and just say trigger: true , a new pipeline run is triggered whenever a CI run is
successfully completed.
If you don't provide any trigger for the resource, no pipeline run will be triggered. Triggers are disabled by
default unless you specifically enable them.
resources:
pipelines:
- pipeline: SmartHotel
project: DevOpsProject
source: SmartHotel-CI
trigger:
branches:
include:
- releases/*
- master
exclude:
- topic/*
tags:
- Verified
- Signed
stages:
- Production
- PreProduction
steps:
- download: [ current | pipeline resource identifier | none ] # disable automatic download if "none"
artifact: string ## artifact name, optional; downloads all the available artifacts if not specified
patterns: string # patterns representing files to include; optional
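For example, to limit what is downloaded from the MyCIAlias pipeline resource defined earlier (the artifact name and pattern below are illustrative placeholders):

steps:
- download: MyCIAlias     # pipeline resource identifier from the resources section
  artifact: WebApp        # assumed name of an artifact published by that pipeline
  patterns: '**/*.zip'    # only download files matching this pattern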
Resources: builds
If you have any external CI build system that produces artifacts, you can consume artifacts with a builds resource.
A builds resource can be any external CI system such as Jenkins, TeamCity, CircleCI, etc.
Schema
Example
builds is an extensible category. You can write an extension to consume artifacts from your builds service
(CircleCI, TeamCity etc.) and introduce a new type of service as part of builds . Jenkins is a type of resource in
builds .
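As a sketch, a Jenkins build resource might be declared along these lines (the alias, service connection, and job names are illustrative):

resources:
  builds:
  - build: MyJenkinsJob              # identifier for the build resource
    type: Jenkins                    # type of the external build service
    connection: MyJenkinsConnection  # service connection to the Jenkins server
    source: JenkinsJobName           # job that produces the artifacts
    version: '20'                    # build number to pick; defaults to the latest successful build
    trigger: true                    # triggers are not enabled by default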
IMPORTANT
Triggers are only supported for hosted Jenkins where Azure DevOps has line of sight with Jenkins server.
IMPORTANT
build resource artifacts are not automatically downloaded in your jobs/deploy-jobs. You need to explicitly add
downloadBuild task for consuming the artifacts.
Schema
Example
- downloadBuild: string # identifier for the resource from which to download artifacts
artifact: string # artifact to download; if left blank, downloads all artifacts associated with the resource provided
patterns: string | [ string ] # a minimatch path or list of minimatch paths to download; if blank, the entire artifact is downloaded
Resources: repositories
If your pipeline has templates in another repository, or if you want to use multi-repo checkout with a repository
that requires a service connection, you must let the system know about that repository. The repository keyword
lets you specify an external repository.
Schema
Example
resources:
repositories:
- repository: string # identifier (A-Z, a-z, 0-9, and underscore)
type: enum # see the following "Type" topic
name: string # repository name (format depends on `type`)
ref: string # ref name to use; defaults to 'refs/heads/master'
endpoint: string # name of the service connection to use (for types that aren't Azure Repos)
trigger: # CI trigger for this repository, no CI trigger if skipped (only works for Azure Repos)
branches:
include: [ string ] # branch names which will trigger a build
exclude: [ string ] # branch names which will not
tags:
include: [ string ] # tag names which will trigger a build
exclude: [ string ] # tag names which will not
paths:
include: [ string ] # file paths which must match to trigger a build
exclude: [ string ] # file paths which will not trigger a build
Type
Pipelines support the following values for the repository type: git , github , githubenterprise , and bitbucket .
The git type refers to Azure Repos Git repos.
If you specify type: git, the name value refers to another repository in the same project. An example is
name: otherRepo . To refer to a repo in another project within the same organization, prefix the name with
that project's name. An example is name: OtherProject/otherRepo .
If you specify type: github , the name value is the full name of the GitHub repo and includes the user or
organization. An example is name: Microsoft/vscode . GitHub repos require a GitHub service connection for
authorization.
If you specify type: githubenterprise , the name value is the full name of the GitHub Enterprise repo and
includes the user or organization. An example is name: Microsoft/vscode . GitHub Enterprise repos require a
GitHub Enterprise service connection for authorization.
If you specify type: bitbucket , the name value is the full name of the Bitbucket Cloud repo and includes the
user or organization. An example is name: MyBitbucket/vscode . Bitbucket Cloud repos require a Bitbucket
Cloud service connection for authorization.
checkout your repository
Use the checkout keyword to consume the repos defined as part of a repository resource.
Schema
steps:
- checkout: string # identifier for your repository resource
clean: boolean # if true, execute `git clean -ffdx && git reset --hard HEAD` before fetching
fetchDepth: number # the depth of commits to ask Git to fetch; defaults to no limit
lfs: boolean # whether to download Git-LFS files; defaults to false
submodules: true | recursive # set to 'true' for a single level of submodules or 'recursive' to get
submodules of submodules; defaults to not checking out submodules
path: string # path to check out source code, relative to the agent's build directory (e.g. \_work\1);
defaults to a directory called `s`
persistCredentials: boolean # if 'true', leave the OAuth token in the Git config after the initial fetch;
defaults to false
Repos from the repository resource are not automatically synced in your jobs. Use checkout to fetch your repos
as part of your jobs.
For more information, see Check out multiple repositories in your pipeline.
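For example, assuming a GitHub repository resource with the alias tools (the repo name and service connection are placeholders), a job can check out both the current repo and the resource:

resources:
  repositories:
  - repository: tools
    type: github
    name: Contoso/tools
    endpoint: MyGitHubConnection

steps:
- checkout: self     # the repository that triggered the pipeline
- checkout: tools    # the repository resource defined above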
Resources: containers
If you need to consume a container image as part of your CI/CD pipeline, you can achieve it using containers . A
container resource can be a public or private Docker registry, or Azure Container Registry.
If you need to consume images from a Docker registry as part of your pipeline, you can define a generic container
resource (no type keyword required).
Schema
Example
resources:
containers:
- container: string # identifier (A-Z, a-z, 0-9, and underscore)
image: string # container image name
options: string # arguments to pass to container at startup
endpoint: string # reference to a service connection for the private registry
env: { string: string } # list of environment variables to add
ports: [ string ] # ports to expose on the container
volumes: [ string ] # volumes to mount on the container
A generic container resource can be used as an image consumed as part of your job or it can also be used for
Container jobs.
You can use a first class container resource type for Azure Container Registry (ACR) to consume your ACR images.
This resource type can be used as part of your jobs and also to enable automatic pipeline triggers.
Schema
Example
resources: # types: pipelines | repositories | containers | builds | packages
containers:
- container: string # identifier for the container resource
type: string # type of the registry like ACR, GCR etc.
azureSubscription: string # Azure subscription (ARM service connection) for container registry;
resourceGroup: string # resource group for your ACR
registry: string # registry for container images
repository: string # name of the container image repository in ACR
trigger: # Triggers are not enabled by default and need to be set explicitly
tags:
include: [ string ] # image tags to consider for the trigger events; optional, defaults to any new tag
exclude: [ string ] # image tags to discard for the trigger events; optional, defaults to none
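As a sketch, an ACR container resource with a tag-filtered trigger might look like this (the alias, service connection, registry, and tag pattern are placeholders):

resources:
  containers:
  - container: frontendImage             # identifier for the container resource
    type: ACR
    azureSubscription: MyArmConnection   # ARM service connection to the Azure subscription
    resourceGroup: my-resource-group
    registry: myregistry
    repository: web/frontend
    trigger:
      tags:
        include:
        - production-*                   # only new tags matching this pattern trigger a run

The following variables are available for a container resource in the run: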
resources.container.<Alias>.type
resources.container.<Alias>.registry
resources.container.<Alias>.repository
resources.container.<Alias>.tag
resources.container.<Alias>.digest
resources.container.<Alias>.URI
resources.container.<Alias>.location
Note: the location variable is only applicable to the ACR type of container resources.
Resources: packages
You can consume NuGet and npm GitHub packages as a resource in YAML pipelines.
When specifying package resources, set the package as NuGet or npm. You can also enable automated pipeline
triggers when a new package version gets released.
To use GitHub packages, you will need to use PAT-based authentication and create a GitHub service connection that
uses PAT.
By default, packages will not be automatically downloaded into jobs. To download, use getPackage .
Schema
Example
resources:
packages:
- package: myPackageAlias # alias for the package resource
type: Npm # type of the package NuGet/npm
connection: GitHubConnectionName # Github service connection with the PAT type
name: nugetTest/nodeapp # <Repository>/<Name of the package>
version: 1.0.1 # Version of the package to consume; Optional; Defaults to latest
trigger: true # To enable automated triggers (true/false); Optional; Defaults to no triggers
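To download the package in a job, reference its alias with the getPackage step mentioned above, for example:

steps:
- getPackage: myPackageAlias   # downloads the package resource defined above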
Resources: webhooks
With other resources (such as pipelines, containers, build, and packages) you can consume artifacts and enable
automated triggers. However, you cannot automate your deployment process based on other external events or
services. The webhooks resource enables you to integrate your pipeline with any external service and automate the
workflow. You can subscribe to any external events through its webhooks (GitHub, GitHub Enterprise, Nexus,
Artifactory, etc.) and trigger your pipelines.
Here are the steps to configure the webhook triggers:
1. Set up a webhook on your external service. When creating your webhook, you need to provide the following
info:
Request Url -
https://dev.azure.com/<ADO Organization>/_apis/public/distributedtask/webhooks/<WebHook Name>?api-
version=6.0-preview
Secret - This is optional. If you need to secure your JSON payload, provide the Secret value
2. Create a new "Incoming Webhook" service connection. This is a newly introduced Service Connection Type that
will allow you to define three important pieces of information:
Webhook Name : The name of the webhook should match the webhook created in your external service.
HTTP Header - The name of the HTTP header in the request that contains the payload hash value for
request verification. For example, in the case of GitHub, the request header will be "X-Hub-Signature".
Secret - The secret is used to parse the payload hash used for verification of the incoming request (this
is optional). If you have used a secret in creating your webhook, you will need to provide the same secret
key
3. A new resource type called webhooks is introduced in YAML pipelines. For subscribing to a webhook event,
you need to define a webhook resource in your pipeline and point it to the Incoming webhook service
connection. You can also define additional filters on the webhook resource based on the JSON payload data
to further customize the triggers for each pipeline, and you can consume the payload data in the form of
variables in your jobs.
4. Whenever a webhook event is received by the Incoming Webhook service connection, a new run will be
triggered for all the pipelines subscribed to the webhook event. You can consume the JSON payload data in
your jobs using the format ${{ parameters.<WebhookAlias>.<JSONPath> }} .
Schema
Example
resources:
webhooks:
- webhook: MyWebhookTriggerAlias ### Webhook alias
connection: IncomingWebhookConnection ### Incoming webhook service connection
filters: ### List of JSON parameters to filter; Parameters are AND'ed
- path: JSONParameterPath ### JSON path in the payload
value: JSONParameterExpectedValue ### Expected value in the path provided
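As a sketch, assuming the ${{ parameters.<WebhookAlias>.<JSONPath> }} format described above, a job could read the filtered payload value like this:

steps:
- script: |
    echo "Payload value: ${{ parameters.MyWebhookTriggerAlias.JSONParameterPath }}"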
Webhooks are a great way to automate your workflow based on any external webhook event that is not supported
by first class resources like pipelines, builds, containers, and packages. Also, for on-premises services where Azure
DevOps doesn't have visibility into the process, you can configure webhooks on the service to trigger your
pipelines automatically.
For pipeline resources, you can see all the available runs across all branches. You can search them based on the
pipeline number or branch. You can pick a run that succeeded, failed, or is still in progress. This flexibility is
given to ensure you can run your CD pipeline if you are sure your CI pipeline produced all the artifacts you need,
and you don't need to wait for the CI run to complete or rerun it because an unrelated stage in the CI run failed.
However, when we evaluate the default version for scheduled triggers, or if you don't use the manual version
picker, we only consider successfully completed CI runs.
For resources where you can't fetch available versions (like GitHub packages), we will show a text box as part of
the version picker so that the user can provide the version to be picked in the run.
In this case, you will see an option to authorize the resources on the failed build. If you are a member of
the User role for the resource, you can select this option. Once the resources are authorized, you can
start a new build.
If you continue to have problems authorizing resources, verify that the agent pool security roles for your
project are correct.
Traceability
We provide full traceability for any resource consumed at a pipeline or deployment-job level.
Pipeline traceability
For every pipeline run, we show information about:
1. The resource that triggered the pipeline (if it was triggered by a resource).
2. The version of the resource and the artifacts consumed.
Use a variable group to store values that you want to control and make available across multiple pipelines. You
can also use variable groups to store secrets and other values that might need to be passed into a YAML
pipeline. Variable groups are defined and managed in the Library page under Pipelines .
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are
called phases.
NOTE
Variable groups can be used in a build pipeline in only Azure DevOps and TFS 2018. They cannot be used in a build
pipeline in earlier versions of TFS.
variables:
- group: my-variable-group
Thereafter variables from the variable group can be used in your YAML file.
If you use both variables and variable groups, you'll have to use name / value syntax for the individual (non-
grouped) variables:
variables:
- group: my-variable-group
- name: my-bare-variable
value: 'value of my-bare-variable'
To reference a variable group, you can use macro syntax or a runtime expression. In this example, the group
my-variable-group has a variable named myhello .
variables:
- group: my-variable-group
- name: my-passed-variable
value: $[variables.myhello] # uses runtime expression
steps:
- script: echo $(myhello) # uses macro syntax
- script: echo $(my-passed-variable)
You can reference multiple variable groups in the same pipeline. If multiple variable groups include the same
variable, the variable group included last in your YAML file will set the variable's value.
variables:
- group: my-first-variable-group
- group: my-second-variable-group
You can also reference a variable group in a template. In the template variables.yml , the group
my-variable-group is referenced. The variable group includes a variable named myhello .
# variables.yml
variables:
- group: my-variable-group
In this pipeline, the variable $(myhello) from the variable group my-variable-group included in
variables.yml is referenced.
# azure-pipeline.yml
stages:
- stage: MyStage
variables:
- template: variables.yml
jobs:
- job: Test
steps:
- script: echo $(myhello)
To work with variable groups, you must authorize the group. This is a security feature: if you only had to name
the variable group in YAML, then anyone who can push code to your repository could extract the contents of
secrets in the variable group. To do this, or if you encounter a resource authorization error in your build, use
one of the following techniques:
If you want to authorize any pipeline to use the variable group, which may be a suitable option if you do
not have any secrets in the group, go to Azure Pipelines, open the Library page, choose Variable
groups , select the variable group in question, and enable the setting Allow access to all pipelines .
If you want to authorize a variable group for a specific pipeline, open the pipeline by selecting Edit and
queue a build manually. You will see a resource authorization error and an "Authorize resources" action
on the error. Choose this action to explicitly add the pipeline as an authorized user of the variable group.
NOTE
If you added a variable group to a pipeline and did not get a resource authorization error in your build when you
expected one, turn off the Allow access to all pipelines setting described above.
Optional parameters
action : Specifies the action that can be performed on the variable groups. Accepted values are manage,
none and use.
continuation-token : Lists the variable groups after a continuation token is provided.
group-name : Name of the variable group. Wildcards are accepted, such as new-var* .
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .
query-order : Lists the results in either ascending or descending (the default) order. Accepted values are
Asc and Desc.
top : Number of variable groups to list.
Example
The following command lists the top 3 variable groups in ascending order and returns the results in table
format.
az pipelines variable-group list --top 3 --query-order Asc --output table
Parameters
group-id : Required. ID of the variable group. To find the variable group ID, see List variable groups.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .
Example
The following command shows details for the variable group with the ID 4 and returns the results in YAML
format.
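az pipelines variable-group show --group-id 4 --output yaml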
authorized: false
description: Variables for my new app
id: 4
name: MyNewAppVariables
providerData: null
type: Vsts
variables:
app-location:
isSecret: null
value: Head_Office
app-name:
isSecret: null
value: Fabrikam
Parameters
group-id : Required. ID of the variable group. To find the variable group ID, see List variable groups.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .
yes : Optional. Doesn't prompt for confirmation.
Example
The following command deletes the variable group with the ID 1 and doesn't prompt for confirmation.
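az pipelines variable-group delete --group-id 1 --yes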
Parameters
group-id : Required. ID of the variable group. To find the variable group ID, see List variable groups.
name : Required. Name of the variable you are adding.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .
secret : Optional. Indicates whether the variable's value is a secret. Accepted values are false and true.
value : Required for a non-secret variable. Value of the variable. For secret variables, if the value parameter is not
provided, it is picked from environment variable prefixed with AZURE_DEVOPS_EXT_PIPELINE_VAR_ or user is
prompted to enter it via standard input. For example, a variable named MySecret can be input using the
environment variable AZURE_DEVOPS_EXT_PIPELINE_VAR_MySecret .
Example
The following command creates a variable in the variable group with ID of 4 . The new variable is named
requires-login and has a value of True , and the result is shown in table format.
az pipelines variable-group variable create --group-id 4 --name requires-login --value True --output table
Parameters
group-id : Required. ID of the variable group. To find the variable group ID, see List variable groups.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .
Example
The following command lists all of the variables in the variable group with ID of 4 and shows the result in
table format.
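A command along these lines does that:

az pipelines variable-group variable list --group-id 4 --output table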
Parameters
group-id : Required. ID of the variable group. To find the variable group ID, see List variable groups.
name : Required. Name of the variable you are adding.
new-name : Optional. Specify to change the name of the variable.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .
prompt-value : Set to true to update the value of a secret variable using an environment variable or a prompt via standard input. Accepted values are false and true.
secret : Optional. Indicates whether the variable's value is kept secret. Accepted values are false and true.
value : Updates the value of the variable. For secret variables, use the prompt-value parameter to be prompted to enter it via standard input. For non-interactive consoles, it can be picked up from an environment variable prefixed with AZURE_DEVOPS_EXT_PIPELINE_VAR_ . For example, a variable named MySecret can be input using the environment variable AZURE_DEVOPS_EXT_PIPELINE_VAR_MySecret .
Example
The following command updates the requires-login variable with the new value False in the variable group
with ID of 4 . It specifies that the variable is a secret and shows the result in YAML format. Notice that the
output shows the value as null instead of False since it is a secret value (hidden).
az pipelines variable-group variable update --group-id 4 --name requires-login --value False --secret true --output yaml

requires-login:
  isSecret: true
  value: null
Parameters
group-id : Required. ID of the variable group. To find the variable group ID, see List variable groups.
name : Required. Name of the variable you are deleting.
org : Azure DevOps organization URL. You can configure the default organization using
az devops configure -d organization=ORG_URL . Required if not configured as default or picked up using
git config . Example: --org https://dev.azure.com/MyOrganizationName/ .
project : Name or ID of the project. You can configure the default project using
az devops configure -d project=NAME_OR_ID . Required if not configured as default or picked up using
git config .
yes : Optional. Doesn't prompt for confirmation.
Example
The following command deletes the requires-login variable from the variable group with ID of 4 and
prompts for confirmation.
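A command along these lines does that; omitting the --yes flag is what causes the confirmation prompt:

az pipelines variable-group variable delete --group-id 4 --name requires-login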
2. Specify your Azure subscription endpoint and the name of the vault containing your secrets.
Ensure the Azure service connection has at least Get and List management permissions on the vault
for secrets. You can enable Azure Pipelines to set these permissions by choosing Authorize next to the
vault name. Alternatively, you can set the permissions manually in the Azure portal (an equivalent CLI sketch follows this list):
Open the Settings blade for the vault, choose Access policies , then Add new .
In the Add access policy blade, choose Select principal and select the service principal for your
client account.
In the Add access policy blade, choose Secret permissions and ensure that Get and List are
checked (ticked).
Choose OK to save the changes.
3. In the Variable groups page, choose + Add to select specific secrets from your vault that will be
mapped to this variable group.
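As a rough CLI equivalent of the manual portal steps above, the vault's access policy can also be set with the Azure CLI; the vault name and service principal ID below are placeholders:

az keyvault set-policy --name MyVaultName --spn <service-principal-app-id> --secret-permissions get list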
Secrets management notes
Only the secret names are mapped to the variable group, not the secret values. The latest version of the
value of each secret is fetched from the vault and used in the pipeline linked to the variable group
during the run.
Any changes made to existing secrets in the key vault, such as a change in the value of a secret, will be
made available automatically to all the pipelines in which the variable group is used.
When new secrets are added to the vault, or a secret is deleted from the vault, the associated variable
groups are not updated automatically. The secrets included in the variable group must be explicitly
updated in order for the pipelines using the variable group to execute correctly.
Azure Key Vault supports storing and managing cryptographic keys and secrets in Azure. Currently,
Azure Pipelines variable group integration supports mapping only secrets from the Azure key vault.
Cryptographic keys and certificates are not supported.
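Once the variable group is linked to the vault, a YAML pipeline can reference it in the usual way; the group and secret names below are hypothetical, and secret values are masked in the logs:

variables:
- group: my-keyvault-variable-group

steps:
- script: echo "Connecting with $(MySecretName)"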
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
Use the Secure Files library to store files such as signing certificates, Apple Provisioning Profiles, Android
Keystore files, and SSH keys on the server without having to commit them to your source repository. Secure files
are defined and managed in the Library tab in Azure Pipelines.
The contents of the secure files are encrypted and can only be used during the build or release pipeline by
referencing them from a task. The secure files are available across multiple build and release pipelines in the
project based on the security settings. Secure files follow the library security model.
There's a size limit of 10 MB for each secure file.
FAQ
How can I consume secure files in a Build or Release Pipeline?
Use the Download Secure File Utility task to consume secure files within a Build or Release Pipeline.
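For example, a YAML sketch along these lines downloads a secure file and exposes its local path; the file name used here is hypothetical:

- task: DownloadSecureFile@1
  name: signingCert
  inputs:
    secureFile: 'my-signing-cert.p12'
- script: echo "The certificate was downloaded to $(signingCert.secureFilePath)"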
How can I create a custom task using secure files?
You can build your own tasks that use secure files by using inputs with type secureFile in the task.json . Learn
how to build a custom task.
The Install Apple Provisioning Profile task is a simple example of a task using a secure file. See the reference
documentation and source code.
To handle secure files during build or release, you can refer to the common module available here.
My task can't access the secure files. What do I do?
Make sure your agent is running version 2.116.0 or higher. See Agent version and upgrades.
Why do I see an Invalid Resource error when downloading a secure file with Azure DevOps Server/TFS on-
premises?
Make sure IIS Basic Authentication is disabled on the TFS or Azure DevOps Server.
How do I authorize a secure file for use in all pipelines?
1. In Azure Pipelines , select the Library tab.
2. Select the Secure files tab at the top.
3. Select the secure file you want to authorize.
4. In the details view under Properties , select Authorize for use in all pipelines , and then select Save .
Service connections
11/2/2020 • 27 minutes to read
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called
definitions, runs are called builds, service connections are called service endpoints, stages are called environments,
and jobs are called phases.
You will typically need to connect to external and remote services to execute tasks in a job. For example,
you may need to connect to your Microsoft Azure subscription, to a different build server or file server, to
an online continuous integration environment, or to services you install on remote computers.
You can define service connections in Azure Pipelines or Team Foundation Server (TFS) that are available
for use in all your tasks. For example, you can create a service connection for your Azure subscription and
use this service connection name in an Azure Web Site Deployment task in a release pipeline.
You define and manage service connections from the Admin settings of your project:
Azure DevOps: https://dev.azure.com/{organization}/{project}/adminservices
TFS: https://{tfsserver}/{collection}/{project}/_admin/_services
5. To update the service connection, select Edit at the top-right corner of the page.
6. Approvals and checks , Security , and Delete are part of the more options menu at the top-right corner.
3. To manage security for a service connection, open the service connection, go to the more options menu at the top-right corner, and choose Security .
A service connection is a critical resource for various workflows in Azure DevOps, such as Classic Build and Release pipelines, YAML pipelines, and Key Vault variable groups. Based on usage patterns, service connection security is divided into three categories in the new service connections UI.
User permissions
Pipeline permissions
Project permissions
User permissions
You can control who can create, view, use and manage the service connection with user permissions. You
have four roles i.e. Creator, Reader, User and Administrator roles to manage each of these actions. In the
service connections tab, you can set the hub level permissions which are inherited and you can override
the roles for each service connection.
ROLE ON A SERVICE CONNECTION | PURPOSE
Previously, two special groups, Endpoint Creators and Endpoint Administrators, were used to control who could create and manage service connections. As part of the new service connections UI, we are moving to a pure RBAC model based on roles. For backward compatibility, in existing projects the Endpoint Administrators group is added to the Administrator role and the Endpoint Creators group is assigned the Creator role, which ensures there is no change in behavior for existing service connections.
NOTE
This change is applicable only in Azure DevOps Services where new UI is available. Azure DevOps Server 2019 and
older versions still follow the previous security model.
Along with the new service connections UI, we are introducing sharing of service connections across projects . With this feature, service connections become an organization-level object, scoped to the current project by default. In the User permissions section, you can see Project and Organization level permissions. The capabilities of the administrator role are split between the two levels.
Project level permissions
The project-level permissions are the user permissions with Reader, User, Creator, and Administrator roles, as explained above, within the project scope. Inheritance applies, and you can set the roles at the hub level as well as for each service connection.
The project-level administrator has limited administrative capabilities as below:
A project-level administrator can manage other users and roles at project scope.
A project-level administrator can rename a service connection, update description and enable/disable
"Allow pipeline access" flag.
A project-level administrator can delete a service connection, which removes it from the project.
The user who created the service connection is automatically added to the project-level Administrator role for that service connection. Users and groups assigned the Administrator role at the hub level are inherited if inheritance is turned on.
Organization level permissions
Organization-level permissions are introduced along with the cross-project sharing feature. Any permissions set at this level are reflected across all the projects where the service connection is shared. There is no inheritance for organization-level permissions. Currently, only the Administrator role exists at the organization level.
The organization-level administrator has all the administrative capabilities that include:
An organization-level administrator can manage organization level users.
An organization-level administrator can edit all the fields of a service connection.
An organization-level administrator can share/un-share a service connection with other projects.
The user who created the service connection is automatically added to the organization-level Administrator role for that service connection. For backward compatibility, in all existing service connections the connection administrators are made organization-level administrators, which ensures there is no change in behavior.
Pipeline permissions
Pipeline permissions control which YAML pipelines are authorized to use this service connection. This is linked to the 'Allow pipeline access' checkbox in the service connection creation dialog.
You can open access for all pipelines to consume this service connection from the more options menu at the top-right corner of the Pipeline permissions section in the security tab of a service connection.
Alternatively, you can lock down the service connection and allow only selected YAML pipelines to consume it. If any other YAML pipeline refers to this service connection, an authorization request is raised, which has to be approved by the connection administrators.
Project permissions - Cross project sharing of service connections
Only the organization-level administrators from User permissions can share the service connection
with other projects.
The user who is sharing the service connection with a project should have at least the Create service connection permission in the target project.
The user who shares the service connection with a project becomes the project-level administrator for that service connection, and project-level inheritance is turned on in the target project.
The service connection name is appended with the project name, and it can be renamed in the target project scope.
An organization-level administrator can unshare a service connection from any shared project.
NOTE
The sharing feature is still in preview and is not yet rolled out. If you want this feature enabled, you can reach out to us. The Project permissions feature depends on the new service connections UI; once this feature is enabled, the old service connections UI is no longer usable.
NOTE
A service connection cannot be specified by a variable.
PARAMETER | DESCRIPTION
Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.
If your subscription is defined in an Azure Government Cloud, ensure your application meets the
relevant compliance requirements before you configure a service connection.
Azure Resource Manager service connection
Defines and secures a connection to a Microsoft Azure subscription using Service Principal Authentication
(SPA) or an Azure-Managed Service Identity. The dialog offers two main modes:
Automated subscription detection . In this mode, Azure Pipelines and TFS will attempt to query
Azure for all of the subscriptions and instances to which you have access using the credentials you
are currently logged on with in Azure Pipelines or TFS (including Microsoft accounts and School or
Work accounts). If no subscriptions are shown, or you see subscriptions other than the one you want to use,
you must sign out of Azure Pipelines or TFS and sign in again using the appropriate account
credentials.
Manual subscription pipeline . In this mode, you must specify the service principal you want to
use to connect to Azure. The service principal specifies the resources and the access levels that will
be available over the connection. Use this approach when you need to connect to an Azure account
using different credentials from those you are currently logged on with in Azure Pipelines or TFS.
This is also a useful way to maximize security and limit access.
For more information, see Connect to Microsoft Azure
NOTE
If you don't see any Azure subscriptions or instances, or you have problems validating the connection, see
Troubleshoot Azure Resource Manager service connections.
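As a brief illustration of how a connection name is consumed from YAML, a step of roughly this shape passes it through the azureSubscription input; the task choice and the connection name here are assumptions for the example:

- task: AzureCLI@2
  inputs:
    azureSubscription: 'My Azure Service Connection'
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: az account show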
PARAMETER | DESCRIPTION
Connection Name | Required. The name you will use to refer to this service connection in task properties. This is not the name of your Azure account or subscription. If you are using YAML, use this name as the azureSubscription or the equivalent subscription name value in the script.
Service Bus ConnectionString | The URL of your Azure Service Bus instance. More information.
Service Bus Queue Name | The name of an existing Azure Service Bus queue.
PARAMETER | DESCRIPTION
Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.
PARAMETER | DESCRIPTION
Connection Name | Required. The name you will use to refer to this service connection in task properties. This is not the name of your Azure account or subscription. If you are using YAML, use this name as the azureSubscription or the equivalent subscription name value in the script.
Node Name (Username) | Required. The name of the node to connect to. Typically this is your username.
Client Key | Required. The key specified in the Chef .pem file.
PARAMETER | DESCRIPTION
Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.
Ensure you protect your connection to the Docker host. Learn more.
PARAMETER | DESCRIPTION
Connection Name | Required. The name you will use to refer to this service connection in task inputs.
Azure Container Registry | Required. The Azure Container Registry to be used for creation of service connection.
PARAMETER | DESCRIPTION
Connection Name Required. The name you will use to refer to this service
connection in task inputs.
PARAMETER | DESCRIPTION
Connection Name | Required. The name you will use to refer to this service connection in task properties. This is not the name of your Azure account or subscription. If you are using YAML, use this name as the azureSubscription or the equivalent subscription name value in the script.
Password/Token Key | Required. The password or access token for the specified username.
Connection Name | Required. The name you will use to refer to this service connection in task properties. This is not the name of your Azure account or subscription. If you are using YAML, use this name as the azureSubscription or the equivalent subscription name value in the script.
Password/Token Key | Required. The password or access token for the specified username.
PARAMETER | DESCRIPTION
Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.
NOTE
If you select Grant authorization for the Choose authorization option, the dialog shows an Authorize
button that opens the GitHub login page. If you select Personal access token you must obtain a suitable token
and paste it into the Token textbox. The dialog shows the recommended scopes for the token: repo, user,
admin:repo_hook . See this page on GitHub for information about obtaining an access token. Then register your
GitHub account in your profile:
Open your profile from your organization name at the right of the Azure Pipelines page heading.
At the top of the left column, under DETAILS , choose Security .
In the Security tab, in the right column, choose Personal access tokens .
Choose the Add link and enter the information required to create the token.
Also see Artifact sources.
Connection Name | Required. The name you will use to refer to this service connection in task properties. This is not the name of your Azure account or subscription. If you are using YAML, use this name as the azureSubscription or the equivalent subscription name value in the script.
Accept untrusted SSL certificates | Set this option to allow clients to accept a self-signed certificate instead of installing the certificate in the TFS service role or the computers hosting the agent.
GitHub Enterprise Server configuration URL | The URL is fetched from OAuth configuration.
NOTE
If you select Personal access token you must obtain a suitable token and paste it into the Token textbox. The
dialog shows the recommended scopes for the token: repo, user, admin:repo_hook . See this page on GitHub
for information about obtaining an access token. Then register your GitHub account in your profile:
Open your profile from your account name at the right of the Azure Pipelines page heading.
At the top of the left column, under DETAILS , choose Security .
In the Security tab, in the right column, choose Personal access tokens .
Choose the Add link and enter the information required to create the token.
PARAMETER | DESCRIPTION
Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.
PARAMETER | DESCRIPTION
Accept untrusted SSL certificates | Set this option to allow clients to accept a self-signed certificate instead of installing the certificate in the TFS service role or the computers hosting the agent.
Also see Azure Pipelines Integration with Jenkins and Artifact sources.
PARAMETER | DESCRIPTION
Connection Name Required. The name you will use to refer to this service
connection in task inputs.
For an RBAC-enabled cluster, a ServiceAccount is created in the chosen namespace along with a RoleBinding object, so that the created ServiceAccount can perform actions only on the chosen namespace.
For an RBAC-disabled cluster, a ServiceAccount is created in the chosen namespace, but the created ServiceAccount has cluster-wide privileges (across namespaces).
NOTE
This option lists all the subscriptions the service connection creator has access to across different Azure tenants. If
you are unable to see subscriptions from other Azure tenants, please check your AAD permissions in those tenants.
PARAMETER | DESCRIPTION
Connection Name Required. The name you will use to refer to this service
connection in task inputs.
To fetch the Secret object required to connect and authenticate with the cluster, the following sequence of commands needs to be run.
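For example, a command of roughly this shape performs the first step; the service account name and namespace are placeholders:

kubectl get serviceaccounts <service-account-name> -n <namespace> -o 'jsonpath={.secrets[*].name}'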
The above command fetches the name of the secret associated with a ServiceAccount. Substitute its output into the following command to fetch the Secret object:
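A sketch of that second command, with the secret name taken from the previous output, looks like this:

kubectl get secret <service-account-secret-name> -n <namespace> -o yaml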
Copy and paste the Secret object fetched in YAML form into the Secret text-field.
NOTE
When using the service account option, ensure that a RoleBinding exists, which grants permissions in the edit
ClusterRole to the desired service account. This is needed so that the service account can be used by Azure
Pipelines for creating objects in the chosen namespace.
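If such a RoleBinding does not exist yet, one way to create it is with kubectl; the binding name, namespace, and service account below are placeholders:

kubectl create rolebinding <binding-name> --clusterrole=edit --serviceaccount=<namespace>:<service-account-name> --namespace=<namespace>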
Kubeconfig option
PARAMETER | DESCRIPTION
Connection Name Required. The name you will use to refer to this service
connection in task inputs.
PARAMETER | DESCRIPTION
Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.
PARAMETER | DESCRIPTION
Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.
PARAMETER | DESCRIPTION
Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.
PARAMETER | DESCRIPTION
Connection Name | Required. The name you will use to refer to this service connection in task properties. This is not the name of your Azure account or subscription. If you are using YAML, use this name as the azureSubscription or the equivalent subscription name value in the script.
Python repository url for download | Required. The URL of the Python repository.
PARAMETER | DESCRIPTION
Connection Name | Required. The name you will use to refer to this service connection in task properties. This is not the name of your Azure account or subscription. If you are using YAML, use this name as the azureSubscription or the equivalent subscription name value in the script.
Python repository url for upload | Required. The URL of the Python repository.
PARAMETER | DESCRIPTION
Connection Name | Required. The name you will use to refer to this service connection in task properties. This is not the name of your Azure account or subscription. If you are using YAML, use this name as the azureSubscription or the equivalent subscription name value in the script.
Server Certificate Thumbprint | Required when connection type is Certificate based or Azure Active Directory.
PARAMETER | DESCRIPTION
Connection Name | Required. The name you will use to refer to this service connection in task properties. This is not the name of your Azure account or subscription. If you are using YAML, use this name as the azureSubscription or the equivalent subscription name value in the script.
Host name | Required. The name of the remote host machine or the IP address.
Port number | Required. The port number of the remote host machine to which you want to connect. The default is port 22.
PARAMETER | DESCRIPTION
Private key | The entire contents of the private key file if using this type of authentication.
PARAMETER | DESCRIPTION
Connection Name | Required. The name you will use to refer to this service connection in task properties. This is not the name of your Azure account or subscription. If you are using YAML, use this name as the azureSubscription or the equivalent subscription name value in the script.
Accept untrusted SSL certificates | Set this option to allow the client to accept self-signed certificates installed on the agent computer(s).
PARAMETER | DESCRIPTION
Connection Name | Required. The name you will use to refer to this service connection in task properties. This is not the name of your Azure account or subscription. If you are using YAML, use this name as the azureSubscription or the equivalent subscription name value in the script.
Connection URL | Required. The URL of the TFS or Azure Pipelines instance.
PARAMETER | DESCRIPTION
Personal Access Token | Required for Token Based authentication (TFS 2017 and newer and Azure Pipelines only). The token to use to authenticate with the service. Learn more.
PARAMETER | DESCRIPTION
Connection Name Required. The name you will use to refer to this service
connection in task properties. This is not the name of
your Azure account or subscription. If you are using
YAML, use this name as the azureSubscription or the
equivalent subscription name value in the script.
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are
called definitions, runs are called builds, service connections are called service endpoints, stages are called
environments, and jobs are called phases.
To build your code or deploy your software using Azure Pipelines, you need at least one agent. As
you add more code and people, you'll eventually need more.
When your pipeline runs, the system begins one or more jobs. An agent is computing
infrastructure with installed agent software that runs one job at a time.
Jobs can be run directly on the host machine of the agent or in a container.
Microsoft-hosted agents
If your pipelines are in Azure Pipelines, then you've got a convenient option to run your jobs
using a Microsoft-hosted agent . With Microsoft-hosted agents, maintenance and upgrades are
taken care of for you. Each time you run a pipeline, you get a fresh virtual machine. The virtual
machine is discarded after one use. Microsoft-hosted agents can run jobs directly on the VM or in
a container.
Azure Pipelines provides a pre-defined agent pool named Azure Pipelines with Microsoft-
hosted agents.
For many teams this is the simplest way to run your jobs. You can try it first and see if it works for
your build or deployment. If not, you can use a self-hosted agent.
TIP
You can try a Microsoft-hosted agent for no charge.
Self-hosted agents
An agent that you set up and manage on your own to run jobs is a self-hosted agent . You can
use self-hosted agents in Azure Pipelines or Team Foundation Server (TFS). Self-hosted agents
give you more control to install dependent software needed for your builds and deployments.
Also, machine-level caches and configuration persist from run to run, which can boost speed.
TIP
Before you install a self-hosted agent you might want to see if a Microsoft-hosted agent pool will work
for you. In many cases this is the simplest way to get going. Give it a try.
You can install the agent on Linux, macOS, or Windows machines. You can also install an agent on
a Docker container. For more information about installing a self-hosted agent, see:
macOS agent
Linux agent (x64, ARM, RHEL6)
Windows agent (x64, x86)
Docker agent
You can install the agent on Linux, macOS, or Windows machines. For more information about
installing a self-hosted agent, see:
macOS agent
Red Hat agent
Ubuntu 14.04 agent
Ubuntu 16.04 agent
Windows agent v1
NOTE
On macOS, you need to clear the special attribute on the download archive to prevent Gatekeeper
protection from displaying for each assembly in the tar file when ./config.sh is run. The following
command clears the extended attribute on the file:
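For example, a command of roughly this shape clears the attribute; the archive file name varies with the agent version you downloaded:

xattr -c vsts-agent-osx-x64-V.v.v.tar.gz   # replace V.v.v with the version of the downloaded archive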
After you've installed the agent on a machine, you can install any other software on that machine
as required by your jobs.
Parallel jobs
You can use a parallel job in Azure Pipelines to run a single job at a time in your organization. In
Azure Pipelines, you can run parallel jobs on Microsoft-hosted infrastructure or on your own
(self-hosted) infrastructure.
Microsoft provides a free tier of service by default in every organization that includes at least one
parallel job. Depending on the number of concurrent pipelines you need to run, you might need
more parallel jobs to use multiple Microsoft-hosted or self-hosted agents at the same time. For
more information on parallel jobs and different free tiers of service, see Parallel jobs in Azure
Pipelines.
You might need more parallel jobs to use multiple agents at the same time:
Parallel jobs in TFS
IMPORTANT
Starting with Azure DevOps Server 2019, you do not have to pay for self-hosted concurrent jobs in
releases. You are only limited by the number of agents that you have.
Capabilities
Every self-hosted agent has a set of capabilities that indicate what it can do. Capabilities are
name-value pairs that are either automatically discovered by the agent software, in which case
they are called system capabilities , or those that you define, in which case they are called user
capabilities .
The agent software automatically determines various system capabilities such as the name of the
machine, type of operating system, and versions of certain software installed on the machine.
Also, environment variables defined in the machine automatically appear in the list of system
capabilities.
When you author a pipeline you specify certain demands of the agent. The system sends the job
only to agents that have capabilities matching the demands specified in the pipeline. As a result,
agent capabilities allow you to direct jobs to specific agents.
NOTE
Demands and capabilities are designed for use with self-hosted agents so that jobs can be matched with
an agent that meets the requirements of the job. When using Microsoft-hosted agents, you select an
image for the agent that matches the requirements of the job, so although it is possible to add
capabilities to a Microsoft-hosted agent, you don't need to use capabilities with Microsoft-hosted agents.
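For example, a YAML sketch along these lines routes a job to a self-hosted pool whose agents satisfy certain demands; the pool name and capability names here are illustrative:

pool:
  name: MyPool
  demands:
  - npm
  - Agent.OS -equals Windows_NT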
1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
2. Choose Manage pools .
1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
TIP
After you install new software on a self-hosted agent, you must restart the agent for the new capability
to show up. For more information, see Restart Windows agent, Restart Linux agent, and Restart Mac
agent.
Communication
Communication with Azure Pipelines
Communication with TFS
The agent communicates with Azure Pipelines or TFS to determine which job it needs to run, and
to report the logs and job status. This communication is always initiated by the agent. All the
messages from the agent to Azure Pipelines or TFS happen over HTTP or HTTPS, depending on
how you configure the agent. This pull model allows the agent to be configured in different
topologies as shown below.
Here is a common communication pattern between the agent and Azure Pipelines or TFS.
1. The user registers an agent with Azure Pipelines or TFS by adding it to an agent pool. You
need to be an agent pool administrator to register an agent in that agent pool. The identity
of agent pool administrator is needed only at the time of registration and is not persisted
on the agent, nor is it used in any further communication between the agent and Azure
Pipelines or TFS. Once the registration is complete, the agent downloads a listener OAuth
token and uses it to listen to the job queue.
2. The agent listens to see if a new job request has been posted for it in the job queue in
Azure Pipelines/TFS using an HTTP long poll. When a job is available, the agent downloads
the job as well as a job-specific OAuth token. This token is generated by Azure
Pipelines/TFS for the scoped identity specified in the pipeline. That token is short lived and
is used by the agent to access resources (for example, source code) or modify resources
(for example, upload test results) on Azure Pipelines or TFS within that job.
3. After the job is completed, the agent discards the job-specific OAuth token and goes back
to checking if there is a new job request using the listener OAuth token.
The payload of the messages exchanged between the agent and Azure Pipelines/TFS is secured
using asymmetric encryption. Each agent has a public-private key pair, and the public key is
exchanged with the server during registration. The server uses the public key to encrypt the
payload of the job before sending it to the agent. The agent decrypts the job content using its
private key. This is how secrets stored in pipelines or variable groups are secured as they are
exchanged with the agent.
Here is a common communication pattern between the agent and TFS.
An agent pool administrator joins the agent to an agent pool, and the credentials of the
service account (for Windows) or the saved user name and password (for Linux and
macOS) are used to initiate communication with TFS. The agent uses these credentials to
listen to the job queue.
The agent does not use asymmetric key encryption while communicating with the server.
However, you can use HTTPS to secure the communication between the agent and TFS.
Communication to deploy to target servers
When you use the agent to deploy artifacts to a set of servers, it must have "line of sight"
connectivity to those servers. The Microsoft-hosted agent pools, by default, have connectivity to
Azure websites and servers running in Azure.
NOTE
If your Azure resources are running in an Azure Virtual Network, you can get the Agent IP ranges where
Microsoft-hosted agents are deployed so you can configure the firewall rules for your Azure VNet to
allow access by the agent.
Authentication
To register an agent, you need to be a member of the administrator role in the agent pool. The
identity of agent pool administrator is needed only at the time of registration and is not persisted
on the agent, and is not used in any subsequent communication between the agent and Azure
Pipelines or TFS. In addition, you must be a local administrator on the server in order to configure
the agent.
Your agent can authenticate to Azure Pipelines using the following method:
Your agent can authenticate to Azure DevOps Server or TFS using one of the following methods:
Personal Access Token (PAT ):
Generate and use a PAT to connect an agent with Azure Pipelines or TFS 2017 and newer. PAT is
the only scheme that works with Azure Pipelines. The PAT must have Agent Pools (read,
manage) scope (for a deployment group agent, the PAT must have Deployment group (read,
manage) scope), and while a single PAT can be used for registering multiple agents, the PAT is
used only at the time of registering the agent, and not for subsequent communication. For more
information, see the Authenticate with a personal access token (PAT) section in the Windows,
Linux, or macOS self-hosted agents articles.
To use a PAT with TFS, your server must be configured with HTTPS. See Web site settings and
security.
Integrated
Connect a Windows agent to TFS using the credentials of the signed-in user through a Windows
authentication scheme such as NTLM or Kerberos.
To use this method of authentication, you must first configure your TFS server.
1. Sign into the machine where you are running TFS.
2. Start Internet Information Services (IIS) Manager. Select your TFS site and make sure
Windows Authentication is enabled with a valid provider such as NTLM or Kerberos.
Negotiate
Connect to TFS as a user other than the signed-in user through a Windows authentication
scheme such as NTLM or Kerberos.
To use this method of authentication, you must first configure your TFS server.
1. Log on to the machine where you are running TFS.
2. Start Internet Information Services (IIS) Manager. Select your TFS site and make sure
Windows Authentication is enabled with the Negotiate provider and with another method
such as NTLM or Kerberos.
Alternate
Connect to TFS using Basic authentication. To use this method, you must first configure HTTPS on
TFS.
To use this method of authentication, you must configure your TFS server as follows:
1. Sign in to the machine where you are running TFS.
2. Configure basic authentication. See Using tfx against Team Foundation Server 2015
using Basic Authentication.
NOTE
There are security risks when you enable automatic logon or disable the screen saver because
you enable other users to walk up to the computer and use the account that automatically logs
on. If you configure the agent to run in this way, you must ensure the computer is physically
protected; for example, located in a secure facility. If you use Remote Desktop to access the
computer on which an agent is running with auto-logon, simply closing the Remote Desktop
causes the computer to be locked and any UI tests that run on this agent may fail. To avoid this,
use the tscon command to disconnect from Remote Desktop. For example:
%windir%\System32\tscon.exe 1 /dest:console
Agent account
Whether you run an agent as a service or interactively, you can choose which computer account
you use to run the agent. (Note that this is different from the credentials that you use when you
register the agent with Azure Pipelines or TFS.) The choice of agent account depends solely on the
needs of the tasks running in your build and deployment jobs.
For example, to run tasks that use Windows authentication to access an external service, you
must run the agent using an account that has access to that service. However, if you are running
UI tests such as Selenium or Coded UI tests that require a browser, the browser is launched in the
context of the agent account.
On Windows, you should consider using a service account such as Network Service or Local
Service. These accounts have restricted permissions and their passwords don't expire, meaning
the agent requires less management over time.
You can also update agents individually by choosing Update agent from the ... menu.
4. An update request is queued for each agent in the pool and runs when any currently
running jobs complete. Upgrading typically only takes a few moments - long enough to
download the latest version of the agent software (approximately 200 MB), unzip it, and
restart the agent with the new version. You can monitor the status of your agents on the
Agents tab.
We update the agent software with every update in Azure DevOps Server and TFS. We indicate
the agent version in the format {major}.{minor} . For instance, if the agent version is 2.1 , then
the major version is 2 and the minor version is 1.
When your Azure DevOps Server or TFS server has a newer version of the agent, and that newer
agent is only different in minor version, it can usually be automatically upgraded. An upgrade is
requested when a platform feature or one of the tasks used in the pipeline requires a newer
version of the agent. Starting with Azure DevOps Server 2019, you don't have to wait for a new
server release. You can upload a new version of the agent to your application tier, and that
version will be offered as an upgrade.
If you run the agent interactively, or if there is a newer major version of the agent available, then
you may have to manually upgrade the agents. You can do this easily from the Agent pools tab
under your project collection. Your pipelines won't run until they can target a compatible agent.
You can view the version of an agent by navigating to Agent pools and selecting the
Capabilities tab for the desired agent, as described in View agent details.
NOTE
For servers with no internet access, manually copy the agent zip file to
C:\ProgramData\Microsoft\Azure DevOps\Agents\ to use as a local file.
FAQ
How do I make sure I have the latest v2 agent version?
1. Navigate to the Agent pools tab:
1. Choose Azure DevOps , Organization settings .
1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
2. Choose Manage pools .
From the Agent pools tab, select the desired agent, and choose the Capabilities tab.
5. Look for the Agent.Version capability. You can check this value against the latest published
agent version. See Azure Pipelines Agent and check the page for the highest version
number listed.
6. Each agent automatically updates itself when it runs a task that requires a newer version of
the agent. If you want to manually update some agents, right-click the pool, and select
Update all agents .
Can I update my v2 agents that are part of an Azure DevOps Server pool?
Yes. Beginning with Azure DevOps Server 2019, you can configure your server to look for the
agent package files on a local disk. This configuration will override the default version that came
with the server at the time of its release. This scenario also applies when the server doesn't have
access to the internet.
1. From a computer with Internet access, download the latest version of the agent package
files (in .zip or .tar.gz form) from the Azure Pipelines Agent GitHub Releases page.
2. Transfer the downloaded package files to each Azure DevOps Server Application Tier by
using a method of your choice (such as USB drive, Network transfer, and so on). Place the
agent files under the %ProgramData%\Microsoft\Azure DevOps\Agents folder.
3. You're all set! Your Azure DevOps Server will now use the local files whenever the agents
are updated. Each agent automatically updates itself when it runs a task that requires a
newer version of the agent. But if you want to manually update some agents, right-click
the pool, and then choose Update all agents .
Do self-hosted agents have any performance advantages over Microsoft-hosted agents?
In many cases, yes. Specifically:
If you use a self-hosted agent, you can run incremental builds. For example, if you define a
pipeline that does not clean the repo and does not perform a clean build, your builds will
typically run faster. When you use a Microsoft-hosted agent, you don't get these benefits
because the agent is destroyed after the build or release pipeline is completed.
A Microsoft-hosted agent can take longer to start your build. While it often takes just a few
seconds for your job to be assigned to a Microsoft-hosted agent, it can sometimes take
several minutes for an agent to be allocated depending on the load on our system.
Can I install multiple self-hosted agents on the same machine?
Yes. This approach can work well for agents that run jobs that don't consume many shared
resources. For example, you could try it for agents that run releases that mostly orchestrate
deployments and don't do much work on the agent itself.
You might find that in other cases you don't gain much efficiency by running multiple agents on
the same machine. For example, it might not be worthwhile for agents that run builds that
consume much disk and I/O resources.
You might also run into problems if parallel build jobs are using the same singleton tool
deployment, such as npm packages. For example, one build might update a dependency while
another build is in the middle of using it, which could cause unreliable results and errors.
Learn more
For more information about agents, see the following modules from the Build applications with
Azure DevOps learning path.
Choose a Microsoft-hosted or self-hosted build agent
Host your own build agent in Azure Pipelines
Agent pools
11/2/2020 • 18 minutes to read
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called
definitions, runs are called builds, service connections are called service endpoints, stages are called environments,
and jobs are called phases.
Instead of managing each agent individually, you organize agents into agent pools . In TFS, pools are
scoped to the entire server; so you can share an agent pool across project collections and projects.
An agent queue provides access to an agent pool within a project. When you create a build or release
pipeline, you specify which queue it uses. Queues are scoped to your project in TFS 2017 and newer, so
you can only use them across build and release pipelines within a project.
To share an agent pool with multiple projects, in each of those projects, you create an agent queue
pointing to the same agent pool. While multiple queues across projects can use the same agent pool,
multiple queues within a project cannot use the same agent pool. Also, each agent queue can use only one
agent pool.
1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
You create and manage agent queues from the agent queues tab in project settings.
If you are a project team member, you create and manage agent queues from the agent pools tab in
project settings.
Navigate to your project and choose Project settings , Agent pools .
Navigate to your project and choose Settings (gear icon) > Agent Queues .
3. Select the desired project collection, and choose View the collection administration page .
a. Select Agent Queues (For TFS 2015, Select Build and then Queues ).
By default, all contributors in a project are members of the User role on hosted pools. This allows every
contributor in a project to author and run pipelines using Microsoft-hosted agents.
Choosing a pool and agent in your pipeline
YAML
Classic
To choose a Microsoft-hosted agent from the Azure Pipelines pool in your Azure DevOps Services YAML
pipeline, specify the name of the image, using the YAML VM Image Label from this table.
pool:
  vmImage: ubuntu-16.04
pool: MyPool
1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
You create and manage agent queues from the agent queues tab in project settings.
If you are a project team member, you create and manage agent queues from the agent pools tab in
project settings.
Navigate to your project and choose Project settings , Agent pools .
Navigate to your project and choose Settings (gear icon) > Agent Queues .
3. Select the desired project collection, and choose View the collection administration page .
a. Select Agent Queues (For TFS 2015, Select Build and then Queues ).
Pools are used to run jobs. Learn about specifying pools for jobs.
If you've got a lot of self-hosted agents intended for different teams or purposes, you might want to
create additional pools as explained below.
ROLE ON AN AGENT POOL IN ORGANIZATION SETTINGS | PURPOSE
Reader | Members of this role can view the agent pool as well as agents. You typically use this to add operators that are responsible for monitoring the agents and their health.
Service Account | Members of this role can use the organization agent pool to create a project agent pool in a project. If you follow the guidelines above for creating new project agent pools, you typically do not have to add any members here.
The All agent pools node in the Agent Pools tab is used to control the security of all organization agent
pools. Role memberships for individual organization agent pools are automatically inherited from those
of the 'All agent pools' node. When using TFS or Azure DevOps Server, by default, TFS and Azure DevOps
Server administrators are also administrators of the 'All agent pools' node.
Roles are also defined on each project agent pool, and memberships in these roles govern what
operations you can perform on an agent pool at the project level.
Reader | Members of this role can view the project agent pool. You typically use this to add operators that are responsible for monitoring the build and deployment jobs in that project agent pool.
User | Members of this role can use the project agent pool when authoring pipelines.
The All agent pools node in the Agent pools tab is used to control the security of all project agent pools
in a project. Role memberships for individual project agent pools are automatically inherited from those
of the 'All agent pools' node. By default, the following groups are added to the Administrator role of 'All
agent pools': Build Administrators, Release Administrators, Project Administrators.
The Security action in the Agent pools tab is used to control the security of all project agent pools in a
project. Role memberships for individual project agent pools are automatically inherited from what you
define here. By default, the following groups are added to the Administrator role of 'All agent pools':
Build Administrators, Release Administrators, Project Administrators.
TFS 2015
In TFS 2015, special groups are defined on agent pools, and membership in these groups governs what
operations you can perform.
Members of Agent Pool Administrators can register new agents in the pool and add additional users
as administrators or service accounts.
Add people to the Agent Pool Administrators group to grant them permission to manage all the agent
pools. This enables people to create new pools and modify all existing pools. Members of Team
Foundation Administrators group can also perform all these operations.
Users in the Agent Pool Service Accounts group have permission to listen to the message queue for
the specific pool to receive work. In most cases you should not have to manage members of this group.
The agent registration process takes care of it for you. The service account you specify for the agent
(commonly Network Service) is automatically added when you register the agent.
FAQ
If I don't schedule a maintenance window, when will the agents run maintenance?
If no window is scheduled, then the agents in that pool will not run the maintenance job.
What is a maintenance job?
You can configure agent pools to periodically clean up stale working directories and repositories. This
should reduce the potential for the agents to run out of disk space. Maintenance jobs are configured at
the project collection or organization level in agent pool settings.
To configure maintenance job settings:
1. Choose Azure DevOps , Organization settings .
1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
2. Choose Manage pools .
Choose the desired pool and choose Settings to configure maintenance job settings for that agent pool.
IMPORTANT
You must have the Manage build queues permission to configure maintenance job settings. If you don't see the
Settings tab or the Maintenance History tab, you don't have that permission, which is granted by default to
the Administrator role. For more information, see Security of agent pools.
Configure your desired settings and choose Save .
Select Maintenance History to see the maintenance job history for the current agent pool. You can
download and review logs to see the cleaning steps and actions taken.
The maintenance is done per agent, not per machine; so if you have multiple agents on a single machine,
you may still run into disk space issues.
I'm trying to create a project agent pool that uses an existing organization agent pool, but the controls
are grayed out. Why?
On the 'Create a project agent pool' dialog box, you can't use an existing organization agent pool if it is
already referenced by another project agent pool. Each organization agent pool can be referenced by
only one project agent pool within a given project collection.
I can't select a Microsoft-hosted pool and I can't queue my build. How do I fix this?
Ask the owner of your Azure DevOps organization to grant you permission to use the pool. See Security
of agent pools.
I need more hosted build resources. What can I do?
A: The Azure Pipelines pool provides all Azure DevOps organizations with cloud-hosted build agents and
free build minutes each month. If you need more Microsoft-hosted build resources, or need to run more
jobs in parallel, then you can either:
Host your own agents on infrastructure that you manage.
Buy additional parallel jobs.
Microsoft-hosted agents
11/2/2020 • 18 minutes to read
Azure Pipelines
Microsoft-hosted agents are only available with Azure DevOps Services, which is hosted in the cloud.
You cannot use Microsoft-hosted agents or the Azure Pipelines agent pool with on-premises TFS or
Azure DevOps Server. With these on-premises versions, you must use self-hosted agents.
IMPORTANT
To view the content available for your platform, make sure that you select the correct version of this article from
the version selector which is located above the table of contents. Feature support differs depending on whether
you are working from Azure DevOps Services or an on-premises version of Azure DevOps Server, renamed from
Team Foundation Server (TFS).
To learn which on-premises version you are using, see What platform/version am I using?
If your pipelines are in Azure Pipelines, then you've got a convenient option to run your jobs using a
Microsoft-hosted agent . With Microsoft-hosted agents, maintenance and upgrades are taken care of
for you. Each time you run a pipeline, you get a fresh virtual machine. The virtual machine is discarded
after one use. Microsoft-hosted agents can run jobs directly on the VM or in a container.
Azure Pipelines provides a pre-defined agent pool named Azure Pipelines with Microsoft-hosted
agents.
For many teams this is the simplest way to run your jobs. You can try it first and see if it works for your
build or deployment. If not, you can use a self-hosted agent.
TIP
You can try a Microsoft-hosted agent for no charge.
Software
The Azure Pipelines agent pool offers several virtual machine images to choose from, each including a
broad range of tools and software.
You can see the installed software for each hosted agent by choosing the Included Software link in the
table. When using macOS images, you can manually select from tool versions. See below.
NOTE
In March 2020, we removed the following Azure Pipelines hosted images:
Windows Server 2012R2 with Visual Studio 2015 ( vs2015-win2012r2 )
macOS X High Sierra 10.13 ( macOS-10.13 )
Windows Server Core 1803 - ( win1803 )
NOTE
The Azure Pipelines hosted pool replaces the previous hosted pools that had names that mapped to the
corresponding images. Any jobs you had in the previous hosted pools are automatically redirected to the correct
image in the new Azure Pipelines hosted pool. In some circumstances, you may still see the old pool names, but
behind the scenes the hosted jobs are run using the Azure Pipelines pool. For more information about this
update, see the Single hosted pool entry in the July 1, 2019 - Sprint 154 release notes.
IMPORTANT
To request additional software to be installed on Microsoft-hosted agents, don't create a feedback request on
this document or open a support ticket. Instead, open an issue on our repository, where we manage the scripts
to generate various images.
NOTE
A pool can be specified at multiple levels in a YAML file. If you notice that your pipeline is not
running on the expected image, verify the pool specification at the pipeline, stage, and job
levels.
Hardware
Microsoft-hosted agents that run Windows and Linux images are provisioned on Standard_DS2_v2 Azure
general-purpose virtual machines. These virtual machines are colocated in the same geography as
your Azure DevOps organization.
Agents that run macOS images are provisioned on Mac Pros. These agents always run in the US and Europe,
irrespective of the location of your Azure DevOps organization. If data sovereignty is important to you
and if your organization is not in one of these geographies, then you should not use macOS images.
Learn more.
All of these machines have 10 GB of free disk space available for your pipelines to run. This free space is
consumed when your pipeline checks out source code, downloads packages, pulls Docker images, or
generates intermediate files.
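If you want to see how much of that space remains at a given point in a run, a simple script step can print it. A minimal sketch:
df -h   # show free disk space on the agent at this point in the job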
IMPORTANT
We cannot honor requests to increase disk space on Microsoft-hosted agents, or to provision more powerful
machines. If the specifications of Microsoft-hosted agents do not meet your needs, then you should consider
self-hosted agents or scale set agents.
Networking
In some setups, you may need to know the range of IP addresses where agents are deployed. For
instance, if you need to grant the hosted agents access through a firewall, you may wish to restrict that
access by IP address. Because Azure DevOps uses the Azure global network, IP ranges vary over time.
We publish a weekly JSON file listing IP ranges for Azure datacenters, broken out by region. This file is
updated weekly with new planned IP ranges. The new IP ranges become effective the following week.
We recommend that you check back frequently (at least once every week) to ensure you keep an up-to-
date list. If agent jobs begin to fail, a key first troubleshooting step is to make sure your configuration
matches the latest list of IP addresses. The IP address ranges for the hosted agents are listed in the
weekly file under AzureCloud.<region> , such as AzureCloud.westus for the West US region.
Your hosted agents run in the same Azure geography as your organization. Each geography contains
one or more regions. While your agent may run in the same region as your organization, it is not
guaranteed to do so. To obtain the complete list of possible IP ranges for your agent, you must use the
IP ranges from all of the regions that are contained in your geography. For example, if your organization
is located in the United States geography, you must use the IP ranges for all of the regions in that
geography.
To determine your geography, navigate to
https://dev.azure.com/<your_organization>/_settings/organizationOverview , get your region, and find
the associated geography from the Azure geography table. Once you have identified your geography,
use the IP ranges from the weekly file for all regions in that geography.
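If you only need a quick look at the ranges for a single region, a command-line sketch using the jq utility (an assumption; it is not part of the agent tooling) can pull the same values from a locally downloaded weekly file. The file name below is a placeholder, and the JSON layout matches the C# example later in this section:
jq -r '.values[] | select(.name == "AzureCloud.westus") | .properties.addressPrefixes[]' ServiceTags_Public_20200504.json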
IMPORTANT
You cannot use private connections such as ExpressRoute or VPN to connect Microsoft-hosted agents to your
corporate network. The traffic between Microsoft-hosted agents and your servers will be over public network.
NOTE
Since there is no API in the Azure Management Libraries for .NET to list the regions for a geography, you
must list them manually as shown in the following example.
Retrieve the IP addresses for all regions in your geography from the weekly file. If your region is
Brazil South or West Europe , you must include additional IP ranges based on your fallback
geography, as described in the following note.
NOTE
Due to capacity restrictions, some organizations in the Brazil South or West Europe regions may occasionally
see their hosted agents located outside their expected geography. In these cases, in addition to including the IP
ranges as described in the previous section, additional IP ranges must be included for the regions in the capacity
fallback geography.
If your organization is in the Brazil South region, your capacity fallback geography is United States .
If your organization is in the West Europe region, the capacity fallback geography is France .
Our Mac IP ranges are not included in the Azure IPs above, though we are investigating options to publish these
in the future.
Example
In the following example, the hosted agent IP address ranges for an organization in the West US region
are retrieved from the weekly file. Since the West US region is in the United States geography, the IP
addresses for all regions in the United States geography are included. In this example, the IP addresses
are written to the console.
using Newtonsoft.Json.Linq;
using System;
using System.IO;
using System.Linq;

namespace WeeklyFileIPRanges
{
    class Program
    {
        // Path to the locally saved weekly file
        const string weeklyFilePath = @"C:\MyPath\ServiceTags_Public_20200504.json";

        static void Main()
        {
            // Regions that make up the United States geography; adjust if new regions are added
            var usRegions = new[] { "centralus", "eastus", "eastus2", "northcentralus",
                "southcentralus", "westcentralus", "westus", "westus2" };
            // Load the weekly file and read its list of service tag entries
            var values = (JArray)JObject.Parse(File.ReadAllText(weeklyFilePath))["values"];
            foreach (var azureCloudRegion in usRegions.Select(r => $"AzureCloud.{r}"))
            {
                Console.WriteLine(azureCloudRegion);
                var ipList =
                    from v in values
                    where (string)v["name"] == azureCloudRegion
                    select v["properties"]["addressPrefixes"];
                // Print every address prefix listed for the region
                foreach (var prefix in ipList.Children())
                    Console.WriteLine(prefix);
            }
        }
    }
}
Service tags
Microsoft-hosted agents can't be listed by service tags. If you're trying to grant hosted agents access to
your resources, you'll need to follow the IP range allow listing method.
Security
Microsoft-hosted agents run on a secure Azure platform. However, you must be aware of the following
security considerations.
Although Microsoft-hosted agents run on the Azure public network, they are not assigned public IP
addresses. So, external entities cannot target Microsoft-hosted agents.
Microsoft-hosted agents are run in individual VMs, which are re-imaged after each run. Each agent is
dedicated to a single organization, and each VM hosts only a single agent.
There are several benefits to running your pipeline on Microsoft-hosted agents, from a security
perspective. If you run untrusted code in your pipeline, such as contributions from forks, it is safer to
run the pipeline on Microsoft-hosted agents than on self-hosted agents that reside in your corporate
network.
When a pipeline needs to access your corporate resources behind a firewall, you have to allow the IP
address range for the Azure geography. This may increase your exposure, as the range of IP
addresses is rather large and machines in this range can belong to other customers as well. The
best way to prevent this is to avoid the need to access internal resources.
Hosted images do not conform to CIS hardening benchmarks. To use CIS-hardened images, you
must create either self-hosted agents or scale-set agents.
Xamarin
Mono versions associated with Xamarin SDK versions on the Hosted macOS agent can be found here.
This command does not select the Mono version beyond the Xamarin SDK. To manually select a Mono
version, see instructions below.
In case you are using a non-default version of Xcode for building your Xamarin.iOS or Xamarin.Mac
apps, you should additionally execute this command line:
/bin/bash -c "echo '##vso[task.setvariable variable=MD_APPLE_SDK_ROOT;]'$(xcodeRoot);sudo xcode-
select --switch $(xcodeRoot)/Contents/Developer"
Xcode versions on the Hosted macOS agent pool can be found here.
Xcode
If you use the Xcode task included with Azure Pipelines and TFS, you can select a version of Xcode in
that task's properties. Otherwise, to manually set the Xcode version to use on the Hosted macOS
agent pool, before your xcodebuild build task, execute this command line as part of your build,
replacing the Xcode version number 8.3.3 as needed:
/bin/bash -c "sudo xcode-select -s /Applications/Xcode_8.3.3.app/Contents/Developer"
Xcode versions on the Hosted macOS agent pool can be found here.
This command does not work for Xamarin apps. To manually select an Xcode version for building
Xamarin apps, see instructions above.
Mono
To manually select a Mono version to use on the Hosted macOS agent pool, before your Mono build
task, execute this script in each job of your build, replacing the Mono version number 5.4.1 as needed:
SYMLINK=5_4_1
MONOPREFIX=/Library/Frameworks/Mono.framework/Versions/$SYMLINK
echo "##vso[task.setvariable
variable=DYLD_FALLBACK_LIBRARY_PATH;]$MONOPREFIX/lib:/lib:/usr/lib:$DYLD_LIBRARY_FALLBACK_PATH"
echo "##vso[task.setvariable
variable=PKG_CONFIG_PATH;]$MONOPREFIX/lib/pkgconfig:$MONOPREFIX/share/pkgconfig:$PKG_CONFIG_PATH"
echo "##vso[task.setvariable variable=PATH;]$MONOPREFIX/bin:$PATH"
.NET Core
.NET Core 2.2.105 is the default on the VM images, but Mono version 6.0 or greater requires .NET Core
2.2.300 or higher. If you use Mono 6.0 or greater, you must override the .NET Core version by using the
.NET Core Tool Installer task.
Boost
The VM images contain prebuilt Boost libraries with their headers in the directory designated by the
BOOST_ROOT environment variable. To include the Boost headers, add the path $BOOST_ROOT/include
to your compiler's include search paths.
Example of g++ invocation with Boost libraries:
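A minimal sketch of such an invocation (main.cpp is a placeholder source file; if you link against compiled Boost libraries, you would also add -L "$BOOST_ROOT/lib" and the relevant -lboost_<name> flags):
g++ -I "$BOOST_ROOT/include" main.cpp -o main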
Self-hosted Linux agents
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
To run your jobs, you'll need at least one agent. A Linux agent can build and deploy different kinds of apps,
including Java and Android apps. We support Ubuntu, Red Hat, and CentOS.
Check prerequisites
The agent is based on .NET Core 2.1. You can run this agent on several Linux distributions. We support the
following subset of .NET Core supported distributions:
x64
CentOS 7, 6 (see note 1)
Debian 9
Fedora 30, 29
Linux Mint 18, 17
openSUSE 42.3 or later
Oracle Linux 7
Red Hat Enterprise Linux 8, 7, 6 (see note 1)
SUSE Enterprise Linux 12 SP2 or later
Ubuntu 18.04, 16.04
ARM32 (see note 2)
Debian 9
Ubuntu 18.04
NOTE
Note 1: RHEL 6 and CentOS 6 require installing the specialized rhel.6-x64 version of the agent.
NOTE
Note 2: ARM instruction set ARMv7 or above is required. Run uname -a to see your Linux distro's instruction set.
Regardless of your platform, you will need to install Git 2.9.0 or higher. We strongly recommend installing the
latest version of Git.
If you'll be using TFVC, you will also need the Oracle Java JDK 1.6 or higher. (The Oracle JRE and OpenJDK are not
sufficient for this purpose.)
The agent installer knows how to check for other dependencies. You can install those dependencies on supported
Linux platforms by running ./bin/installdependencies.sh in the agent directory.
TFS 2018 RTM and older : The shipped agent is based on CoreCLR 1.0. We recommend that, if able, you
should upgrade to a later agent version (2.125.0 or higher). See Azure Pipelines agent prereqs for more about
what's required to run a newer agent.
If you must stay on the older agent, make sure your machine is prepared with our prerequisites for either of the
supported distributions:
Ubuntu systems
Red Hat/CentOS systems
Subversion
If you're building from a Subversion repo, you must install the Subversion client on the machine.
You should run agent setup manually the first time. After you get a feel for how agents work, or if you want to
automate setting up many agents, consider using unattended config.
Prepare permissions
Decide which user you'll use
As a one-time step, you must register the agent. Someone with permission to administer the agent queue must
complete these steps. The agent will not use this person's credentials in everyday operation, but they're required
to complete registration. Learn more about how agents communicate.
Authenticate with a personal access token (PAT)
1. Sign in with the user account you plan to use in your Team Foundation Server web portal (
https://{your-server}:8080/tfs/ ).
1. Sign in with the user account you plan to use in your Azure DevOps Server web portal (
https://{your-server}/DefaultCollection/ ).
1. Sign in with the user account you plan to use in your Azure DevOps organization (
https://dev.azure.com/{your_organization} ).
2. From your home page, open your profile. Go to your security details.
4. For the scope select Agent Pools (read, manage) and make sure all the other boxes are cleared. If it's a
deployment group agent, for the scope select Deployment group (read, manage) and make sure all
the other boxes are cleared.
Select Show all scopes at the bottom of the Create a new personal access token window
to see the complete list of scopes.
5. Copy the token. You'll use this token when you configure the agent.
Authenticate as a Windows user (TFS 2015 and TFS 2017)
As an alternative, on TFS 2017, you can use either a domain user or a local Windows user on each of your TFS
application tiers.
On TFS 2015, for macOS and Linux only, we recommend that you create a local Windows user on each of your
TFS application tiers and dedicate that user for the purpose of deploying build agents.
Confirm the user has permission
Make sure the user account that you're going to use has permission to register the agent.
Is the user an Azure DevOps organization owner or TFS or Azure DevOps Server administrator? Stop here , you
have permission.
Otherwise:
1. Open a browser and navigate to the Agent pools tab for your Azure Pipelines organization or Azure
DevOps Server or TFS server:
1. Choose Azure DevOps , Organization settings .
1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
2. Choose Manage pools .
2. Click the pool on the left side of the page and then click Security .
3. If the user account you're going to use is not shown, then get an administrator to add it. The administrator
can be an agent pool administrator, an Azure DevOps organization owner, or a TFS or Azure DevOps
Server administrator.
If it's a deployment group agent, the administrator can be a deployment group administrator, an Azure
DevOps organization owner, or a TFS or Azure DevOps Server administrator.
You can add a user to the deployment group administrator role in the Security tab on the Deployment
Groups page in Azure Pipelines .
NOTE
If you see a message like this: Sorry, we couldn't add the identity. Please try a different identity. You probably
followed the above steps for an organization owner or TFS or Azure DevOps Server administrator. You don't need to do
anything; you already have permission to administer the agent queue.
1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
2. Choose Manage pools .
1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
2. Choose Manage pools .
./config.sh
Server URL
Azure Pipelines: https://dev.azure.com/{your-organization}
Authentication type
Azure Pipelines
Choose PAT , and then paste the PAT token you created into the command prompt window.
NOTE
When using PAT as the authentication method, the PAT token is used only for the initial configuration of the agent. Learn
more at Communication with Azure Pipelines or TFS.
IMPORTANT
Make sure your server is configured to support the authentication method you want to use.
When you configure your agent to connect to TFS, you've got the following options:
Alternate Connect to TFS or Azure DevOps Server using Basic authentication. After you select Alternate
you'll be prompted for your credentials.
Integrated Not supported on macOS or Linux.
Negotiate (Default) Connect to TFS or Azure DevOps Server as a user other than the signed-in user via a
Windows authentication scheme such as NTLM or Kerberos. After you select Negotiate you'll be prompted
for credentials.
PAT Supported only on Azure Pipelines and TFS 2017 and newer. After you choose PAT, paste the PAT
token you created into the command prompt window. Use a personal access token (PAT) if your Azure
DevOps Server or TFS instance and the agent machine are not in a trusted domain. PAT authentication is
handled by your Azure DevOps Server or TFS instance instead of the domain controller.
NOTE
When using PAT as the authentication method, the PAT token is used only for the initial configuration of the agent on
Azure DevOps Server and the newer versions of TFS. Learn more at Communication with Azure Pipelines or TFS.
Run interactively
For guidance on whether to run the agent in interactive mode or as a service, see Agents: Interactive vs. service.
To run the agent interactively:
1. If you have been running the agent as a service, uninstall the service.
2. Run the agent.
./run.sh
To restart the agent, press Ctrl+C and then run run.sh again.
To use your agent, run a job using the agent's pool. If you didn't choose a different pool, your agent will be in the
Default pool.
Run once
For agents configured to run interactively, you can choose to have the agent accept only one job. To run in this
configuration:
./run.sh --once
Agents in this mode will accept only one job and then spin down gracefully (useful for running in Docker on a
service like Azure Container Instances).
NOTE
If you have a different distribution, or if you prefer other approaches, you can use whatever kind of service mechanism you
prefer. See Service files.
Commands
Change to the agent directory
For example, if you installed in the myagent subfolder of your home directory:
cd ~/myagent
Install
Command:
This command creates a service file that points to ./runsvc.sh . This script sets up the environment (more details
below) and starts the agent's host. The commands for these operations (Install through Uninstall) are sketched below.
Start
Status
Stop
Uninstall
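All of these operations use the svc.sh script from the agent directory, and sudo is required because svc.sh calls systemctl (see the FAQ later in this article). A minimal sketch, mirroring the macOS examples later in this article:
sudo ./svc.sh install    # create and register the systemd service
sudo ./svc.sh start      # start the service
sudo ./svc.sh status     # show whether the service is running
sudo ./svc.sh stop       # stop the service
sudo ./svc.sh uninstall  # remove the service (stop it first)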
The snapshot of the environment variables is stored in the .env file ( PATH is stored in .path ) under the agent
root directory. You can also change these files directly to apply environment variable changes.
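For example, to make a new environment variable visible to jobs run by the service, you can refresh the snapshot and restart the service, mirroring the macOS steps shown later in this article (MY_TOOL_HOME is a placeholder for a variable your jobs need):
export MY_TOOL_HOME=/opt/mytool   # placeholder: a variable your builds require
./env.sh                          # regenerate the .env and .path snapshot from the current environment
sudo ./svc.sh stop
sudo ./svc.sh start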
Run instructions before the service starts
You can also run your own instructions and commands when the service starts. For example, you could
set up the environment or call scripts.
1. Edit runsvc.sh .
2. Replace the following line with your instructions:
Service files
When you install the service, some service files are put in place.
systemd service file
A systemd service file is created:
/etc/systemd/system/vsts.agent.{tfs-name}.{agent-name}.service
For example, you have configured an agent (see above) with the name our-linux-agent . The service file will be
either:
Azure Pipelines : the name of your organization. For example if you connect to
https://dev.azure.com/fabrikam , then the service name would be
/etc/systemd/system/vsts.agent.fabrikam.our-linux-agent.service
TFS or Azure DevOps Ser ver : the name of your on-premises server. For example if you connect to
http://our-server:8080/tfs , then the service name would be
/etc/systemd/system/vsts.agent.our-server.our-linux-agent.service
sudo ./svc.sh install generates this file from this template: ./bin/vsts.agent.service.template
.service file
sudo ./svc.sh start finds the service by reading the .service file, which contains the name of systemd service
file described above.
Alternative service mechanisms
We provide the ./svc.sh script as a convenient way for you to run and manage your agent as a systemd
service. But you can use whatever kind of service mechanism you prefer (for example: initd or upstart).
You can use the template described above to facilitate generating other kinds of service files.
./config.sh remove
Unattended config
The agent can be set up from a script with no human intervention. You must pass --unattended and the answers
to all questions.
To configure an agent, it must know the URL to your organization or collection and credentials of someone
authorized to set up agents. All other responses are optional. Any command-line parameter can be specified
using an environment variable instead: put its name in upper case and prepend VSTS_AGENT_INPUT_ . For example,
VSTS_AGENT_INPUT_PASSWORD instead of specifying --password .
Required options
--unattended - agent setup will not prompt for information, and all settings must be provided on the
command line
--url <url> - URL of the server. For example: https://dev.azure.com/myorganization or http://my-azure-
devops-server:8080/tfs
--auth <type> - authentication type. Valid values are:
pat (Personal access token)
negotiate (Kerberos or NTLM)
alt (Basic authentication)
integrated (Windows default credentials)
Authentication options
If you chose --auth pat :
--token <token> - specifies your personal access token
If you chose --auth negotiate or --auth alt :
--userName <userName> - specifies a Windows username in the format domain\userName or
[email protected]
--password <password> - specifies a password
Pool and agent names
--pool <pool> - pool name for the agent to join
--agent <agent> - agent name
--replace - replace the agent in a pool. If another agent is listening with the same name, it will start failing
with a conflict
Agent setup
--work <workDirectory> - work directory where job data is stored. Defaults to _work under the root of the
agent directory. The work directory is owned by a given agent and should not be shared between multiple agents.
--acceptTeeEula - accept the Team Explorer Everywhere End User License Agreement (macOS and Linux
only)
--disableloguploads - don't stream or send console log output to the server. Instead, you can retrieve the logs
from the agent host's filesystem after the job completes.
Windows-only startup
--runAsService - configure the agent to run as a Windows service (requires administrator permission)
--runAsAutoLogon - configure auto-logon and run the agent on startup (requires administrator permission)
--windowsLogonAccount <account> - used with --runAsService or --runAsAutoLogon to specify the Windows
user name in the format domain\userName or [email protected]
--windowsLogonPassword <password> - used with --runAsService or --runAsAutoLogon to specify Windows
logon password
--overwriteAutoLogon - used with --runAsAutoLogon to overwrite the existing auto logon on the machine
--noRestart - used with --runAsAutoLogon to stop the host from restarting after agent configuration
completes
Deployment group only
--deploymentGroup - configure the agent as a deployment group agent
--deploymentGroupName <name> - used with --deploymentGroup to specify the deployment group for the agent
to join
--projectName <name> - used with --deploymentGroup to set the project name
--addDeploymentGroupTags - used with --deploymentGroup to indicate that deployment group tags should be
added
--deploymentGroupTags <tags> - used with --addDeploymentGroupTags to specify the comma separated list of
tags for the deployment group agent - for example "web, db"
./config.sh --help always lists the latest required and optional responses.
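Putting these options together, a minimal unattended configuration on Linux might look like the following sketch (the organization URL, PAT, pool, and agent name are placeholders):
./config.sh --unattended \
  --url https://dev.azure.com/myorganization \
  --auth pat \
  --token "$MY_PAT" \
  --pool Default \
  --agent mylinuxagent \
  --acceptTeeEula
Following the naming rule described above, the PAT could equally be supplied through the VSTS_AGENT_INPUT_TOKEN environment variable instead of --token.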
Diagnostics
If you're having trouble with your self-hosted agent, you can try running diagnostics. After configuring the agent:
./run.sh --diagnostics
This will run through a diagnostic suite that may help you troubleshoot the problem. The diagnostics feature is
available starting with agent version 2.165.0.
./config.sh --help
The help provides information on authentication alternatives and unattended configuration.
Capabilities
Your agent's capabilities are cataloged and advertised in the pool so that only the builds and releases it can
handle are assigned to it. See Build and release agent capabilities.
In many cases, after you deploy an agent, you'll need to install software or utilities. Generally you should install
on your agents whatever software and tools you use on your development machine.
For example, if your build includes the npm task, then the build won't run unless there's a build agent in the pool
that has npm installed.
IMPORTANT
After you install new software on an agent, you must restart the agent for the new capability to show up in the pool so
that the build can run.
FAQ
How do I make sure I have the latest v2 agent version?
1. Navigate to the Agent pools tab:
1. Choose Azure DevOps , Organization settings .
1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
5. Look for the Agent.Version capability. You can check this value against the latest published agent version.
See Azure Pipelines Agent and check the page for the highest version number listed.
6. Each agent automatically updates itself when it runs a task that requires a newer version of the agent. If
you want to manually update some agents, right-click the pool, and select Update all agents .
Can I update my v2 agents that are part of an Azure DevOps Server pool?
Yes. Beginning with Azure DevOps Server 2019, you can configure your server to look for the agent package files
on a local disk. This configuration will override the default version that came with the server at the time of its
release. This scenario also applies when the server doesn't have access to the internet.
1. From a computer with Internet access, download the latest version of the agent package files (in .zip or
.tar.gz form) from the Azure Pipelines Agent GitHub Releases page.
2. Transfer the downloaded package files to each Azure DevOps Server Application Tier by using a method
of your choice (such as USB drive, Network transfer, and so on). Place the agent files under the
%ProgramData%\Microsoft\Azure DevOps\Agents folder.
3. You're all set! Your Azure DevOps Server will now use the local files whenever the agents are updated.
Each agent automatically updates itself when it runs a task that requires a newer version of the agent. But
if you want to manually update some agents, right-click the pool, and then choose Update all agents .
Why is sudo needed to run the service commands?
./svc.sh uses systemctl , which requires sudo .
Source code: systemd.svc.sh.template on GitHub
I'm running a firewall and my code is in Azure Repos. What URLs does the agent need to communicate with?
If you're running an agent in a secure network behind a firewall, make sure the agent can initiate communication
with the following URLs and IP addresses.
For organizations using the *.visualstudio.com domain:
https://login.microsoftonline.com
https://app.vssps.visualstudio.com
https://{organization_name}.visualstudio.com
https://{organization_name}.vsrm.visualstudio.com
https://{organization_name}.vstmr.visualstudio.com
https://{organization_name}.pkgs.visualstudio.com
https://{organization_name}.vssps.visualstudio.com
For organizations using the dev.azure.com domain:
https://dev.azure.com
https://*.dev.azure.com
https://login.microsoftonline.com
https://management.core.windows.net
https://vstsagentpackage.azureedge.net
To ensure your organization works with any existing firewall or IP restrictions, ensure that dev.azure.com and
*dev.azure.com are open and update your allow-listed IPs to include the following IP addresses, based on your
IP version. If you're currently allow-listing the 13.107.6.183 and 13.107.9.183 IP addresses, leave them in place,
as you don't need to remove them.
IPv4 ranges
13.107.6.0/24
13.107.9.0/24
13.107.42.0/24
13.107.43.0/24
IPv6 ranges
2620:1ec:4::/48
2620:1ec:a92::/48
2620:1ec:21::/48
2620:1ec:22::/48
NOTE
This procedure enables the agent to bypass a web proxy. Your build pipeline and scripts must still handle bypassing your
web proxy for each task and tool you run in your build.
For example, if you are using a NuGet task, you must configure your web proxy to support bypassing the URL for the
server that hosts the NuGet feed you're using.
I'm using TFS and the URLs in the sections above don't work for me. Where can I get help?
Web site settings and security
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
Self-hosted macOS agents
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
To build and deploy Xcode apps or Xamarin.iOS projects, you'll need at least one macOS agent. This agent can
also build and deploy Java and Android apps.
Check prerequisites
Make sure your machine has these prerequisites:
macOS Sierra (10.12) or higher
Git 2.9.0 or higher (latest version strongly recommended - you can easily install with Homebrew)
These prereqs are required for agent version 2.125.0 and higher.
These prereqs are required for agent version 2.124.0 and below. If you're able, we recommend upgrading to
a newer macOS (10.12+) and upgrading to the newest agent.
Make sure your machine has these prerequisites:
OS X Yosemite (10.10), El Capitan (10.11), or macOS Sierra (10.12)
Git 2.9.0 or higher (latest version strongly recommended)
Meets all prereqs for .NET Core 1.x
If you'll be using TFVC, you will also need the Oracle Java JDK 1.6 or higher. (The Oracle JRE and OpenJDK are not
sufficient for this purpose.)
Prepare permissions
If you're building from a Subversion repo, you must install the Subversion client on the machine.
You should run agent setup manually the first time. After you get a feel for how agents work, or if you want to
automate setting up many agents, consider using unattended config.
Decide which user you'll use
As a one-time step, you must register the agent. Someone with permission to administer the agent queue must
complete these steps. The agent will not use this person's credentials in everyday operation, but they're required
to complete registration. Learn more about how agents communicate.
Authenticate with a personal access token (PAT)
1. Sign in with the user account you plan to use in your Team Foundation Server web portal (
https://{your-server}:8080/tfs/ ).
1. Sign in with the user account you plan to use in your Azure DevOps Server web portal (
https://{your-server}/DefaultCollection/ ).
1. Sign in with the user account you plan to use in your Azure DevOps organization (
https://dev.azure.com/{your_organization} ).
2. From your home page, open your profile. Go to your security details.
4. For the scope select Agent Pools (read, manage) and make sure all the other boxes are cleared. If it's a
deployment group agent, for the scope select Deployment group (read, manage) and make sure all
the other boxes are cleared.
Select Show all scopes at the bottom of the Create a new personal access token window
to see the complete list of scopes.
5. Copy the token. You'll use this token when you configure the agent.
Authenticate as a Windows user (TFS 2015 and TFS 2017)
As an alternative, on TFS 2017, you can use either a domain user or a local Windows user on each of your TFS
application tiers.
On TFS 2015, for macOS and Linux only, we recommend that you create a local Windows user on each of your
TFS application tiers and dedicate that user for the purpose of deploying build agents.
Confirm the user has permission
Make sure the user account that you're going to use has permission to register the agent.
Is the user an Azure DevOps organization owner or TFS or Azure DevOps Server administrator? Stop here , you
have permission.
Otherwise:
1. Open a browser and navigate to the Agent pools tab for your Azure Pipelines organization or Azure
DevOps Server or TFS server:
1. Choose Azure DevOps , Organization settings .
2. Choose Agent pools .
1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
2. Choose Manage pools .
1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
NOTE
If you see a message like this: Sorry, we couldn't add the identity. Please try a different identity. You probably
followed the above steps for an organization owner or TFS or Azure DevOps Server administrator. You don't need to do
anything; you already have permission to administer the agent queue.
1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
3. Select the Default pool, select the Agents tab, and choose New agent .
4. On the Get the agent dialog box, click macOS .
5. Click the Download button.
6. Follow the instructions on the page.
7. Clear the extended attribute on the tar file: xattr -c vsts-agent-osx-x64-V.v.v.tar.gz .
8. Unpack the agent into the directory of your choice. cd to that directory and run ./config.sh . Make sure
that the path to the directory contains no spaces because tools and scripts don't always properly escape
spaces.
Azure DevOps Server 2019 and Azure DevOps Server 2020
1. Log on to the machine using the account for which you've prepared permissions as explained above.
2. In your web browser, sign in to Azure DevOps Server, and navigate to the Agent pools tab:
1. Choose Azure DevOps , Organization settings .
1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
2. Choose Manage pools .
1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
./config.sh
Server URL
Azure Pipelines: https://dev.azure.com/{your-organization}
Authentication type
Azure Pipelines
Choose PAT , and then paste the PAT token you created into the command prompt window.
NOTE
When using PAT as the authentication method, the PAT token is used only for the initial configuration of the agent. Learn
more at Communication with Azure Pipelines or TFS.
IMPORTANT
Make sure your server is configured to support the authentication method you want to use.
When you configure your agent to connect to TFS, you've got the following options:
Alternate Connect to TFS or Azure DevOps Server using Basic authentication. After you select Alternate
you'll be prompted for your credentials.
Integrated Not supported on macOS or Linux.
Negotiate (Default) Connect to TFS or Azure DevOps Server as a user other than the signed-in user via a
Windows authentication scheme such as NTLM or Kerberos. After you select Negotiate you'll be prompted
for credentials.
PAT Supported only on Azure Pipelines and TFS 2017 and newer. After you choose PAT, paste the PAT token
you created into the command prompt window. Use a personal access token (PAT) if your Azure DevOps
Server or TFS instance and the agent machine are not in a trusted domain. PAT authentication is handled
by your Azure DevOps Server or TFS instance instead of the domain controller.
NOTE
When using PAT as the authentication method, the PAT token is used only for the initial configuration of the agent on
Azure DevOps Server and the newer versions of TFS. Learn more at Communication with Azure Pipelines or TFS.
Run interactively
For guidance on whether to run the agent in interactive mode or as a service, see Agents: Interactive vs. service.
To run the agent interactively:
1. If you have been running the agent as a service, uninstall the service.
2. Run the agent.
./run.sh
To restart the agent, press Ctrl+C and then run run.sh again.
To use your agent, run a job using the agent's pool. If you didn't choose a different pool, your agent will be in the
Default pool.
Run once
For agents configured to run interactively, you can choose to have the agent accept only one job. To run in this
configuration:
./run.sh --once
Agents in this mode will accept only one job and then spin down gracefully (useful for running on a service like
Azure Container Instances).
NOTE
If you prefer other approaches, you can use whatever kind of service mechanism you prefer. See Service files.
Tokens
In the section below, these tokens are replaced:
{agent-name}
{tfs-name}
For example, you have configured an agent (see above) with the name our-osx-agent . In the following examples,
{tfs-name} will be either:
Azure Pipelines: the name of your organization. For example if you connect to
https://dev.azure.com/fabrikam , then the service name would be vsts.agent.fabrikam.our-osx-agent
TFS: the name of your on-premises TFS AT server. For example if you connect to
http://our-server:8080/tfs , then the service name would be vsts.agent.our-server.our-osx-agent
Commands
Change to the agent directory
For example, if you installed in the myagent subfolder of your home directory:
cd ~/myagent
Install
Command:
./svc.sh install
This command creates a launchd plist that points to ./runsvc.sh . This script sets up the environment (more
details below) and starts the agent's host.
Start
Command:
./svc.sh start
Output:
starting vsts.agent.{tfs-name}.{agent-name}
status vsts.agent.{tfs-name}.{agent-name}:
/Users/{your-name}/Library/LaunchAgents/vsts.agent.{tfs-name}.{agent-name}.plist
Started:
13472 0 vsts.agent.{tfs-name}.{agent-name}
The left number is the pid if the service is running. If the second number is not zero, then a problem occurred.
Status
Command:
./svc.sh status
Output:
status vsts.agent.{tfs-name}.{agent-name}:
/Users/{your-name}/Library/LaunchAgents/vsts.{tfs-name}.{agent-name}.testsvc.plist
Started:
13472 0 vsts.agent.{tfs-name}.{agent-name}
The left number is the pid if the service is running. If the second number is not zero, then a problem occurred.
Stop
Command:
./svc.sh stop
Output:
stopping vsts.agent.{tfs-name}.{agent-name}
status vsts.agent.{tfs-name}.{agent-name}:
/Users/{your-name}/Library/LaunchAgents/vsts.{tfs-name}.{agent-name}.testsvc.plist
Stopped
Uninstall
Command:
./svc.sh uninstall
NOTE
For more information, see the Terminally Geeky: use automatic login more securely blog. The .plist file mentioned in that
blog may no longer be available at the source, but a copy can be found here: Lifehacker - Make OS X load your desktop
before you log in.
./env.sh
./svc.sh stop
./svc.sh start
The snapshot of the environment variables is stored in the .env file under the agent root directory. You can also
change that file directly to apply environment variable changes.
Run instructions before the service starts
You can also run your own instructions and commands to run when the service starts. For example, you could set
up the environment or call scripts.
1. Edit runsvc.sh .
2. Replace the following line with your instructions:
Service Files
When you install the service, some service files are put in place.
.plist service file
A .plist service file is created:
~/Library/LaunchAgents/vsts.agent.{tfs-name}.{agent-name}.plist
For example:
~/Library/LaunchAgents/vsts.agent.fabrikam.our-osx-agent.plist
sudo ./svc.sh install generates this file from this template: ./bin/vsts.agent.plist.template
.service file
./svc.sh start finds the service by reading the .service file, which contains the path to the plist service file
described above.
Alternative service mechanisms
We provide the ./svc.sh script as a convenient way for you to run and manage your agent as a launchd
LaunchAgent service. But you can use whatever kind of service mechanism you prefer.
You can use the template described above to facilitate generating other kinds of service files. For example, you
can modify the template to generate a service that runs as a launch daemon if you don't need UI tests and don't
want to configure automatic log on and lock. See Apple Developer Library: Creating Launch Daemons and Agents.
Replace an agent
To replace an agent, follow the Download and configure the agent steps again.
When you configure an agent using the same name as an agent that already exists, you're asked if you want to
replace the existing agent. If you answer Y , then make sure you remove the agent (see below) that you're
replacing. Otherwise, after a few minutes of conflicts, one of the agents will shut down.
./config.sh remove
Unattended config
The agent can be set up from a script with no human intervention. You must pass --unattended and the answers
to all questions.
To configure an agent, it must know the URL to your organization or collection and credentials of someone
authorized to set up agents. All other responses are optional. Any command-line parameter can be specified
using an environment variable instead: put its name in upper case and prepend VSTS_AGENT_INPUT_ . For example,
VSTS_AGENT_INPUT_PASSWORD instead of specifying --password .
Required options
--unattended - agent setup will not prompt for information, and all settings must be provided on the
command line
--url <url> - URL of the server. For example: https://dev.azure.com/myorganization or http://my-azure-
devops-server:8080/tfs
--auth <type> - authentication type. Valid values are:
pat (Personal access token)
negotiate (Kerberos or NTLM)
alt (Basic authentication)
integrated (Windows default credentials)
Authentication options
If you chose --auth pat :
--token <token> - specifies your personal access token
If you chose --auth negotiate or --auth alt :
--userName <userName> - specifies a Windows username in the format domain\userName or
[email protected]
--password <password> - specifies a password
Pool and agent names
--pool <pool> - pool name for the agent to join
--agent <agent> - agent name
--replace - replace the agent in a pool. If another agent is listening with the same name, it will start failing with
a conflict
Agent setup
--work <workDirectory> - work directory where job data is stored. Defaults to _work under the root of the
agent directory. The work directory is owned by a given agent and should not be shared between multiple agents.
--acceptTeeEula - accept the Team Explorer Everywhere End User License Agreement (macOS and Linux only)
--disableloguploads - don't stream or send console log output to the server. Instead, you can retrieve the logs
from the agent host's filesystem after the job completes.
Windows-only startup
--runAsService - configure the agent to run as a Windows service (requires administrator permission)
--runAsAutoLogon - configure auto-logon and run the agent on startup (requires administrator permission)
--windowsLogonAccount <account> - used with --runAsService or --runAsAutoLogon to specify the Windows
user name in the format domain\userName or [email protected]
--windowsLogonPassword <password> - used with --runAsService or --runAsAutoLogon to specify Windows
logon password
--overwriteAutoLogon - used with --runAsAutoLogon to overwrite the existing auto logon on the machine
--noRestart - used with --runAsAutoLogon to stop the host from restarting after agent configuration
completes
Deployment group only
--deploymentGroup - configure the agent as a deployment group agent
--deploymentGroupName <name> - used with --deploymentGroup to specify the deployment group for the agent
to join
--projectName <name> - used with --deploymentGroup to set the project name
--addDeploymentGroupTags - used with --deploymentGroup to indicate that deployment group tags should be
added
--deploymentGroupTags <tags> - used with --addDeploymentGroupTags to specify the comma separated list of
tags for the deployment group agent - for example "web, db"
./config.sh --help always lists the latest required and optional responses.
Diagnostics
If you're having trouble with your self-hosted agent, you can try running diagnostics. After configuring the agent:
./run.sh --diagnostics
This will run through a diagnostic suite that may help you troubleshoot the problem. The diagnostics feature is
available starting with agent version 2.165.0.
./config.sh --help
Capabilities
Your agent's capabilities are cataloged and advertised in the pool so that only the builds and releases it can
handle are assigned to it. See Build and release agent capabilities.
In many cases, after you deploy an agent, you'll need to install software or utilities. Generally you should install on
your agents whatever software and tools you use on your development machine.
For example, if your build includes the npm task, then the build won't run unless there's a build agent in the pool
that has npm installed.
IMPORTANT
After you install new software on an agent, you must restart the agent for the new capability to show up in the pool so
that the build can run.
FAQ
How do I make sure I have the latest v2 agent version?
1. Navigate to the Agent pools tab:
1. Choose Azure DevOps , Organization settings .
1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
5. Look for the Agent.Version capability. You can check this value against the latest published agent version.
See Azure Pipelines Agent and check the page for the highest version number listed.
6. Each agent automatically updates itself when it runs a task that requires a newer version of the agent. If
you want to manually update some agents, right-click the pool, and select Update all agents .
Can I update my v2 agents that are part of an Azure DevOps Server pool?
Yes. Beginning with Azure DevOps Server 2019, you can configure your server to look for the agent package files
on a local disk. This configuration will override the default version that came with the server at the time of its
release. This scenario also applies when the server doesn't have access to the internet.
1. From a computer with Internet access, download the latest version of the agent package files (in .zip or
.tar.gz form) from the Azure Pipelines Agent GitHub Releases page.
2. Transfer the downloaded package files to each Azure DevOps Server Application Tier by using a method of
your choice (such as USB drive, Network transfer, and so on). Place the agent files under the
%ProgramData%\Microsoft\Azure DevOps\Agents folder.
3. You're all set! Your Azure DevOps Server will now use the local files whenever the agents are updated. Each
agent automatically updates itself when it runs a task that requires a newer version of the agent. But if you
want to manually update some agents, right-click the pool, and then choose Update all agents .
Where can I learn more about how the launchd service works?
Apple Developer Library: Creating Launch Daemons and Agents
I'm running a firewall and my code is in Azure Repos. What URLs does the agent need to communicate with?
If you're running an agent in a secure network behind a firewall, make sure the agent can initiate communication
with the following URLs and IP addresses.
For organizations using the *.visualstudio.com domain:
https://login.microsoftonline.com
https://app.vssps.visualstudio.com
https://{organization_name}.visualstudio.com
https://{organization_name}.vsrm.visualstudio.com
https://{organization_name}.vstmr.visualstudio.com
https://{organization_name}.pkgs.visualstudio.com
https://{organization_name}.vssps.visualstudio.com
For organizations using the dev.azure.com domain:
https://dev.azure.com
https://*.dev.azure.com
https://login.microsoftonline.com
https://management.core.windows.net
https://vstsagentpackage.azureedge.net
To ensure your organization works with any existing firewall or IP restrictions, ensure that dev.azure.com and
*dev.azure.com are open and update your allow-listed IPs to include the following IP addresses, based on your IP
version. If you're currently allow-listing the 13.107.6.183 and 13.107.9.183 IP addresses, leave them in place, as
you don't need to remove them.
IPv4 ranges
13.107.6.0/24
13.107.9.0/24
13.107.42.0/24
13.107.43.0/24
IPv6 ranges
2620:1ec:4::/48
2620:1ec:a92::/48
2620:1ec:21::/48
2620:1ec:22::/48
NOTE
This procedure enables the agent to bypass a web proxy. Your build pipeline and scripts must still handle bypassing your
web proxy for each task and tool you run in your build.
For example, if you are using a NuGet task, you must configure your web proxy to support bypassing the URL for the
server that hosts the NuGet feed you're using.
I'm using TFS and the URLs in the sections above don't work for me. Where can I get help?
Web site settings and security
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
Self-hosted Windows agents
Azure Pipelines | TFS 2018 | TFS 2017 | TFS 2015 | Previous versions (XAML builds)
IMPORTANT
For TFS 2015, see Self-hosted Windows agents - TFS 2015.
To build and deploy Windows, Azure, and other Visual Studio solutions you'll need at least one Windows agent.
Windows agents can also build Java and Android apps.
Check prerequisites
Make sure your machine has these prerequisites:
Windows 7, 8.1, or 10 (if using a client OS)
Windows 2008 R2 SP1 or higher (if using a server OS)
PowerShell 3.0 or higher
.NET Framework 4.6.2 or higher
IMPORTANT
Starting December 2019, the minimum required .NET version for build agents is 4.6.2 or higher.
Recommended:
Visual Studio build tools (2015 or higher)
If you're building from a Subversion repo, you must install the Subversion client on the machine.
You should run agent setup manually the first time. After you get a feel for how agents work, or if you want to
automate setting up many agents, consider using unattended config.
Hardware specs
The hardware specs for your agents will vary with your needs, team size, etc. It's not possible to make a general
recommendation that will apply to everyone. As a point of reference, the Azure DevOps team builds the hosted
agents code using pipelines that utilize hosted agents. On the other hand, the bulk of the Azure DevOps code is
built by 24-core server class machines running 4 self-hosted agents apiece.
Prepare permissions
Decide which user you'll use
As a one-time step, you must register the agent. Someone with permission to administer the agent queue must
complete these steps. The agent will not use this person's credentials in everyday operation, but they're
required to complete registration. Learn more about how agents communicate.
Authenticate with a personal access token (PAT)
1. Sign in with the user account you plan to use in your Team Foundation Server web portal (
https://{your-server}:8080/tfs/ ).
1. Sign in with the user account you plan to use in your Azure DevOps Server web portal (
https://{your-server}/DefaultCollection/ ).
1. Sign in with the user account you plan to use in your Azure DevOps organization (
https://dev.azure.com/{your_organization} ).
2. From your home page, open your profile. Go to your security details.
4. For the scope select Agent Pools (read, manage) and make sure all the other boxes are cleared. If it's
a deployment group agent, for the scope select Deployment group (read, manage) and make sure
all the other boxes are cleared.
Select Show all scopes at the bottom of the Create a new personal access token window
to see the complete list of scopes.
5. Copy the token. You'll use this token when you configure the agent.
Authenticate as a Windows user (TFS 2015 and TFS 2017)
As an alternative, on TFS 2017, you can use either a domain user or a local Windows user on each of your TFS
application tiers.
On TFS 2015, for macOS and Linux only, we recommend that you create a local Windows user on each of your
TFS application tiers and dedicate that user for the purpose of deploying build agents.
Confirm the user has permission
Make sure the user account that you're going to use has permission to register the agent.
Is the user an Azure DevOps organization owner or TFS or Azure DevOps Server administrator? Stop here ,
you have permission.
Otherwise:
1. Open a browser and navigate to the Agent pools tab for your Azure Pipelines organization or Azure
DevOps Server or TFS server:
1. Choose Azure DevOps , Organization settings .
2. Choose Agent pools .
1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
2. Choose Manage pools .
1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
NOTE
If you see a message like this: Sorry, we couldn't add the identity. Please try a different identity. You probably
followed the above steps for an organization owner or TFS or Azure DevOps Server administrator. You don't need to do
anything; you already have permission to administer the agent queue.
1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
3. Select the Default pool, select the Agents tab, and choose New agent .
4. On the Get the agent dialog box, choose Windows .
5. On the left pane, select the processor architecture of the installed Windows OS version on your machine.
The x64 agent version is intended for 64-bit Windows, whereas the x86 version is intended for 32-bit
Windows. If you aren't sure which version of Windows is installed, follow these instructions to find out.
6. On the right pane, click the Download button.
7. Follow the instructions on the page to download the agent.
8. Unpack the agent into the directory of your choice. Then run config.cmd . This will ask you a series of
questions to configure the agent.
Azure DevOps Server 2019 and Azure DevOps Server 2020
1. Log on to the machine using the account for which you've prepared permissions as explained above.
2. In your web browser, sign in to Azure DevOps Server 2019, and navigate to the Agent pools tab:
1. Choose Azure DevOps , Organization settings .
1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
2. Choose Manage pools .
1. Navigate to your project and choose Settings (gear icon) > Agent Queues .
When setup asks for your server URL, for TFS, answer https://{your_server}/tfs .
When setup asks for your authentication type, choose PAT . Then paste the PAT token you created into the
command prompt window.
NOTE
When using PAT as the authentication method, the PAT token is only used during the initial configuration of the agent.
Later, if the PAT expires or needs to be renewed, no further changes are required by the agent.
IMPORTANT
Make sure your server is configured to support the authentication method you want to use.
When you configure your agent to connect to TFS, you've got the following options:
Alternate Connect to TFS using Basic authentication. After you select Alternate you'll be prompted for
your credentials.
Negotiate Connect to TFS as a user other than the signed-in user via a Windows authentication scheme
such as NTLM or Kerberos. After you select Negotiate you'll be prompted for credentials.
Integrated (Default) Connect a Windows agent to TFS using the credentials of the signed-in user via a
Windows authentication scheme such as NTLM or Kerberos. You won't be prompted for credentials after
you choose this method.
PAT Supported only on Azure Pipelines and TFS 2017 and newer. After you choose PAT, paste the PAT
token you created into the command prompt window. Use a personal access token (PAT) if your TFS
instance and the agent machine are not in a trusted domain. PAT authentication is handled by your TFS
instance instead of the domain controller.
NOTE
When using PAT as the authentication method, the PAT token is used only for the initial configuration of the agent. If the
PAT needs to be regenerated, no further changes are needed to the agent.
.\run.cmd
To restart the agent, press Ctrl+C to stop the agent and then run run.cmd to restart it.
Run once
For agents configured to run interactively, you can choose to have the agent accept only one job. To run in this
configuration:
.\run.cmd --once
Agents in this mode will accept only one job and then spin down gracefully (useful for running in Docker on a
service like Azure Container Instances).
Run as a service
If you configured the agent to run as a service, it starts automatically. You can view and control the agent
running status from the services snap-in. Run services.msc and look for one of:
"Azure Pipelines Agent (name of your agent)".
"VSTS Agent (name of your agent)".
"vstsagent.(organization name).(name of your agent)".
To restart the agent, right-click the entry and choose Restart .
NOTE
If you need to change the agent's logon account, don't do it from the Services snap-in. Instead, see the information
below to re-configure the agent.
To use your agent, run a job using the agent's pool. If you didn't choose a different pool, your agent will be in
the Default pool.
Replace an agent
To replace an agent, follow the Download and configure the agent steps again.
When you configure an agent using the same name as an agent that already exists, you're asked if you want to
replace the existing agent. If you answer Y , then make sure you remove the agent (see below) that you're
replacing. Otherwise, after a few minutes of conflicts, one of the agents will shut down.
To remove the agent that you're replacing, run the following from that agent's directory:
.\config remove
Unattended config
The agent can be set up from a script with no human intervention. You must pass --unattended and the
answers to all questions.
To configure an agent, it must know the URL to your organization or collection and credentials of someone
authorized to set up agents. All other responses are optional. Any command-line parameter can be specified
using an environment variable instead: put its name in upper case and prepend VSTS_AGENT_INPUT_ . For
example, VSTS_AGENT_INPUT_PASSWORD instead of specifying --password .
Required options
--unattended - agent setup will not prompt for information, and all settings must be provided on the
command line
--url <url> - URL of the server. For example: https://dev.azure.com/myorganization or http://my-azure-
devops-server:8080/tfs
--auth <type> - authentication type. Valid values are:
pat (Personal access token)
negotiate (Kerberos or NTLM)
alt (Basic authentication)
integrated (Windows default credentials)
Authentication options
If you chose --auth pat :
--token <token> - specifies your personal access token
If you chose --auth negotiate or --auth alt :
--userName <userName> - specifies a Windows user name in the format domain\userName or
userName@domain.com
--password <password> - specifies a password
Pool and agent names
--pool <pool> - pool name for the agent to join
--agent <agent> - agent name
--replace - replace the agent in a pool. If another agent is listening with the same name, it will start failing
with a conflict
Agent setup
--work <workDirectory> - work directory where job data is stored. Defaults to _work under the root of the
agent directory. The work directory is owned by a given agent and should not be shared between multiple
agents.
--acceptTeeEula - accept the Team Explorer Everywhere End User License Agreement (macOS and Linux
only)
--disableloguploads - don't stream or send console log output to the server. Instead, you can retrieve the logs
from the agent host's filesystem after the job completes.
Windows-only startup
--runAsService - configure the agent to run as a Windows service (requires administrator permission)
--runAsAutoLogon - configure auto-logon and run the agent on startup (requires administrator permission)
--windowsLogonAccount <account> - used with --runAsService or --runAsAutoLogon to specify the Windows
user name in the format domain\userName or userName@domain.com
--windowsLogonPassword <password> - used with --runAsService or --runAsAutoLogon to specify Windows
logon password
--overwriteAutoLogon - used with --runAsAutoLogon to overwrite the existing auto logon on the machine
--noRestart - used with --runAsAutoLogon to stop the host from restarting after agent configuration
completes
Deployment group only
--deploymentGroup - configure the agent as a deployment group agent
--deploymentGroupName <name> - used with --deploymentGroup to specify the deployment group for the agent
to join
--projectName <name> - used with --deploymentGroup to set the project name
--addDeploymentGroupTags - used with --deploymentGroup to indicate that deployment group tags should be
added
--deploymentGroupTags <tags> - used with --addDeploymentGroupTags to specify the comma-separated list of
tags for the deployment group agent - for example "web, db"
.\config --help always lists the latest required and optional responses.
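For example, here's a minimal sketch of an unattended configuration run from an elevated PowerShell prompt in the
agent directory; the organization URL, pool, agent name, and PAT are placeholder values to replace with your own:

.\config.cmd --unattended `
    --url https://dev.azure.com/myorganization `
    --auth pat `
    --token <your-pat> `
    --pool Default `
    --agent myWindowsAgent `
    --runAsService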
Diagnostics
If you're having trouble with your self-hosted agent, you can try running diagnostics. After configuring the
agent:
.\run --diagnostics
This will run through a diagnostic suite that may help you troubleshoot the problem. The diagnostics feature is
available starting with agent version 2.165.0.
Help on other options
To learn about other configuration options:
.\config --help
Capabilities
Your agent's capabilities are cataloged and advertised in the pool so that only the builds and releases it can
handle are assigned to it. See Build and release agent capabilities.
In many cases, after you deploy an agent, you'll need to install software or utilities. Generally you should install
on your agents whatever software and tools you use on your development machine.
For example, if your build includes the npm task, then the build won't run unless there's a build agent in the
pool that has npm installed.
IMPORTANT
After you install new software on an agent, you must restart the agent for the new capability to show up in the pool so
that the build can run.
FAQ
How do I make sure I have the latest v2 agent version?
1. Navigate to the Agent pools tab:
Azure Pipelines: Choose Azure DevOps , Organization settings , and then choose Agent pools .
Azure DevOps Server: Navigate to your project and choose Settings (gear icon) > Agent Queues , and then
choose Manage pools .
TFS: Navigate to your project and choose Settings (gear icon) > Agent Queues .
5. Look for the Agent.Version capability. You can check this value against the latest published agent
version. See Azure Pipelines Agent and check the page for the highest version number listed.
6. Each agent automatically updates itself when it runs a task that requires a newer version of the agent. If
you want to manually update some agents, right-click the pool, and select Update all agents .
Can I update my v2 agents that are part of an Azure DevOps Server pool?
Yes. Beginning with Azure DevOps Server 2019, you can configure your server to look for the agent package
files on a local disk. This configuration will override the default version that came with the server at the time of
its release. This scenario also applies when the server doesn't have access to the internet.
1. From a computer with Internet access, download the latest version of the agent package files (in .zip or
.tar.gz form) from the Azure Pipelines Agent GitHub Releases page.
2. Transfer the downloaded package files to each Azure DevOps Server Application Tier by using a method
of your choice (such as USB drive, Network transfer, and so on). Place the agent files under the
%ProgramData%\Microsoft\Azure DevOps\Agents folder.
3. You're all set! Your Azure DevOps Server will now use the local files whenever the agents are updated.
Each agent automatically updates itself when it runs a task that requires a newer version of the agent.
But if you want to manually update some agents, right-click the pool, and then choose Update all
agents .
What version of the agent runs with TFS 2017?
TFS VERSION | MINIMUM AGENT VERSION
2017.3 | 2.112.0
I'm running a firewall and my code is in Azure Repos. What URLs does the agent need to communicate with?
If you're running an agent in a secure network behind a firewall, make sure the agent can initiate
communication with the following URLs and IP addresses.
For organizations using the *.visualstudio.com domain:
https://login.microsoftonline.com
https://app.vssps.visualstudio.com
https://{organization_name}.visualstudio.com
https://{organization_name}.vsrm.visualstudio.com
https://{organization_name}.vstmr.visualstudio.com
https://{organization_name}.pkgs.visualstudio.com
https://{organization_name}.vssps.visualstudio.com
For organizations using the dev.azure.com domain:
https://dev.azure.com
https://*.dev.azure.com
https://login.microsoftonline.com
https://management.core.windows.net
https://vstsagentpackage.azureedge.net
To ensure your organization works with any existing firewall or IP restrictions, ensure that dev.azure.com and
*.dev.azure.com are open and update your allow-listed IPs to include the following IP addresses, based on your
IP version. If you're currently allow-listing the 13.107.6.183 and 13.107.9.183 IP addresses, leave them in
place, as you don't need to remove them.
IPv4 ranges
13.107.6.0/24
13.107.9.0/24
13.107.42.0/24
13.107.43.0/24
IPv6 ranges
2620:1ec:4::/48
2620:1ec:a92::/48
2620:1ec:21::/48
2620:1ec:22::/48
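The Name=Value pairs below look like a sample of per-agent environment settings. One common approach (an
assumption to verify against the agent documentation for your version) is to place such pairs, one per line, in a .env
file in the agent's root directory so that each agent gets its own environment variables: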
MyEnv0=MyEnvValue0
MyEnv1=MyEnvValue1
MyEnv2=MyEnvValue2
MyEnv3=MyEnvValue3
MyEnv4=MyEnvValue4
How do I configure the agent to bypass a web proxy and connect to Azure Pipelines?
If you want the agent to bypass your proxy and connect to Azure Pipelines directly, then you should configure
your web proxy to enable the agent to access the following URLs.
For organizations using the *.visualstudio.com domain:
https://login.microsoftonline.com
https://app.vssps.visualstudio.com
https://{organization_name}.visualstudio.com
https://{organization_name}.vsrm.visualstudio.com
https://{organization_name}.vstmr.visualstudio.com
https://{organization_name}.pkgs.visualstudio.com
https://{organization_name}.vssps.visualstudio.com
For organizations using the dev.azure.com domain:
https://dev.azure.com
https://*.dev.azure.com
https://login.microsoftonline.com
https://management.core.windows.net
https://vstsagentpackage.azureedge.net
To ensure your organization works with any existing firewall or IP restrictions, ensure that dev.azure.com and
*.dev.azure.com are open and update your allow-listed IPs to include the following IP addresses, based on your
IP version. If you're currently allow-listing the 13.107.6.183 and 13.107.9.183 IP addresses, leave them in
place, as you don't need to remove them.
IPv4 ranges
13.107.6.0/24
13.107.9.0/24
13.107.42.0/24
13.107.43.0/24
IPv6 ranges
2620:1ec:4::/48
2620:1ec:a92::/48
2620:1ec:21::/48
2620:1ec:22::/48
NOTE
This procedure enables the agent to bypass a web proxy. Your build pipeline and scripts must still handle bypassing your
web proxy for each task and tool you run in your build.
For example, if you are using a NuGet task, you must configure your web proxy to support bypassing the URL for the
server that hosts the NuGet feed you're using.
I'm using TFS and the URLs in the sections above don't work for me. Where can I get help?
Web site settings and security
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
Deploy an agent on Windows for TFS 2015
11/2/2020 • 6 minutes to read
Azure Pipelines | TFS 2018 | TFS 2017 | TFS 2015 | Previous versions (XAML builds)
To build and deploy Windows, Azure, and other Visual Studio solutions you may need a Windows agent. Windows
agents can also build and deploy Java and Android apps.
Check prerequisites
Before you begin, make sure your agent machine is prepared with these prerequisites:
An operating system that is supported by Visual Studio 2013 or newer
Visual Studio 2013 or Visual Studio 2015
PowerShell 3 or newer (Where can I get a newer version of PowerShell?)
ConfigureAgent.cmd
C:\Agent\Agent\VsoAgent.exe /ChangeWindowsServiceAccount
Run interactively
If you chose to run interactively, then to run the agent:
C:\Agent\Agent\VsoAgent.exe
Command-line parameters
You can use command-line parameters when you configure the agent ( ConfigureAgent.cmd ) and when you run the
agent ( Agent\VsoAgent.exe ). These are useful to avoid being prompted during unattended installation scripts and
for power users.
Common parameters
/Login:UserName,Password[;AuthType=(AAD|Basic|PAT)]
Used for configuration commands against an Azure DevOps organization. The parameter is used to specify the
pool administrator credentials. The credentials are used to perform the pool administration changes and are not
used later by the agent.
When using personal access tokens (PAT) authentication type, specify anything for the user name and specify the
PAT as the password.
If passing the parameter from PowerShell, be sure to escape the semicolon or encapsulate the entire argument in
quotes. For example: '/Login:user,password;AuthType=PAT'. Otherwise the semicolon will be interpreted by
PowerShell to indicate the end of one statement and the beginning of another.
/NoPrompt
Indicates not to prompt and to accept the default for any values not provided on the command-line.
/WindowsServiceLogonAccount:WindowsServiceLogonAccount
Used for configuration commands to specify the identity to use for the Windows service. To specify a domain
account, use the form Domain\SAMAccountName or the user principal name (for example, user@domain.com).
Alternatively a built-in account can be provided, for example /WindowsServiceLogonAccount:"NT
AUTHORITY\NETWORK SERVICE".
/WindowsServiceLogonPassword:WindowsServiceLogonPassword
Required if the /WindowsServiceLogonAccount parameter is provided.
/Configure
Configure supports the /NoPrompt switch for automated installation scenarios and will return a non-zero exit
code on failure.
For troubleshooting configuration errors, detailed logs can be found in the _diag folder under the agent
installation directory.
/ServerUrl:ServerUrl
The server URL should not contain the collection name. For example, http://example:8080/tfs or
https://dev.azure.com/example
/Name:AgentName
The friendly name to identify the agent on the server.
/PoolName:PoolName
The pool that will contain the agent, for example: /PoolName:Default
/WorkFolder:WorkFolder
The default work folder location is a _work folder directly under the agent installation directory. You can change
the location to be outside of the agent installation directory, for example: /WorkFolder:C:\_work. One reason you
may want to do this is to avoid "path too long" issues on the file system.
/Force
Replaces the server registration if a conflicting agent exists on the server. A conflict can occur based on the agent
name, or based on the ID when a previously configured agent is being reconfigured in place without being
unconfigured first.
/NoStart
Used when configuring an interactive agent to indicate the agent should not be started after the configuration
completes.
/RunningAsService
Used to indicate the agent should be configured to run as a Windows service.
/StartMode:(Automatic|Manual|Disabled)
/ChangeWindowsServiceAccount
Change Windows service account supports the /NoPrompt switch for automated installation scenarios and will
return a non-zero exit code on failure.
For troubleshooting errors, detailed logs can be found in the _diag folder under the agent installation directory.
/Unconfigure
/Version
Prints the version number.
/?
Prints usage information.
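As a sketch only, an unattended configuration of the TFS 2015 agent using the parameters above might look like the
following; the server URL, agent name, pool, work folder, and service account are placeholders, and you should
confirm the exact switches with ConfigureAgent.cmd /? for your agent version:

ConfigureAgent.cmd /NoPrompt /ServerUrl:http://our-server:8080/tfs /Name:BuildAgent01 /PoolName:Default /WorkFolder:C:\agent\_work /RunningAsService /WindowsServiceLogonAccount:ourdomain\buildsvc /WindowsServiceLogonPassword:<password>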
Capabilities
Your agent's capabilities are cataloged and advertised in the pool so that only the builds and releases it can handle
are assigned to it. See Build and release agent capabilities.
In many cases, after you deploy an agent, you'll need to install software or utilities. Generally you should install on
your agents whatever software and tools you use on your development machine.
For example, if your build includes the npm task, then the build won't run unless there's a build agent in the pool
that has npm installed.
IMPORTANT
After you install new software on an agent, you must restart the agent for the new capability to show up in the pool so that
the build can run.
FAQ
What version of PowerShell do I need? Where can I get a newer version?
The Windows Agent requires PowerShell version 3 or later. To check your PowerShell version:
$PSVersionTable.PSVersion
TFS VERSION | MINIMUM AGENT VERSION
2015.1 | 1.89.0
2015.2 | 1.95.1
2015.3 | 1.95.3
Can I still configure and use XAML build controllers and agents?
Yes. If you are an existing customer with custom build processes you are not yet ready to migrate, you can
continue to use XAML builds, controllers, and agents.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Azure virtual machine scale set agents
11/7/2020 • 21 minutes to read
Azure Pipelines
Azure virtual machine scale set agents, hereafter referred to as scale set agents, are a form of self-hosted agents
that can be autoscaled to meet your demands. This elasticity reduces your need to run dedicated agents all the
time. Unlike Microsoft-hosted agents, you have flexibility over the size and the image of machines on which
agents run.
If you like Microsoft-hosted agents but are limited by what they offer, you should consider scale set agents. Here
are some examples:
You need more memory, more processing power, more storage, or more I/O than what we offer in native
Microsoft-hosted agents.
You need an NCv2 VM with particular instruction sets for machine learning.
You need to deploy to a private Azure App Service in a private VNET with no inbound connectivity.
You need to open your corporate firewall to specific IP addresses so that Microsoft-hosted agents can
communicate with your servers.
You need to restrict network connectivity of agent machines and allow them to reach only approved sites.
You can't get enough agents from Microsoft to meet your needs.
Your jobs exceed the Microsoft-hosted agent timeout.
You can't partition Microsoft-hosted parallel jobs to individual projects or teams in your organization.
You want to run several consecutive jobs on an agent to take advantage of incremental source and machine-
level package caches.
You want to run additional configuration or cache warmup before an agent begins accepting jobs.
If you like self-hosted agents but wish that you could simplify managing them, you should consider scale set
agents. Here are some examples:
You don't want to run dedicated agents around the clock. You want to de-provision agent machines that are
not being used to run jobs.
You run untrusted code in your pipeline and want to re-image agent machines after each job.
You want to simplify periodically updating the base image for your agents.
NOTE
You cannot run Mac agents using scale sets. You can only run Windows or Linux agents this way.
If your desired subscription isn't listed as the default, select your desired subscription.
az group create \
--location westus \
--name vmssagents
4. Create a virtual machine scale set in your resource group. In this example the UbuntuLTS VM image is
specified.
az vmss create \
--name vmssagentspool \
--resource-group vmssagents \
--image UbuntuLTS \
--vm-sku Standard_D2_v3 \
--storage-sku StandardSSD_LRS \
--authentication-type SSH \
--instance-count 2 \
--disable-overprovision \
--upgrade-policy-mode manual \
--single-placement-group false \
--platform-fault-domain-count 1 \
--load-balancer ""
Because Azure Pipelines manages the scale set, the following settings are required or recommended:
--disable-overprovision - required
--upgrade-policy-mode manual - required
--load-balancer "" - Azure Pipelines doesn't require a load balancer to route jobs to the agents in the
scale set agent pool, but configuring a load balancer is one way to get an IP address for your scale set
agents that you could use for firewall rules. Another option for getting an IP address for your scale set
agents is to create your scale set using the --public-ip-address options. For more information about
configuring your scale set with a load balancer or public IP address, see the Virtual Machine Scale Sets
documentation and az vmss create.
--instance-count 2 - this setting is not required, but it will give you an opportunity to verify that the
scale set is fully functional before you create an agent pool. Creation of the two VMs can take several
minutes. Later, when you create the agent pool, Azure Pipelines will delete these two VMs and create
new ones.
IMPORTANT
If you run this script using Azure CLI on Windows, you must enclose the "" in --load-balancer "" with single
quotes like this: --load-balancer '""'
If your VM size supports Ephemeral OS disks, the following parameters to enable Ephemeral OS disks are
optional but recommended to improve virtual machine reimage times.
--ephemeral-os-disk true
--os-disk-caching readonly
IMPORTANT
Ephemeral OS disks are not supported on all VM sizes. For list of supported VM sizes, see Ephemeral OS disks for
Azure VMs.
Select any Linux or Windows image - either from Azure Marketplace or your own custom image - to
create the scale set. Do not pre-install Azure Pipelines agent in the image. Azure Pipelines automatically
installs the agent as it provisions new virtual machines. In the above example, we used a plain UbuntuLTS
image. For instructions on creating and using a custom image, see FAQ.
Select any VM SKU and storage SKU.
NOTE
Licensing considerations limit us from distributing Microsoft-hosted images. We are unable to provide these
images for you to use in your scale set agents. But, the scripts that we use to generate these images are open
source. You are free to use these scripts and create your own custom images.
5. After creating your scale set, navigate to your scale set in the Azure portal and verify the following
settings:
Upgrade policy - Manual
You can also verify this setting by running the following Azure CLI command.
az vmss show --resource-group vmssagents --name vmssagentspool --output table
2. Select Azure virtual machine scale set for the pool type. Select the Azure subscription that contains
the scale set, choose Authorize , and choose the desired virtual machine scale set from that subscription.
If you have an existing service connection you can choose that from the list instead of the subscription.
IMPORTANT
To configure a scale set agent pool, you must have either Owner or User Access Administrator permissions on the
selected subscription. If you have one of these permissions but get an error when you choose Authorize , see
troubleshooting.
3. Choose the desired virtual machine scale set from that subscription.
4. Specify a name for your agent pool.
5. Configure the following options:
Automatically tear down virtual machines after every use - A new VM instance is used for
every job. After running a job, the VM will go offline and be reimaged before it picks up another job.
Save an unhealthy agent for investigation - Whether to save unhealthy agent VMs for
troubleshooting instead of deleting them.
Maximum number of virtual machines in the scale set - Azure Pipelines will automatically
scale-up the number of agents, but won't exceed this limit.
Number of agents to keep on standby - Azure Pipelines will automatically scale-down the
number of agents, but will ensure that there are always this many agents available to run new jobs. If
you set this to 0 , for example to conserve cost for a low volume of jobs, Azure Pipelines will start a VM
only when it has a job.
Delay in minutes before deleting excess idle agents - To account for the variability in build load
throughout the day, Azure Pipelines will wait this long before deleting an excess idle agent.
Configure VMs to run interactive tests (Windows Server OS Only) - Windows agents can either
be configured to run unelevated with autologon and with interactive UI, or they can be configured to
run with elevated permissions. Check this box to run unelevated with interactive UI. In either case, the
agent user is a member of the Administrators group.
6. When your settings are configured, choose Create to create the agent pool.
IMPORTANT
Caution must be exercised when making changes directly to the scale set in the Azure portal.
You may not change many of the scale set configuration settings in the Azure portal. Azure Pipelines updates the
configuration of the scale set. Any manual changes you make to the scale set may interfere with the operation of Azure
Pipelines.
You may not rename or delete a scale set without first deleting the scale set pool in Azure Pipelines.
NOTE
It can take an hour or more for Azure Pipelines to scale up or scale down the virtual machines. Azure Pipelines will scale up
in steps, monitor the operations for errors, and react by deleting unusable machines and by creating new ones in the
course of time. This corrective operation can take over an hour.
To achieve maximum stability, scale set operations are done sequentially. For example if the pool needs to scale
up and there are also unhealthy machines to delete, Azure Pipelines will first scale up the pool. Once the pool
has scaled up to reach the desired number of idle agents on standby, the unhealthy machines will be deleted,
depending on the Save an unhealthy agent for investigation setting. For more information, see Unhealthy
agents.
Due to the sampling size of 5 minutes, it is possible that all agents can be running pipelines for a short period of
time and no scaling up will occur.
IMPORTANT
The scripts executed in the Custom Script Extension must return with exit code 0 in order for the VM to finish the VM
creation process. If the custom script extension throws an exception or returns a non-zero exit code, the Azure
Pipelines extension will not be executed and the VM will not register with the Azure DevOps agent pool.
NOTE
These URLs may change.
5. The configuration script creates a local user named AzDevOps if the operating system is Windows Server
or Linux. For Windows 10 Client OS, the agent runs as LocalSystem. The script then unzips, installs, and
configures the Azure Pipelines Agent. As part of configuration, the agent registers with the Azure DevOps
agent pool and appears in the agent pool list in the Offline state.
6. For most scenarios, the configuration script then immediately starts the agent to run as the local user
AzDevOps . The agent goes Online and is ready to run pipeline jobs.
If the pool is configured for interactive UI, the virtual machine reboots after the agent is configured. After
reboot, the local user will auto-login and immediately start the pipelines agent. The agent then goes
Online and is ready to run pipeline jobs.
c. Deallocate the VM
e. Restart the VM
2. Remote Desktop (or SSH) to the VM's public IP address to customize the image. You may need to open
ports in the firewall to unblock the RDP (3389) or SSH (22) ports.
a. Windows - If <MyDiskSizeGb> is greater than 128 GB, extend the OS disk size to fill the disk size
you declared above.
Open DiskPart tool as administrator and run these DiskPart commands:
a. list volume (to see the volumes)
b. select volume 2 (depends on which volume is the OS drive)
c. extend size 72000 (to extend the drive by 72 GB, from 128 GB to 200 GB)
3. Install any additional software on the VM.
4. To customize the permissions of the pipeline agent user, you can create a user named AzDevOps , and
grant that user the permissions you require. This user will be created by the scale set agent startup script if
it does not already exist.
5. Reboot the VM when finished with customizations
6. Generalize the VM.
Windows - From an admin console window:
Linux :
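The commands for this step typically look like the following sketch (verify against the current Azure documentation
for your OS image):

Windows (admin command prompt): C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown
Linux: sudo waagent -deprovision+user -force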
IMPORTANT
Wait for the VM to finish generalization and shutdown. Do not proceed until the VM has stopped. Allow 60
minutes.
7. Deallocate the VM
az vm deallocate --resource-group <myResourceGroup> --name <MyVM>
9. Create a VM Image based on the generalized image. When performing these steps to update an existing
scale set image, make note of the image ID URL in the output.
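As a sketch of this step, assuming the same resource group and VM names used earlier, you would mark the VM as
generalized and then create the image, noting the image ID in the output of az image create:

az vm generalize --resource-group <myResourceGroup> --name <MyVM>
az image create --resource-group <myResourceGroup> --source <MyVM> --name MyImage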
11. Verify that both VMs created in the scale set come online, have different names, and reach the Succeeded
state
You are now ready to create an agent pool using this scale set.
Troubleshooting issues
Navigate to your Azure DevOps Project settings , select Agent pools under Pipelines , and select your agent
pool. Click the tab labeled Diagnostics .
The Diagnostic tab shows all actions executed by Azure DevOps to Create, Delete, or Reimage VMs in your Azure
Scale Set. Diagnostics also logs any errors encountered while trying to perform these actions. Review the errors
to make sure your scale set has sufficient resources to scale up. If your Azure subscription has reached the
resource limit in VMs, CPU cores, disks, or IP Addresses, those errors will show up here.
Unhealthy Agents
When agents or virtual machines are failing to start, not connecting to Azure DevOps, or going offline
unexpectedly, Azure DevOps logs the failures to the Agent Pool's Diagnostics tab and tries to delete the
associated virtual machine. Networking configuration, image customization, and pending reboots can cause
these issues. Connecting to the VM to debug and gather logs can help with the investigation.
If you would like Azure DevOps to save an unhealthy agent VM for investigation and not automatically delete it
when it detects the unhealthy state, navigate to your Azure DevOps Project settings , select Agent pools
under Pipelines , and select your agent pool. Choose Settings , select the option Save an unhealthy agent
for investigation , and choose Save .
Now, when an unhealthy agent is detected in the scale set, Azure DevOps saves that agent and associated virtual
machine. The saved agent will be visible on the Diagnostics tab of the Agent pool UI. Navigate to your Azure
DevOps Project settings , select Agent pools under Pipelines , select your agent pool, choose Diagnostics ,
and make note of the agent name.
Find the associated virtual machine in your Azure virtual machine scale set via the Azure portal, in the
Instances list.
To delete the saved agent when you are done with your investigation, navigate to your Azure DevOps Project
settings , select Agent pools under Pipelines , and select your agent pool. Choose the tab labeled
Diagnostics . Find the agent on the Agents saved for investigation card, and choose Delete . This removes
the agent from the pool and deletes the associated virtual machine.
FAQ
Where can I find the images used for Microsoft-hosted agents?
How do I configure scale set agents to run UI tests?
How can I delete agents?
Can I configure the scale set agent pool to have zero agents on standby?
Where can I find the images used for Microsoft-hosted agents?
Licensing considerations limit us from distributing Microsoft-hosted images. We are unable to provide these
images for you to use in your scale set agents. But, the scripts that we use to generate these images are open
source. You are free to use these scripts and create your own custom images.
How do I configure scale set agents to run UI tests?
Create a Scale Set with a Windows Server OS and when creating the Agent Pool select the "Configure VMs to
run interactive tests" option.
How can I delete agents?
Navigate to your Azure DevOps Project settings , select Agent pools under Pipelines , and select your agent
pool. Click the tab labeled Agents . Click the 'Enabled' toggle button to disable the agent. The disabled agent will
complete the pipeline it is currently running and will not pick up additional work. Within a few minutes after
completing its current pipeline job, the agent will be deleted.
Can I configure the scale set agent pool to have zero agents on standby?
Yes, if you set Number of agents to keep on standby to zero, for example to conserve cost for a low volume
of jobs, Azure Pipelines starts a VM only when it has a job.
Run a self-hosted agent behind a web proxy
2/26/2020 • 3 minutes to read
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
When your self-hosted agent requires a web proxy, you can inform the agent about the proxy during
configuration. This allows your agent to connect to Azure Pipelines or TFS through the proxy. This in turn allows
the agent to get sources and download artifacts. Finally, it passes the proxy details through to tasks which also
need proxy settings in order to reach the web.
We store your proxy credential responsibly on each platform to prevent accidental leakage. On Linux, the
credential is encrypted with a symmetric key based on the machine ID. On macOS, we use the Keychain. On
Windows, we use the Credential Store.
NOTE
Agent version 122.0, which shipped with TFS 2018 RTM, has a known issue configuring as a service on Windows. Because
the Windows Credential Store is per user, you must configure the agent using the same user the service is going to run as.
For example, in order to configure the agent service run as mydomain\buildadmin , you must launch config.cmd as
mydomain\buildadmin . You can do that by logging into the machine with that user or using Run as a different user
in the Windows shell.
How the agent handles the proxy within a build or release job
The agent will talk to Azure DevOps/TFS service through the web proxy specified in the .proxy file.
Since the code for the Get Sources task in builds and the Download Artifact task in releases is also baked into the
agent, those tasks follow the agent proxy configuration from the .proxy file.
The agent exposes proxy configuration via environment variables for every task execution. Task authors need to
use azure-pipelines-task-lib methods to retrieve proxy configuration and handle the proxy within their task.
Note that many tools do not automatically use the agent configured proxy settings. For example, tools such as
curl and dotnet may require proxy environment variables such as http_proxy to also be set on the machine.
In the agent root directory, create a .proxy file with your proxy server url.
Windows
macOS and Linux
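As a sketch, with a hypothetical proxy URL and port:

Windows (PowerShell):
echo "http://proxy.example.com:8888" | Out-File .proxy

macOS and Linux:
echo http://proxy.example.com:8888 > .proxy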
If your proxy doesn't require authentication, then you're ready to configure and run the agent. See Deploy an
agent on Windows.
NOTE
For backwards compatibility, if the proxy is not specified as described above, the agent also checks for a proxy URL from the
VSTS_HTTP_PROXY environment variable.
Proxy authentication
If your proxy requires authentication, the simplest way to handle it is to grant permissions to the user under
which the agent runs. Otherwise, you can provide credentials through environment variables. When you provide
credentials through environment variables, the agent keeps the credentials secret by masking them in job and
diagnostic logs. To grant credentials through environment variables, set the following variables:
Windows
macOS and Linux
$env:VSTS_HTTP_PROXY_USERNAME = "proxyuser"
$env:VSTS_HTTP_PROXY_PASSWORD = "proxypassword"
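On macOS and Linux, the equivalent is to export the same variables (placeholder values shown):

export VSTS_HTTP_PROXY_USERNAME=proxyuser
export VSTS_HTTP_PROXY_PASSWORD=proxypassword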
NOTE
This procedure enables the agent infrastructure to operate behind a web proxy. Your build pipeline and scripts must still
handle proxy configuration for each task and tool you run in your build. For example, if you are using a task that makes a
REST API call, you must configure the proxy for that task.
To allow the agent to bypass the proxy for specific hosts, you can list regular expressions (one per line) in a
.proxybypass file in the agent's root directory, for example:
github\.com
bitbucket\.com
Run a self-hosted agent in Docker
11/2/2020 • 9 minutes to read
This article provides instructions for running your Azure Pipelines agent in Docker. You can set up a self-hosted
agent in Azure Pipelines to run inside a Windows Server Core (for Windows hosts), or Ubuntu container (for Linux
hosts) with Docker. This is useful when you want to run agents with outer orchestration, such as Azure Container
Instances. In this article, you'll walk through a complete container example, including handling agent self-update.
Both Windows and Linux are supported as container hosts. You pass a few environment variables to docker run ,
which configures the agent to connect to Azure Pipelines or Azure DevOps Server. Finally, you customize the
container to suit your needs. Tasks and scripts might depend on specific tools being available on the container's
PATH , and it's your responsibility to ensure that these tools are available.
This feature requires agent version 2.149 or later. Azure DevOps Server 2019 didn't ship with a compatible agent version.
However, you can upload the correct agent package to your application tier if you want to run Docker agents.
Windows
Enable Hyper-V
Hyper-V isn't enabled by default on Windows. If you want to provide isolation between containers, you must
enable Hyper-V. Otherwise, Docker for Windows won't start.
Enable Hyper-V on Windows 10
Enable Hyper-V on Windows Server 2016
NOTE
You must enable virtualization on your machine. It's typically enabled by default. However, if Hyper-V installation fails, refer
to your system documentation for how to enable virtualization.
mkdir C:\dockeragent
4. Save the following content to a file called C:\dockeragent\Dockerfile (no file extension):
FROM mcr.microsoft.com/windows/servercore:ltsc2019
WORKDIR /azp
COPY start.ps1 .
CMD powershell .\start.ps1
$Env:AZP_TOKEN_FILE = "\azp\.token"
$Env:AZP_TOKEN | Out-File -FilePath $Env:AZP_TOKEN_FILE
}
Remove-Item Env:AZP_TOKEN
Set-Location agent
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$(Get-Content
${Env:AZP_TOKEN_FILE})"))
$package = Invoke-RestMethod -Headers @{Authorization=("Basic $base64AuthInfo")}
"$(${Env:AZP_URL})/_apis/distributedtask/packages/agent?platform=win-x64&`$top=1"
$packageUrl = $package[0].Value.downloadUrl
Write-Host $packageUrl
Write-Host "2. Downloading and installing Azure Pipelines agent..." -ForegroundColor Cyan
try
{
Write-Host "3. Configuring Azure Pipelines agent..." -ForegroundColor Cyan
.\config.cmd --unattended `
--agent "$(if (Test-Path Env:AZP_AGENT_NAME) { ${Env:AZP_AGENT_NAME} } else { ${Env:computername}
})" `
--url "$(${Env:AZP_URL})" `
--auth PAT `
--token "$(Get-Content ${Env:AZP_TOKEN_FILE})" `
--pool "$(if (Test-Path Env:AZP_POOL) { ${Env:AZP_POOL} } else { 'Default' })" `
--work "$(if (Test-Path Env:AZP_WORK) { ${Env:AZP_WORK} } else { '_work' })" `
--replace
.\run.cmd
}
finally
{
Write-Host "Cleanup. Removing Azure Pipelines agent..." -ForegroundColor Cyan
Optionally, you can control the pool and agent work directory by using additional environment variables.
If you want a fresh agent container for every pipeline run, pass the --once flag to the run command. You must
also use a container orchestration system, like Kubernetes or Azure Container Instances, to start new copies of the
container when the work completes.
Linux
Install Docker
Depending on your Linux Distribution, you can either install Docker Community Edition or Docker Enterprise
Edition.
Create and build the Dockerfile
Next, create the Dockerfile.
1. Open a terminal.
2. Create a new directory (recommended):
mkdir ~/dockeragent
cd ~/dockeragent
FROM ubuntu:18.04
# Install the basic dependencies the agent and start.sh rely on;
# add any other tools your pipeline tasks need (see the note below).
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates curl git jq libicu60
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
CMD ["./start.sh"]
NOTE
Tasks might depend on executables that your container is expected to provide. For instance, you must add the zip
and unzip packages to the RUN apt-get command in order to run the ArchiveFiles and ExtractFiles
tasks.
5. Save the following content to ~/dockeragent/start.sh , making sure to use Unix-style (LF) line endings:
#!/bin/bash
set -e
if [ -z "$AZP_URL" ]; then
echo 1>&2 "error: missing AZP_URL environment variable"
exit 1
fi
if [ -z "$AZP_TOKEN_FILE" ]; then
if [ -z "$AZP_TOKEN" ]; then
echo 1>&2 "error: missing AZP_TOKEN environment variable"
exit 1
fi

AZP_TOKEN_FILE=/azp/.token
echo -n $AZP_TOKEN > "$AZP_TOKEN_FILE"
fi
unset AZP_TOKEN
if [ -n "$AZP_WORK" ]; then
mkdir -p "$AZP_WORK"
fi
rm -rf /azp/agent
mkdir /azp/agent
cd /azp/agent
export AGENT_ALLOW_RUNASROOT="1"
cleanup() {
if [ -e config.sh ]; then
print_header "Cleanup. Removing Azure Pipelines agent..."
./config.sh remove --unattended --auth PAT --token $(cat "$AZP_TOKEN_FILE")
fi
}
print_header() {
lightcyan='\033[1;36m'
nocolor='\033[0m'
echo -e "${lightcyan}$1${nocolor}"
}
AZP_AGENT_RESPONSE=$(curl -LsS \
-u user:$(cat "$AZP_TOKEN_FILE") \
-H 'Accept:application/json;api-version=3.0-preview' \
"$AZP_URL/_apis/distributedtask/packages/agent?platform=linux-x64")
source ./env.sh
./config.sh --unattended \
--agent "${AZP_AGENT_NAME:-$(hostname)}" \
--url "$AZP_URL" \
--auth PAT \
--token $(cat "$AZP_TOKEN_FILE") \
--pool "${AZP_POOL:-Default}" \
--work "${AZP_WORK:-_work}" \
--replace \
--acceptTeeEula & wait $!
# `exec` the node runtime so it's aware of TERM and INT signals
# AgentService.js understands how to handle agent self-update and restart
# Running it with the --once flag at the end will shut down the agent after the build is executed
exec ./externals/node/bin/node ./bin/AgentService.js interactive
Optionally, you can control the pool and agent work directory by using additional environment variables.
If you want a fresh agent container for every pipeline run, pass the --once flag to the run command. You must
also use a container orchestration system, like Kubernetes or Azure Container Instances, to start new copies of the
container when the work completes.
Environment variables
ENVIRONMENT VARIABLE | DESCRIPTION
AZP_URL | The URL of the Azure DevOps or Azure DevOps Server instance.
AZP_TOKEN | Personal access token (PAT) used to register the agent.
AZP_AGENT_NAME | Agent name (defaults to the container hostname).
AZP_POOL | Agent pool name (defaults to Default).
AZP_WORK | Work directory (defaults to _work).
Doing this has serious security implications. The code inside the container can now run as root on your Docker
host.
If you're sure you want to do this, see the bind mount documentation on Docker.com.
This Kubernetes YAML creates a replica set and a deployment, where replicas: 1 indicates the number of
agents that are running on the cluster.
5. Run this command:
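A typical invocation, assuming the manifest from the previous step was saved as ReplicationController.yml (the file
name is an assumption):

kubectl apply -f ReplicationController.yml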
Common errors
If you're using Windows, and you get the following error:
Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
This topic explains how to run a v2 self-hosted agent with a self-signed certificate.
This error may indicate the server certificate you used on your TFS server is not trusted by the build machine.
Make sure you install your self-signed ssl server certificate into the OS certificate store.
You can easily verify whether the certificate has been installed correctly by running a few commands. You should be
good as long as the SSL handshake finishes correctly, even if you get a 401 for the request.
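For example, a quick check with curl (substitute your server URL); the verbose output shows the TLS handshake, and
a completed handshake followed by an HTTP 401 response indicates the certificate is trusted:

curl -v https://your-tfs-server/tfs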
If you can't install the certificate into your machine's certificate store for some reason (for example, you don't have
permission, or you're on a customized Linux machine), agent version 2.125.0 or above has the ability to ignore SSL
server certificate validation errors.
IMPORTANT
This is not secure and not recommended. We strongly suggest that you install the certificate into your machine's certificate store.
./config.cmd/sh --sslskipcertvalidation
NOTE
There is a limitation to using this flag on Linux and macOS:
the libcurl library on your Linux or macOS machine needs to be built with OpenSSL. More Detail
Git get sources fails with SSL certificate problem (Windows agent only)
We ship command-line Git as part of the Windows agent. We use this copy of Git for all Git-related operations.
When you have a self-signed SSL certificate for your on-premises TFS server, make sure to configure the Git we
shipped to allow that self-signed SSL certificate. There are two approaches to solve the problem.
1. Set the following Git configuration at the global level, as the user the agent runs as.
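A sketch of that configuration, assuming the self-signed certificate has been exported to a .pem file (the path is a
placeholder):

git config --global http.sslCAInfo C:\full\path\to\self-signed-cert.pem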
NOTE
Setting system level Git config is not reliable on Windows. The system .gitconfig file is stored with the copy of Git we
packaged, which will get replaced whenever the agent is upgraded to a new version.
2. Enable git to use SChannel during configure with 2.129.0 or higher version agent Pass --gituseschannel
during agent configuration
./config.cmd --gituseschannel
NOTE
Git SChannel has stricter requirements for your self-signed certificate. A self-signed certificate generated by
IIS or a PowerShell command may not be compatible with SChannel.
Your client certificate private key password is securely stored on each platform.
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are
called phases.
A deployment group is a logical set of deployment target machines that have agents installed on each one.
Deployment groups represent the physical environments; for example, "Dev", "Test", "UAT", and "Production". In
effect, a deployment group is just another grouping of agents, much like an agent pool.
When authoring an Azure Pipelines or TFS Release pipeline, you can specify the deployment targets for a job
using a deployment group. This makes it easy to define parallel execution of deployment tasks.
Deployment groups:
Specify the security context and runtime targets for the agents. As you create a deployment group, you
add users and give them appropriate permissions to administer, manage, view, and use the group.
Let you view live logs for each server as a deployment takes place, and download logs for all servers to
track your deployments down to individual machines.
Enable you to use machine tags to limit deployment to specific sets of target servers.
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
Deployment groups make it easy to define logical groups of target machines for deployment, and install the
required agent on each machine. This topic explains how to create a deployment group, and install and provision
the agent on each virtual or physical machine in your deployment group.
You can install the agent in any one of these ways:
Run the script that is generated automatically when you create a deployment group.
Install the Azure Pipelines Agent Azure VM extension on each of the VMs.
Use the Azure Resource Group Deployment task in your release pipeline.
For information about agents and pipelines, see:
Parallel jobs in Team Foundation Server.
Parallel jobs in Azure Pipelines.
Pricing for Azure Pipelines features
When prompted to configure tags for the agent, press Y and enter any tags you will use to identify
subsets of the machines in the group for partial deployments.
Tags you assign allow you to limit deployment to specific servers when the deployment group is
used in a Run on machine group job.
When prompted for the user account, press Return to accept the defaults.
Wait for the script to finish with the message
Service vstsagent.{organization-name}.{computer-name} started successfully .
7. In the Deployment groups page of Azure Pipelines , open the Machines tab and verify that the agents
are running. If the tags you configured are not visible, refresh the page.
4. In the Install extension blade, specify the name of the Azure Pipelines subscription to use. For example, if
the URL is https://dev.azure.com/contoso , just specify contoso .
5. Specify the project name and the deployment group name.
6. Optionally, specify a name for the agent. If not specified, it uses the VM name appended with -DG .
7. Enter the Personal Access Token (PAT) to use for authentication against Azure Pipelines.
8. Optionally, specify a comma-separated list of tags that will be configured on the agent. Tags are not case-
sensitive, and each must be no more than 256 characters.
9. Choose OK to begin installation of the agent on this VM.
10. Add the extension to any other VMs you want to include in this deployment group.
"resources": [
{
"name": "[concat(parameters('vmNamePrefix'),copyIndex(),'/TeamServicesAgent')]",
"type": "Microsoft.Compute/virtualMachines/extensions",
"location": "[parameters('location')]",
"apiVersion": "2015-06-15",
"dependsOn": [
"[resourceId('Microsoft.Compute/virtualMachines/',
concat(parameters('vmNamePrefix'),copyindex()))]"
],
"properties": {
"publisher": "Microsoft.VisualStudio.Services",
"type": "TeamServicesAgent",
"typeHandlerVersion": "1.0",
"autoUpgradeMinorVersion": true,
"settings": {
"VSTSAccountName": "[parameters('VSTSAccountName')]",
"TeamProject": "[parameters('TeamProject')]",
"DeploymentGroup": "[parameters('DeploymentGroup')]",
"AgentName": "[parameters('AgentName')]",
"Tags": "[parameters('Tags')]"
},
"protectedSettings": {
"PATToken": "[parameters('PATToken')]"
}
}
}
]
where:
VSTSAccountName is required. The Azure Pipelines subscription to use. Example: If your URL is
https://dev.azure.com/contoso , just specify contoso
TeamProject is required. The project that has the deployment group defined within it
DeploymentGroup is required. The deployment group against which the deployment agent will be registered
AgentName is optional. If not specified, the VM name with -DG appended will be used
Tags is optional. A comma-separated list of tags that will be set on the agent. Tags are not case sensitive and
each must be no more than 256 characters
PATToken is required. The Personal Access Token that will be used to authenticate against Azure Pipelines to
download and configure the agent
NOTE
If you are deploying to a Linux VM, ensure that the type parameter in the code is TeamServicesAgentLinux .
For more information about ARM templates, see Define resources in Azure Resource Manager templates.
To use the template:
1. In the Deployment groups tab of Azure Pipelines , choose +New to create a new group.
2. Enter a name for the group, and optionally a description, then choose Create .
3. In the Releases tab of Azure Pipelines , create a release pipeline with a stage that contains the Azure
Resource Group Deployment task.
4. Provide the parameters required for the task such as the Azure subscription, resource group name, location,
and template information, then save the release pipeline.
5. Create a release from the release pipeline to install the agents.
Install agents using the advanced deployment options
1. In the Deployment groups tab of Azure Pipelines , choose +New to create a new group.
2. Enter a name for the group, and optionally a description, then choose Create .
3. In the Releases tab of Azure Pipelines , create a release pipeline with a stage that contains the Azure
Resource Group Deployment task.
4. Select the task and expand the Advanced deployment options for vir tual machines section. Configure
the parameters in this section as follows:
Enable Prerequisites : select Configure with Deployment Group Agent .
Azure Pipelines/TFS endpoint : Select an existing Team Foundation Server/TFS service connection
that points to your target. Agent registration for deployment groups requires access to your Visual
Studio project. If you do not have an existing service connection, choose Add and create one now.
Configure it to use a Personal Access Token (PAT) with scope restricted to Deployment Group .
Project : Specify the project containing the deployment group.
Deployment Group : Specify the name of the deployment group against which the agents will be
registered.
Copy Azure VM tags to agents : When set (ticked), any tags already configured on the Azure VM
will be copied to the corresponding deployment group agent. By default, all Azure tags are copied
using the format Key: Value . For example, Role: Web .
5. Provide the other parameters required for the task such as the Azure subscription, resource group name,
and location, then save the release pipeline.
6. Create a release from the release pipeline to install the agents.
Related topics
Run on machine group job
Deploy an agent on Windows
Deploy an agent on macOS
Deploy an agent on Linux
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
Deployment groups make it easy to define groups of target servers for deployment. Tasks that you define in a
deployment group job run on some or all of the target servers, depending on the arguments you specify for the
tasks and the job itself.
You can select specific sets of servers from a deployment group to receive the deployment by specifying the
machine tags that you have defined for each server in the deployment group. You can also specify the proportion
of the target servers that the pipeline should deploy to at the same time. This ensures that the app running on
these servers is capable of handling requests while the deployment is taking place.
YAML
Classic
NOTE
Deployment group jobs are not yet supported in YAML. You can use Virtual machine resources in Environments to do a
rolling deployment to VMs in YAML pipelines.
Rolling deployments can be configured by specifying the keyword rolling: under strategy: node of a
deployment job.
strategy:
rolling:
maxParallel: [ number or percentage as x% ]
preDeploy:
steps:
- script: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
deploy:
steps:
...
routeTraffic:
steps:
...
postRouteTraffic:
steps:
...
on:
failure:
steps:
...
success:
steps:
...
YAML builds are not yet available on TFS.
Timeouts
Use the job timeout to specify the timeout in minutes for jobs in this pipeline. A zero value for this option means that
the timeout is effectively infinite and so, by default, jobs run until they complete or fail. You can also set the
timeout for each task individually - see task control options. Jobs targeting Microsoft-hosted agents have
additional restrictions on how long they may run.
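In a YAML pipeline, the equivalent job-level setting is timeoutInMinutes; a minimal sketch:

jobs:
- job: Build
  timeoutInMinutes: 60   # 0 means no explicit limit (still subject to agent and plan maximums)
  steps:
  - script: echo Building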
Related topics
Jobs
Conditions
Deploy to Azure VMs using deployment groups in
Azure Pipelines
11/2/2020 • 7 minutes to read
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
In earlier versions of Azure Pipelines, applications that needed to be deployed to multiple servers required a
significant amount of planning and maintenance. Windows PowerShell remoting had to be enabled manually,
required ports opened, and deployment agents installed on each of the servers. The pipelines then had to be
managed manually if a roll-out deployment was required.
The introduction of deployment groups addresses all of these challenges.
A deployment group installs a deployment agent on each of the target servers in the configured group and
instructs the release pipeline to gradually deploy the application to those servers. Multiple pipelines can be created
for the roll-out deployments so that the latest version of an application can be delivered in a phased manner to
multiple user groups for validation of newly introduced features.
NOTE
Deployment groups are a concept used in Classic pipelines. If you are using YAML pipelines, see Environments.
Prerequisites
A Microsoft Azure account.
An Azure DevOps organization.
Use the Azure DevOps Demo Generator to provision the tutorial project on your Azure DevOps organization.
NOTE
It takes approximately 10-15 minutes to complete the deployment. If you receive any naming conflict errors, try
changing the parameter you provide for Env Prefix Name .
2. Once the deployment completes, you can review all of the resources generated in the specified resource
group using the Azure portal. Select the DB server VM with sqlSrv in its name to view its details.
3. Make a note of the DNS name . This value is required in a later step. You can use the copy button to copy it
to the clipboard.
8. Select the Azure Resource Group Deployment task. Configure a service connection to the Azure
subscription used earlier to create infrastructure. After authorizing the connection, select the resource group
created for this tutorial.
9. This task will run on the virtual machines hosted in Azure, and will need to be able to connect back to this
pipeline in order to complete the deployment group requirements. To secure the connection, you will need a
personal access token (PAT) . From the User settings dropdown, open Personal access tokens in a
new tab. Most browsers support opening a link in a new tab via right-click context menu or Ctrl+Click .
10. In the new tab, select New Token .
11. Enter a name and select the Full access scope. Select Create to create the token. Once created, copy the
token and close the browser tab. You return to the Azure Pipeline editor.
13. Enter the Connection URL to the current instance of Azure DevOps. This URL is something like
https://dev.azure.com/[Your account] . Paste in the Personal Access Token created earlier and specify a
Service connection name . Select Verify and save .
14. Select the current Team project and the Deployment group created earlier.
15. Select the Deployment group phase stage. This stage executes tasks on the machines defined in the
deployment group. This stage is linked to the SQL-Svr-DB tag. Choose the Deployment Group from the
dropdown.
16. Select the IIS Deployment phase stage. This stage deploys the application to the web servers using the
specified tasks. This stage is linked to the WebSrv tag. Choose the Deployment Group from the
dropdown.
17. Select the Disconnect Azure Network Load Balancer task. As the target machines are connected to the
NLB, this task will disconnect the machines from the NLB prior to the deployment and reconnect them back
to the NLB after the deployment. Configure the task to use the Azure connection, resource group, and load
balancer (there should only be one).
18. Select the IIS Web App Manage task. This task runs on the deployment target machines registered with
the deployment group configured for the task/stage. It creates a web app and application pool locally with
the name PartsUnlimited running on port 80.
19. Select the IIS Web App Deploy task. This task runs on the deployment target machines registered with the
deployment group configured for the task/stage. It deploys the application to the IIS server using Web
Deploy .
20. Select the Connect Azure Network Load Balancer task. Configure the task to use the Azure connection,
resource group, and load balancer (there should only be one).
21. Select the Variables tab and enter the variable values as below.
VARIABLE NAME | VARIABLE VALUE
DatabaseName PartsUnlimited-Dev
DBPassword P2ssw0rd@123
DBUserName sqladmin
ServerName localhost
IMPORTANT
Make sure to replace your SQL server DNS name (which you noted from the Azure portal earlier) in the
DefaultConnectionString variable.
Your DefaultConnectionString should be similar to this string after replacing the SQL DNS:
Data Source=cust1sqljo5zndv53idtw.westus2.cloudapp.azure.com;Initial Catalog=PartsUnlimited-Dev;User
ID=sqladmin;Password=P2ssw0rd@123;MultipleActiveResultSets=False;Connection Timeout=30;
4. Copy the DNS of the VM. The Azure Load Balancer will distribute incoming traffic among healthy
instances of servers defined in a load-balanced set. As a result, the DNS of all web server instances is the
same.
5. Open a new browser tab to the DNS of the VM. Confirm the deployed app is running.
Summary
In this tutorial, you deployed a web application to a set of Azure VMs using Azure Pipelines and Deployment
Groups. While this scenario covered a handful of machines, you can easily scale the process up to support
hundreds, or even thousands, of machines using virtually any configuration.
Cleaning up resources
This tutorial created an Azure DevOps project and some resources in Azure. If you're not going to continue to use
these resources, delete them with the following steps:
1. Delete the Azure DevOps project created by the Azure DevOps Demo Generator.
2. All Azure resources created during this tutorial were assigned to the resource group specified during
creation. Deleting that group will delete the resources it contains. This deletion can be done via the CLI or
portal.
Next steps
Provision agents for deployment groups
Set retention policies for builds, tests, and releases
11/2/2020 • 17 minutes to read • Edit Online
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
In this article, learn how to manage the retention policies for your project.
Retention policies let you set how long to keep runs, tests, and releases stored in the system. To save storage
space, you want to delete older runs, tests, and releases.
The following retention policies are available in Azure DevOps in your Project settings :
Pipeline - Set how long to keep artifacts, symbols, attachments, runs, and pull request runs.
Release (classic) - Set whether to save builds and view the default and maximum retention settings.
Test - Set how long to keep automated and manual test runs, results, and attachments.
If you are using an on-premises server, you can also specify retention policy defaults for a project and when
releases are permanently destroyed. Learn more about release retention.
Prerequisites
By default, members of the Contributors, Build Admins, Project Admins, and Release Admins groups can manage
retention policies.
To manage test results, you must have one of the following subscriptions:
Enterprise
Test Professional
MSDN Platforms
You can also buy monthly access to Azure Test Plans and assign the Basic + Test Plans access level. See Testing
access by user role.
The setting for number of recent runs to keep for each pipeline requires a little more explanation. The
interpretation of this setting varies based on the type of repository you build in your pipeline.
Azure Repos: Azure Pipelines always retains the configured number of latest runs for the default branch
and for each protected branch of the repository. A branch that has any branch policies configured is
considered to be a protected branch. As an example, consider a repository with the default branch called
main . Also, let us assume that the release branch in this repository has a branch policy. In this case, if
you configured the policy to retain 3 runs, then the latest 3 runs of main as well as the latest 3 runs of
release branch are retained. In addition, the latest 3 runs of this pipeline (irrespective of the branch) are
also retained.
To clarify this logic further, let us say that the list of runs for this pipeline is as follows with the most recent
run at the top. The table shows which runs will be retained if you have configured to retain the latest 3
runs (ignoring the effect of the number of days setting):
RUN # | BRANCH | RETAINED / NOT RETAINED | WHY?
All other Git repositories: Azure Pipelines retains the configured number of latest runs for the default
branch of the repository and for the whole pipeline.
TFVC: Azure Pipelines retains the configured number of latest runs for the whole pipeline, irrespective of
the branch.
What parts of the run get deleted
When the retention policies mark a build for deletion, you can control which information related to the build is
deleted:
Build record: You can choose to delete the entire build record or keep basic information about the build even
after the build is deleted.
Source label: If you label sources as part of the build, then you can choose to delete the tag (for Git) or the
label (for TFVC) created by a build.
Automated test results: You can choose to delete the automated test results associated with the build (for
example, results published by the Publish Test Results build task).
The following information is deleted when a build is deleted:
Logs
Published artifacts
Published symbols
The following information is deleted when a run is deleted:
Logs
All artifacts
All symbols
Binaries
Test results
Run metadata
When are runs deleted
Your retention policies are processed once a day. The time that the policies get processed varies because we
spread the work throughout the day for load-balancing purposes. There is no option to change this process.
A run is deleted if all of the following conditions are true:
It exceeds the number of days configured in the retention settings
It is not one of the recent runs as configured in the retention settings
It is not marked to be retained indefinitely
It is not retained by a release
Your retention policies run every day at 3:00 A.M. UTC. There is no option to change the time the policies run.
Delete a run
You can delete runs using the context menu on the Pipeline run details page.
NOTE
If any retention policies currently apply to the run, they must be removed before the run can be deleted. For instructions,
see Pipeline run details - delete a run.
Set release retention policies
NOTE
If you are using Azure Pipelines, you can view but not change the global release retention policies for your project.
The release retention policies for a classic release pipeline determine how long a release and the run linked to it
are retained. Using these policies, you can control how many days you want to keep each release after it has
been last modified or deployed and the minimum number of releases that should be retained for each
pipeline.
The retention timer on a release is reset every time a release is modified or deployed to a stage. The minimum
number of releases to retain setting takes precedence over the number of days. For example, if you specify to
retain a minimum of three releases, the most recent three will be retained indefinitely - irrespective of the
number of days specified. However, you can manually delete these releases when you no longer require them.
See FAQ below for more details about how release retention works.
As an author of a release pipeline, you can customize retention policies for releases of your pipeline on the
Retention tab.
The retention policy for YAML and build pipelines is the same. You can see your pipeline's retention settings in
Project Settings for Pipelines in the Settings section.
You can also customize these policies on a stage-by-stage basis.
Global release retention policy
If you are using an on-premises Team Foundation Server, you can specify release retention policy defaults and
maximums for a project. You can also specify when releases are permanently destroyed (removed from the
Deleted tab in the build explorer).
If you are using Azure Pipelines, you can view but not change these settings for your project.
Global release retention policy settings can be managed from the Release settings of your project:
Azure Pipelines:
https://dev.azure.com/{organization}/{project}/_settings/release?app=ms.vss-build-web.build-release-hub-
group
On-premises:
https://{your_server}/tfs/{collection_name}/{project}/_admin/_apps/hub/ms.vss-releaseManagement-
web.release-project-admin-hub
The maximum retention policy sets the upper limit for how long releases can be retained for all release
pipelines. Authors of release pipelines cannot configure settings for their definitions beyond the values specified
here.
The default retention policy sets the default retention values for all the release pipelines. Authors of build
pipelines can override these values.
The destruction policy helps you keep the releases for a certain period of time after they are deleted. This
policy cannot be overridden in individual release pipelines.
NOTE
In TFS, release retention management is restricted to specifying the number of days, and this is available only in TFS
2015.3 and newer.
In this example, if a release that is deployed to Dev is not promoted to QA for 10 days, it is a potential candidate
for deletion. However, if that same release is deployed to QA eight days after being deployed to Dev, its retention
timer is reset, and it is retained in the system for another 30 days.
When specifying custom policies per pipeline, you cannot exceed the maximum limits set by the administrator.
Interaction between build and release retention policies
The build linked to a release has its own retention policy, which may be shorter than that of the release. If you
want to retain the build for the same period as the release, set the Retain associated ar tifacts checkbox for
the appropriate stages. This overrides the retention policy for the build, and ensures that the artifacts are
available if you need to redeploy that release.
When you delete a release pipeline, delete a release, or when the retention policy deletes a release automatically,
the retention policy for the associated build will determine when that build is deleted.
NOTE
In TFS, interaction between build and release retention is available in TFS 2017 and newer.
3. In the Retention page under the Test section, select a limit for how long you want to keep manual test data.
Automated test-runs retention policies
By default, Azure DevOps keeps automated test results related to builds only as long as you keep those builds. To
keep test results after you delete your builds, edit the build retention policy. If you use Git for version control, you
can specify how long to keep automated test results based on the branch.
1. Sign into Azure DevOps. You'll need at least build level permissions to edit build pipelines.
2. Go to your project and then select project settings at the bottom of the page.
You can also customize these policies on a branch-by-branch basis if you are building from Git repositories.
The maximum retention policy sets the upper limit for how long runs can be retained for all build pipelines.
Authors of build pipelines cannot configure settings for their definitions beyond the values specified here.
The default retention policy sets the default retention values for all the build pipelines. Authors of build
pipelines can override these values.
The Permanently destroy releases setting helps you keep the runs for a certain period of time after they are deleted.
This policy cannot be overridden in individual build pipelines.
Git repositories
If your repository type is one of the following, you can define multiple retention policies with branch filters:
Azure Repos Git or TFS Git.
GitHub.
Other/external Git.
For example, your team may want to keep:
User branch builds for five days, with a minimum of a single successful or partially successful build for each
branch.
Main and feature branch builds for 10 days, with a minimum of three successful or partially successful builds
for each of these branches. You exclude a special feature branch that you want to keep for a longer period of
time.
Builds from the special feature branch and all other branches for 15 days, with a minimum of a single
successful or partially successful build for each branch.
The following example retention policy for a build pipeline meets the above requirements:
When specifying custom policies for each pipeline, you cannot exceed the maximum limits set by the administrator.
Clean up pull request builds
If you protect your Git branches with pull request builds, then you can use retention policies to automatically
delete the completed builds. To do it, add a policy that keeps a minimum of 0 builds with the following branch
filter:
refs/pull/*
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
This article describes the licensing model for Azure Pipelines in Team Foundation Server 2017 (TFS 2017) or
newer. We don't charge you for Team Foundation Build (TFBuild) so long as you have a TFS Client Access License
(CAL).
A TFS parallel job gives you the ability to run a single release at a time in a project collection. You can keep
hundreds or even thousands of release jobs in your collection. But, to run more than one release at a time, you
need additional parallel jobs.
One free parallel job is included with every collection in a Team Foundation Server. Every Visual Studio
Enterprise subscriber in a Team Foundation Server contributes one additional parallel job.
You can buy additional private jobs from the Visual Studio Marketplace.
IMPORTANT
Starting with Azure DevOps Server 2019, you do not have to pay for self-hosted concurrent jobs in releases. You are only
limited by the number of agents that you have.
Learn how to estimate how many parallel jobs you need and buy more parallel jobs for your organization.
NUMBER OF PARALLEL JOBS AND TIME LIMIT
Public project: 10 free Microsoft-hosted parallel jobs that can run for up to 360 minutes (6 hours) each time; no overall time limit per month.
Private project: One free job that can run for up to 60 minutes each time; 1,800 minutes (30 hours) per month.
When the free tier is no longer sufficient, you can pay for additional capacity per parallel job. Paid parallel jobs
remove the monthly time limit and allow you to run each job for up to 360 minutes (6 hours). Buy Microsoft-
hosted parallel jobs.
When you purchase your first Microsoft-hosted parallel job, the number of parallel jobs you have in the
organization is still 1. To be able to run two jobs concurrently, you will need to purchase two parallel jobs if you
are currently on the free tier. The first purchase only removes the time limits on the first job.
TIP
If your pipeline exceeds the maximum job timeout, try splitting your pipeline into multiple jobs. For more information on
jobs, see Specify jobs in your pipeline.
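As a rough sketch of that suggestion, the work could be split into two YAML jobs so that each stays within its own timeout; the job names and steps are placeholders:
jobs:
- job: Build
  steps:
  - script: echo Build the app here
    displayName: Build
- job: Test
  dependsOn: Build           # runs after Build; each job gets its own timeout budget
  steps:
  - script: echo Run the long test suite here
    displayName: Test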
Do I need parallel jobs in TFS 2015? Short answer: no.
2. View the maximum number of parallel jobs that are available in your organization.
3. Select View in-progress jobs to display all the builds and releases that are actively consuming an
available parallel job or that are queued waiting for a parallel job to be available.
Estimate costs
A simple rule of thumb: Estimate that you'll need one parallel job for every four to five users in your
organization.
In the following scenarios, you might need multiple parallel jobs:
If you have multiple teams, and if each of them require CI, you'll likely need a parallel job for each team.
If your CI trigger applies to multiple branches, you'll likely need a parallel job for each active branch.
If you develop multiple applications by using one organization or server, you'll likely need additional parallel
jobs: one to deploy each application at the same time.
IMPORTANT
Hosted XAML build controller isn't supported. If you have an organization where you need to run XAML builds, set up an
on-premises build server and switch to an on-premises build controller. For more information about the hosted XAML
model, see Get started with XAML.
FAQ
How do I qualify for the free tier of public projects?
We'll automatically apply the free tier limits for public projects if you meet both of these conditions:
Your pipeline is part of an Azure Pipelines public project.
Your pipeline builds a public repository from GitHub or from the same public project in your Azure DevOps
organization.
Can I assign a parallel job to a specific project or agent pool?
Currently, there isn't a way to partition or dedicate parallel job capacity to a specific project or agent pool. For
example:
You purchase two parallel jobs in your organization.
You start two runs in the first project, and both the parallel jobs are consumed.
You start a run in the second project. That run won't start until one of the runs in your first project is
completed.
Are there limits on who can use Azure Pipelines?
You can have as many users as you want when you're using Azure Pipelines. There is no per-user charge for
using Azure Pipelines. Users with both basic and stakeholder access can author as many builds and releases as
they want.
Are there any limits on the number of builds and release pipelines that I can create?
No. You can create hundreds or even thousands of pipelines for no charge. You can register any number of self-
hosted agents for no charge.
As a Visual Studio Enterprise subscriber, do I get additional parallel jobs for TFS and Azure Pipelines?
Yes. Visual Studio Enterprise subscribers get one parallel job in Team Foundation Server 2017 or later and one
self-hosted parallel job in each Azure DevOps Services organization where they are a member.
What about the option to pay for hosted agents by the minute?
Some of our earlier customers are still on a per-minute plan for the hosted agents. In this plan, you pay
$0.05/minute for the first 20 hours after the free tier, and $0.01/minute after 20 hours. Because of the following
limitations in this plan, you might want to consider moving to the parallel jobs model:
When you're using the per-minute plan, you can run only one job at a time.
If you run builds for more than 14 paid hours in a month, the per-minute plan might be less cost-effective
than the parallel jobs model.
I use XAML build controllers with my organization. How am I charged for those?
You can register one XAML build controller for each self-hosted parallel job in your organization. Your
organization gets at least one free self-hosted parallel job, so you can register one XAML build controller for no
additional charge. For each additional XAML build controller, you'll need an additional self-hosted parallel job.
Who can use the system?
TFS users with a TFS CAL can author as many releases as they want.
To approve releases, a TFS CAL is not necessary; any user with stakeholder access can approve or reject releases.
Do I need parallel jobs to run builds on TFS?
No, on TFS you don't need parallel jobs to run builds. You can run as many builds as you want at the same time
for no additional charge.
Do I need parallel jobs to manage releases in versions before TFS 2017?
No.
In TFS 2015, so long as your users have a TFS CAL, they can manage releases for no additional charge in trial
mode. We called it "trial mode" to indicate that we would eventually charge for managing releases. Despite this
label, we fully support managing releases in TFS 2015.
Pipeline permissions and security roles
11/2/2020 • 11 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
To support security of your pipeline operations, you can add users to a built-in security group, set individual
permissions for a user or group, or add users to pre-defined roles. You manage security for the following objects
from Azure Pipelines in the web portal, either from the user or admin context.
This topic provides a description of the permissions and roles used to secure operations. To learn how to add a
user or group to Azure Pipelines, see Users.
For permissions, you grant or restrict permissions by setting the permission state to Allow or Deny, either for a
security group or an individual user. For a role, you add a user or group to the role. To learn more about how
permissions are set, including inheritance, see About permissions and inheritance. To learn how inheritance is
supported for role-based membership, see About security roles.
NOTE
When the Free access to Pipelines for Stakeholders preview feature is enabled for the organization, Stakeholders get
access to all Build and Release features. This is indicated by the preview icon shown in the following table. Without this
feature enabled, stakeholders can only view and approve releases. To learn more, see Provide Stakeholders access to edit
build and release pipelines.
TASK | STAKEHOLDERS | READERS | CONTRIBUTORS | BUILD ADMINS | PROJECT ADMINS | RELEASE ADMINS
View release pipelines: ✔ ✔ ✔ ✔ ✔
Define builds with continuous integration: ✔ ✔ ✔
Define releases and manage deployments: ✔ ✔ ✔
Approve releases: ✔ ✔ ✔ ✔
Azure Artifacts (5 users free): ✔ ✔ ✔
Queue builds, edit build quality: ✔ ✔ ✔
Manage build queues and build qualities: ✔ ✔
Manage build retention policies, delete and destroy builds: ✔ ✔ ✔
Administer build permissions: ✔ ✔
Manage release permissions: ✔ ✔
Create and edit task groups: ✔ ✔ ✔ ✔
Manage task group permissions: ✔ ✔ ✔
Can view library items such as variable groups: ✔ ✔ ✔ ✔ ✔
Use and manage library items such as variable groups: ✔ ✔ ✔
Build
TASK | STAKEHOLDERS | READERS | CONTRIBUTORS | BUILD ADMINS | PROJECT ADMINS
View builds: ✔ ✔ ✔ ✔ ✔
View build pipeline: ✔ ✔ ✔ ✔ ✔
Administer build permissions: ✔ ✔
Delete or Edit build pipeline: ✔ ✔ ✔
Delete or Destroy builds: ✔ ✔
Manage build qualities: ✔ ✔
Manage build queue: ✔ ✔
Override check-in validation by build: ✔
Queue builds: ✔ ✔ ✔
Retain indefinitely: ✔ ✔
Stop builds: ✔ ✔
Update build information: ✔
Release
TASK | STAKEHOLDERS | READERS | CONTRIBUTORS | PROJECT ADMINS | RELEASE ADMINS
Approve releases: ✔ ✔ ✔ ✔
View releases: ✔ ✔ ✔ ✔ ✔
View release pipeline: ✔ ✔ ✔ ✔
Administer release permissions: ✔ ✔
Delete release pipeline or release stage: ✔ ✔ ✔
Delete releases: ✔ ✔ ✔
Edit release pipeline: ✔ ✔
Manage deployments: ✔ ✔
Manage release approvers: ✔ ✔ ✔
Manage releases: ✔ ✔
Task groups
TASK | STAKEHOLDERS | READERS | CONTRIBUTORS | BUILD ADMINS | PROJECT ADMINS | RELEASE ADMINS
Administer task group permissions: ✔ ✔ ✔
Delete task group: ✔ ✔ ✔
Edit task group: ✔ ✔ ✔
Build
TASK | STAKEHOLDERS | READERS | CONTRIBUTORS | BUILD ADMINS | PROJECT ADMINS
View builds: ✔ ✔ ✔ ✔
View build definition: ✔ ✔ ✔ ✔
Administer build permissions: ✔ ✔
Delete or Edit build definitions: ✔ ✔ ✔
Delete or Destroy builds: ✔ ✔
Manage build queue: ✔ ✔
Override check-in validation by build: ✔
Queue builds: ✔ ✔ ✔
Retain indefinitely: ✔ ✔
Stop builds: ✔ ✔
Update build information: ✔
Release
TASK | STAKEHOLDERS | READERS | CONTRIBUTORS | PROJECT ADMINS | RELEASE ADMINS
Approve releases: ✔ ✔ ✔ ✔
View releases: ✔ ✔ ✔ ✔ ✔
View release definition: ✔ ✔ ✔ ✔
Administer release permissions: ✔ ✔
Delete release definition or release stage: ✔ ✔ ✔
Delete releases: ✔ ✔ ✔
Edit release definition: ✔ ✔
Manage deployments: ✔ ✔
Manage release approvers: ✔ ✔ ✔
Manage releases: ✔ ✔
Pipeline permissions
Build and YAML pipeline permissions follow a hierarchical model. Defaults for all the permissions can be set at the
project level and can be overridden on an individual build pipeline.
To set the permissions at project level for all pipelines in a project, choose Security from the action bar on the
main page of Builds hub.
To set or override the permissions for a specific pipeline, choose Security from the context menu of the pipeline.
The following permissions are defined for pipelines. All of these can be set at both levels.
Administer build permissions: Can change any of the other permissions listed here.
Delete builds: Can delete builds for a pipeline. Builds that are deleted are retained in the Deleted tab for a period of time before they are destroyed.
Edit build pipeline: Can create pipelines and save any changes to a build pipeline, including configuration variables, triggers, repositories, and retention policy.
Override check-in validation by build: Applies to TFVC gated check-in builds. This does not apply to PR builds.
Stop builds: Can stop builds queued by other team members or by the system.
Update build information: It is recommended to leave this alone. It's intended to enable service accounts, not team members.
Default values for all of these permissions are set for team project collections and project groups. For example,
Project Collection Administrators , Project Administrators , and Build Administrators are given all of the
above permissions by default.
When it comes to security, there are different best practices and levels of permissiveness. While there's no one
right way to handle permissions, we hope these examples help you empower your team to work securely with
builds.
In many cases you probably also want to set Delete build pipeline to Allow . Otherwise these team
members can't delete even their own build pipelines.
Without Delete builds permission, users cannot delete even their own completed builds. However, keep in
mind that they can automatically delete old unneeded builds using retention policies.
We recommend that you do not grant these permissions directly to a person. A better practice is to add the
person to the build administrator group or another group, and manage permissions on that group.
Release permissions
Permissions for release pipelines follow a hierarchical model. Defaults for all the permissions can be set at the
project level and can be overridden on an individual release pipeline. Some of the permissions can also be
overridden on a specific stage within a pipeline. The hierarchical model helps you define default permissions for all
definitions at one extreme, and to lock down the production stage for an application at the other extreme.
To set permissions at project level for all release definitions in a project, open the shortcut menu from the icon
next to All release pipelines and choose Security .
To set or override the permissions for a specific release pipeline, open the shortcut menu from the icon next to
that pipeline name. Then choose Security to open the Permissions dialog.
To specify security settings for individual stages in a release pipeline, open the Permissions dialog by choosing
Security on the shortcut menu that opens from the ellipses (...) on a stage in the release pipeline editor.
The following permissions are defined for releases. The scope listed with each permission indicates whether it can be set at the project, release pipeline, or stage level.
Administer release permissions (Project, Release pipeline, Stage): Can change any of the other permissions listed here.
Delete release pipeline (Project, Release pipeline): Can delete release pipeline(s).
Delete release stage (Project, Release pipeline, Stage): Can delete stage(s) in release pipeline(s).
Delete releases (Project, Release pipeline): Can delete releases for a pipeline.
Edit release pipeline (Project, Release pipeline): Can save any changes to a release pipeline, including configuration variables, triggers, artifacts, and retention policy, as well as configuration within a stage of the release pipeline. To make changes to a specific stage in a release pipeline, the user also needs Edit release stage permission.
Edit release stage (Project, Release pipeline, Stage): Can edit stage(s) in release pipeline(s). To save the changes to the release pipeline, the user also needs Edit release pipeline permission. This permission also controls whether a user can edit the configuration inside the stage of a specific release instance. The user also needs Manage releases permission to save the modified release.
Manage deployments (Project, Release pipeline, Stage): Can initiate a deployment of a release to a stage. This permission is only for deployments that are manually initiated by selecting the Deploy or Redeploy actions in a release. If the condition on a stage is set to any type of automatic deployment, the system automatically initiates deployment without checking the permission of the user that created the release. If the condition is set to start after some stage, manually initiated deployments do not wait for those stages to be successful.
Manage release approvers (Project, Release pipeline, Stage): Can add or edit approvers for stage(s) in release pipeline(s). This permission also controls whether a user can edit the approvers inside the stage of a specific release instance.
Manage releases (Project, Release pipeline): Can edit the configuration in releases. To edit the configuration of a specific stage in a release instance (including variables marked as settable at release time), the user also needs Edit release stage permission.
View release pipeline (Project, Release pipeline): Can view release pipeline(s).
View releases (Project, Release pipeline): Can view releases belonging to release pipeline(s).
Default values for all of these permissions are set for team project collections and project groups. For example,
Project Collection Administrators , Project Administrators , and Release Administrators are given all of
the above permissions by default. Contributors are given all permissions except Administer release
permissions . Readers , by default, are denied all permissions except View release pipeline and View releases .
Task group permissions
Task group permissions follow a hierarchical model. Defaults for all the permissions can be set at the project level
and can be overridden on an individual task group.
You use task groups to encapsulate a sequence of tasks already defined in a build or a release pipeline into a single
reusable task. You define and manage task groups in the Task groups tab in Azure Pipelines .
Administer task group permissions: Can add and remove users or groups to task group security.
ROLE | DESCRIPTION
Administrator: Can manage membership of all other roles for the service connection as well as use the endpoint to author build or release pipelines. The system automatically adds the user that created the service connection to the Administrator role for that pool.
Service Account: Can view agents, create sessions, and listen for jobs from the agent pool.
User: Can view and use the deployment pool for creating deployment groups.
Environment permissions
You can use roles to control who can create, view, and manage environments. When you create an environment in
a YAML pipeline, contributors and project administrators are granted the administrator role. When you create an
environment through the UI, only the creator will have the administrator role.
Related notes
Set build and release permissions
Default permissions and access
Permissions and groups reference
Add users to Azure Pipelines
11/2/2020 • 6 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
If your teammates want to edit pipelines, then have an administrator add them to your project:
1. Make sure you are a member of the Project Administrators group (learn more).
2. Go to your project summary: https://dev.azure.com/{your-organization}/{your-project}
4. After the teammates accept the invitation, ask them to verify that they can create and edit pipelines.
3. On the permissions dialog box, make sure the following permissions are set to Allow.
Permissions for build and release functions are primarily set at the object-level for a specific build or release, or for
select tasks, at the collection level. For a simplified view of permissions assigned to built-in groups, see
Permissions and access.
In addition to permission assignments, you manage security for several resources—such as variable groups,
secure files, and deployment groups—by adding users or groups to a role. You grant or restrict permissions by
setting the permission state to Allow or Deny, either for a security group or an individual user. For definitions of
each build and release permission and role, see Build and release permissions.
To set the permissions for a specific build pipeline, open the context menu for the build and click Security.
2. Choose the group you want to set permissions for, and then change the permission setting to Allow or
Deny.
For example, here we change the permission for Edit build pipeline for the Contributors group to Allow.
If you want to manage the permissions for a specific release, then open the Security dialog for that release.
2. Choose the group you want to set permissions for, and then change the permission setting to Allow or
Deny.
For example, here we deny access to several permissions for the Contributors group.
Manage Library roles for variable groups, secure files, and deployment
groups
Permissions for variable groups, secure files, and deployment groups are managed by roles. For a description of
the roles, see About security roles.
NOTE
Feature availability : These features are available on Azure Pipelines and TFS 2017 and later versions.
You can set the security for all artifacts for a project, as well as set the security for individual artifacts. The method
is similar for all three artifact types. You set the security for variable groups and secure files from Azure
Pipelines , Librar y page, and for deployment groups, from the Deployment groups page.
For example, here we show how to set the security for variable groups.
1. From the Build-Release hub, Library page, open the Security dialog for all variable groups.
If you want to manage the permissions for a specific variable group, then open the Security dialog for that
group.
2. Add the user or group and choose the role you want them to have.
For example, here we deny access to several permissions for the Contributors group.
3. Click Add .
NOTE
Feature availability : These features are available on Azure Pipelines and TFS 2017 and later versions.
1. From the web portal Build-Release hub, Task groups page, open the Security dialog for all task groups.
If you want to manage the permissions for a specific task group, then open the Security dialog for that
group.
2. Add the user or group and then set the permissions you want them to have.
For example, here we add Raisa and set her permissions to Administer all task groups.
3. Click Add .
For example, here we show how to add a user to the Administrator role for a service connection.
1. From the web portal, click the gear Settings icon to open the project settings admin context.
2. Click Ser vices , click the service connection that you want to manage, and then click Roles .
3. Add the user or group and choose the role you want them to have. For a description of each role, see About
security roles.
For example, here we add Raisa to the Administrator role.
4. Click Add .
You will need to be a member of the Project Collection Administrator group to manage the security for a pool.
Once you've been added to the Administrator role, you can then manage the pool. For a description of each role,
see About security roles.
1. From the web portal, click the gear Settings icon and choose Organization settings or Collection settings
to open the collection-level settings admin context.
2. Click Deployment Pools , and then open the Security dialog for all deployment pools.
If you want to manage the permissions for a specific deployment group, then open the Security dialog for
that group.
3. Add the user or group and choose the role you want them to have.
For example, here we add Raisa to the Administrator role.
4. Click Add .
Related notes
Default build and release permissions
Default permissions and access
Permissions and groups reference
Run Git commands in a script
11/2/2020 • 4 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
For some workflows you need your build pipeline to run Git commands. For example, after a CI build on a feature
branch is done, the team might want to merge the branch to master.
Git is available on Microsoft-hosted agents and on on-premises agents.
If you see this page, select the repo, and then click the link:
On the Version Control tab, select the repository in which you want to run Git commands, and then select
Project Collection Build Service . By default, this identity can read from the repo but cannot push any changes
back to it.
Grant permissions needed for the Git commands you want to run. Typically you'll want to grant:
Create branch: Allow
Contribute: Allow
Read: Allow
Create tag: Allow
When you're done granting the permissions, make sure to click Save changes .
Enable your pipeline to run command-line Git
On the variables tab set this variable:
NAME | VALUE
system.prefergit | true
steps:
- checkout: self
  persistCredentials: true

steps:
- checkout: self
  clean: true
Examples
List the files in your repo
Make sure to follow the above steps to enable Git.
On the build tab add this task:
TASK | ARGUMENTS
Tool: git
@echo off
ECHO SOURCE BRANCH IS %BUILD_SOURCEBRANCH%
IF %BUILD_SOURCEBRANCH% == refs/heads/master (
ECHO Building master branch so no merge is needed.
EXIT
)
SET sourceBranch=origin/%BUILD_SOURCEBRANCH:refs/heads/=%
ECHO GIT CHECKOUT MASTER
git checkout master
ECHO GIT STATUS
git status
ECHO GIT MERGE
git merge %sourceBranch% -m "Merge to master"
ECHO GIT STATUS
git status
ECHO GIT PUSH
git push origin
ECHO GIT STATUS
git status
TASK | ARGUMENTS
Path: merge.bat
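In a YAML pipeline, a sketch of the same merge flow could use a bash step after a checkout that persists credentials. The ***NO_CI*** marker and branch handling mirror the batch script above; the git identity values and the assumption of a Linux or macOS agent are placeholders for this example.
steps:
- checkout: self
  persistCredentials: true                 # keep the access token so git push can authenticate
- bash: |
    sourceBranch="${BUILD_SOURCEBRANCH#refs/heads/}"     # for example, features/hello-world
    git config user.email "[email protected]"        # hypothetical identity for the merge commit
    git config user.name "Build Service"
    git fetch origin master                              # make sure the target branch is available locally
    git checkout master
    git merge "origin/$sourceBranch" -m "Merge to master ***NO_CI***"
    git push origin master
  displayName: Merge the source branch to master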
FAQ
Can I run Git commands if my remote repo is in GitHub or another Git service such as Bitbucket Cloud?
Yes
Which tasks can I use to run Git commands?
Batch Script
Command Line
PowerShell
Shell Script
How do I avoid triggering a CI build when the script pushes?
Add ***NO_CI*** to your commit message. Here are examples:
git commit -m "This is a commit message ***NO_CI***"
git merge origin/features/hello-world -m "Merge to master ***NO_CI***"
Add [skip ci] to your commit message or description. Here are examples:
git commit -m "This is a commit message [skip ci]"
git merge origin/features/hello-world -m "Merge to master [skip ci]"
You can also use any of the variations below. This is supported for commits to Azure Repos Git, Bitbucket Cloud,
GitHub, and GitHub Enterprise Server.
[skip ci] or [ci skip]
skip-checks: true or skip-checks:true
[skip azurepipelines] or [azurepipelines skip]
[skip azpipelines] or [azpipelines skip]
[skip azp] or [azp skip]
***NO_CI***
How does enabling scripts to run Git commands affect how the build pipeline gets build sources?
When you set system.prefergit to true , the build pipeline uses command-line Git instead of LibGit2Sharp to
clone or fetch the source files.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Azure Pipelines with Microsoft Teams
11/2/2020 • 7 minutes to read • Edit Online
Azure Pipelines
If Microsoft Teams is your choice for collaboration, you can use the Azure Pipelines app built for Microsoft Teams to
easily monitor the events for your pipelines. Set up and manage subscriptions for builds, releases, YAML pipelines,
pending approvals and more from the app and get notifications for these events in your Teams channels.
NOTE
This feature is only available on Azure DevOps Services. Typically, new features are introduced in the cloud service first, and
then made available on-premises in the next major version or update of Azure DevOps Server. To learn more, see Azure
DevOps Feature Timeline.
The project URL can be to any page within your project (except URLs to pipelines).
For example:
You can also monitor a specific pipeline using the following command:
The pipeline URL can be to any page within your pipeline that has a definitionId or buildId/releaseId present in
the URL.
For example:
or:
@azure pipelines subscribe https://dev.azure.com/myorg/myproject/_release?definitionId=123&view=mine&_a=releases
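A project-level subscription works the same way with the project URL; as a sketch, with placeholder organization and project names:
@azure pipelines subscribe https://dev.azure.com/myorg/myproject/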
For Build pipelines, the channel is subscribed to the Build completed notification. For Release pipelines, the channel
is subscribed to the Release deployment started, Release deployment completed, and Release deployment approval
pending notifications. For YAML pipelines, subscriptions are created for the Run stage state changed and Run stage
waiting for approval notifications.
This command lists all of the current subscriptions for the channel and allows you to add/remove subscriptions.
NOTE
Team administrators aren't able to remove or modify subscriptions created by Project administrators.
Example: Get notifications only if the deployments are pushed to prod environment
Approve deployments from your channel
You can approve deployments from within your channel without navigating to the Azure Pipelines portal by
subscribing to the Release deployment approval pending notification for classic Releases or the Run stage waiting
for approval notification for YAML pipelines. Both of these subscriptions are created by default when you subscribe
to the pipeline.
Whenever the running of a stage is pending for approval, a notification card with options to approve or reject the
request is posted in the channel. Approvers can review the details of the request in the notification and take
appropriate action. In the following example, the deployment was approved and the approval status is displayed on
the card.
The app supports all of the checks and approval scenarios present in the Azure Pipelines portal, like single
approver, multiple approvers (any one user, any order, in sequence), and teams as approvers. You can approve
requests as an individual or on behalf of a team.
For example:
This command deletes all the subscriptions related to any pipeline in the project and removes the pipelines from
the channel.
IMPORTANT
Only project administrators can run this command.
Threaded notifications
To logically link a set of related notifications and also to reduce the space occupied by notifications in a channel,
notifications are threaded. All notifications linked to a particular run of a pipeline will be linked together.
The following example shows the compact view of linked notifications.
When expanded, you can see all the of the linked notifications, as shown in the following example.
Commands reference
Here are all the commands supported by the Azure Pipelines app:
SL A SH C O M M A N D F UN C T IO N A L IT Y
@azure pipelines subscribe [pipeline url/ project url] Subscribe to a pipeline or all pipelines in a project to receive
notifications
@azure pipelines signout Sign out from your Azure Pipelines account
@azure pipelines unsubscribe all [project url] Remove all pipelines (belonging to a project) and their
associated subscriptions from a channel
NOTE
You can use the Azure Pipelines app for Microsoft Teams only with a project hosted on Azure DevOps Services at this time.
The user must be an admin of the project containing the pipeline to set up the subscriptions
Notifications are currently not supported inside chat/direct messages
Deployment approvals which have applied the Revalidate identity of approver before completing the approval
policy are not supported
'Third party application access via OAuth' must be enabled to receive notifications for the organization in Azure DevOps
(Organization Settings -> Security -> Policies)
Multi-tenant support
In your organization if you are using a different email or tenant for Microsoft Teams and Azure DevOps, perform
the following steps to sign in and connect based on your use case.
Configuration failed. Please make sure that the organization '{organization name}' exists and that you have
sufficient permissions.
Sign out of Azure DevOps by navigating to https://aka.ms/VsSignout using your browser.
Open an InPrivate or incognito browser window and navigate to https://aex.dev.azure.com/me and sign in. In
the dropdown under the profile icon to the left, select the directory that contains the organization containing the
pipeline for which you wish to subscribe.
In the same browser , start a new tab and sign in to https://teams.microsoft.com/ . Run the
@Azure Pipelines signout command and then run the @Azure Pipelines signin command in the channel where
the Azure Pipelines app for Microsoft Teams is installed.
Select the Sign in button and you'll be redirected to a consent page like the one in the following example. Ensure
that the directory shown beside the email is same as what was chosen in the previous step. Accept and complete
the sign-in process.
If these steps don't resolve your authentication issue, reach out to us at Developer Community.
Related articles
Azure Boards with Microsoft Teams
Azure Repos with Microsoft Teams
Azure Pipelines with Slack
11/2/2020 • 6 minutes to read • Edit Online
Azure Pipelines
If you use Slack, you can use the Azure Pipelines app for Slack to easily monitor the events for your pipelines. Set
up and manage subscriptions for builds, releases, YAML pipelines, pending approvals and more from the app and
get notifications for these events in your Slack channels.
NOTE
This feature is only available on Azure DevOps Services. Typically, new features are introduced in the cloud service first, and
then made available on-premises in the next major version or update of Azure DevOps Server. To learn more, see Azure
DevOps Feature Timeline.
To start monitoring all pipelines in a project, use the following slash command inside a channel:
The project URL can be to any page within your project (except URLs to pipelines).
For example:
You can also monitor a specific pipeline using the following command:
The pipeline URL can be to any page within your pipeline that has definitionId or buildId/releaseId in the URL.
For example:
or:
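As an illustration, the command takes the project or pipeline URL; the organization, project, and definitionId values below are placeholders:
/azpipelines subscribe https://dev.azure.com/myorg/myproject/
/azpipelines subscribe https://dev.azure.com/myorg/myproject/_build?definitionId=123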
The subscribe command gets you started with a few subscriptions by default. For Build pipelines, the channel is
subscribed to Build completed notification. For Release pipelines, the channel will start receiving Release
deployment started, Release deployment completed and Release deployment approval pending notifications. For
YAML pipelines, subscriptions are created for the Run stage state changed and Run stage waiting for approval
notifications.
Manage subscriptions
To manage the subscriptions for a channel, use the following command:
/azpipelines subscriptions
This command will list all the current subscriptions for the channel and allow you to add new subscriptions.
NOTE
Team administrators aren't able to remove or modify subscriptions created by Project administrators.
Using filters effectively to customize subscriptions
When a user subscribes to any pipeline, a few subscriptions are created by default without any filters being applied.
Often, users have the need to customize these subscriptions. For example, users may want to hear only about failed
builds or get notified only when deployments are pushed to production. The Azure Pipelines app supports filters to
customize what you see in your channel.
1. Run the /azpipelines subscriptions command
2. In the list of subscriptions, if there is a subscription that is unwanted or must be modified (Example: creating
noise in the channel), select the Remove button
3. Select the Add subscription button
4. Select the required pipeline and the desired event
5. Select the appropriate filters to customize your subscription
Example: Get notifications only for failed builds
Example: Get notifications only if the deployments are pushed to production environment
Approve deployments from your channel
You can approve deployments from within your channel without navigating to the Azure Pipelines portal by
subscribing to the Release deployment approval pending notification for classic Releases or the Run stage waiting
for approval notification for YAML pipelines. Both of these subscriptions are created by default when you subscribe
to the pipeline.
Whenever the running of a stage is pending for approval, a notification card with options to approve or reject the
request is posted in the channel. Approvers can review the details of the request in the notification and take
appropriate action. In the following example, the deployment was approved and the approval status is displayed on
the card.
The app supports all the checks and approval scenarios present in Azure Pipelines portal, like single approver,
multiple approvers (any one user, any order, in sequence) and teams as approvers. You can approve requests as an
individual or on behalf of a team.
For this feature to work, users have to be signed-in. Once they are signed in, this feature will work for all channels
in a workspace.
For example:
This command deletes all the subscriptions related to any pipeline in the project and removes the pipelines from
the channel.
IMPORTANT
Only project administrators can run this command.
Commands reference
Here are all the commands supported by the Azure Pipelines app:
SL A SH C O M M A N D F UN C T IO N A L IT Y
/azpipelines subscribe [pipeline url/ project url] Subscribe to a pipeline or all pipelines in a project to receive
notifications
/azpipelines unsubscribe all [project url] Remove all pipelines (belonging to a project) and their
associated subscriptions from a channel
NOTE
You can use the Azure Pipelines app for Slack only with a project hosted on Azure DevOps Services at this time.
The user has to be an admin of the project containing the pipeline to set up the subscriptions
Notifications are currently not supported inside direct messages
Deployment approvals which have 'Revalidate identity of approver before completing the approval' policy applied, are not
supported
'Third party application access via OAuth' must be enabled to receive notifications for the organization in Azure DevOps
(Organization Settings -> Security -> Policies)
Troubleshooting
If you are experiencing the following errors when using the Azure Pipelines App for Slack, follow the procedures in
this section.
Sorry, something went wrong. Please try again.
Configuration failed. Please make sure that the organization '{organization name}' exists and that you have
sufficient permissions.
Sorry, something went wrong. Please try again.
The Azure Pipelines app uses the OAuth authentication protocol, and requires Third-party application access via
OAuth for the organization to be enabled. To enable this setting, navigate to Organization Settings > Security >
Policies , and set the Third-par ty application access via OAuth for the organization setting to On .
Configuration failed. Please make sure that the organization '{organization name}' exists and that you have
sufficient permissions.
Sign out of Azure DevOps by navigating to https://aka.ms/VsSignout in your browser.
Open an InPrivate or incognito browser window, navigate to https://aex.dev.azure.com/me, and sign in. In the dropdown under the profile icon to the left, select the directory that contains the organization with the pipeline you want to subscribe to.
In the same browser, start a new tab, navigate to https://slack.com, and sign in to your workspace (use the web client). Run the /azpipelines signout command followed by the /azpipelines signin command.
Select the Sign in button and you'll be redirected to a consent page like the one in the following example. Ensure that the directory shown beside the email is the same as the one chosen in the previous step. Accept and complete the sign-in process.
If these steps don't resolve your authentication issue, reach out to us at Developer Community.
Related articles
Azure Boards with Slack
Azure Repos with Slack
Create a service hook for Azure DevOps with Slack
Integrate with ServiceNow change management
11/2/2020 • 5 minutes to read • Edit Online
Azure Pipelines
The integration of Azure Pipelines with ServiceNow Change Management enhances collaboration between development and IT teams. By including change management in CI/CD pipelines, teams can reduce the risks associated with changes and follow service management methodologies such as ITIL, while gaining all the DevOps benefits of Azure Pipelines.
This topic covers:
Configuring ServiceNow for integrating with Azure Pipelines
Including ServiceNow change management process as a release gate
Monitoring change management process from releases
Keeping ServiceNow change requests updated with deployment result
Prerequisites
This tutorial extends the Use approvals and gates tutorial. You must have completed that tutorial first.
You'll also need a non-developer instance of ServiceNow to which applications can be installed from the store.
2. Select the ServiceNow service connection you created earlier and enter the values for the properties of the
change request.
Inputs for the gate:
Short description: A summary of the change.
Description: A detailed description of the change.
Category: The category of the change. For example, Hardware , Network , Software .
Priority: The priority of the change.
Risk: The risk level for the change.
Impact: The effect that the change has on the business.
Configuration Item: The configuration item (CI) that the change applies to.
Assignment group: The group that the change is assigned to.
Schedule of change request: The schedule of the change. The date and time should be in the UTC format
yyyy-MM-ddTHH:mm:ssZ . For example, 2018-01-31T07:56:59Z
Additional change request parameters: Additional properties of the change request you want to set. The name must be the field name (not the label) prefixed with u_, for example u_backout_plan. The value must be valid in ServiceNow to be accepted; invalid entries are ignored.
Gate success criteria:
Desired state: The gate will succeed, and the pipeline continues when the change request status is the
same as the value you specify.
Gate output variables:
CHANGE_REQUEST_NUMBER : Number of the change request created in ServiceNow.
CHANGE_SYSTEM_ID : System ID of the change request created in ServiceNow.
The ServiceNow gate produces output variables. You must specify the reference name to be able to use
these output variables in the deployment workflow. Gate variables can be accessed by using
PREDEPLOYGATE as a prefix. For example, when the reference name is set to gate1 , the change number
can be obtained as $(PREDEPLOYGATE.gate1.CHANGE_REQUEST_NUMBER) .
3. At the end of your deployment process, add an agentless phase with a task to update the status of the
change after deployment.
Execute a release
1. Create a new release from the configured release pipeline in Azure DevOps
2. After completing the Dev stage, the pipeline creates a new change request in ServiceNow for the release
and waits for it to reach the desired state.
3. The values defined as gate parameters will be used. You can get the change number that was created from
the logs.
4. The ServiceNow change owner will see the release in the queue as a new change.
5. The release that caused the change to be requested can be tracked from the Azure DevOps Pipeline
metadata section of the change.
6. The change goes through its normal life cycle: Approval, Scheduled, and more until it is ready for
implementation.
7. When the change is ready for implementation (it is in the Implement state), the release in Azure DevOps
proceeds. The gates status will be as shown here:
8. After the deployment, the change request is closed automatically.
FAQs
Q: What versions of ServiceNow are supported?
A: The integration is compatible with the Kingston release of ServiceNow and later versions.
Q: What types of change request can be managed with the integration?
A: Only normal change requests are currently supported with this integration.
Q: How do I set additional change properties?
A: You can specify additional properties of the change request in the Additional change request parameters field. The properties are specified as key-value pairs in JSON format, with each name being the field name (not the label) prefixed with u_ in ServiceNow, and a valid value.
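As an illustration only (these field names are hypothetical and must exist as u_-prefixed fields on your ServiceNow instance), the value of the Additional change request parameters field might look like:
{
  "u_backout_plan": "Redeploy the previous app service package",
  "u_environment": "Production"
}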
Q: Can I update custom fields in the change request with additional change request parameters?
A: If custom fields are defined in ServiceNow for change requests, a mapping of the custom fields must be added to the import set transform map. See ServiceNow Change Management Extension for details.
Q: I don't see drop-down values populated for Category, Status, and others. What should I do?
A: The Change Management Core and Change Management - State Model plugins must be active on your ServiceNow instance for the drop-downs to work. See Upgrade change management and Update change request states for more details.
Related topics
Approvals and gates overview
Manual intervention
Use approvals and gates to control your deployment
Stages
Triggers
See also
Video: Deploy quicker and safer with gates in Azure Pipelines
Configure your release pipelines for safe deployments
Tutorial: Use approvals and gates to control your deployment
Twitter sentiment as a release gate
GitHub issues as a release gate
Author custom gates. Library with examples
Help and support
See our troubleshooting page
Get advice on Stack Overflow, and feel free to post your questions, search for answers, or suggest a feature on our Azure DevOps Developer Community. See also the Support page.
Continuously deploy from a Jenkins build
11/2/2020 • 4 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
Azure Pipelines supports integration with Jenkins so that you can use Jenkins for Continuous Integration (CI) while
gaining several DevOps benefits from an Azure Pipelines release pipeline that deploys to Azure:
Reuse your existing investments in Jenkins build jobs
Track work items and related code changes
Get end-to-end traceability for your CI/CD workflow
Consistently deploy to a range of cloud services
Enforce quality of builds by gating deployments
Define workflows such as manual approval processes and CI triggers
Integrate Jenkins with JIRA and Azure Pipelines to show associated issues for each Jenkins job
Integrate with other service management tools such as ServiceNow
A typical approach is to use Jenkins to build an app from source code hosted in a Git repository such as GitHub and
then deploy it to Azure using Azure Pipelines.
...
jobs:
- job: DeployMyApp
  pool:
    name: Default
  steps:
  - task: AzureRmWebAppDeployment@4
    inputs:
      connectionType: 'AzureRM'
      azureSubscription: your-subscription-name
      appType: webAppLinux
      webAppName: 'MyApp'
      deployToSlotOrASE: false
      packageForLinux: '$(System.DefaultWorkingDirectory)/**/*.zip'
      takeAppOfflineFlag: true
...
See also
Artifacts
Stages
Triggers
YAML schema reference
Use Terraform to manage infrastructure deployment
11/2/2020 • 8 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
Terraform is a tool for building, changing and versioning infrastructure safely and efficiently. Terraform can manage
existing and popular cloud service providers as well as custom in-house solutions.
Configuration files describe to Terraform the components needed to run a single application or your entire
datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then
executes it to build the described infrastructure. As the configuration changes, Terraform is able to determine what
changed and create incremental execution plans which can be applied.
In this tutorial, you learn about:
The structure of a Terraform file
Building an application using an Azure CI pipeline
Deploying resources using Terraform in an Azure CD pipeline
Prerequisites
1. A Microsoft Azure account.
2. An Azure DevOps account.
3. Use the Azure DevOps Demo Generator to provision the tutorial project on your Azure DevOps organization.
This URL automatically selects the Terraform template in the demo generator.
terraform {
  required_version = ">= 0.11"
  backend "azurerm" {
    storage_account_name = "__terraformstorageaccount__"
    container_name       = "terraform"
    key                  = "terraform.tfstate"
    access_key           = "__storagekey__"
    features {}
  }
}

provider "azurerm" {
  version = "=2.0.0"
  features {}
}

resource "azurerm_resource_group" "dev" {
  name     = "PULTerraform"
  location = "West Europe"
}

  sku {
    tier = "Free"
    size = "F1"
  }
}
webapp.tf is a terraform configuration file. Terraform uses its own file format, called HCL (Hashicorp
Configuration Language). The structure is similar to YAML. In this example, Terraform will deploy the Azure
resource group, app service plan, and app service required to deploy the website. However, since the names
of those resources are not yet known, they are marked with tokens that will be replaced with real values
during the release pipeline.
As an added benefit, this Infrastructure-as-Code (IaC) file can be managed as part of source control. You may
learn more about working with Terraform and Azure in this Terraform Basics lab.
2. Select Edit . This CI pipeline has tasks to compile the .NET Core project. These tasks restore dependencies,
build, test, and publish the output as a zip file which can be deployed to an app service.
3. In addition to the application build, the pipeline publishes Terraform files as build artifacts so that they will
be available to other pipelines, such as the CD pipeline to be used later. This is done via the Copy files task,
which copies the Terraform folder to the Artifacts directory.
4. Select Queue to queue a new build. Select Run to use the default options. When the build page appears,
select Agent job 1 . The build may take a few minutes to complete.
2. The CD pipeline has been configured to accept the artifacts published by the CI pipeline. There is only one
stage, which is the Dev stage that performs the deployment. Select it to review its tasks.
3. There are eight tasks defined in the release stage. Most of them require some configuration to work with the
target Azure account.
4. Select the Agent job and configure it to use the Azure Pipelines agent pool and vs2017-win2016
specification.
5. Select the Azure CLI task and configure it to use a service connection to the target Azure account. If the
target Azure account is under the same user logged in to Azure DevOps, then available subscriptions can be
selected and authorized from the dropdown. Otherwise, use the Manage link to manually create a service
connection. Once created, this connection can be reused for future tasks.
This task executes a series of Azure CLI commands to set up some basic infrastructure required to use
Terraform.
By default, Terraform stores state locally in a file named terraform.tfstate. When working with Terraform in a team, using a local state file makes collaboration complicated. With remote state, Terraform writes
the state data to a remote data store. Here the pipeline uses an Azure CLI task to create an Azure storage
account and storage container to store the Terraform state. For more information on Terraform remote state,
see Terraform's docs for working with Remote State.
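As a rough sketch of what that task does (the resource group name, storage account name, and location below are illustrative, and the storage account name must be globally unique), the Azure CLI commands are of this shape:
az group create --name terraformrg --location eastus
az storage account create --name terraformstorage12345 --resource-group terraformrg --location eastus --sku Standard_LRS
az storage container create --name terraform --account-name terraformstorage12345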
6. Select the Azure PowerShell task and configure it to use the Azure Resource Manager connection type
and use the service connection created earlier.
This task uses PowerShell commands to retrieve the storage account key needed for the Terraform
provisioning.
# This script fetches the storage account key, which the Terraform backend configuration needs to authenticate to the backend storage account
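A minimal sketch of such a script, assuming the AzureRM PowerShell module and illustrative resource and variable names; it exposes the key to later tasks as the storagekey pipeline variable:
# Illustrative names: the resource group and storage account come from the pipeline's Variables section
$key = (Get-AzureRmStorageAccountKey -ResourceGroupName "terraformrg" -Name "$(storageaccountname)").Value[0]
# Make the key available to subsequent tasks (for example, the Replace tokens task) as $(storagekey)
Write-Host "##vso[task.setvariable variable=storagekey]$key"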
7. Select the Replace tokens task. If you recall the webapp.tf file reviewed earlier, there were several
resources that were unknown at the time and marked with token placeholders, such as
terraformstorageaccount . This task replaces those tokens with variable values relevant to the
deployment, including those from the pipeline's Variables . You may review those under Variables if you
like, but return to Tasks afterwards.
8. Select the Install Terraform task. This installs and configures the specified version of Terraform on the
agent for the remaining tasks.
When running Terraform in automation, the focus is usually on the core plan/apply cycle. The next three
tasks follow these stages.
9. Select the Terraform init task. This task runs the terraform init command. This command looks through
all of the *.tf files in the current working directory and automatically downloads any of the providers
required for them. In this example, it will download Azure provider as it is going to deploy Azure resources.
For more information, see Terraform's documentation for the init command.
Select the Azure subscription created earlier and enter terraform as the container. Note that the key is set
to terraform.tfstate .
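Conceptually, the task runs a command along these lines (a sketch only; the storage account name and access key come from the pipeline variables populated earlier):
terraform init \
  -backend-config="storage_account_name=$(storageaccountname)" \
  -backend-config="container_name=terraform" \
  -backend-config="key=terraform.tfstate" \
  -backend-config="access_key=$(storagekey)"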
10. Select the Terraform plan task. This task runs the terraform plan command. This command is used to
create an execution plan by determining what actions are necessary to achieve the desired state specified in
the configuration files. This is just a dry run and shows which actions will be performed. For more
information, see Terraform's documentation for the plan command.
Select the Azure subscription created earlier.
11. Select the Terraform apply task. This task runs the terraform validate and terraform apply commands. The apply command deploys the resources and, by default, prompts for confirmation before applying. Since this is an automated deployment, the -auto-approve argument is included. For more information, see Terraform's documentation for the apply command.
Select the Azure subscription created earlier.
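Taken together, the plan and apply tasks correspond roughly to the following commands (a sketch; -auto-approve skips the interactive confirmation that apply would otherwise request):
terraform plan -input=false
terraform validate
terraform apply -input=false -auto-approve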
12. Select the Azure App Service Deploy task. Select the Azure subscription created earlier. By the time this
task runs, Terraform has ensured that the deployment environment has been configured to meet the app's
requirements. It will use the created app service name set in the Variables section.
13. From the top of the page, select Save and confirm.
14. Select Create release . Specify the recent build and select Create . Your build number will most likely be
different than this example.
15. Select the new release to track the pipeline.
17. Once the release has completed, select the Azure App Service Deploy task.
18. Copy the name of the app service from the task title. Note that the name you see will vary slightly.
19. Open a new browser tab and navigate to the app service. The domain format is [app service name].azurewebsites.net, so the final URL will be something like:
https://pulterraformweb99ac17bf.azurewebsites.net.
Summary
In this tutorial, you learned how to automate repeatable deployments with Terraform on Azure using Azure
Pipelines.
Clean up resources
This tutorial created an Azure DevOps project and some resources in Azure. If you're not going to continue to use
these resources, delete them with the following steps:
1. Delete the Azure DevOps project created by the Azure DevOps Demo Generator.
2. All Azure resources created during this tutorial were assigned to either the PULTerraform or terraformrg
resource groups. Deleting those two groups will delete the resources they contain. This can be done via the
CLI or portal. The following example shows you how to delete the resource groups using Azure CLI.
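A minimal sketch of those commands, using the resource group names from this tutorial:
az group delete --name PULTerraform --yes
az group delete --name terraformrg --yes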
Next steps
Terraform with Azure
Migrate from Jenkins to Azure Pipelines
2/26/2020 • 7 minutes to read • Edit Online
Jenkins has traditionally been installed by enterprises in their own data centers and managed in an on-premises
fashion, though a number of providers offer managed Jenkins hosting.
Azure Pipelines, on the other hand, is a cloud native continuous integration pipeline, providing the management of
build and release pipelines and build agent virtual machines hosted in the cloud.
However, Azure Pipelines offers a fully on-premises option as well with Azure DevOps Server, for those customers
who have compliance or security concerns that require them to keep their code and build within the enterprise data
center.
In addition, Azure Pipelines supports a hybrid cloud and on-premises model, where Azure Pipelines manages the build and release orchestration while build agents run both in the cloud and on-premises. This suits customers who have custom needs and dependencies for some build agents but want to move most workloads to the cloud.
This document provides a guide to translate a Jenkins pipeline configuration to Azure Pipelines, information about
moving container-based builds and selecting build agents, mapping environment variables, and how to handle
success and failures of the build pipeline.
Configuration
You'll find a familiar transition from a Jenkins declarative pipeline into an Azure Pipelines YAML configuration. The
two are conceptually similar, supporting "configuration as code" and allowing you to check your configuration into
your version control system. Unlike Jenkins, however, Azure Pipelines uses the industry-standard YAML to
configure the build pipeline.
Despite the language difference, however, the concepts between Jenkins and Azure Pipelines and the way they're
configured are similar. A Jenkinsfile lists one or more stages of the build process, each of which contains one or
more steps that are performed in order. For example, a "build" stage may run a task to install build-time dependencies and then perform a compilation step, while a "test" stage may invoke the test harness against the binaries that were produced in the build stage.
For example:
Jenkinsfile
pipeline {
    agent none
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
                sh 'npm run build'
            }
        }
        stage('Test') {
            steps {
                sh 'npm test'
            }
        }
    }
}
This translates easily to an Azure Pipelines YAML configuration, with a job corresponding to each stage, and steps to
perform in each job:
azure-pipelines.yml
jobs:
- job: Build
  steps:
  - script: npm install
  - script: npm run build
- job: Test
  steps:
  - script: npm test
Visual Configuration
If you are not using a Jenkins declarative pipeline with a Jenkinsfile, and are instead using the graphical interface to
define your build configuration, then you may be more comfortable with the classic editor in Azure Pipelines.
Container-Based Builds
Using containers in your build pipeline allows you to build and test within a docker image that has the exact
dependencies that your pipeline needs, already configured. It saves you from having to include a build step that
installs additional software or configures the environment. Both Jenkins and Azure Pipelines support container-
based builds.
In addition, both Jenkins and Azure Pipelines allow you to share the build directory on the host agent to the
container volume using the -v flag to docker. This allows you to chain multiple build jobs together that can use the
same sources and write to the same output directory. This is especially useful when you use many different
technologies in your stack; you may want to build your backend using a .NET Core container and your frontend
with a TypeScript container.
For example, to run a build in an Ubuntu 14.04 ("Trusty") container, then run tests in an Ubuntu 16.04 ("Xenial")
container:
Jenkinsfile
pipeline {
    agent none
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'ubuntu:trusty'
                    args '-v $HOME:/build -w /build'
                }
            }
            steps {
                sh 'make'
            }
        }
        stage('Test') {
            agent {
                docker {
                    image 'ubuntu:xenial'
                    args '-v $HOME:/build -w /build'
                }
            }
            steps {
                sh 'make test'
            }
        }
    }
}
Azure Pipelines provides container jobs to enable you to run your build within a container:
azure-pipelines.yml
resources:
  containers:
  - container: trusty
    image: ubuntu:trusty
  - container: xenial
    image: ubuntu:xenial

jobs:
- job: build
  container: trusty
  steps:
  - script: make
- job: test
  dependsOn: build
  container: xenial
  steps:
  - script: make test
In addition, Azure Pipelines provides a docker task that allows you to run, build, or push an image.
Agent Selection
Jenkins offers build agent selection using the agent option to ensure that your build pipeline - or a particular stage
of the pipeline - runs on a particular build agent machine. Similarly, Azure Pipelines offers a number of options to
configure where your build environment runs.
Hosted Agent Selection
Azure Pipelines offers cloud hosted build agents for Linux, Windows, and macOS builds. To select the build
environment, you can use the vmImage keyword. For example, to select a macOS build:
pool:
  vmImage: macOS-10.14
Additionally, you can specify a container and specify a docker image for finer grained control over how your build
is run.
On-Premises Agent Selection
If you host your build agents on-premises, then you can define the build agent "capabilities" based on the
architecture of the machine or the software that you've installed on it. For example, if you've set up an on-premises
build agent with the java capabilities, then you can ensure that your job runs on it using the demands keyword:
pool:
  demands: java
Environment Variables
In Jenkins, you typically define environment variables for the entire pipeline. For example, to set two environment
variables, CONFIGURATION=debug and PLATFORM=x64:
Jenkinsfile
pipeline {
    environment {
        CONFIGURATION = 'debug'
        PLATFORM = 'x64'
    }
}
Similarly, in Azure Pipelines you can configure variables that are used both within the YAML configuration and are
set as environment variables during job execution:
azure-pipelines.yml
variables:
  configuration: debug
  platform: x64
Additionally, in Azure Pipelines you can define variables that are set only for the duration of a particular job:
azure-pipelines.yml
jobs:
- job: debug_build
  variables:
    configuration: debug
  steps:
  - script: ./build.sh $(configuration)
- job: release_build
  variables:
    configuration: release
  steps:
  - script: ./build.sh $(configuration)
Predefined Variables
Both Jenkins and Azure Pipelines set a number of environment variables to allow you to inspect and interact with
the execution environment of the continuous integration system.
BUILD_URL (the URL that displays the build logs): this is not set as an environment variable in Azure Pipelines, but it can be derived from other variables.¹
¹ To derive the URL that displays the build logs in Azure Pipelines, combine the following environment variables in this format:
${SYSTEM_TEAMFOUNDATIONCOLLECTIONURI}/${SYSTEM_TEAMPROJECT}/_build/results?buildId=${BUILD_BUILDID}
Jenkinsfile
post {
    always {
        echo "The build has finished"
    }
    success {
        echo "The build succeeded"
    }
    failure {
        echo "The build failed"
    }
}
Similarly, Azure Pipelines has a rich conditional execution framework that allows you to run a job, or steps of a job,
based on a number of conditions including pipeline success or failure.
To emulate the simple Jenkins post-build conditionals, you can define jobs that run based on the always(), succeeded(), or failed() conditions:
azure-pipelines.yml
jobs:
- job: always
  steps:
  - script: echo "The build has finished"
  condition: always()
- job: success
  steps:
  - script: echo "The build succeeded"
  condition: succeeded()
- job: failed
  steps:
  - script: echo "The build failed"
  condition: failed()
In addition, you can combine other conditions, like the ability to run a task based on the success or failure of an
individual task, environment variables, or the execution environment, to build a rich execution pipeline.
Migrate from Travis to Azure Pipelines
11/2/2020 • 14 minutes to read • Edit Online
Azure Pipelines is more than just a Continuous Integration tool, it's a flexible build and release orchestration platform. It's designed for
the software development and deployment process, but because of this extensibility, there are a number of differences from simpler
build systems like Travis.
The purpose of this guide is to help you migrate from Travis to Azure Pipelines. This guide describes the philosophical differences
between Travis and Azure Pipelines, examines the practical effects on the configuration of each system, and shows how to translate
from a Travis configuration to an Azure Pipelines configuration.
We need your help to make this guide better! Submit comments below or contribute your changes directly.
Key differences
There are numerous differences between Travis and Azure Pipelines, including version control configuration, environment variables, and
virtual machine environments, but at a higher level:
Azure Pipelines configuration is more precise and relies less on shorthand configuration and implied steps. You'll see this in
places like language selection and in the way Azure Pipelines allows flow to be controlled.
Travis builds have stages, jobs and phases, while Azure Pipelines simply has steps that can be arranged and executed in an
arbitrary order or grouping that you choose. This gives you flexibility over the way that your steps are executed, including the
way they're executed in parallel.
Azure Pipelines allows job definitions and steps to be stored in separate YAML files in the same or a different repository, enabling
steps to be shared across multiple pipelines.
Azure Pipelines provides full support for building and testing on Microsoft-managed Linux, Windows, and macOS images. See
Microsoft-hosted agents for more details.
Language
Travis uses the language keyword to identify the prerequisite build environment to provision for your build. For example, to select
Node.JS 8.x:
.travis.yml
language: node_js
node_js:
  - "8"
Microsoft-hosted agents contain the SDKs for many languages out-of-the-box and most languages need no configuration. But where a
language has multiple versions installed, you may need to execute a language selection task to set up the environment.
For example, to select Node.JS 8.x:
azure-pipelines.yml
steps:
- task: NodeTool@0
  inputs:
    versionSpec: '8.x'
Language mappings
The language keyword in Travis does not just imply that a particular version of language tools be used, but also that a number of build
steps be implicitly performed. Azure Pipelines, on the other hand, does not do any work without your input, so you'll need to specify the
commands that you want to execute.
Here is a translation guide from the language keyword to the commands that are executed automatically for the most commonly-used
languages:
Language: c, cpp
Commands:
  ./configure
  make
  make install

Language: go
Commands:
  go get -t -v ./...
  make or go test

Language: java, groovy
Commands:
  Gradle:
    gradle assemble
    gradle check
  Maven:
    mvn install -DskipTests=true -Dmaven.javadoc.skip=true -B -V
    mvn test -B
  Ant:
    ant test

Language: perl
Commands:
  Build.PL:
    perl ./Build.pl
    ./Build test
  Makefile.PL:
    perl Makefile.PL
    make test
  Makefile:
    make test

Language: php
Commands:
  phpunit
steps:
- task: NodeTool@0
  inputs:
    versionSpec: '8.x'
- task: UseRubyVersion@0
  inputs:
    versionSpec: '>= 2.5'
Phases
In Travis, steps are defined in a fixed set of named phases such as before_install or before_script . Azure Pipelines does not have
named phases and steps can be grouped, named, and organized in whatever way makes sense for the pipeline.
For example:
.travis.yml
before_install:
  - npm install -g bower
install:
  - npm install
  - bower install
script:
  - npm run build
  - npm test
azure-pipelines.yml
steps:
- script: npm install -g bower
- script: npm install
- script: bower install
- script: npm run build
- script: npm test
Alternatively, related commands can be grouped into a single step and given a display name:
steps:
- script: |
    npm install -g bower
    npm install
    bower install
  displayName: 'Install dependencies'
- script: npm run build
- script: npm test
Parallel jobs
Travis provides parallelism by letting you define a stage, which is a group of jobs that are executed in parallel. A Travis build can have
multiple stages; once all jobs in a stage have completed, the execution of the next stage can begin.
Azure Pipelines gives you finer-grained control of parallelism. You can make each step dependent on any other step you want. In this way, you specify which steps run serially and which can run in parallel. You can fan out with multiple steps running in parallel after the completion of one step, and then fan back in with a single step that runs afterward. This model gives you options to define complex workflows if necessary. Here's a simple example: to run a build script, then upon its completion run both the unit tests and the integration tests in parallel, and once all tests have finished, package the artifacts and then deploy to pre-production:
.travis.yml
jobs:
  include:
    - stage: build
      script: ./build.sh
    - stage: test
      script: ./test.sh unit_tests
    - script: ./test.sh integration_tests
    - stage: package
      script: ./package.sh
    - stage: deploy
      script: ./deploy.sh pre_prod
azure-pipelines.yml
jobs:
- job: build
  steps:
  - script: ./build.sh
- job: test1
  dependsOn: build
  steps:
  - script: ./test.sh unit_tests
- job: test2
  dependsOn: build
  steps:
  - script: ./test.sh integration_tests
- job: package
  dependsOn:
  - test1
  - test2
  steps:
  - script: ./package.sh
- job: deploy
  dependsOn: package
  steps:
  - script: ./deploy.sh pre_prod
In this variation, packaging depends only on the unit test job, while deployment waits for both test jobs and the packaging job:
jobs:
- job: build
  steps:
  - script: ./build.sh
- job: test1
  dependsOn: build
  steps:
  - script: ./test.sh unit_tests
- job: test2
  dependsOn: build
  steps:
  - script: ./test.sh integration_tests
- job: package
  dependsOn: test1
  steps:
  - script: ./package.sh
- job: deploy
  dependsOn:
  - test1
  - test2
  - package
  steps:
  - script: ./deploy.sh pre_prod
Step reuse
Most teams like to reuse as much logic as possible to save time and avoid replication errors, confusion, and staleness. Instead of duplicating a change in multiple places, you can make it once in a common area; the benefit increases when you have similar processes that build on multiple platforms.
In Travis you can use matrices to run multiple executions across a single configuration. In Azure Pipelines you can use matrices in the
same way, but you can also implement configuration reuse by using YAML templates.
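As a hedged sketch of template reuse (the file name build-steps.yml and the jobs are illustrative), a shared steps file can be referenced from several jobs so the logic is defined only once:

build-steps.yml
steps:
- script: npm install
- script: npm test

azure-pipelines.yml
jobs:
- job: linux
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
  - template: build-steps.yml
- job: mac
  pool:
    vmImage: 'macos-10.14'
  steps:
  - template: build-steps.yml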
Example: Environment variable in a matrix
One of the most common ways to run several builds with a slight variation is to change the execution using environment variables. For
example, your build script can look for the presence of an environment variable and change the way your software is built, or the way
it's tested.
You can use a matrix to run a build configuration several times, once for each value of the environment variable. For example, to
run a given script three times, each time with a different setting for an environment variable:
.travis.yml
os: osx
env:
  matrix:
    - MY_ENVIRONMENT_VARIABLE: 'one'
    - MY_ENVIRONMENT_VARIABLE: 'two'
    - MY_ENVIRONMENT_VARIABLE: 'three'
script: echo $MY_ENVIRONMENT_VARIABLE
azure-pipelines.yml
pool:
  vmImage: 'macOS-10.14'
strategy:
  matrix:
    set_env_to_one:
      MY_ENVIRONMENT_VARIABLE: 'one'
    set_env_to_two:
      MY_ENVIRONMENT_VARIABLE: 'two'
    set_env_to_three:
      MY_ENVIRONMENT_VARIABLE: 'three'
steps:
- script: echo $(MY_ENVIRONMENT_VARIABLE)
You can easily use the environment variable matrix options in Azure Pipelines to enable a matrix for different language versions. For
example, you can set an environment variable in each matrix variable that corresponds to the language version that you want to use,
then in the first step, use that environment variable to run the language configuration task:
.travis.yml
os: linux
matrix:
  include:
    - rvm: 2.3.7
    - rvm: 2.4.4
    - rvm: 2.5.1
script: ruby --version
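A hedged sketch of an equivalent Azure Pipelines configuration, reusing the UseRubyVersion task shown earlier (the matrix entry and variable names are illustrative):

strategy:
  matrix:
    ruby_2_3:
      RUBY_VERSION: '2.3.7'
    ruby_2_4:
      RUBY_VERSION: '2.4.4'
    ruby_2_5:
      RUBY_VERSION: '2.5.1'
steps:
- task: UseRubyVersion@0
  inputs:
    versionSpec: $(RUBY_VERSION)
- script: ruby --version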
A matrix can also vary the agent pool image, running the same steps on multiple operating systems:
azure-pipelines.yml
strategy:
  matrix:
    linux:
      imageName: 'ubuntu-16.04'
    mac:
      imageName: 'macos-10.14'
    windows:
      imageName: 'vs2017-win2016'
pool:
  vmImage: $(imageName)
steps:
- script: echo Hello, world!
build: ./build.sh
after_success: echo Success
after_failure: echo Failed
azure-pipelines.yml
steps:
- script: ./build.sh
- script: echo Success
  condition: succeeded()
- script: echo Failed
  condition: failed()
jobs:
- job: build
  steps:
  - script: ./build.sh
- job: alert
  dependsOn: build
  condition: and(failed(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
  steps:
  - script: ./sound_the_alarms.sh
Predefined variables
Both Travis and Azure Pipelines set a number of environment variables to allow you to inspect and interact with the execution
environment of the CI system.
In most cases there's an Azure Pipelines variable to match the environment variable in Travis. Here's a list of commonly-used
environment variables in Travis and their analog in Azure Pipelines:
TRAVIS_BRANCH
  Azure Pipelines: BUILD_SOURCEBRANCH (CI builds); SYSTEM_PULLREQUEST_TARGETBRANCH (pull request builds)
  Description: The name of the branch the build was queued for, or the name of the branch the pull request is targeting.

TRAVIS_COMMIT
  Azure Pipelines: for pull request builds, git rev-parse HEAD^2
  Description: For pull request validation builds, Azure Pipelines sets BUILD_SOURCEVERSION to the resulting merge commit of the pull request into master; this command will identify the pull request commit itself.

TRAVIS_PULL_REQUEST
  Azure Pipelines: SYSTEM_PULLREQUEST_PULLREQUESTID (Azure Repos); SYSTEM_PULLREQUEST_PULLREQUESTNUMBER (GitHub)
  Description: The pull request number that triggered this build. (For GitHub builds, this is a unique identifier that is not the pull request number.)

TRAVIS_PULL_REQUEST_BRANCH
  Azure Pipelines: SYSTEM_PULLREQUEST_SOURCEBRANCH
  Description: The name of the branch where the pull request originated.

TRAVIS_PULL_REQUEST_SHA
  Azure Pipelines: for pull request builds, git rev-parse HEAD^2
  Description: For pull request validation builds, Azure Pipelines sets BUILD_SOURCEVERSION to the resulting merge commit of the pull request into master; this command will identify the pull request commit itself.
Build Reasons :
The TRAVIS_EVENT_TYPE variable contains values that map to values provided by the Azure Pipelines BUILD_REASON variable:
Operating Systems :
The TRAVIS_OS_NAME variable contains values that map to values provided by the Azure Pipelines AGENT_OS variable:
.travis.yml
branches:
  only:
    - master
    - /^releases.*/
azure-pipelines.yml
trigger:
  branches:
    include:
    - master
    - releases*
Output caching
Travis supports caching dependencies and intermediate build output to improve build times. Azure Pipelines does not support caching
intermediate build output, but does offer integration with Azure Artifacts for dependency storage.
Git submodules
Travis and Azure Pipelines both clone git repos "recursively" by default. This means that submodules are cloned by the agent, which is
useful since submodules usually contain dependencies. However, the extra cloning takes time, so if you don't need the dependencies
then you can disable cloning submodules:
.travis.yml
git:
  submodules: false
azure-pipelines.yml
checkout:
  submodules: false
Migrate from XAML builds to new builds
11/2/2020 • 13 minutes to read • Edit Online
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
We introduced XAML build automation capabilities based on the Windows Workflow Foundation in Team
Foundation Server (TFS) 2010. We released another version of XAML builds in TFS 2013.
After that we sought to expand beyond .NET and Windows and add support for other kinds of apps that are based
on operating systems such as macOS and Linux. It became clear that we needed to switch to a more open, flexible,
web-based foundation for our build automation engine. In early 2015 in Azure Pipelines, and then in TFS 2015, we
introduced a simpler task- and script-driven cross-platform build system.
Because the systems are so different, there's no automated or general way to migrate a XAML build pipeline into a
new build pipeline. The migration process is to manually create the new build pipelines that replicate what your
XAML builds do.
If you're building standard .NET applications, you probably used our default templates as provided out-of-the-box.
In this case the process should be reasonably easy.
If you have customized your XAML templates or added custom tasks, then you'll need to also take other steps
including writing scripts, installing extensions, or creating custom tasks.
(If you don't see your project listed on the home page, select Browse .)
On-premises TFS: http://{your_server}:8080/tfs/DefaultCollection/{your_project}
Azure Pipelines: https://dev.azure.com/{your_organization}/{your_project}
The TFS URL doesn't work for me. How can I get the correct URL?
2. Create a build pipeline (Pipelines tab > Builds)
Build pipeline name: You can change it whenever you save the pipeline. When editing the pipeline, on the Tasks tab, in the left pane click Pipeline, and the Name field appears in the right pane. In the Builds hub (Mine or All pipelines tab), open the action menu and choose Rename.
Queue processing: Not yet supported. As an alternative, disable the triggers.
Source Settings tab: In TFS 2017 and newer, on the Repository tab specify your mappings with Active paths as Map and Cloaked paths as Cloak. In Azure Pipelines, on the Tasks tab, in the left pane click Get sources, then specify your workspace mappings with Active paths as Map and Cloaked paths as Cloak.
The new build pipeline offers you some new options. The specific extra options you'll see depend on the version of TFS or Azure Pipelines you're using. If you're using Azure Pipelines, first make sure to display Advanced settings. See Build TFVC repositories.
Git
XAML setting, with the TFS 2017 and newer equivalent and the Azure Pipelines equivalent:
Source Settings tab: In TFS 2017 and newer, on the Repository tab specify the repository and default branch. In Azure Pipelines, on the Tasks tab, in the left pane click Get sources, then specify the repository and default branch.
The new build pipeline offers you some new options. The specific extra options you'll see depend on the version of TFS or Azure Pipelines you're using. If you're using Azure Pipelines, first make sure to display Advanced settings. See Pipeline options for Git repositories.
Trigger tab
Trigger tab: On the Triggers tab, select the trigger you want to use: CI, scheduled, or gated.
The new build pipeline offers you some new options. For example:
You can potentially create fewer build pipelines to replace a larger number of XAML build pipelines. This is
because you can use a single new build pipeline with multiple triggers. And if you're using Azure Pipelines,
then you can add multiple scheduled times.
The Rolling builds option is replaced by the Batch changes option. You can't specify minimum time
between builds. But if you're using Azure Pipelines, you can specify the maximum number of parallel jobs
per branch.
If your code is in TFVC, you can add folder path filters to include or exclude certain sets of files from
triggering a CI build.
If your code is in TFVC and you're using the gated check-in trigger, you've got the option to also run CI builds
or not. You can also use the same workspace mappings as your repository settings, or specify different
mappings.
If your code is in Git, then you specify the branch filters directly on the Triggers tab. And you can add folder
path filters to include or exclude certain sets of files from triggering a CI build.
The specific extra options you'll see depend on the version of TFS or Azure Pipelines you're using. See Build pipeline triggers.
We don't yet support the Build even if nothing has changed since the previous build option.
Build Defaults tab
Build controller: In TFS 2017 and newer, on the General tab, select the default agent pool. In Azure Pipelines, on the Options tab, select the default agent pool.
Staging location: On the Tasks tab, specify arguments to the Copy Files and Publish Build Artifacts tasks. See Build artifacts.
The new build pipeline offers you some new options. For example:
You don't need a controller, and the new agents are easier to set up and maintain. See Build and release
agents.
You can exactly specify which sets of files you want to publish as build artifacts. See Build artifacts.
Process tab
TF Version Control
XAML process parameter, with the TFS 2017 and newer equivalent and the Azure Pipelines equivalent:
Clean workspace: In TFS 2017 and newer, on the Repository tab, open the Clean menu and then select true. In Azure Pipelines, on the Tasks tab, in the left pane click Get sources, display Advanced settings, and then select Clean. (We plan to move this option out of advanced settings.)
Get version: You can't specify a changeset in the build pipeline, but you can specify one when you manually queue a build.
Label Sources: In TFS 2017 and newer, on the Repository tab, select an option from the Label sources menu. In Azure Pipelines, on the Tasks tab, in the left pane click Get sources and select one of the Tag sources options. (We plan to change the name of this to Label sources.)
The new build pipeline offers you some new options. See Build TFVC repositories.
Git
Clean repository: In TFS 2017 and newer, on the Repository tab, open the Clean menu and select true. In Azure Pipelines, on the Tasks tab, in the left pane click Get sources, show Advanced settings, and then select Clean. (We plan to move this option out of advanced settings.)
Checkout override: You can't specify a commit in the build pipeline, but you can specify one when you manually queue a build.
The new build pipeline offers you some new options. See Pipeline options for Git repositories.
Build
On the Build tab (TFS 2017 and newer) or the Tasks tab (Azure Pipelines), after you select the Visual Studio Build
task, you'll see the arguments that are equivalent to the XAML build parameters.
Projects: Solution
Output location: The Visual Studio Build task builds and outputs files in the same way you do it on your dev machine, in the local workspace. We give you full control of publishing artifacts out of the local workspace on the agent. See Artifacts in Azure Pipelines.
Advanced, post- and pre-build scripts: You can run one or more scripts at any point in your build pipeline by adding one or more instances of the PowerShell, Batch, and Command tasks. For example, see Use a PowerShell script to customize your build pipeline.
IMPORTANT
In the Visual Studio Build arguments, on the Visual Studio Version menu, make sure to select the version of Visual Studio that you're using.
The new build pipeline offers you some new options. See Visual Studio Build.
Learn more: Visual Studio Build task (for building solutions), MSBuild task (for building individual projects).
Test
See continuous testing and Visual Studio Test task.
Publish Symbols
Path to publish symbols: Click the Publish Symbols task and then copy the path into the Path to publish symbols argument.
Advanced
Maximum agent execution time: None in TFS 2017 and newer. In Azure Pipelines, on the Options tab you can specify Build job timeout in minutes.
Name filter, Tag comparison operator, Tags filter: A build pipeline asserts demands that are matched with agent capabilities. See Agent capabilities.
Build number format: On the General tab, copy your build number format into the Build number format field.
Create work item on failure: In TFS 2017 and newer, on the Options tab, select this check box. In Azure Pipelines, on the Options tab, enable this option.
Update work items with build number: None in TFS 2017 and newer. In Azure Pipelines, on the Options tab you can enable Automatically link new work in this build.
The new build pipeline offers you some new options. See:
Agent capabilities
Build number format
Retention Policy tab
Retention Policy tab On the Retention tab specify the policies you want to
implement.
The new build pipeline offers you some new options. See Build and release retention policies.
TIP
If you're using TFS 2017 or newer, you can write a short PowerShell script directly inside your build pipeline.
TFS 2017 or newer inline PowerShell script
For all these tasks we offer a set of built-in variables, and if necessary, you can define your own variables. See Build
variables.
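For example, a minimal inline script of this kind, expressed here as a YAML step purely as a sketch, can echo a couple of those built-in variables:

steps:
- powershell: |
    Write-Host "Repository: $(Build.Repository.Name)"
    Write-Host "Build number: $(Build.BuildNumber)"
  displayName: 'Inline PowerShell example'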
Write a custom task
If necessary, you can write your own custom extensions and custom tasks for your builds and releases.
Reuse patterns
In XAML builds you created custom XAML templates. In the new builds, it's easier to create reusable patterns.
Create a template
If you don't see a template for the kind of app you're building, you can start from an empty pipeline and add the tasks you need.
After you've got a pattern that you like, you can clone it or save it as a template directly in your web browser. See
Create your first pipeline.
Task groups (TFS 2017 or newer)
In XAML builds, if you change the template, then you also change the behavior of all pipelines based on it. In the
new build system, templates don't work this way. Instead, a template behaves as a traditional template. After you
create the build pipeline, subsequent changes to the template have no effect on build pipelines.
If you want to create a reusable and automatically updated piece of logic, then create a task group. You can then
later modify the task group in one place and cause all the pipelines that use it to automatically be changed.
FAQ
I don't see XAML builds. What do I do?
XAML builds are deprecated. We strongly recommend that you migrate to the new builds as explained above.
If you're not yet ready to migrate, then to enable XAML builds you must connect a XAML build controller to your
organization. See Configure and manage your build system.
If you're not yet ready to migrate, then to enable XAML builds:
1. Install TFS 2018.2.
2. Connect your XAML build servers to your TFS instance. See Configure and manage your build system.
How do I add conditional logic to my build pipeline?
Although the new build pipelines are essentially linear, we do give you control of the conditions under which a task
runs.
On TFS 2015 and newer: You can select Enabled, Continue on error, or Always run.
On Azure Pipelines, you can specify one of four built-in choices to control when a task is run. If you need more
control, you can specify custom conditions. For example:
and(failed(), in(variables['Build.Reason'], 'IndividualCI', 'BatchedCI'),
startsWith(variables['Build.SourceBranch'], 'refs/heads/features/'))
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
Network virtualization provides the ability to create multiple virtual networks on a shared physical network. Isolated virtual networks can be created using SCVMM network virtualization concepts. VMM uses the concept of logical networks and corresponding VM networks to create isolated networks of virtual machines.
You can create an isolated network of virtual machines that span across different hosts in a host-cluster or a
private cloud.
You can have VMs from different networks residing in the same host machine and still be isolated from each
other.
You can define IP addresses from any IP pool of your choice for a VM network.
See also: Hyper-V Network Virtualization Overview.
Prerequisites
SCVMM Server 2012 R2 or later.
Windows Server 2012 R2 host machines with Hyper-V set up and at least two physical NICs attached.
One NIC (perhaps external) with corporate network or Internet access.
One NIC configured in trunk mode with a VLAN ID (such as 991) and routable IP subnets (such as 10.10.30.1/24). Your network administrator can configure this.
All Hyper-V hosts in the host group have the same VLAN ID. This host group will be used for your isolated
networks.
Verify the setup is working correctly by following these steps:
1. Open an RDP session to each of the host machines and open an administrator PowerShell session.
2. Run the command Get-NetVirtualizationProviderAddress . This gets the provider address for the physical
NIC configured in trunk mode with a VLAN ID.
3. Go to another host and open an administrator PowerShell session. Ping other machines using the command
ping -p <Provider address> . This confirms all host machines are connected to a physical NIC in trunk mode
with IPs routable across the host machines. If this test fails, contact your network administrator.
Back to list of tasks
3. In the popup, enter an appropriate name and select One Connected Network -> Allow new networks created on this logical network to use network virtualization, then click Next.
4. Add a new Network Site and select the host group to which the network site will be scoped. Enter the
VLAN ID used to configure physical NIC in the Hyper-V host group and the corresponding routable IP
subnet(s). To assist tracking, change the network site name to one that is memorable.
8. Provide the gateway address. By default, you can use the first IP address in your subnet.
9. Click Next and leave the existing DNS and WINS settings. Complete the creation of the network site.
10. Now create another Logical Network for external Internet access, but this time select One Connected network -> Create a VM network with same name to allow virtual machines to access this logical network directly, and then click Next.
11. Add a network site and select the same host group, but this time add the VLAN as 0 . This means the
communication uses the default access mode NIC (Internet).
12. Click Next and Save .
13. The result should look like the following in your administrator console after creating the logical networks.
2. Select Uplink port profile and select Hyper-V Port as the load balancing algorithm, then click Next.
3. Select the Network Virtualization site created previously and choose the Enable Hyper-V Network Virtualization checkbox, then save the profile.
4. Now create another Hyper-V port profile for external logical network. Select Uplink mode and Host
default as the load balancing algorithm, then click Next .
5. Select the other network site to be used for external communication, but this time don't enable network virtualization. Then save the profile.
3. Click Next to open the Uplink tab. Click Add uplink port profile and add the network virtualization port profile you just created.
4. Click Next and save the logical switch.
5. Now create another logical switch for the external network for Internet communication. This time add the
other uplink port profile you created for the external network.
4. Create another logical switch for external connectivity, assign the physical adapter used for external
communication, and select the external port profile.
5. Do the same for all the Hyper-V hosts in the host group.
This is a one-time configuration for a specific host group of machines. After completing this setup, you can
dynamically provision your isolated network of virtual machines using the SCVMM extension in TFS and Azure
Pipelines builds and releases.
Back to list of tasks
You can create any of the above topologies using the SCVMM extension, as shown in the following steps.
1. Open your TFS or Azure Pipelines instance and install the SCVMM extension if not already installed. For
more information, see SCVMM deployment.
The SCVMM task provides the capability to perform lab management operations efficiently using build and release pipelines. You can manage SCVMM environments, provision isolated virtual networks, and implement build-deploy-test scenarios.
5. In case of topologies 1 and 2 , leave the VM Network name empty, which will clear all the old VM
networks present in the created VMs (if any). For topology 3 , you must provide information about the
external VM network here.
6. Enter the Cloud Name of the host where you want to provision your isolated network. In case of private
cloud, ensure the host machines added to the cloud are connected to the same logical and external switches
as explained above.
7. Select the Network Vir tualization option to create the virtualization layer.
8. Based on the topology you would like to create, decide if the network requires an Active Directory VM. For example, to create Topology 2 (AD-backed isolated network), you require an Active Directory VM. Select the Add Active Directory VM checkbox, enter the AD VM name and the stored VM source. Also enter the static IP address configured in the AD VM source and the DNS suffix.
9. Enter the settings for the VM Network and subnet you want to create, and the backing logical network you
created in the previous section (Logical Networks). Ensure the VM network name is unique. If possible,
append the release name for easier tracking later.
10. In the Boundary Virtual Machine options section, set Create boundary VM for communication with Azure Pipelines/TFS. This will be the entry point for external communication.
11. Enter the boundary VM name and the source template (the boundary VM source should always be a VM
template), and enter name of the existing external VM network you created for external communication.
12. Provide details for configuring the boundary VM agent to communicate with Azure Pipelines/TFS. You can
configure a deployment agent or an automation agent. This agent will be used for app deployments.
13. Ensure the agent name you provide is unique. This will be used as demand in succeeding job properties so
that the correct agent will be selected. If you selected the deployment group agent option, this parameter is
replaced by the value of the tag, which must also be unique.
14. Ensure the boundary VM template has the agent configuration files downloaded and saved in the VHD
before the template is created. Use this path as the agent installation path above.
6. After testing is completed, you can destroy the VMs by using the Delete VM task option.
Now you can create a release from this release pipeline. Each release will dynamically provision your isolated virtual network and run your deploy and test tasks in the environment. You can find the test results in the release summary. After your tests are completed, you can automatically decommission your environments. You can create as many environments as you need with just a click from Azure Pipelines.
Back to list of tasks
See also
SCVMM deployment
Hyper-V Network Virtualization Overview
FAQ
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Azure Pipelines | TFS 2018 | TFS 2017 | TFS 2015 | Previous versions (XAML builds)
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called
definitions, runs are called builds, service connections are called service endpoints, stages are called environments,
and jobs are called phases.
This article provides an index of built-in tasks. To learn more about tasks, including creating custom tasks,
custom extensions, and finding tasks on the Visual Studio Marketplace, see Tasks concepts.
Build
.NET Core CLI task: Build, test, package, or publish a dotnet application, or run a custom dotnet command. For package commands, supports NuGet.org and authenticated feeds like Package Management and MyGet. (Azure Pipelines, TFS 2017 and newer)
Ant build and release task: Learn how to build with Apache Ant. (Azure Pipelines, TFS 2015 RTM and newer)
CMake build and release task. (Azure Pipelines, TFS 2015 RTM and newer)
Docker Compose task: Build, push, or run multi-container Docker applications. The task can be used with Docker or Azure Container Registry. (Azure Pipelines, Azure DevOps Server 2019)
Docker task: Build and push Docker images to any container registry using a Docker registry service connection. (Azure Pipelines, TFS 2018 and newer)
Gradle build and release task. (Azure Pipelines, TFS 2015 RTM and newer)
Gulp build and release task. (Azure Pipelines, TFS 2015 RTM and newer)
Index Sources & Publish Symbols build and release task. (Azure Pipelines, TFS 2015 RTM and newer)
Maven build and release task. (Azure Pipelines, TFS 2015 RTM and newer)
MSBuild build and release task. (Azure Pipelines, TFS 2015 RTM and newer)
Utility
TASK | VERSIONS
Archive Files task - Archive a source folder into an archive file. | Azure Pipelines, TFS 2017 and newer
Batch Script task - Execute .bat or .cmd scripts when building your code. | Azure Pipelines, TFS 2015 RTM and newer
Cache task - Improve build performance by caching files, like dependencies, between pipeline runs. | Azure Pipelines, TFS 2017 and newer
Command Line task - Execute tools from a command prompt when building code. | Azure Pipelines, TFS 2015 RTM and newer
Copy and Publish Build Artifacts task - Copy build artifacts to a staging folder and publish them. | TFS 2015 RTM. Deprecated on Azure Pipelines and newer versions of TFS.
Copy Files task - Copy files between folders with match patterns when building code. | Azure Pipelines, TFS 2015.3 and newer
cURL Upload Files task - Use cURL to upload files with supported protocols. | Azure Pipelines, TFS 2015 RTM and newer
Delay task - Pause execution of a build or release pipeline for a fixed delay time. | Azure Pipelines, Azure DevOps Server 2019
Delete Files task - Delete files from the agent working directory when building code. | Azure Pipelines, TFS 2015.3 and newer
Extract Files task - Extract files from archives to a target folder using minimatch patterns. | Azure Pipelines, TFS 2017 and newer (on TFS)
FTP Upload task - Upload files to a remote machine using the File Transfer Protocol (FTP), or securely with FTPS. | Azure Pipelines, TFS 2017 and newer (on TFS)
Invoke HTTP REST API task - Build and release task to invoke an HTTP API and parse the response with a build or release pipeline. | Azure Pipelines, TFS 2018 and newer
PowerShell task - Execute PowerShell scripts. | Azure Pipelines, TFS 2015 RTM and newer
Publish Build Artifacts task - Publish build artifacts to Azure Pipelines, Team Foundation Server (TFS), or to a file share. | Azure Pipelines, TFS 2015 RTM and newer
Query Work Items task - Ensure the number of matching items returned by a work item query is within the configured threshold. | Azure Pipelines, TFS 2017 and newer
Shell Script task - Execute a bash script when building code. | Azure Pipelines, TFS 2015 RTM and newer
Test
TASK | VERSIONS
App Center Test task - Test app packages with Visual Studio App Center. | Azure Pipelines, TFS 2017 and newer
Cloud-based Apache JMeter Load Test task (Deprecated) - Runs the Apache JMeter load test in cloud. | Azure Pipelines
Publish Test Results task - Publish test results to integrate test reporting into your build and release pipelines. | Azure Pipelines, TFS 2015 RTM and newer
Xamarin Test Cloud task - This task is deprecated. Use the App Center Test task instead. | Azure Pipelines, TFS 2015 RTM and newer
Package
TASK | VERSIONS
CocoaPods task - Learn all about how you can use CocoaPods packages when you are building code in Azure Pipelines or Team Foundation Server (TFS). | Azure Pipelines, TFS 2015 RTM and newer
Maven Authenticate task (for task runners) - Provides credentials for Azure Artifacts feeds and external Maven repositories. | Azure Pipelines
npm task - How to use npm packages when building code in Azure Pipelines. | Azure Pipelines, TFS 2015 RTM and newer
Deploy
TASK | VERSIONS
Azure CLI task - Build task to run a shell or batch script containing Microsoft Azure CLI commands. | Azure Pipelines, Azure DevOps Server 2019
Azure File Copy task - Build task to copy files to Microsoft Azure storage blobs or virtual machines (VMs). | Azure Pipelines, TFS 2015.3 and newer
Azure Key Vault task - Azure Key Vault task for use in the jobs of all of your build and release pipelines. | Azure Pipelines, Azure DevOps Server 2019
Azure virtual machine scale set deployment task - Deploy virtual machine scale set image. | Azure Pipelines
Copy Files Over SSH task - Copy Files Over SSH task for use in the jobs of all of your build and release pipelines. | Azure Pipelines, TFS 2017 and newer
MySQL Database Deployment On Machine Group task - The task is used to deploy for MySQL Database. | Azure Pipelines
SSH Deployment task - SSH task for use in the jobs of all of your build and release pipelines. | Azure Pipelines, TFS 2017 and newer
Tool
TASK | VERSIONS
Docker Installer task - Install the Docker CLI on an agent machine. | Azure Pipelines, Azure DevOps Server 2019
Open source
These tasks are open source on GitHub. Feedback and contributions are welcome.
FAQ
Where can I learn step-by-step how to build my app?
Build your app
Can I add my own build tasks?
Yes: Add a build task
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some
features are available on-premises if you have upgraded to the latest version of TFS.
.NET Core CLI task
11/2/2020 • 10 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Azure Pipelines
Use this task to build, test, package, or publish a dotnet application, or to run a custom dotnet command. For
package commands, this task supports NuGet.org and authenticated feeds like Package Management and
MyGet.
If your .NET Core or .NET Standard build depends on NuGet packages, make sure to add two copies of this step:
one with the restore command and one with the build command.
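For example, a minimal sketch of that two-step arrangement might look like the following (the project pattern and configuration are illustrative):
# Restore first, then build without re-running restore
- task: DotNetCoreCLI@2
  displayName: 'dotnet restore'
  inputs:
    command: 'restore'
    projects: '**/*.csproj'
- task: DotNetCoreCLI@2
  displayName: 'dotnet build'
  inputs:
    command: 'build'
    projects: '**/*.csproj'
    arguments: '--no-restore --configuration Release'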
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
YAML snippet
# .NET Core
# Build, test, package, or publish a dotnet application, or run a custom dotnet command
- task: DotNetCoreCLI@2
inputs:
#command: 'build' # Options: build, push, pack, publish, restore, run, test, custom
#publishWebProjects: true # Required when command == Publish
#projects: # Optional
#custom: # Required when command == Custom
#arguments: # Optional
#publishTestResults: true # Optional
#testRunTitle: # Optional
#zipAfterPublish: true # Optional
#modifyOutputPath: true # Optional
#feedsToUse: 'select' # Options: select, config
#vstsFeed: # Required when feedsToUse == Select
#includeNuGetOrg: true # Required when feedsToUse == Select
#nugetConfigPath: # Required when feedsToUse == Config
#externalFeedCredentials: # Optional
#noCache: false
restoreDirectory:
#verbosityRestore: 'Detailed' # Options: -, quiet, minimal, normal, detailed, diagnostic
#packagesToPush: '$(Build.ArtifactStagingDirectory)/*.nupkg' # Required when command == Push
#nuGetFeedType: 'internal' # Required when command == Push# Options: internal, external
#publishVstsFeed: # Required when command == Push && NuGetFeedType == Internal
#publishPackageMetadata: true # Optional
#publishFeedCredentials: # Required when command == Push && NuGetFeedType == External
#packagesToPack: '**/*.csproj' # Required when command == Pack
#packDirectory: '$(Build.ArtifactStagingDirectory)' # Optional
#nobuild: false # Optional
#includesymbols: false # Optional
#includesource: false # Optional
#versioningScheme: 'off' # Options: off, byPrereleaseNumber, byEnvVar, byBuildNumber
#versionEnvVar: # Required when versioningScheme == ByEnvVar
#majorVersion: '1' # Required when versioningScheme == ByPrereleaseNumber
#minorVersion: '0' # Required when versioningScheme == ByPrereleaseNumber
#patchVersion: '0' # Required when versioningScheme == ByPrereleaseNumber
#buildProperties: # Optional
#verbosityPack: 'Detailed' # Options: -, quiet, minimal, normal, detailed, diagnostic
workingDirectory:
Arguments
ARGUMENT | DESCRIPTION
selectOrConfig (Feeds to use) - You can either choose to select a feed from Azure Artifacts and/or NuGet.org here, or commit a NuGet.config file to your source code repository and set its path using the nugetConfigPath argument. Options: select, config. Argument aliases: feedsToUse
projects (Path to project(s)) - The path to the csproj file(s) to use. You can use wildcards (e.g. **/*.csproj for all .csproj files in all subfolders).
verbosityPack (Verbosity) - Specifies the amount of detail displayed in the output for the pack command.
verbosityRestore (Verbosity) - Specifies the amount of detail displayed in the output for the restore command.
searchPatternPack (Path to csproj or nuspec file(s) to pack) - Pattern to search for csproj or nuspec files to pack. You can separate multiple patterns with a semicolon, and you can make a pattern negative by prefixing it with !. Example: **/*.csproj;!**/*.Tests.csproj. Argument aliases: packagesToPack
publishWebProjects (Publish Web Projects) - If true, the task will try to find the web projects in the repository and run the publish command on them. Web projects are identified by the presence of either a web.config file or a wwwroot folder in the directory. Note that this argument defaults to true if not specified.
publishTestResults (Publish test results) - Enabling this option will generate a test results TRX file in $(Agent.TempDirectory) and the results will be published to the server. This option appends --logger trx --results-directory $(Agent.TempDirectory) to the command line arguments. Code coverage can be collected by adding --collect "Code coverage" to the command line arguments. This is currently only available on the Windows platform.
CONTROL OPTIONS
Examples
Build
Build a project
# Build project
- task: DotNetCoreCLI@2
inputs:
command: 'build'
Push
Push NuGet packages to internal feed
# Push non test NuGet packages from a build to internal organization Feed
- task: DotNetCoreCLI@2
inputs:
command: 'push'
searchPatternPush: '$(Build.ArtifactStagingDirectory)/*.nupkg;!$(Build.ArtifactStagingDirectory)/*.Tests.nupkg'
feedPublish: 'FabrikamFeed'
Pack
Pack a NuGet package to a specific output directory
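A minimal sketch of packing into a specific output directory (the project pattern and output path are illustrative):
# Pack NuGet packages into a specific output directory
- task: DotNetCoreCLI@2
  inputs:
    command: 'pack'
    packagesToPack: '**/*.csproj'
    packDirectory: '$(Build.ArtifactStagingDirectory)/packages'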
Test
Run tests in your repository
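A minimal sketch of running tests (the test project pattern is illustrative):
# Run tests and publish the results to the pipeline
- task: DotNetCoreCLI@2
  inputs:
    command: 'test'
    projects: '**/*Tests/*.csproj'
    arguments: '--configuration Release'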
FAQ
Why is my build, publish, or test step failing to restore packages?
Most dotnet commands, including build , publish , and test include an implicit restore step. This will fail
against authenticated feeds, even if you ran a successful dotnet restore in an earlier step, because the earlier
step will have cleaned up the credentials it used.
To fix this issue, add the --no-restore flag to the Arguments textbox.
In addition, the test command does not recognize the feedRestore or vstsFeed arguments and feeds
specified in this manner will not be included in the generated NuGet.config file when the implicit restore step
runs. It is recommended that an explicit dotnet restore step be used to restore packages. The restore
command respects the feedRestore and vstsFeed arguments.
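For example, a minimal sketch of an explicit restore followed by a test run (the feed name my-feed and the project pattern are placeholders):
- task: DotNetCoreCLI@2
  displayName: 'dotnet restore'
  inputs:
    command: 'restore'
    projects: '**/*Tests/*.csproj'
    feedsToUse: 'select'
    vstsFeed: 'my-feed' # placeholder feed name
- task: DotNetCoreCLI@2
  displayName: 'dotnet test'
  inputs:
    command: 'test'
    projects: '**/*Tests/*.csproj'
    arguments: '--no-restore'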
Why should I check in a NuGet.config?
Checking a NuGet.config into source control ensures that a key piece of information needed to build your
project (the location of its packages) is available to every developer that checks out your code.
However, for situations where a team of developers works on a large range of projects, it's also possible to add
an Azure Artifacts feed to the global NuGet.config on each developer's machine. In these situations, using the
"Feeds I select here" option in the NuGet task replicates this configuration.
Troubleshooting
File structure for output files is different from previous builds
Azure DevOps hosted agents are configured with .NET Core 3.0, 2.1, and 2.2. The CLI for .NET Core 3.0 behaves
differently when publishing projects with the output folder argument (-o): the output folder is created in the root
directory rather than in the project file's directory. As a result, when you publish more than one project, all the
files are published to the same directory, which causes an issue.
To resolve this issue, use the Add project name to publish path parameter (modifyOutputPath in YAML) in the
.NET Core CLI task. This creates a subfolder named after the project file inside the output folder, so each project
is published to its own subfolder under the main output folder.
steps:
- task: DotNetCoreCLI@2
displayName: 'dotnet publish'
inputs:
command: publish
publishWebProjects: false
projects: '**/*.csproj'
arguments: '-o testpath'
zipAfterPublish: false
modifyOutputPath: true
Open Source
This task is open source on GitHub. Feedback and contributions are welcome.
Android build task (deprecated; use Gradle)
4/10/2020 • 2 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to build an Android app using Gradle and optionally start the emulator for unit tests.
Deprecated
The Android Build task has been deprecated. Use the Gradle task instead.
Demands
The build agent must have the following capabilities:
Android SDK (with the version number you will build against)
Android Support Repository (if referenced by Gradle file)
Arguments
ARGUMENT | DESCRIPTION
Location of Gradle Wrapper - The location in the repository of the gradlew wrapper used for the build. For agents on Windows (including Microsoft-hosted agents), you must use the gradlew.bat wrapper. Agents on Linux or macOS can use the gradlew shell script. See The Gradle Wrapper.
Project Directory - Relative path from the repo root to the root directory of the application (likely where your build.gradle file is).
Gradle Arguments - Provide any options to pass to the Gradle command line. The default value is build. See Gradle command line.
Create AVD - Select this check box if you would like the AVD to be created if it does not exist.
AVD Target SDK - Android SDK version the AVD should target. The default value is android-19.
AVD Device - (Optional) Device pipeline to use. Can be a device index or id. The default value is Nexus 5.
AVD ABI - The Application Binary Interface to use for the AVD. The default value is default/armeabi-v7a. See ABI Management.
Overwrite Existing AVD - Select this check box if an existing AVD with the same name should be overwritten.
Create AVD Optional Arguments - Provide any options to pass to the android create avd command. See Android Command Line.
EMULATOR OPTIONS
Start and Stop Android Emulator - Check if you want the emulator to be started and stopped when the Android Build task finishes. Note: You must deploy your own agent to use this option. You cannot use a Microsoft-hosted pool if you want to use an emulator.
Timeout in Seconds - How long the build should wait for the emulator to start. The default value is 300 seconds.
Headless Display - Check if you want to start the emulator with no GUI (headless mode).
Emulator Optional Arguments - (Optional) Provide any options to pass to the emulator command. The default value is -no-snapshot-load -no-snapshot-save.
Delete AVD - Check if you want the AVD to be deleted upon completion.
CONTROL OPTIONS
Related tasks
Android Signing
Android signing task
2/26/2020 • 2 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task in a pipeline to sign and align Android APK files.
Demands
The build agent must have the following capabilities:
Java JDK
YAML snippet
# Android signing
# Sign and align Android APK files
- task: AndroidSigning@3
inputs:
#apkFiles: '**/*.apk'
#apksign: true # Optional
#apksignerKeystoreFile: # Required when apksign == True
#apksignerKeystorePassword: # Optional
#apksignerKeystoreAlias: # Optional
#apksignerKeyPassword: # Optional
#apksignerArguments: '--verbose' # Optional
#apksignerFile: # Optional
#zipalign: true # Optional
#zipalignFile: # Optional
Arguments
ARGUMENT | DESCRIPTION
files (APK files) - (Required) Relative path from the repo root to the APK(s) you want to sign. You can use wildcards to specify multiple files. For example: outputs\apk*.apk to sign all .APK files in the outputs\apk\ subfolder; *\bin\.apk to sign all .APK files in all bin subfolders.
SIGNING OPTIONS
apksign (Sign the APK) - (Optional) Select this option to sign the APK with a provided Android Keystore file. Unsigned APKs can only run in an emulator. APKs must be signed to run on a device. Default value: true
keystoreAlias (Alias) - (Optional) Enter the alias that identifies the public/private key pair to be used in the keystore file. Argument aliases: apksignerKeystoreAlias
keyPass (Key password) - (Optional) Enter the key password for the alias and Android Keystore file. Important: Use a new variable with its lock enabled on the Variables pane to encrypt this value. See secret variables.
ZIPALIGN OPTIONS
CONTROL OPTIONS
Related tasks
Android Build
Ant task
6/2/2020 • 4 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to build with Apache Ant.
Demands
The build agent must have the following capability:
Apache Ant
YAML snippet
# Ant
# Build with Apache Ant
- task: Ant@1
inputs:
#buildFile: 'build.xml'
#options: # Optional
#targets: # Optional
#publishJUnitResults: true
#testResultsFiles: '**/TEST-*.xml' # Required when publishJUnitResults == True
#testRunTitle: # Optional
#codeCoverageToolOptions: 'None' # Optional. Options: none, cobertura, jaCoCo
#codeCoverageClassFilesDirectories: '.' # Required when codeCoverageToolOptions != None
#codeCoverageClassFilter: # Optional. Comma-separated list of filters to include or exclude classes from collecting code coverage. For example: +:com.*,+:org.*,-:my.app*.*
#codeCoverageSourceDirectories: # Optional
#codeCoverageFailIfEmpty: false # Optional
#antHomeDirectory: # Optional
#javaHomeOption: 'JDKVersion' # Options: jDKVersion, path
#jdkVersionOption: 'default' # Optional. Options: default, 1.11, 1.10, 1.9, 1.8, 1.7, 1.6
#jdkUserInputDirectory: # Required when javaHomeOption == Path
#jdkArchitectureOption: 'x64' # Optional. Options: x86, x64
Arguments
ARGUMENT | DESCRIPTION
antBuildFile (Ant build file) - (Required) Relative path from the repository root to the Ant build file. For more information about build files, see Using Apache Ant. Default value: build.xml. Argument aliases: buildFile
testResultsFiles (Test Results Files) - (Required) Test results files path. Wildcards can be used. For example, **/TEST-*.xml for all XML files whose name starts with TEST-. Default value: **/TEST-*.xml
testRunTitle (Test Run Title) - (Optional) Assign a title for the JUnit test case results for this build.
CODE COVERAGE
codeCoverageTool (Code Coverage Tool) - (Optional) Select the code coverage tool you want to use. If you are using the Microsoft-hosted agents, the tools are set up for you. If you are using an on-premises Windows agent, then if you select: JaCoCo, make sure jacocoant.jar is available in the lib folder of the Ant installation (see JaCoCo); Cobertura, set up an environment variable COBERTURA_HOME pointing to the Cobertura .jar files location (see Cobertura). After you select one of these tools, the following arguments appear.
Default value: .
Argument aliases: codeCoverageClassFilesDirectories
failIfCoverageEmpty (Fail when code coverage results are missing) - (Optional) Fail the build if code coverage did not produce any results to publish.
ADVANCED
CONTROL OPTIONS
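As a minimal sketch of these arguments in use, the following builds with the default build.xml, publishes JUnit results, and collects JaCoCo coverage (the class files directory is illustrative):
- task: Ant@1
  inputs:
    buildFile: 'build.xml'
    publishJUnitResults: true
    testResultsFiles: '**/TEST-*.xml'
    codeCoverageToolOptions: 'JaCoCo'
    codeCoverageClassFilesDirectories: 'build/classes' # illustrative path to compiled classes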
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
Azure IoT Edge task
11/2/2020 • 3 minutes to read • Edit Online
Use this task to build, test, and deploy applications quickly and efficiently to Azure IoT Edge.
bypassModules (Bypass module(s)) - (Optional) Specify the module(s) that you do not need to build or push from the list of module names separated by commas in the .template.json file. For example, if you have two modules, "SampleModule1,SampleModule2" in your file and you want to build or push just SampleModule1, specify SampleModule2 as the bypass module(s). Leave empty to build or push all the modules in .template.json.
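For instance, a minimal sketch of a build step that skips SampleModule2 (the template file, platform, and module names follow the examples in this section and are illustrative):
- task: AzureIoTEdge@2
  displayName: AzureIoTEdge - Build module images
  inputs:
    action: Build module images
    templateFilePath: deployment.template.json
    defaultPlatform: amd64
    bypassModules: SampleModule2 # build everything except SampleModule2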
variables:
azureSubscriptionEndpoint: Contoso
azureContainerRegistry: contoso.azurecr.io
steps:
- task: AzureIoTEdge@2
displayName: AzureIoTEdge - Push module images
inputs:
action: Push module images
containerregistrytype: Azure Container Registry
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
templateFilePath: deployment.template.json
defaultPlatform: amd64
steps:
- task: AzureIoTEdge@2
displayName: 'Azure IoT Edge - Deploy to IoT Edge devices'
inputs:
action: 'Deploy to IoT Edge devices'
deploymentFilePath: deployment.template.json
azureSubscription: $(azureSubscriptionEndpoint)
iothubname: iothubname
deviceOption: 'Single Device'
deviceId: deviceId
CMake task
6/2/2020 • 2 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to build with the CMake cross-platform build system.
Demands
cmake
IMPORTANT
The Microsoft-hosted agents have CMake installed by default so you don't need to include a demand for CMake in your
azure-pipelines.yml file. If you do include a demand for CMake you may receive an error. To resolve, remove the demand.
YAML snippet
# CMake
# Build with the CMake cross-platform build system
- task: CMake@1
inputs:
#workingDirectory: 'build' # Optional
#cmakeArgs: # Optional
Arguments
ARGUMENT | DESCRIPTION
workingDirectory (Working Directory) - You can also specify a full path outside the repo, and you can use variables. For example: $(Build.ArtifactStagingDirectory)\build. If the path you specify does not exist, CMake creates it.
CONTROL OPTIONS
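As a minimal sketch of these settings (the folder name and CMake arguments are illustrative), the task can run in a working directory that CMake creates if needed and generate build files for the parent source directory:
- task: CMake@1
  inputs:
    workingDirectory: 'build' # created by CMake if it does not exist
    cmakeArgs: '.. -DCMAKE_BUILD_TYPE=Release'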
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
How do I enable CMake for Microsoft-hosted agents?
The Microsoft-hosted agents have CMake installed already so you don't need to do anything. You do not need to
add a demand for CMake in your azure-pipelines.yml file.
How do I enable CMake for my on-premises agent?
1. Deploy an agent.
2. Install CMake and make sure to add it to the path of the user that the agent is running as on your agent
machine.
3. In your web browser, navigate to Agent pools:
   - In Azure DevOps, choose Organization settings, or
   - navigate to your project and choose Settings (gear icon) > Agent Queues, then choose Manage pools.
NOTE
Microsoft-hosted agents don't display system capabilities. For a list of software installed on Microsoft-hosted
agents, see Use a Microsoft-hosted agent.
4. From the Agent pools tab, select the desired agent, and choose the Capabilities tab.
5. Click Add capability and set the fields to cmake and yes.
6. Click Save changes.
How does CMake work? What arguments can I use?
About CMake
CMake Documentation
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Docker task
11/7/2020 • 4 minutes to read • Edit Online
Use this task to build and push Docker images to any container registry using Docker registry service connection.
Overview
The following are the key benefits of using the Docker task compared to directly using the docker client binary in a script:
Integration with Docker registry service connection - The task makes it easy to use a Docker registry
service connection for connecting to any container registry. Once logged in, the user can author follow-up
tasks to execute any tasks/scripts by leveraging the login already done by the Docker task. For example, you
can use the Docker task to sign in to any Azure Container Registry and then use a subsequent task/script to
build and push an image to this registry.
Metadata added as labels - The task adds traceability-related metadata to the image in the form of the
following labels:
com.azure.dev.image.build.buildnumber
com.azure.dev.image.build.builduri
com.azure.dev.image.build.definitionname
com.azure.dev.image.build.repository.name
com.azure.dev.image.build.repository.uri
com.azure.dev.image.build.sourcebranchname
com.azure.dev.image.build.sourceversion
com.azure.dev.image.release.definitionname
com.azure.dev.image.release.releaseid
com.azure.dev.image.release.releaseweburl
com.azure.dev.image.system.teamfoundationcollectionuri
com.azure.dev.image.system.teamproject
Task Inputs
PARAMETERS | DESCRIPTION
Dockerfile (Dockerfile) - (Optional) Path to the Dockerfile. The task will use the first Dockerfile it finds to build the image. Default value: **/Dockerfile
Login
Following YAML snippet showcases container registry login using a Docker registry service connection -
- task: Docker@2
displayName: Login to ACR
inputs:
command: login
containerRegistry: dockerRegistryServiceConnection1
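A minimal sketch of the multi-registry flow that the next paragraph describes (the second service connection name, dockerRegistryServiceConnection2, is illustrative):
steps:
- task: Docker@2
  displayName: Login to first registry
  inputs:
    command: login
    containerRegistry: dockerRegistryServiceConnection1
- task: Docker@2
  displayName: Login to second registry
  inputs:
    command: login
    containerRegistry: dockerRegistryServiceConnection2
- task: Docker@2
  displayName: Build and Push
  inputs:
    command: buildAndPush
    repository: contosoRepository
    tags: |
      tag1
      tag2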
In the above snippet, the images contosoRepository:tag1 and contosoRepository:tag2 are built and pushed to the
container registries corresponding to dockerRegistryServiceConnection1 and dockerRegistryServiceConnection2 .
If one wants to build and push to a specific authenticated container registry instead of building and pushing to all
authenticated container registries at once, the containerRegistry input can be explicitly specified along with
command: buildAndPush as shown below -
steps:
- task: Docker@2
displayName: Build and Push
inputs:
command: buildAndPush
containerRegistry: dockerRegistryServiceConnection1
repository: contosoRepository
tags: |
tag1
tag2
Logout
Following YAML snippet showcases container registry logout using a Docker registry service connection -
- task: Docker@2
displayName: Logout of ACR
inputs:
command: logout
containerRegistry: dockerRegistryServiceConnection1
Start/stop
This task can also be used to control job and service containers. This usage is uncommon, but occasionally used for
unique circumstances.
resources:
containers:
- container: builder
image: ubuntu:18.04
steps:
- script: echo "I can run inside the container (it starts by default)"
target:
container: builder
- task: Docker@2
inputs:
command: stop
container: builder
# any task beyond this point would not be able to target the builder container
# because it's been stopped
steps:
- task: Docker@2
displayName: Login to ACR
inputs:
command: login
containerRegistry: dockerRegistryServiceConnection1
- task: Docker@2
displayName: Build
inputs:
command: build
repository: contosoRepository
tags: tag1
arguments: --secret id=mysecret,src=mysecret.txt
NOTE
The arguments input is evaluated for all commands except buildAndPush . As buildAndPush is a convenience command (
build followed by push ), arguments input is ignored for this command.
Troubleshooting
Why does Docker task ignore arguments passed to buildAndPush command?
A Docker task configured with the buildAndPush command ignores the arguments passed, because they become ambiguous
to the build and push commands that are run internally. You can split your command into separate build and push
steps and pass the suitable arguments to each, as shown in the sketch below. See this stackoverflow post for an example.
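For example, a minimal sketch of that split (the service connection, repository, and secret argument reuse names from the snippets above and are illustrative):
- task: Docker@2
  displayName: Build
  inputs:
    command: build
    containerRegistry: dockerRegistryServiceConnection1
    repository: contosoRepository
    tags: tag1
    arguments: --secret id=mysecret,src=mysecret.txt # only the build step receives build arguments
- task: Docker@2
  displayName: Push
  inputs:
    command: push
    containerRegistry: dockerRegistryServiceConnection1
    repository: contosoRepository
    tags: tag1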
DockerV2 only supports a Docker registry service connection and does not support an ARM service connection. How can
I use an existing Azure service principal (SPN) for authentication in the Docker task?
You can create a Docker registry service connection using your Azure SPN credentials. Choose Others as the
Registry type and provide the details as follows:
Azure Pipelines
Use this task to build, push or run multi-container Docker applications. This task can be used with a Docker registry
or an Azure Container Registry.
This YAML example specifies the inputs for Azure Container Registry:
variables:
azureContainerRegistry: Contoso.azurecr.io
azureSubscriptionEndpoint: Contoso
steps:
- task: DockerCompose@0
displayName: Container registry login
inputs:
containerregistrytype: Azure Container Registry
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
This YAML example specifies a container registry other than ACR where Contoso is the name of the Docker
registry service connection for the container registry:
- task: DockerCompose@0
displayName: Container registry login
inputs:
containerregistrytype: Container Registry
dockerRegistryEndpoint: Contoso
qualifyImageNames (Qualify Image Names) - (Optional) Qualify image names for built services with the Docker registry service connection's hostname if not otherwise specified. Default value: true
additionalImageTags (Additional Image Tags) - (Optional) Additional tags for the Docker images being built or pushed.
This YAML example builds the image where the image name is qualified on the basis of the inputs related to Azure
Container Registry:
- task: DockerCompose@0
displayName: Build services
inputs:
action: Build services
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
additionalImageTags: $(Build.BuildId)
PARAMETERS | DESCRIPTION
qualifyImageNames (Qualify Image Names) - (Optional) Qualify image names for built services with the Docker registry service connection's hostname if not otherwise specified. Default value: true
additionalImageTags (Additional Image Tags) - (Optional) Additional tags for the Docker images being built or pushed.
- task: DockerCompose@0
displayName: Push services
inputs:
action: Push services
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
additionalImageTags: $(Build.BuildId)
qualifyImageNames (Qualify Image Names) - (Optional) Qualify image names for built services with the Docker registry service connection's hostname if not otherwise specified. Default value: true
- task: DockerCompose@0
displayName: Run services
inputs:
action: Run services
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.ci.build.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
buildImages: true
abortOnContainerExit: true
detached: false
qualifyImageNames (Qualify Image Names) - (Optional) Qualify image names for built services with the Docker registry service connection's hostname if not otherwise specified. Default value: true
entrypoint (Entry Point Override) - (Optional) Override the default entry point for the specific service container.
- task: DockerCompose@0
displayName: Run a specific service
inputs:
action: Run a specific service
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
serviceName: myhealth.web
ports: 80
detached: true
qualifyImageNames (Qualify Image Names) - (Optional) Qualify image names for built services with the Docker registry service connection's hostname if not otherwise specified. Default value: true
removeBuildOptions (Remove Build Options) - (Optional) Remove the build options from the output Docker Compose file. Default value: false
baseResolveDirectory (Base Resolve Directory) - (Optional) The base directory from which relative paths in the output Docker Compose file should be resolved.
- task: DockerCompose@0
displayName: Lock services
inputs:
action: Lock services
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
outputDockerComposeFile: $(Build.StagingDirectory)/docker-compose.yml
qualifyImageNames (Qualify Image Names) - (Optional) Qualify image names for built services with the Docker registry service connection's hostname if not otherwise specified. Default value: true
- task: DockerCompose@0
displayName: Write service image digests
inputs:
action: Write service image digests
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
imageDigestComposeFile: $(Build.StagingDirectory)/docker-compose.images.yml
Combine configuration
PARAMETERS | DESCRIPTION
qualifyImageNames (Qualify Image Names) - (Optional) Qualify image names for built services with the Docker registry service connection's hostname if not otherwise specified. Default value: true
removeBuildOptions (Remove Build Options) - (Optional) Remove the build options from the output Docker Compose file. Default value: false
baseResolveDirectory (Base Resolve Directory) - (Optional) The base directory from which relative paths in the output Docker Compose file should be resolved.
- task: DockerCompose@0
displayName: Combine configuration
inputs:
action: Combine configuration
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
additionalDockerComposeFiles: docker-compose.override.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
outputDockerComposeFile: $(Build.StagingDirectory)/docker-compose.yml
dockerComposeFile (Docker Compose File) - (Required) Path to the primary Docker Compose file to use. Default value: **/docker-compose.yml
qualifyImageNames (Qualify Image Names) - (Optional) Qualify image names for built services with the Docker registry service connection's hostname if not otherwise specified. Default value: true
- task: DockerCompose@0
displayName: Run a Docker Compose command
inputs:
action: Run a Docker Compose command
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
dockerComposeCommand: rm
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Go task
6/2/2020 • 2 minutes to read • Edit Online
Azure Pipelines
Use this task to get, build, or test a go application, or run a custom go command.
YAML snippet
# Go
# Get, build, or test a Go application, or run a custom Go command
- task: Go@0
inputs:
#command: 'get' # Options: get, build, test, custom
#customCommand: # Required when command == Custom
#arguments: # Optional
workingDirectory:
Arguments
ARGUMENT | DESCRIPTION
Example
variables:
GOBIN: '$(GOPATH)/bin' # Go binaries path
GOROOT: '/usr/local/go1.11' # Go installation path
GOPATH: '$(system.defaultWorkingDirectory)/gopath' # Go workspace path
modulePath: '$(GOPATH)/src/github.com/$(build.repository.name)' # Path to the module's code
steps:
- task: GoTool@0
displayName: 'Use Go 1.10'
- task: Go@0
displayName: 'go get'
inputs:
arguments: '-d'
- task: Go@0
displayName: 'go build'
inputs:
command: build
arguments: '-o "$(System.TeamProject).exe"'
- task: ArchiveFiles@2
displayName: 'Archive files'
inputs:
rootFolderOrFile: '$(Build.Repository.LocalPath)'
includeRootFolder: False
- task: PublishBuildArtifacts@1
displayName: 'Publish artifact'
condition: succeededOrFailed()
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Gradle task
11/2/2020 • 5 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to build using a Gradle wrapper script.
YAML snippet
# Gradle
# Build using a Gradle wrapper script
- task: Gradle@2
inputs:
#gradleWrapperFile: 'gradlew'
#cwd: # Optional
#options: # Optional
#tasks: 'build' # A list of tasks separated by spaces, such as 'build test'
#publishJUnitResults: true
#testResultsFiles: '**/TEST-*.xml' # Required when publishJUnitResults == True
#testRunTitle: # Optional
#codeCoverageToolOption: 'None' # Optional. Options: none, cobertura, jaCoCo
#codeCoverageClassFilesDirectories: 'build/classes/main/' # Required when codeCoverageToolOption == False
#codeCoverageClassFilter: # Optional. Comma-separated list of filters to include or exclude classes from collecting code coverage. For example: +:com.*,+:org.*,-:my.app*.*
#codeCoverageFailIfEmpty: false # Optional
#javaHomeOption: 'JDKVersion' # Options: jDKVersion, path
#jdkVersionOption: 'default' # Optional. Options: default, 1.11, 1.10, 1.9, 1.8, 1.7, 1.6
#jdkDirectory: # Required when javaHomeOption == Path
#jdkArchitectureOption: 'x64' # Optional. Options: x86, x64
#gradleOptions: '-Xmx1024m' # Optional
#sonarQubeRunAnalysis: false
#sqGradlePluginVersionChoice: 'specify' # Required when sonarQubeRunAnalysis == True # Options: specify, build
#sonarQubeGradlePluginVersion: '2.6.1' # Required when sonarQubeRunAnalysis == True && SqGradlePluginVersionChoice == Specify
#checkStyleRunAnalysis: false # Optional
#findBugsRunAnalysis: false # Optional
#pmdRunAnalysis: false # Optional
Arguments
Default value: true
testResultsFiles (Test results files) - (Required) Test results files path. Wildcards can be used. For example, **/TEST-*.xml for all XML files whose name starts with TEST-. Default value: **/TEST-*.xml
testRunTitle (Test run title) - (Optional) Assign a title for the JUnit test case results for this build.
CODE COVERAGE
failIfCoverageEmpty (Fail when code coverage results are missing) - (Optional) Fail the build if code coverage did not produce any results to publish. Default value: false. Argument aliases: codeCoverageFailIfEmpty
ADVANCED
CODE ANALYSIS
checkstyleAnalysisEnabled (Run Checkstyle) - (Optional) Run the Checkstyle tool with the default Sun checks. Results are uploaded as build artifacts. Default value: false. Argument aliases: checkStyleRunAnalysis
findbugsAnalysisEnabled (Run FindBugs) - (Optional) Use the FindBugs static analysis tool to look for bugs in the code. Results are uploaded as build artifacts. Default value: false. Argument aliases: findBugsRunAnalysis
pmdAnalysisEnabled (Run PMD) - (Optional) Use the PMD Java static analysis tool to look for bugs in the code. Results are uploaded as build artifacts. Default value: false. Argument aliases: pmdRunAnalysis
CONTROL OPTIONS
Example
Build your Java app with Gradle
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
How do I generate a wrapper from my Gradle project?
The Gradle wrapper allows the build agent to download and configure the exact Gradle environment that is
checked into the repository without having any software configuration on the build agent itself other than the
JVM.
1. Create the Gradle wrapper by issuing the following command from the root project directory where your
build.gradle resides:
jamal@fabrikam> gradle wrapper
|-- gradle/
`-- wrapper/
`-- gradle-wrapper.jar
`-- gradle-wrapper.properties
|-- src/
|-- .gitignore
|-- build.gradle
|-- gradlew
|-- gradlew.bat
Grunt task
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015.3
Use this task to run Grunt tasks using the JavaScript Task Runner.
Demands
The build agent must have the following capability:
Grunt
YAML snippet
# Grunt
# Run the Grunt JavaScript task runner
- task: Grunt@0
inputs:
#gruntFile: 'gruntfile.js'
#targets: # Optional
#arguments: # Optional
#workingDirectory: # Optional
#gruntCli: 'node_modules/grunt-cli/bin/grunt'
#publishJUnitResults: false # Optional
#testResultsFiles: '**/TEST-*.xml' # Required when publishJUnitResults == True
#testRunTitle: # Optional
#enableCodeCoverage: false # Optional
#testFramework: 'Mocha' # Optional. Options: mocha, jasmine
#srcFiles: # Optional
#testFiles: 'test/*.js' # Required when enableCodeCoverage == True
Arguments
ARGUMENT | DESCRIPTION
gruntFile (Grunt File Path) - (Required) Relative path from the repo root to the Grunt script that you want to run. Default value: gruntfile.js
testResultsFiles (Test Results Files) - (Required) Test results files path. Wildcards can be used. For example, **/TEST-*.xml for all XML files whose name starts with TEST-. Default value: **/TEST-*.xml
srcFiles (Source Files) - (Optional) Provide the path to your source files which you want to hookRequire().
Example
See Sample Gruntfile.
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Gulp task
6/2/2020 • 2 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to run gulp tasks using the Node.js streaming task-based build system.
Demands
gulp
YAML snippet
# gulp
# Run the gulp Node.js streaming task-based build system
- task: gulp@1
inputs:
#gulpFile: 'gulpfile.js'
#targets: # Optional
#arguments: # Optional
#workingDirectory: # Optional
#gulpjs: # Optional
#publishJUnitResults: false # Optional
#testResultsFiles: '**/TEST-*.xml' # Required when publishJUnitResults == True
#testRunTitle: # Optional
#enableCodeCoverage: false
#testFramework: 'Mocha' # Optional. Options: mocha, jasmine
#srcFiles: # Optional
#testFiles: 'test/*.js' # Required when enableCodeCoverage == True
Arguments
ARGUMENT | DESCRIPTION
gulpFile (gulp File Path) - (Required) Relative path from the repo root of the gulp file script that you want to run. Default value: gulpfile.js
testResultsFiles (Test Results Files) - (Required) Test results files path. Wildcards can be used. For example, **/TEST-*.xml for all XML files whose name starts with TEST-. Default value: **/TEST-*.xml
srcFiles (Source Files) - (Optional) Provide the path to your source files that you want to hookRequire().
Example
Run gulp.js
On the Build tab:
Install npm.
Command: install
Package: npm
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
Index Sources & Publish Symbols task
11/2/2020 • 5 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
A symbol server is available with Package Management in Azure Artifacts and works best with Visual Studio 2017.4
and newer. Team Foundation Server users and users without the Package Management extension can publish symbols
to a file share using this task.
Use this task to index your source code and optionally publish symbols to the Package Management symbol
server or a file share.
Indexing source code enables you to use your .pdb symbol files to debug an app on a machine other than the one
you used to build the app. For example, you can debug an app built by a build agent from a dev machine that does
not have the source code.
Symbol servers enable your debugger to automatically retrieve the correct symbol files without knowing product
names, build numbers, or package names. To learn more about symbols, read the concept page; to publish
symbols, use this task and see the walkthrough.
NOTE
This build task works only:
For code in Git or TFVC stored in Team Foundation Server (TFS) or Azure Repos. It does not work for any other type of
repository.
Demands
None
YAML snippet
# Index sources and publish symbols
# Index your source code and publish symbols to a file share or Azure Artifacts symbol server
- task: PublishSymbols@2
inputs:
#symbolsFolder: '$(Build.SourcesDirectory)' # Optional
#searchPattern: '**/bin/**/*.pdb'
#indexSources: true # Optional
#publishSymbols: true # Optional
#symbolServerType: ' ' # Required when publishSymbols == True# Options: , teamServices, fileShare
#symbolsPath: # Optional
#compressSymbols: false # Required when symbolServerType == FileShare
#detailedLog: true # Optional
#treatNotIndexedAsWarning: false # Optional
#symbolsMaximumWaitTime: # Optional
#symbolsProduct: # Optional
#symbolsVersion: # Optional
#symbolsArtifactName: 'Symbols_$(BuildConfiguration)' # Optional
Arguments
ARGUMENT | DESCRIPTION
SymbolsPath (Path to publish symbols) - (Optional) The file share that hosts your symbols. This value will be used in the call to symstore.exe add as the /s parameter.
To prepare your SymStore symbol store:
1. Set up a folder on a file-sharing server to store the symbols. For example, set up \\fabrikam-share\symbols.
2. Grant full control permission to the build agent service account.
If you leave this argument blank, your symbols will be source indexed but not published. (You can also store your symbols with your drops. See Publish Build Artifacts.)
ADVANCED
SymbolsArtifactName (Artifact name) - (Optional) Specify the artifact name to use for the Symbols artifact. The default is Symbols_$(BuildConfiguration). Default value: Symbols_$(BuildConfiguration)
For more information about the different types of tasks and their uses, see Task control options.
IMPORTANT
If you want to delete symbols that were published using the Index Sources & Publish Symbols task, you must first
remove the build that generated those symbols. This can be accomplished by using retention policies to clean up your build
or by manually deleting the run. For information about debugging your app, see Use indexed symbols to debug your app,
Debug with symbols in Visual Studio, Debug with symbols in WinDbg.
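As a minimal sketch of publishing to a file share prepared as described for the SymbolsPath argument above (the share path reuses that example and is illustrative):
- task: PublishSymbols@2
  inputs:
    searchPattern: '**/bin/**/*.pdb'
    indexSources: true
    publishSymbols: true
    symbolServerType: 'FileShare'
    symbolsPath: '\\fabrikam-share\symbols'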
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
How does indexing work?
By choosing to index the sources, an extra section will be injected into the PDB files. PDB files normally contain
references to the local source file paths only. For example, C:\BuildAgent\_work\1\src\MyApp\Program.cs . The extra
section injected into the PDB file contains mapping instructions for debuggers. The mapping information indicates
how to retrieve the server item corresponding to each local path.
The Visual Studio debugger will use the mapping information to retrieve the source file from the server. An actual
command to retrieve the source file is included in the mapping information. You may be prompted by Visual
Studio whether to run the command. For example
tf.exe git view /collection:http://SERVER:8080/tfs/DefaultCollection /teamproject:"93fc2e4d-0f0f-4e40-9825-
01326191395d" /repository:"647ed0e6-43d2-4e3d-b8bf-2885476e9c44"
/commitId:3a9910862e22f442cd56ff280b43dd544d1ee8c9 /path:"/MyApp/Program.cs"
/output:"C:\Users\username\AppData\Local\SOURCE~1\TFS_COMMIT\3a991086\MyApp\Program.cs" /applyfilters
Can I use source indexing on a portable PDB created from a .NET Core assembly?
No, source indexing is currently not enabled for Portable PDBs as SourceLink doesn't support authenticated
source repositories. The workaround at the moment is to configure the build to generate full PDBs. Note that if
you are generating a .NET Standard 2.0 assembly and are generating full PDBs and consuming them in a .NET
Framework (full CLR) application then you will be able to fetch sources from Azure Repos (provided you have
embedded SourceLink information and enabled it in your IDE).
Where can I learn more about symbol stores and debugging?
Symbol Server and Symbol Stores
SymStore
Use the Microsoft Symbol Server to obtain debug symbol files
The Srcsrv.ini File
Source Server
Source Indexing and Symbol Servers: A Guide to Easier Debugging
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
How long are Symbols retained?
When symbols are published to Azure Pipelines they are associated with a build. When the build is deleted either
manually or due to retention policy then the symbols are also deleted. If you want to retain the symbols
indefinitely then you should mark the build as Retain Indefinitely.
Jenkins Queue Job task
6/2/2020 • 4 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use this task to queue a job on a Jenkins server.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
Demands
None
YAML snippet
# Jenkins queue job
# Queue a job on a Jenkins server
- task: JenkinsQueueJob@2
inputs:
serverEndpoint:
jobName:
#isMultibranchJob: # Optional
#multibranchPipelineBranch: # Required when isMultibranchJob == True
#captureConsole: true
#capturePipeline: true # Required when captureConsole == True
isParameterizedJob:
#jobParameters: # Optional
Arguments
ARGUMENT | DESCRIPTION
jobName (Job name) - (Required) The name of the Jenkins job to queue. This job name must exactly match the job name on the Jenkins server.
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Q: I'm having problems. How can I troubleshoot them?
A: Try this:
1. On the variables tab, add system.debug and set it to true . Select to allow at queue time.
2. In the explorer tab, view your completed build and click the build step to view its output.
The control options arguments described above can also be useful when you're trying to isolate a problem.
Q: How do variables work? What variables are available for me to use in the arguments?
A: $(Build.SourcesDirectory) and $(Agent.BuildDirectory) are just a few of the variables you can use. Variables
are available in expressions as well as scripts; see variables to learn more about how to use them. There are some
predefined build and release variables you can also rely on.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Maven task
11/2/2020 • 6 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to build your Java code.
Demands
The build agent must have the following capability:
Maven
YAML snippet
# Maven
# Build, test, and deploy with Apache Maven
- task: Maven@3
inputs:
#mavenPomFile: 'pom.xml'
#goals: 'package' # Optional
#options: # Optional
#publishJUnitResults: true
#testResultsFiles: '**/surefire-reports/TEST-*.xml' # Required when publishJUnitResults == True
#testRunTitle: # Optional
#codeCoverageToolOption: 'None' # Optional. Options: none, cobertura, jaCoCo. Enabling code coverage inserts the `clean` goal into the Maven goals list when Maven runs.
#codeCoverageClassFilter: # Optional. Comma-separated list of filters to include or exclude classes from collecting code coverage. For example: +:com.*,+:org.*,-:my.app*.*
#codeCoverageClassFilesDirectories: # Optional
#codeCoverageSourceDirectories: # Optional
#codeCoverageFailIfEmpty: false # Optional
#javaHomeOption: 'JDKVersion' # Options: jDKVersion, path
#jdkVersionOption: 'default' # Optional. Options: default, 1.11, 1.10, 1.9, 1.8, 1.7, 1.6
#jdkDirectory: # Required when javaHomeOption == Path
#jdkArchitectureOption: 'x64' # Optional. Options: x86, x64
#mavenVersionOption: 'Default' # Options: default, path
#mavenDirectory: # Required when mavenVersionOption == Path
#mavenSetM2Home: false # Required when mavenVersionOption == Path
#mavenOptions: '-Xmx1024m' # Optional
#mavenAuthenticateFeed: false
#effectivePomSkip: false
#sonarQubeRunAnalysis: false
#sqMavenPluginVersionChoice: 'latest' # Required when sonarQubeRunAnalysis == True# Options: latest, pom
#checkStyleRunAnalysis: false # Optional
#pmdRunAnalysis: false # Optional
#findBugsRunAnalysis: false # Optional
Arguments
ARGUMENT | DESCRIPTION
testResultsFiles (Test results files) - (Required) Specify the path and pattern of test results files to publish. Wildcards can be used (more information). For example, **/TEST-*.xml for all XML files whose name starts with TEST-. If no root path is specified, files are matched beneath the default working directory, the value of which is available in the variable $(System.DefaultWorkingDirectory). For example, a value of '**/TEST-*.xml' will actually result in matching files from '$(System.DefaultWorkingDirectory)/**/TEST-*.xml'. Default value: **/surefire-reports/TEST-*.xml
failIfCoverageEmpty (Fail when code coverage results are missing) - (Optional) Fail the build if code coverage did not produce any results to publish. Default value: false. Argument aliases: codeCoverageFailIfEmpty
sqMavenPluginVersionChoice (SonarQube scanner for Maven version) - (Required) The SonarQube Maven plugin version to use. You can use the latest version, or rely on the version in your pom.xml. Default value: latest
checkstyleAnalysisEnabled (Run Checkstyle) - (Optional) Run the Checkstyle tool with the default Sun checks. Results are uploaded as build artifacts. Default value: false. Argument aliases: checkStyleRunAnalysis
pmdAnalysisEnabled (Run PMD) - (Optional) Use the PMD static analysis tool to look for bugs in the code. Results are uploaded as build artifacts. Default value: false. Argument aliases: pmdRunAnalysis
findbugsAnalysisEnabled (Run FindBugs) - (Optional) Use the FindBugs static analysis tool to look for bugs in the code. Results are uploaded as build artifacts. Default value: false. Argument aliases: findBugsRunAnalysis
CONTROL OPTIONS
IMPORTANT
When using the -q option in your MAVEN_OPTS, an effective POM won't be generated correctly, and Azure Artifacts feeds may not be authenticated.
Example
Build and Deploy your Java application to an Azure Web App
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
MSBuild task
11/2/2020 • 7 minutes to read
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
Demands
msbuild
Azure Pipelines: If your team uses Visual Studio 2017 and you want to use the Microsoft-hosted agents, make sure you select Hosted VS2017 as your default pool. See Microsoft-hosted agents.
YAML snippet
# MSBuild
# Build with MSBuild
- task: MSBuild@1
inputs:
#solution: '**/*.sln'
#msbuildLocationMethod: 'version' # Optional. Options: version, location
#msbuildVersion: 'latest' # Optional. Options: latest, 16.0, 15.0, 14.0, 12.0, 4.0
#msbuildArchitecture: 'x86' # Optional. Options: x86, x64
#msbuildLocation: # Optional
#platform: # Optional
#configuration: # Optional
#msbuildArguments: # Optional
#clean: false # Optional
#maximumCpuCount: false # Optional
#restoreNugetPackages: false # Optional
#logProjectEvents: false # Optional
#createLogFile: false # Optional
#logFileVerbosity: 'normal' # Optional. Options: quiet, minimal, normal, detailed, diagnostic
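For orientation, a minimal, hedged sketch of a common configuration follows; the solution pattern and the BuildConfiguration/BuildPlatform variables are assumptions added for illustration:

- task: MSBuild@1
  displayName: 'Build solution with MSBuild'
  inputs:
    solution: '**/*.sln'                       # assumed: build every solution found in the repo
    configuration: '$(BuildConfiguration)'     # assumed pipeline variable, e.g. Release
    platform: '$(BuildPlatform)'               # assumed pipeline variable, e.g. Any CPU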
Arguments
ARGUMENT | DESCRIPTION
solution (Project) | (Required) If you want to build a single project, click the ... button and select the project. If you want to build multiple projects, specify search criteria. You can use a single-folder wildcard ( * ) and recursive wildcards ( ** ). For example, **.*proj searches for all MSBuild project (.*proj) files in all subdirectories. Make sure the projects you specify are downloaded by this build pipeline. On the Repository tab: If you use TFVC, make sure that the project is a child of one of the mappings on the Repository tab. If you use Git, make sure that the project is in your Git repo, in a branch that you're building. Tip: If you are building a solution, we recommend you use the Visual Studio Build task instead of the MSBuild task.
msbuildLocationMethod (MSBuild) | (Optional) Default value: version
Tips:
If you are targeting an MSBuild project (.*proj) file instead of a solution, specify AnyCPU (no whitespace).
Declare a build variable such as BuildPlatform on the Variables tab (selecting Allow at Queue Time) and reference it here as $(BuildPlatform). This way you can modify the platform when you queue the build and enable building multiple configurations.
ADVANCED
CONTROL OPTIONS
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Should I use the Visual Studio Build task or the MSBuild task?
If you are building a solution, in most cases you should use the Visual Studio Build task. This task automatically:
Sets the /p:VisualStudioVersion property for you. This forces MSBuild to use a particular set of targets that
increase the likelihood of a successful build.
Specifies the MSBuild version argument.
In some cases, you might need to use the MSBuild task. For example, you should use it if you are building code
projects apart from a solution.
Where can I learn more about MSBuild?
MSBuild reference
MSBuild command-line reference
How do I build multiple configurations for multiple platforms?
1. On the Variables tab, make sure you've got variables defined for your configurations and platforms. To
specify multiple values, separate them with commas.
For example, for a .NET app you could specify:
Name | Value
2. On the Options tab, select MultiConfiguration and specify the Multipliers, separated by commas. For
example: BuildConfiguration, BuildPlatform
Select Parallel if you want to distribute the jobs (one for each combination of values) to multiple agents in
parallel if they are available.
3. On the Build tab, select this step and specify the Platform and Configuration arguments. For example:
Platform: $(BuildPlatform)
Configuration: $(BuildConfiguration)
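The steps above describe the classic editor. In a YAML pipeline, a hedged sketch of the same idea uses a matrix strategy to fan one job out across configuration/platform combinations (the variable values shown are assumptions for illustration):

jobs:
- job: Build
  strategy:
    matrix:
      Debug_x86:
        BuildConfiguration: 'Debug'
        BuildPlatform: 'x86'
      Release_x64:
        BuildConfiguration: 'Release'
        BuildPlatform: 'x64'
  steps:
  - task: MSBuild@1
    inputs:
      solution: '**/*.sln'
      configuration: '$(BuildConfiguration)'
      platform: '$(BuildPlatform)'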
Can I build TFSBuild.proj files?
You cannot build TFSBuild.proj files. These kinds of files are generated by TFS 2005 and 2008. These files contain tasks and targets that are supported only by using XAML builds.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Troubleshooting
This section provides troubleshooting tips for common issues that a user might encounter when using the
MSBuild task.
Build failed with the following error: An internal failure occurred while running MSBuild
Possible causes
Troubleshooting suggestions
Possible causes
Change in the MSBuild version.
Issues with a third-party extension.
New updates to Visual Studio that can cause missing assemblies on the build agent.
Moved or deleted some of the necessary NuGet packages.
Troubleshooting suggestions
Run the pipeline with diagnostics to retrieve detailed logs
Try to reproduce the error locally
What else can I do?
Run the pipeline with diagnostics to retrieve detailed logs
One of the available options to diagnose the issue is to take a look at the generated logs. You can view your
pipeline logs by selecting the appropriate task and job in your pipeline run summary.
To get the logs of your pipeline execution, see Get logs to diagnose problems.
You can also set up and download a customized verbose log to assist with your troubleshooting:
Configure verbose logs
View and download logs
In addition to the pipeline diagnostic logs, you can also check these other types of logs that contain more
information to help you debug and solve the problem:
Worker diagnostic logs
Agent diagnostic logs
Other logs (Environment and capabilities)
Try to reproduce the error locally
If you are using a hosted build agent, you might want to try to reproduce the error locally. This will help you to
narrow down whether the failure is the result of the build agent or the build task.
Run the same MSBuild command on your local machine using the same arguments. See the MSBuild command-line reference for details.
TIP
If you can reproduce the problem on your local machine, then your next step is to investigate the MSBuild issue.
At the bottom of this page, check out the GitHub issues in the Open and Closed tabs to see if there is a similar
issue that has been resolved previously by our team.
Some of the MSBuild errors are caused by a change in Visual Studio so you can search on Visual Studio Developer
Community to see if this issue has been reported. We also welcome your questions, suggestions, and feedback.
Visual Studio Build task
11/2/2020 • 6 minutes to read
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
Use this task to build with MSBuild and set the Visual Studio version property.
Demands
msbuild, visualstudio
Azure Pipelines: If your team wants to use Visual Studio 2017 with the Microsoft-hosted agents, select
Hosted VS2017 as your default build pool. See Microsoft-hosted agents.
YAML snippet
# Visual Studio build
# Build with MSBuild and set the Visual Studio version property
- task: VSBuild@1
inputs:
#solution: '**\*.sln'
#vsVersion: 'latest' # Optional. Options: latest, 16.0, 15.0, 14.0, 12.0, 11.0
#msbuildArgs: # Optional
#platform: # Optional
#configuration: # Optional
#clean: false # Optional
#maximumCpuCount: false # Optional
#restoreNugetPackages: false # Optional
#msbuildArchitecture: 'x86' # Optional. Options: x86, x64
#logProjectEvents: true # Optional
#createLogFile: false # Optional
#logFileVerbosity: 'normal' # Optional. Options: quiet, minimal, normal, detailed, diagnostic
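A minimal, hedged sketch of a typical invocation follows; the BuildConfiguration and BuildPlatform variables are assumptions added for illustration:

- task: VSBuild@1
  displayName: 'Build solution'
  inputs:
    solution: '**/*.sln'
    configuration: '$(BuildConfiguration)'     # assumed pipeline variable, e.g. Release
    platform: '$(BuildPlatform)'               # assumed pipeline variable, e.g. Any CPU
    maximumCpuCount: true                      # build projects in parallel where possible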
Arguments
ARGUMENT | DESCRIPTION
vsVersion (Visual Studio Version) | To avoid problems overall, you must make sure this value matches the version of Visual Studio used to create your solution. The value you select here adds the /p:VisualStudioVersion={numeric_visual_studio_version} argument to the MSBuild command run by the build. For example, if you select Visual Studio 2015, /p:VisualStudioVersion=14.0 is added to the MSBuild command. Azure Pipelines: If your team wants to use Visual Studio 2017 with the Microsoft-hosted agents, select Hosted VS2017 as your default build pool. See Microsoft-hosted agents.
Tips:
If you are targeting an MSBuild project (.*proj) file instead of a solution, specify AnyCPU (no whitespace).
Declare a build variable such as BuildPlatform on the Variables tab (selecting Allow at Queue Time) and reference it here as $(BuildPlatform). This way you can modify the platform when you queue the build and enable building multiple configurations.
ADVANCED
CONTROL OPTIONS
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Should I use the Visual Studio Build task or the MSBuild task?
If you are building a solution, in most cases you should use the Visual Studio Build task. This task automatically:
Sets the /p:VisualStudioVersion property for you. This forces MSBuild to use a particular set of targets
that increase the likelihood of a successful build.
Specifies the MSBuild version argument.
In some cases you might need to use the MSBuild task. For example, you should use it if you are building code
projects apart from a solution.
Where can I learn more about MSBuild?
MSBuild task
MSBuild reference
MSBuild command-line reference
How do I build multiple configurations for multiple platforms?
1. On the Variables tab, make sure you've got variables defined for your configurations and platforms. To
specify multiple values, separate them with commas.
For example, for a .NET app you could specify:
Name | Value
2. On the Options tab select Parallel if you want to distribute the jobs (one for each combination of values)
to multiple agents in parallel if they are available.
3. On the Build tab, select this step and specify the Platform and Configuration arguments. For example:
Platform: $(BuildPlatform)
Configuration: $(BuildConfiguration)
4. Under the agent job of the assigned task, on the Parallelism tab, select Multi-configuration and specify
the Multipliers separated by commas. For example: BuildConfiguration, BuildPlatform
Can I build TFSBuild.proj files?
You cannot build TFSBuild.proj files. These kinds of files are generated by TFS 2005 and 2008. These files contain tasks and targets that are supported only by using XAML builds.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
Xamarin.Android task
11/2/2020 • 3 minutes to read
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to build an Android app with Xamarin.
Demands
AndroidSDK, MSBuild, Xamarin.Android
YAML snippet
# Xamarin.Android
# Build an Android app with Xamarin
- task: XamarinAndroid@1
inputs:
#projectFile: '**/*.csproj'
#target: # Optional
#outputDirectory: # Optional
#configuration: # Optional
#createAppPackage: true # Optional
#clean: false # Optional
#msbuildLocationOption: 'version' # Optional. Options: version, location
#msbuildVersionOption: '15.0' # Optional. Options: latest, 15.0, 14.0, 12.0, 4.0
#msbuildFile: # Required when msbuildLocationOption == Location
#msbuildArchitectureOption: 'x86' # Optional. Options: x86, x64
#msbuildArguments: # Optional
#jdkOption: 'JDKVersion' # Options: jDKVersion, path
#jdkVersionOption: 'default' # Optional. Options: default, 1.11, 1.10, 1.9, 1.8, 1.7, 1.6
#jdkDirectory: # Required when jdkOption == Path
#jdkArchitectureOption: 'x64' # Optional. Options: x86, x64
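As a point of reference, a minimal, hedged sketch follows; the project pattern, output directory, and configuration variable are illustrative assumptions:

- task: XamarinAndroid@1
  displayName: 'Build Android app with Xamarin'
  inputs:
    projectFile: '**/*Droid*.csproj'           # assumed pattern for the Android head project
    outputDirectory: '$(Build.BinariesDirectory)'
    configuration: '$(BuildConfiguration)'     # assumed pipeline variable, e.g. Release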
Arguments
ARGUMENT | DESCRIPTION
MSBUILD OPTIONS
JDK OPTIONS
CONTROL OPTIONS
Example
Build your Xamarin app
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Xamarin.iOS task
11/2/2020 • 2 minutes to read
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task in a pipeline to build an iOS app with Xamarin on macOS. For more information, see the Xamarin
guidance and Sign your app during CI.
Demands
Xamarin.iOS
YAML snippet
# Xamarin.iOS
# Build an iOS app with Xamarin on macOS
- task: XamariniOS@2
inputs:
#solutionFile: '**/*.sln'
#configuration: 'Release'
#clean: false # Optional
#packageApp: true
#buildForSimulator: false # Optional
#runNugetRestore: false
#args: # Optional
#workingDirectory: # Optional
#mdtoolFile: # Optional
#signingIdentity: # Optional
#signingProvisioningProfileID: # Optional
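For orientation, a minimal, hedged sketch of a simulator build (no signing or packaging) follows; the solution pattern is an illustrative assumption:

- task: XamariniOS@2
  displayName: 'Build iOS app with Xamarin (simulator)'
  inputs:
    solutionFile: '**/*.sln'       # assumed: single solution in the repo
    configuration: 'Debug'
    buildForSimulator: true
    packageApp: false              # skip .ipa packaging for simulator builds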
Arguments
ARGUMENT | DESCRIPTION
CONTROL OPTIONS
Example
Build your Xamarin app
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Xcode task
11/2/2020 • 7 minutes to read
Demands
xcode
YAML snippet
# Xcode
# Build, test, or archive an Xcode workspace on macOS. Optionally package an app.
- task: Xcode@5
inputs:
#actions: 'build'
#configuration: '$(Configuration)' # Optional
#sdk: '$(SDK)' # Optional
#xcWorkspacePath: '**/*.xcodeproj/project.xcworkspace' # Optional
#scheme: # Optional
#xcodeVersion: 'default' # Optional. Options: 8, 9, 10, default, specifyPath
#xcodeDeveloperDir: # Optional
packageApp:
#archivePath: # Optional
#exportPath: 'output/$(SDK)/$(Configuration)' # Optional
#exportOptions: 'auto' # Optional. Options: auto, plist, specify
#exportMethod: 'development' # Required when exportOptions == Specify
#exportTeamId: # Optional
#exportOptionsPlist: # Required when exportOptions == Plist
#exportArgs: # Optional
#signingOption: 'nosign' # Optional. Options: nosign, default, manual, auto
#signingIdentity: # Optional
#provisioningProfileUuid: # Optional
#provisioningProfileName: # Optional
#teamId: # Optional
#destinationPlatformOption: 'default' # Optional. Options: default, iOS, tvOS, macOS, custom
#destinationPlatform: # Optional
#destinationTypeOption: 'simulators' # Optional. Options: simulators, devices
#destinationSimulators: 'iPhone 7' # Optional
#destinationDevices: # Optional
#args: # Optional
#workingDirectory: # Optional
#useXcpretty: true # Optional
#publishJUnitResults: # Optional
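A minimal, hedged sketch of a simulator build and test run follows; the scheme name and simulator model are illustrative assumptions, and SDK/Configuration values are expected to be supplied as pipeline variables:

- task: Xcode@5
  displayName: 'Build and test with Xcode'
  inputs:
    actions: 'build test'
    sdk: 'iphonesimulator'
    configuration: 'Debug'
    xcWorkspacePath: '**/*.xcodeproj/project.xcworkspace'
    scheme: 'MyApp'                        # assumed scheme name
    destinationPlatformOption: 'iOS'
    destinationTypeOption: 'simulators'
    destinationSimulators: 'iPhone 7'      # assumed simulator model
    publishJUnitResults: true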
Arguments
ARGUMENT | DESCRIPTION
sdk (SDK) | (Optional) Specify an SDK to use when building the Xcode project or workspace. From the macOS Terminal application, run xcodebuild -showsdks to display the valid list of SDKs. The default value of this field is the variable $(SDK). When using a variable, make sure to specify a value (for example, iphonesimulator) on the Variables tab. Default value: $(SDK)
xcWorkspacePath (Workspace or project path) | (Optional) Enter a relative path from the root of the repository to the Xcode workspace or project. For example, MyApp/MyApp.xcworkspace or MyApp/MyApp.xcodeproj. Default value: **/*.xcodeproj/project.xcworkspace
signingIdentity (Signing identity) | (Optional) Enter a signing identity override with which to sign the build. This may require unlocking the default keychain on the agent machine. If no value is entered, the Xcode project's setting will be used.
PACKAGE OPTIONS
exportPath (Export path) | (Optional) Specify the destination for the product exported from the archive. Default value: output/$(SDK)/$(Configuration)
exportMethod (Export method) | (Required) Enter the method that Xcode should use to export the archive. For example: app-store, package, ad-hoc, enterprise, or development. Default value: development
exportTeamId (Team ID) | (Optional) Enter the 10-character team ID from the Apple Developer Portal to use during export.
exportOptionsPlist (Export options plist) | (Required) Enter the path to the plist file that contains options to use during export.
destinationDevices (Devices) | (Optional) Enter the name of the device to be used for UI testing, such as Raisa's iPad. Only one device is currently supported. Note that Apple does not allow apostrophes ( ' ) in device names. Instead, right single quotation marks ( ' ) can be used.
ADVANCED
cwd (Working directory) | (Optional) Enter the working directory in which to run the build. If no value is entered, the root of the repository will be used. Argument aliases: workingDirectory
CONTROL OPTIONS
Example
Build your Xcode app
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Xcode Package iOS task
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to generate an .ipa file from Xcode build output.
Deprecated
The Xcode Package iOS task has been deprecated. It is relevant only if you are using Xcode 6.4. Otherwise, use the latest version of the Xcode task.
Demands
xcode
YAML snippet
# Xcode Package iOS
# Generate an .ipa file from Xcode build output using xcrun (Xcode 7 or below)
- task: XcodePackageiOS@0
inputs:
#appName: 'name.app'
#ipaName: 'name.ipa'
provisioningProfile:
#sdk: 'iphoneos'
#appPath: '$(SDK)/$(Configuration)/build.sym/$(Configuration)-$(SDK)'
#ipaPath: '$(SDK)/$(Configuration)/build.sym/$(Configuration)-$(SDK)/output'
Arguments
ARGUMENT | DESCRIPTION
Name of .app | Name of the .app file, which is sometimes different from the .ipa file.
Name of .ipa | Name of the .ipa file, which is sometimes different from the .app file.
Provisioning Profile Name | Name of the provisioning profile to use when signing.
ADVANCED
Path to .app | Relative path to the built .app file. The default value is $(SDK)/$(Configuration)/build.sym/$(Configuration)-$(SDK). Make sure to specify the variable values on the Variables tab.
Path to place .ipa | Relative path where the .ipa will be placed. The directory will be created if it doesn't exist. The default value is $(SDK)/$(Configuration)/build.sym/$(Configuration)-$(SDK)/output. Make sure to specify the variable values on the Variables tab.
CONTROL OPTIONS
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Archive Files task
6/2/2020 • 3 minutes to read
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use this task to create an archive file from a source folder. A range of standard archive formats is supported, including .zip, .jar, .war, .ear, .tar, .7z, and more.
Demands
None
YAML snippet
# Archive files
# Compress files into .7z, .tar.gz, or .zip
- task: ArchiveFiles@2
inputs:
#rootFolderOrFile: '$(Build.BinariesDirectory)'
#includeRootFolder: true
#archiveType: 'zip' # Options: zip, 7z, tar, wim
#tarCompression: 'gz' # Optional. Options: gz, bz2, xz, none
#archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
#replaceExistingArchive: true
#verbose: # Optional
#quiet: # Optional
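For orientation, a minimal, hedged sketch follows; archiving the binaries directory into a build-numbered zip in the artifact staging directory is a common pattern, not a required layout:

- task: ArchiveFiles@2
  displayName: 'Archive build output'
  inputs:
    rootFolderOrFile: '$(Build.BinariesDirectory)'
    includeRootFolder: false                   # put the folder's contents at the archive root
    archiveType: 'zip'
    archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
    replaceExistingArchive: true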
Arguments
ARGUMENT | DESCRIPTION
rootFolderOrFile (Root folder or file to archive) | (Required) Enter the root folder or file path to add to the archive. If a folder, everything under the folder will be added to the resulting archive. Default value: $(Build.BinariesDirectory)
tarCompression (Tar compression) | Default value: gz
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Q: I'm having problems. How can I troubleshoot them?
A: Try this:
1. On the variables tab, add system.debug and set it to true . Select to allow at queue time.
2. In the explorer tab, view your completed build and click the build step to view its output.
The control options arguments described above can also be useful when you're trying to isolate a problem.
Q: How do variables work? What variables are available for me to use in the arguments?
A: $(Build.SourcesDirectory) and $(Agent.BuildDirectory) are just a few of the variables you can use. Variables
are available in expressions as well as scripts; see variables to learn more about how to use them. There are some
predefined build and release variables you can also rely on.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Azure Network Load Balancer task
4/10/2020 • 2 minutes to read
Azure Pipelines
Use this task to connect or disconnect an Azure virtual machine's network interface to a load balancer's address
pool.
YAML snippet
# Azure Network Load Balancer
# Connect or disconnect an Azure virtual machine's network interface to a Load Balancer's back-end address pool
- task: AzureNLBManagement@1
inputs:
azureSubscription:
resourceGroupName:
loadBalancer:
action: # Options: disconnect, connect
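A minimal, hedged sketch follows; the service connection name, resource group, and load balancer name are placeholders you would replace with your own:

- task: AzureNLBManagement@1
  displayName: 'Disconnect VM from load balancer'
  inputs:
    azureSubscription: 'my-azure-service-connection'   # assumed service connection name
    resourceGroupName: 'my-resource-group'             # assumed resource group
    loadBalancer: 'my-load-balancer'                   # assumed load balancer name
    action: 'disconnect'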
Arguments
ARGUMENT | DESCRIPTION
action (Action) | (Required) Disconnect: Removes the virtual machine's primary network interface from the load balancer's backend pool, so that it stops receiving network traffic. Connect: Adds the virtual machine's primary network interface to the load balancer's backend pool, so that it starts receiving network traffic based on the load balancing rules for the load balancer resource.
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Bash task
11/2/2020 • 2 minutes to read
Azure Pipelines
Use this task to run a Bash script on macOS, Linux, or Windows.
YAML snippet
# Bash
# Run a Bash script on macOS, Linux, or Windows
- task: Bash@3
inputs:
#targetType: 'filePath' # Optional. Options: filePath, inline
#filePath: # Required when targetType == FilePath
#arguments: # Optional
#script: '# echo Hello world' # Required when targetType == inline
#workingDirectory: # Optional
#failOnStderr: false # Optional
#noProfile: true # Optional
#noRc: true # Optional
Arguments
ARGUMENT | DESCRIPTION
failOnStderr (Fail on standard error) | (Optional) If this is true, this task will fail if any errors are written to stderr. Default value: false
noRc (Don't read the ~/.bashrc file) | (Optional) If this is true, the task will not process .bashrc from the user's home directory. Default value: true
Variables (including secrets) can be mapped into the script's environment with env, whether you use the Bash task or the script shortcut:
steps:
- task: Bash@3
inputs:
targetType: 'inline'
script: echo $MYSECRET
env:
MYSECRET: $(Foo)
steps:
- script: echo $MYSECRET
env:
MYSECRET: $(Foo)
The Bash task will find the first Bash implementation on your system. Running which bash on Linux/macOS or
where bash on Windows will give you an idea of which one it'll select.
Bash scripts checked into the repo should be set executable ( chmod +x ). Otherwise, the task will show a warning
and source the file instead.
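A minimal, hedged sketch of running a script file that is checked into the repository (the path and argument are illustrative assumptions):

- task: Bash@3
  displayName: 'Run build script'
  inputs:
    targetType: 'filePath'
    filePath: 'scripts/build.sh'       # assumed path; mark it executable with chmod +x
    arguments: '--verbose'             # illustrative argument
    workingDirectory: '$(Build.SourcesDirectory)'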
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Batch Script task
11/7/2020 • 2 minutes to read
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to run a Windows .bat or .cmd script. Optionally, allow it to permanently modify environment
variables.
NOTE
This task is not compatible with Windows containers. If you need to run a batch script on a Windows container, use the
command line task instead.
For information on supporting multiple platforms, see cross platform scripting.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
YAML snippet
# Batch script
# Run a Windows command or batch script and optionally allow it to change the environment
- task: BatchScript@1
inputs:
filename:
#arguments: # Optional
#modifyEnvironment: False # Optional
#workingFolder: # Optional
#failOnStandardError: false # Optional
Arguments
ARGUMENT | DESCRIPTION
failOnStandardError (Fail on Standard Error) | (Optional) If this is true, this task will fail if any errors are written to the StandardError stream. Default value: false
Example
Create test.bat at the root of your repo:
@echo off
echo Hello World from %AGENT_NAME%.
echo My ID is %AGENT_ID%.
echo AGENT_WORKFOLDER contents:
@dir %AGENT_WORKFOLDER%
echo AGENT_BUILDDIRECTORY contents:
@dir %AGENT_BUILDDIRECTORY%
echo BUILD_SOURCESDIRECTORY contents:
@dir %BUILD_SOURCESDIRECTORY%
echo Over and out.
Run test.bat.
Path: test.bat
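In a YAML pipeline, a hedged sketch of the equivalent step would look like this (assuming test.bat sits at the repository root as above):

- task: BatchScript@1
  displayName: 'Run test.bat'
  inputs:
    filename: 'test.bat'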
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Where can I learn Windows commands?
An A-Z Index of the Windows CMD command line
How do I set a variable so that it can be read by subsequent scripts and tasks?
Define and modify your build variables in a script
Define and modify your release variables in a script
Q: I'm having problems. How can I troubleshoot them?
A: Try this:
1. On the variables tab, add system.debug and set it to true . Select to allow at queue time.
2. In the explorer tab, view your completed build and click the build step to view its output.
The control options arguments described above can also be useful when you're trying to isolate a problem.
Q: How do variables work? What variables are available for me to use in the arguments?
A: $(Build.SourcesDirectory) and $(Agent.BuildDirectory) are just a few of the variables you can use. Variables
are available in expressions as well as scripts; see variables to learn more about how to use them. There are
some predefined build and release variables you can also rely on.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
Command Line task
11/2/2020 • 3 minutes to read
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to run a program from the command prompt.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are
called phases.
Demands
None
YAML snippet
# Command line
# Run a command line script using Bash on Linux and macOS and cmd.exe on Windows
- task: CmdLine@2
inputs:
script: 'echo Write your commands here.'
#workingDirectory: # Optional
#failOnStderr: false # Optional
IMPORTANT
You may not realize you're running a batch file. For example, npm on Windows, along with any tools that you install
using npm install -g , are actually batch files. Always use call npm <command> to run NPM commands in a
Command Line task on Windows.
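For example, a hedged sketch of running npm through the Command Line task on a Windows agent (the npm script name is an illustrative assumption):

- task: CmdLine@2
  displayName: 'Install and build with npm'
  inputs:
    script: |
      call npm install
      call npm run build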
Arguments
ARGUMENT | DESCRIPTION
failOnStderr (Fail on Standard Error) | If this is true, this task will fail if any errors are written to stderr.
Example
YAML
steps:
- script: date /t
displayName: Get the date
- script: dir
workingDirectory: $(Agent.BuildDirectory)
displayName: List contents of a folder
- script: |
set MYVAR=foo
set
displayName: Set a variable and then display all
env:
aVarFromYaml: someValue
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Where can I learn Windows commands?
An A-Z Index of the Windows CMD command line
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
Copy and Publish Build Artifacts task
6/12/2020 • 2 minutes to read
TFS 2015
Use this task to copy build artifacts to a staging folder and then publish them to the server or a file share. Files are
copied to the $(Build.ArtifactStagingDirectory) staging folder and then published.
IMPORTANT
If you're using Azure Pipelines, or Team Foundation Server (TFS) 2017 or newer, we recommend that you do NOT use this
deprecated task. Instead, use the Copy Files and Publish Build Artifacts tasks. See Artifacts in Azure Pipelines.
IMPORTANT
Are you using Team Foundation Server (TFS) 2015.4? If so, we recommend that you do NOT use this deprecated task.
Instead, use the Copy Files and Publish Build Artifacts tasks. See Artifacts in Azure Pipelines.
You should use this task only if you're using Team Foundation Server (TFS) 2015 RTM. In that version of TFS, this task is listed
under the Build category and is named Publish Build Artifacts.
Demands
None
Arguments
ARGUMENT | DESCRIPTION
Copy Root | Folder that contains the files you want to copy. If you leave it empty, the copying is done from the root folder of the repo (same as if you had specified $(Build.SourcesDirectory)).
Contents | Specify pattern filters (one on each line) that you want to apply to the list of files to be copied. For example: ** copies all files in the root folder. **\* copies all files in the root folder and all files in all sub-folders. **\bin copies files in any sub-folder named bin.
Artifact Name | Specify the name of the artifact. For example: drop
Artifact Type | Choose server to store the artifact on your Team Foundation Server. This is the best and simplest option in most cases. See Artifacts in Azure Pipelines.
CONTROL OPTIONS
FAQ
Q: This step didn't produce the outcome I was expecting. How can I fix it?
This task has a couple of known issues:
Some minimatch patterns don't work.
It eliminates the most common root path for all paths matched.
You can avoid these issues by instead using the Copy Files task and the Publish Build Artifacts task.
Q: I'm having problems. How can I troubleshoot them?
A: Try this:
1. On the variables tab, add system.debug and set it to true . Select to allow at queue time.
2. In the explorer tab, view your completed build and click the build step to view its output.
The control options arguments described above can also be useful when you're trying to isolate a problem.
Q: How do variables work? What variables are available for me to use in the arguments?
A: $(Build.SourcesDirectory) and $(Agent.BuildDirectory) are just a few of the variables you can use. Variables
are available in expressions as well as scripts; see variables to learn more about how to use them. There are some
predefined build and release variables you can also rely on.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Copy Files task
11/7/2020 • 4 minutes to read
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015.3
Use this task to copy files from a source folder to a target folder using match patterns.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
Demands
None
YAML snippet
# Copy files
# Copy files from a source folder to a target folder using patterns matching file paths (not folder paths)
- task: CopyFiles@2
inputs:
#sourceFolder: # Optional
#contents: '**'
targetFolder:
#cleanTargetFolder: false # Optional
#overWrite: false # Optional
#flattenFolders: false # Optional
#preserveTimestamp: false # Optional
Arguments
ARGUMENT | DESCRIPTION
SourceFolder (Source Folder) | (Optional) Folder that contains the files you want to copy. If you leave it empty, the copying is done from the root folder of the repo (same as if you had specified $(Build.SourcesDirectory)). If your build produces artifacts outside of the sources directory, specify $(Agent.BuildDirectory) to copy files from the directory created for the pipeline.
TargetFolder (Target Folder) | (Required) Target folder or UNC path files will copy to. You can use variables. Example: $(build.artifactstagingdirectory)
CleanTargetFolder (Clean Target Folder) | (Optional) Delete all existing files in target folder before copy. Default value: false
flattenFolders (Flatten Folders) | (Optional) Flatten the folder structure and copy all files into the specified target folder. Default value: false
preserveTimestamp (Preserve Target Timestamp) | (Optional) Using the original source file, preserve the target file timestamp. Default value: false
Notes
If no files are matched, the task will still report success. If a matched file already exists in the target, the task will
report failure unless Overwrite is set to true.
Usage
A typical pattern for using this task is:
Build something
Copy build outputs to a staging directory
Publish staged artifacts
For example:
steps:
- script: ./buildSomething.sh
- task: CopyFiles@2
inputs:
contents: '_buildOutput/**'
targetFolder: $(Build.ArtifactStagingDirectory)
- task: PublishBuildArtifacts@1
inputs:
pathToPublish: $(Build.ArtifactStagingDirectory)
artifactName: MyBuildOutputs
Examples
Copy executables and a readme file
Goal
You want to copy just the readme and the files needed to run this C# console app:
`-- ConsoleApplication1
|-- ConsoleApplication1.sln
|-- readme.txt
`-- ClassLibrary1
|-- ClassLibrary1.csproj
`-- ClassLibrary2
|-- ClassLibrary2.csproj
`-- ConsoleApplication1
|-- ConsoleApplication1.csproj
NOTE
ConsoleApplication1.sln contains a bin folder with .dll and .exe files, see the Results below to see what gets moved!
steps:
- task: CopyFiles@2
displayName: 'Copy Files to: $(Build.ArtifactStagingDirectory)'
inputs:
Contents: |
ConsoleApplication1\ConsoleApplication1\bin\**\*.exe
ConsoleApplication1\ConsoleApplication1\bin\**\*.dll
ConsoleApplication1\readme.txt
TargetFolder: '$(Build.ArtifactStagingDirectory)'
steps:
- task: CopyFiles@2
displayName: 'Copy Files to: $(Build.ArtifactStagingDirectory)'
inputs:
Contents: |
ConsoleApplication1\**\bin\**\!(*.pdb|*.config)
!ConsoleApplication1\**\ClassLibrary*\**
ConsoleApplication1\readme.txt
TargetFolder: '$(Build.ArtifactStagingDirectory)'
`-- ConsoleApplication1
|-- readme.txt
`-- ConsoleApplication1
`-- bin
`-- Release
|-- ClassLibrary1.dll
|-- ClassLibrary2.dll
|-- ConsoleApplication1.exe
Copy everything from the source directory except the .git folder
YAML
Example with multiple match patterns:
steps:
- task: CopyFiles@2
displayName: 'Copy Files to: $(Build.ArtifactStagingDirectory)'
inputs:
SourceFolder: '$(Build.SourcesDirectory)'
Contents: |
**/*
!.git/**/*
TargetFolder: '$(Build.ArtifactStagingDirectory)'
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Where can I learn more about file matching patterns?
File matching patterns reference
How do I use this task to publish artifacts?
See Artifacts in Azure Pipelines.
Q: I'm having problems. How can I troubleshoot them?
A: Try this:
1. On the variables tab, add system.debug and set it to true . Select to allow at queue time.
2. In the explorer tab, view your completed build and click the build step to view its output.
The control options arguments described above can also be useful when you're trying to isolate a problem.
Q: How do variables work? What variables are available for me to use in the arguments?
A: $(Build.SourcesDirectory) and $(Agent.BuildDirectory) are just a few of the variables you can use. Variables
are available in expressions as well as scripts; see variables to learn more about how to use them. There are
some predefined build and release variables you can also rely on.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
cURL Upload Files task
6/2/2020 • 2 minutes to read
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to use cURL to upload files with supported protocols such as FTP, FTPS, SFTP, HTTP, and more.
Demands
curl
YAML snippet
# cURL upload files
# Use cURL's supported protocols to upload files
- task: cURLUploader@2
inputs:
files:
#authType: 'ServiceEndpoint' # Optional. Options: serviceEndpoint, userAndPass
#serviceEndpoint: # Required when authType == ServiceEndpoint
#username: # Optional
#password: # Optional
#url: # Required when authType == UserAndPass
#remotePath: 'upload/$(Build.BuildId)/' # Optional
#options: # Optional
#redirectStderr: true # Optional
Arguments
ARGUMENT | DESCRIPTION
serviceEndpoint (Service Connection) | (Required) The service connection with the credentials for the server authentication. Use the Generic service connection type for the service connection.
url (URL) | (Required) URL to the location where you want to upload the files. If you are uploading to a folder, make sure to end the argument with a trailing slash.
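A minimal, hedged sketch using username/password authentication follows; the FTP URL, credentials variable, and file pattern are placeholders for illustration:

- task: cURLUploader@2
  displayName: 'Upload build output over FTP'
  inputs:
    files: '$(Build.ArtifactStagingDirectory)/**/*.zip'   # assumed files to upload
    authType: 'userAndPass'
    username: 'deploy-user'                               # assumed account
    password: '$(ftpPassword)'                            # assumed secret pipeline variable
    url: 'ftp://ftp.example.com/uploads/'                 # note the trailing slash for a folder
    remotePath: 'upload/$(Build.BuildId)/'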
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Where can I learn more about file matching patterns?
File matching patterns reference
Where can I learn FTP commands?
List of raw FTP commands
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Decrypt File (OpenSSL) task
4/10/2020 • 2 minutes to read
Azure Pipelines
Use this task to decrypt files using OpenSSL.
YAML snippet
# Decrypt file (OpenSSL)
# Decrypt a file using OpenSSL
- task: DecryptFile@1
inputs:
#cipher: 'des3'
inFile:
passphrase:
#outFile: # Optional
#workingDirectory: # Optional
Arguments
ARGUMENT | DESCRIPTION
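A minimal, hedged sketch follows; the encrypted file name, output name, and the secret variable holding the passphrase are illustrative assumptions:

- task: DecryptFile@1
  displayName: 'Decrypt configuration file'
  inputs:
    cipher: 'des3'
    inFile: 'secrets.json.enc'         # assumed encrypted file in the repo
    outFile: 'secrets.json'            # assumed output file name
    passphrase: '$(decryptPassphrase)' # assumed secret pipeline variable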
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Delay task
6/2/2020 • 2 minutes to read
Azure Pipelines
Use this task in an agentless job of a release pipeline to pause execution of the pipeline for a fixed delay time.
Demands
Can be used only in an agentless job of a release pipeline.
YAML snippet
# Delay
# Delay further execution of a workflow by a fixed time
jobs:
- job: RunsOnServer
pool: Server
steps:
- task: Delay@1
inputs:
#delayForMinutes: '0'
Arguments
ARGUMENTS | DESCRIPTION
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
Delete Files task
11/2/2020 • 3 minutes to read
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015.3
Use this task to delete files or folders from the agent working directory.
Demands
None
YAML snippet
# Delete files
# Delete folders, or files matching a pattern
- task: DeleteFiles@1
inputs:
#SourceFolder: # Optional
#Contents: 'myFileShare'
#RemoveSourceFolder: # Optional
Arguments
ARGUMENT | DESCRIPTION
SourceFolder (Source Folder) | (Optional) Folder that contains the files you want to delete. If you leave it empty, the deletions are done from the root folder of the repo (same as if you had specified $(Build.SourcesDirectory)). If your build produces artifacts outside of the sources directory, specify $(Agent.BuildDirectory) to delete files from the build agent working directory.
Examples
Delete several patterns
This example will delete some/file , all files beginning with test , and all files in all subdirectories called bin .
steps:
- task: DeleteFiles@1
displayName: 'Remove unneeded files'
inputs:
contents: |
some/file
test*
**/bin/*
This example will delete everything directly under some except some/two.
steps:
- task: DeleteFiles@1
displayName: 'Remove unneeded files'
inputs:
contents: |
some/!(two)
This example will delete some/one and some/four.
steps:
- task: DeleteFiles@1
displayName: 'Remove unneeded files'
inputs:
contents: |
some/{one,four}
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Q: What's a minimatch pattern? How does it work?
A: See:
https://github.com/isaacs/minimatch
https://realguess.net/tags/minimatch/
http://man7.org/linux/man-pages/man3/fnmatch.3.html
Q: I'm having problems. How can I troubleshoot them?
A: Try this:
1. On the variables tab, add system.debug and set it to true . Select to allow at queue time.
2. In the explorer tab, view your completed build and click the build step to view its output.
The control options arguments described above can also be useful when you're trying to isolate a problem.
Q: How do variables work? What variables are available for me to use in the arguments?
A: $(Build.SourcesDirectory) and $(Agent.BuildDirectory) are just a few of the variables you can use. Variables
are available in expressions as well as scripts; see variables to learn more about how to use them. There are some
predefined build and release variables you can also rely on.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Download Build Artifacts task
11/2/2020 • 2 minutes to read
Azure Pipelines
Use this task to download build artifacts.
YAML snippet
# Download build artifacts
# Download files that were saved as artifacts of a completed build
- task: DownloadBuildArtifacts@0
inputs:
#buildType: 'current' # Options: current, specific
#project: # Required when buildType == Specific
#pipeline: # Required when buildType == Specific
#specificBuildWithTriggering: false # Optional
#buildVersionToDownload: 'latest' # Required when buildType == Specific. Options: latest, latestFromBranch, specific
#allowPartiallySucceededBuilds: false # Optional
#branchName: 'refs/heads/master' # Required when buildType == Specific && BuildVersionToDownload == LatestFromBranch
#buildId: # Required when buildType == Specific && BuildVersionToDownload == Specific
#tags: # Optional
#downloadType: 'single' # Choose whether to download a single artifact or all artifacts of a specific build. Options: single, specific
#artifactName: # Required when downloadType == Single
#itemPattern: '**' # Optional
#downloadPath: '$(System.ArtifactsDirectory)'
#parallelizationLimit: '8' # Optional
Arguments
ARGUMENT | DESCRIPTION
specificBuildWithTriggering (When appropriate, download artifacts from the triggering build) | (Optional) If true, this build task will try to download artifacts from the triggering build. If there is no triggering build from the specified pipeline, it will download artifacts from the build specified in the options below. Default value: false
downloadPath (Destination directory) | (Required) Path on the agent machine where the artifacts will be downloaded. Default value: $(System.ArtifactsDirectory)
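A minimal, hedged sketch of downloading a named artifact from the current run follows; the artifact name is an illustrative assumption:

- task: DownloadBuildArtifacts@0
  displayName: 'Download drop artifact'
  inputs:
    buildType: 'current'
    downloadType: 'single'
    artifactName: 'drop'                         # assumed artifact name
    downloadPath: '$(System.ArtifactsDirectory)'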
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Download Fileshare Artifacts task
11/2/2020 • 2 minutes to read
Azure Pipelines
Use this task to download fileshare artifacts.
YAML snippet
# Download artifacts from file share
# Download artifacts from a file share, like \\share\drop
- task: DownloadFileshareArtifacts@1
inputs:
filesharePath:
artifactName:
#itemPattern: '**' # Optional
#downloadPath: '$(System.ArtifactsDirectory)'
#parallelizationLimit: '8' # Optional
Arguments
ARGUMENT | DESCRIPTION
Download path | (Required) Path on the agent machine where the artifacts will be downloaded.
CONTROL OPTIONS
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Download GitHub Release task
11/2/2020 • 2 minutes to read
Azure Pipelines
Use this task in your pipeline to download assets from your GitHub release as part of your CI/CD pipeline.
Prerequisites
GitHub service connection
This task requires a GitHub service connection with Read permission to the GitHub repository. You can create a
GitHub service connection in your Azure Pipelines project. Once created, use the name of the service connection in
this task's settings.
YAML snippet
# Download GitHub Release
# Downloads a GitHub Release from a repository
- task: DownloadGitHubRelease@0
inputs:
connection:
userRepository:
#defaultVersionType: 'latest' # Options: latest, specificVersion, specificTag
#version: # Required when defaultVersionType != Latest
#itemPattern: '**' # Optional
#downloadPath: '$(System.ArtifactsDirectory)'
Arguments
ARGUMENT | DESCRIPTION
connection (GitHub Connection) | (Required) Enter the service connection name for your GitHub connection. Learn more about service connections here.
defaultVersionType (Default version) | (Required) The version of the GitHub Release from which the assets are downloaded. The version type can be 'Latest Release', 'Specific Version', or 'Specific Tag'. Default value: latest
downloadPath (Destination directory) | (Required) Path on the agent machine where the release assets will be downloaded. Default value: $(System.ArtifactsDirectory)
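A minimal, hedged sketch follows; the service connection name and repository are placeholders you would replace with your own:

- task: DownloadGitHubRelease@0
  displayName: 'Download latest release assets'
  inputs:
    connection: 'my-github-service-connection'   # assumed service connection name
    userRepository: 'contoso/my-repo'            # assumed owner/repository
    defaultVersionType: 'latest'
    itemPattern: '**'
    downloadPath: '$(System.ArtifactsDirectory)'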
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Download Package task
11/2/2020 • 2 minutes to read
Azure Pipelines
Use this task to download a package from a package management feed in Azure Artifacts or TFS. Requires the
Package Management extension.
YAML snippet
# Download package
# Download a package from a package management feed in Azure Artifacts
- task: DownloadPackage@1
inputs:
packageType: # 'nuget' Options: maven, npm, nuget, pypi, upack
feed: # <feedId> for organization-scoped feeds, <projectId>/<feedId> for project-scoped feeds.
#view: ' ' # Optional
definition: # '$(packageName)'
version: # '1.0.0'
#files: '**' # Optional
#extract: true # Optional
downloadPath: # '$(System.ArtifactsDirectory)'
Arguments
ARGUMENT | DESCRIPTION
DownloadPath | (Required) Path on the agent machine where the package will be downloaded.
Examples
Download a NuGet package from an organization-scoped feed and extract to destination directory
Download a maven package from a project-scoped feed and download only pom files.
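The original snippets for these examples are not reproduced here; the following are hedged sketches with placeholder feed, package, and version values:

# Download a NuGet package from an organization-scoped feed and extract it to the destination directory
- task: DownloadPackage@1
  inputs:
    packageType: 'nuget'
    feed: 'my-feed-id'                         # placeholder feed ID (organization-scoped)
    definition: 'Contoso.Utilities'            # placeholder package name
    version: '1.0.0'                           # placeholder version
    extract: true
    downloadPath: '$(System.ArtifactsDirectory)'

# Download only the .pom files of a Maven package from a project-scoped feed
- task: DownloadPackage@1
  inputs:
    packageType: 'maven'
    feed: 'my-project-id/my-feed-id'           # placeholder <projectId>/<feedId>
    definition: 'com.contoso:my-library'       # placeholder Maven groupId:artifactId
    version: '1.0.0'
    files: '*.pom'                             # restrict the download to POM files
    downloadPath: '$(System.ArtifactsDirectory)'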
FAQ
How do I find the ID of the feed (or project) I want to download my artifact from?
The get feed API can be used to retrieve the feed and project ID for your feed. The API is documented here.
Can I use the project or feed name instead of IDs?
Yes, you can use the project or feed name in your definition; however, if your project or feed is renamed in the future, your task will also have to be updated or it might fail.
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Download Pipeline Artifacts task
11/2/2020 • 3 minutes to read
Use this task to download pipeline artifacts from earlier stages in this pipeline, or from another pipeline.
NOTE
For more information, including Azure CLI commands, see downloading artifacts.
YAML snippet
# Download pipeline artifacts
# Download build and pipeline artifacts
- task: DownloadPipelineArtifact@2
inputs:
#source: 'current' # Options: current, specific
#project: # Required when source == Specific
#pipeline: # Required when source == Specific
#preferTriggeringPipeline: false # Optional
#runVersion: 'latest' # Required when source == Specific. Options: latest, latestFromBranch, specific
#runBranch: 'refs/heads/master' # Required when source == Specific && RunVersion == LatestFromBranch
#runId: # Required when source == Specific && RunVersion == Specific
#tags: # Optional
#artifact: # Optional
#patterns: '**' # Optional
#path: '$(Pipeline.Workspace)'
Arguments
ARGUMENT | DESCRIPTION
runId (Build) | (Required) The build from which to download the artifacts. For example: 1764. Argument aliases: pipelineId, buildId
NOTE
If you want to consume artifacts as part of CI/CD flow, refer to the download shortcut here.
Examples
Download a specific artifact
# Download an artifact named 'WebApp' to 'bin' in $(Build.SourcesDirectory)
- task: DownloadPipelineArtifact@2
inputs:
artifact: 'WebApp'
path: $(Build.SourcesDirectory)/bin
# Download an artifact named 'WebApp' from a specific build run to 'bin' in $(Build.SourcesDirectory)
- task: DownloadPipelineArtifact@2
inputs:
source: 'specific'
artifact: 'WebApp'
path: $(Build.SourcesDirectory)/bin
project: 'FabrikamFiber'
pipeline: 12
runVersion: 'specific'
runId: 40
FAQ
How can I find the ID of the Pipeline I want to download an artifact from?
You can find the ID of the pipeline in the 'Pipeline variables'. The pipeline ID is the system.definitionId variable.
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Download Secure File task
11/2/2020 • 2 minutes to read
Azure Pipelines
Use this task in a pipeline to download a secure file to the agent machine. When specifying the name of the file
(using the secureFile input) use the name you specified when uploading it rather than the actual filename.
Once downloaded, use the name value that is set on the task (or "Reference name" in the classic editor) to
reference the path to the secure file on the agent machine. For example, if the task is given the name mySecureFile
, its path can be referenced in the pipeline as $(mySecureFile.secureFilePath) . Alternatively, downloaded secure
files can be found in the directory given by $(Agent.TempDirectory) . See a full example below.
When the pipeline job completes, no matter whether it succeeds, fails, or is canceled, the secure file is deleted from
its download location.
It is unnecessary to use this task with the Install Apple Certificate or Install Apple Provisioning Profile tasks
because they automatically download, install, and delete (at the end of the pipeline job) the secure file.
YAML snippet
# Download secure file
# Download a secure file to the agent machine
- task: DownloadSecureFile@1
name: mySecureFile # The name with which to reference the secure file's path on the agent, like $(mySecureFile.secureFilePath)
inputs:
secureFile: # The file name or GUID of the secure file
#retryCount: 5 # Optional
Arguments
ARGUMENT | DESCRIPTION
Example
This example downloads a secure certificate file and installs it to a trusted certificate authority (CA) directory on
Linux:
- task: DownloadSecureFile@1
name: caCertificate
displayName: 'Download CA certificate'
inputs:
secureFile: 'myCACertificate.pem'
- script: |
echo Installing $(caCertificate.secureFilePath) to the trusted CA directory...
sudo chown root:root $(caCertificate.secureFilePath)
sudo chmod a+r $(caCertificate.secureFilePath)
sudo ln -s -t /etc/ssl/certs/ $(caCertificate.secureFilePath)
Extract Files task
11/2/2020 • 2 minutes to read
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use this task to extract files from archives to a target folder using match patterns. A range of standard archive
formats is supported, including .zip, .jar, .war, .ear, .tar, .7z, and more.
Demands
None
YAML snippet
# Extract files
# Extract a variety of archive and compression files such as .7z, .rar, .tar.gz, and .zip
- task: ExtractFiles@1
inputs:
#archiveFilePatterns: '**/*.zip'
destinationFolder:
#cleanDestinationFolder: true
Arguments
A RGUM EN T DESC RIP T IO N
Archive file patterns Patterns to match the archives you want to extract. By
default, patterns start in the root folder of the repo (same
as if you had specified $(Build.SourcesDirectory) ).
Specify pattern filters, one per line, that match the
archives to extract. For example:
test.zip extracts the test.zip file in the root folder.
test/*.zip extracts all .zip files in the test folder.
**/*.tar extracts all .tar files in the root folder and
sub-folders.
**/bin/*.7z extracts all .7z files in any sub-folder
named bin.
The pattern is used to match only archive file paths, not
folder paths, and not archive contents to be extracted. So
you should specify patterns such as **/bin/** instead
of **/bin .
Destination folder Folder where the archives will be extracted. The default file
path is relative to the root folder of the repo (same as if you
had specified $(Build.SourcesDirectory) ).
Clean destination folder before extracting Select this check box to delete all existing files in the
destination folder before beginning to extract archives.
CONTROL OPTIONS
Examples
Extract all .zip files recursively
This example extracts all .zip files recursively, including both root files and files from sub-folders:
steps:
- task: ExtractFiles@1
  inputs:
    archiveFilePatterns: '**/*.zip'
    cleanDestinationFolder: true
Extract .zip files from a subfolder
This example extracts all .zip files in the test folder only:
steps:
- task: ExtractFiles@1
  inputs:
    archiveFilePatterns: 'test/*.zip'
    cleanDestinationFolder: true
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Where can I learn more about file matching patterns?
File matching patterns reference
Q: I'm having problems. How can I troubleshoot them?
A: Try this:
1. On the variables tab, add system.debug and set it to true . Select to allow at queue time.
2. In the explorer tab, view your completed build and click the build step to view its output.
The control options arguments described above can also be useful when you're trying to isolate a problem.
Q: How do variables work? What variables are available for me to use in the arguments?
A: $(Build.SourcesDirectory) and $(Agent.BuildDirectory) are just a few of the variables you can use. Variables
are available in expressions as well as scripts; see variables to learn more about how to use them. There are some
predefined build and release variables you can also rely on.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
File Transform task
11/2/2020 • 3 minutes to read • Edit Online
Use this task to apply file transformations and variable substitutions on configuration and parameters files. For
details of how transformations are processed, see File transforms and variable substitution reference.
File transformations
At present file transformations are supported for only XML files.
To apply XML transformation to configuration files (*.config) you must specify a newline-separated list of
transformation file rules using the syntax:
-transform <path to the transform file> -xml <path to the source file> -result <path to the result file>
File transformations are useful in many scenarios, particularly when you are deploying to an App service
and want to add, remove or modify configurations for different environments (such as Dev, Test, or Prod) by
following the standard Web.config Transformation Syntax.
You can also use this functionality to transform other files, including Console or Windows service
application configuration files (for example, FabrikamService.exe.config).
Config file transformations are run before variable substitutions.
Variable substitution
At present only XML and JSON file formats are supported for variable substitution.
Tokens defined in the target configuration files are updated and then replaced with variable values.
Variable substitutions are run after config file transformations.
Variable substitution is applied for only the JSON keys predefined in the object hierarchy. It does not create
new keys.
Examples
If you need XML transformation to run on all the configuration files named with pattern .Production.config , the
transformation rule should be specified as:
-transform **\*.Production.config -xml **\*.config
If you have a configuration file named based on the stage name in your pipeline, you can use:
-transform **\*.$(Release.EnvironmentName).config -xml **\*.config
To substitute JSON variables that are nested or hierarchical, specify them using JSONPath expressions. For
example, to replace the value of ConnectionString in the sample below, you must define a variable as
Data.DefaultConnection.ConnectionString in the build or release pipeline (or in a stage within the release pipeline).
{
  "Data": {
    "DefaultConnection": {
      "ConnectionString": "Server=(localdb)\\SQLEXPRESS;Database=MyDB;Trusted_Connection=True"
    }
  }
}
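As a concrete illustration, a minimal pipeline sketch (the variable value, package path, and target file pattern are illustrative, not from the original article) that substitutes this nested ConnectionString value in an appsettings.json inside a zipped package might look like this:
variables:
  # The variable name mirrors the JSON hierarchy: Data -> DefaultConnection -> ConnectionString
  Data.DefaultConnection.ConnectionString: 'Server=myProdServer;Database=MyDB;Trusted_Connection=True'

steps:
- task: FileTransform@1
  inputs:
    folderPath: '$(System.DefaultWorkingDirectory)/drop/WebApp.zip'   # hypothetical package path
    fileType: 'json'
    targetFiles: '**/appsettings.json'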
NOTE
Only custom variables defined in build and release pipelines are used in substitution. Default and system pipeline variables
are excluded.
Here's a list of currently excluded prefixes:
'agent.'
'azure_http_user_agent'
'build.'
'common.'
'release.'
'system.'
'tf_'
If the same variables are defined in both the release pipeline and in a stage, the stage-defined variables supersede the
pipeline-defined variables.
Demands
None
YAML snippet
# File transform
# Replace tokens with variable values in XML or JSON configuration files
- task: FileTransform@1
  inputs:
    #folderPath: '$(System.DefaultWorkingDirectory)/**/*.zip'
    #enableXmlTransform: # Optional
    #xmlTransformationRules: '-transform **\*.Release.config -xml **\*.config -transform **\*.$(Release.EnvironmentName).config -xml **\*.config' # Optional
    #fileType: # Optional. Options: xml, json
    #targetFiles: # Optional
Arguments
ARGUMENT | DESCRIPTION
folderPath (Package or folder) | File path to the package or a folder. Build and release variables and wildcards are supported; for example, $(System.DefaultWorkingDirectory)/**/*.zip. For zipped folders, the contents are extracted to a TEMP location, the transformations are executed, and the results are re-zipped in the original artifact location.
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FTP Upload task
11/2/2020 • 3 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use this task to upload files to a remote machine using the File Transfer Protocol (FTP), or securely with FTPS.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
Demands
None
YAML snippet
# FTP upload
# Upload files using FTP
- task: FtpUpload@2
  inputs:
    #credentialsOption: 'serviceEndpoint' # Options: serviceEndpoint, inputs
    #serverEndpoint: # Required when credentialsOption == ServiceEndpoint
    #serverUrl: # Required when credentialsOption == Inputs
    #username: # Required when credentialsOption == Inputs
    #password: # Required when credentialsOption == Inputs
    rootDirectory:
    #filePatterns: '**'
    #remoteDirectory: '/upload/$(Build.BuildId)/'
    #clean: false
    #cleanContents: false # Required when clean == False
    #preservePaths: false
    #trustSSL: false
Arguments
ARGUMENT | DESCRIPTION
serverEndpoint (FTP service connection) | (Required) Select the service connection for your FTP server. To create one, click the Manage link and create a new Generic service connection; enter the FTP server URL for the server URL (for example, ftp://server.example.com) and the required credentials.
Secure connections will always be made regardless of the specified protocol (ftp:// or ftps://) if the target server supports FTPS. To allow only secure connections, use the ftps:// protocol, for example ftps://server.example.com. Connections to servers not supporting FTPS will fail if ftps:// is specified.
serverUrl (Required)
Server URL
username (Required)
Username
password (Required)
Password
trustSSL (Trust server certificate) | (Required) Selecting this option results in the FTP server's SSL certificate being trusted with ftps://, even if it is self-signed or cannot be validated by a Certificate Authority (CA).
Default value: false
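For reference, a minimal sketch of the task using inline credentials (the server URL, secret variable names, and directories are illustrative, not from the original article):
- task: FtpUpload@2
  inputs:
    credentialsOption: 'inputs'
    serverUrl: 'ftps://ftp.example.com'
    username: '$(ftpUser)'
    password: '$(ftpPassword)'        # store the password as a secret pipeline variable
    rootDirectory: '$(Build.ArtifactStagingDirectory)'
    filePatterns: '**'
    remoteDirectory: '/upload/$(Build.BuildId)/'
    clean: false
    cleanContents: false
    preservePaths: true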
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Where can I learn more about file matching patterns?
File matching patterns reference
Q: I'm having problems. How can I troubleshoot them?
A: Try this:
1. On the variables tab, add system.debug and set it to true . Select to allow at queue time.
2. In the explorer tab, view your completed build and click the build step to view its output.
The control options arguments described above can also be useful when you're trying to isolate a problem.
Q: How do variables work? What variables are available for me to use in the arguments?
A: $(Build.SourcesDirectory) and $(Agent.BuildDirectory) are just a few of the variables you can use. Variables
are available in expressions as well as scripts; see variables to learn more about how to use them. There are some
predefined build and release variables you can also rely on.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
GitHub Release task
11/2/2020 • 4 minutes to read • Edit Online
Azure Pipelines
Use this task in your pipeline to create, edit, or discard a GitHub release.
Prerequisites
GitHub service connection
This task requires a GitHub service connection with Write permission to the GitHub repository. You can create a
GitHub service connection in your Azure Pipelines project. Once created, use the name of the service connection in
this task's settings.
YAML snippet
# GitHub Release
# Create, edit, or delete a GitHub release
- task: GitHubRelease@0
  inputs:
    gitHubConnection:
    #repositoryName: '$(Build.Repository.Name)'
    #action: 'create' # Options: create, edit, delete
    #target: '$(Build.SourceVersion)' # Required when action == Create || Action == Edit
    #tagSource: 'auto' # Required when action == Create. Options: auto, manual
    #tagPattern: # Optional
    #tag: # Required when action == Edit || Action == Delete || TagSource == Manual
    #title: # Optional
    #releaseNotesSource: 'file' # Optional. Options: file, input
    #releaseNotesFile: # Optional
    #releaseNotes: # Optional
    #assets: '$(Build.ArtifactStagingDirectory)/*' # Optional
    #assetUploadMode: 'delete' # Optional. Options: delete, replace
    #isDraft: false # Optional
    #isPreRelease: false # Optional
    #addChangeLog: true # Optional
    #compareWith: 'lastFullRelease' # Required when addChangeLog == True. Options: lastFullRelease, lastRelease, lastReleaseByTag
    #releaseTag: # Required when compareWith == LastReleaseByTag
Arguments
ARGUMENT | DESCRIPTION
GitHub Connection (Required) Enter the service connection name for your GitHub
connection. Learn more about service connections here.
Target (Required) This is the commit SHA for which the GitHub
release will be created. E.g.
48b11d8d6e92a22e3e9563a3f643699c16fd6e27 . You can also
use variables here.
Tag source (Required) Configure the tag to be used for release creation.
The 'Git tag' option automatically takes the tag which is
associated with this commit. Use the 'User specified tag'
option in case you want to manually provide a tag.
Tag (Required) Specify the tag for which you want to create, edit,
or discard a release. You can also use variables here. E.g.
$(tagName) .
Release title (Optional) Specify the title of the GitHub release. If left empty,
the tag will be used as the release title.
Release notes source (Optional) Specify the description of the GitHub release. Use
the 'Release notes file' option to use the contents of a file as
release notes. Use the 'Inline release notes' option to manually
enter the release notes.
Release notes file path (Optional) Select the file which contains the release notes.
Asset upload mode (Optional) Use the 'Delete existing assets' option to first delete
any existing assets in the release and then upload all assets.
Use the 'Replace existing assets' option to replace any assets
that have the same name.
CONTROL OPTIONS
Examples
Create a GitHub release
The following YAML creates a GitHub release every time the task runs. The build number is used as the tag version
for the release. All .exe files and README.txt files in the $(Build.ArtifactStagingDirectory) folder are uploaded as
assets. By default, the task also generates a change log (a list of commits and issues that are part of this release)
and publishes it as release notes.
- task: GithubRelease@0
  displayName: 'Create GitHub Release'
  inputs:
    gitHubConnection: zenithworks
    repositoryName: zenithworks/javaAppWithMaven
    tagSource: manual
    tag: $(Build.BuildNumber)
    assets: |
      $(Build.ArtifactStagingDirectory)/*.exe
      $(Build.ArtifactStagingDirectory)/README.txt
You can also control the creation of the release based on repository tags. The following YAML creates a GitHub
release only when the commit that triggers the pipeline has a Git tag associated with it. The GitHub release is
created with the same tag version as the associated Git tag.
- task: GithubRelease@0
  displayName: 'Create GitHub Release'
  inputs:
    gitHubConnection: zenithworks
    repositoryName: zenithworks/javaAppWithMaven
    assets: $(Build.ArtifactStagingDirectory)/*.exe
You may also want to use the task in conjunction with task conditions to get even finer control over when the task
runs, thereby restricting the creation of releases. For example, in the following YAML the task runs only when the
pipeline is triggered by a Git tag matching the pattern 'refs/tags/release-v*'.
- task: GithubRelease@0
  displayName: 'Create GitHub Release'
  condition: startsWith(variables['Build.SourceBranch'], 'refs/tags/release-v')
  inputs:
    gitHubConnection: zenithworks
    repositoryName: zenithworks/javaAppWithMaven
    assets: $(Build.ArtifactStagingDirectory)/*.exe
Edit a GitHub release
The following YAML edits an existing GitHub release, identified by its tag, and clears its draft flag:
- task: GithubRelease@0
  displayName: 'Edit GitHub Release'
  inputs:
    gitHubConnection: zenithworks
    repositoryName: zenithworks/javaAppWithMaven
    action: edit
    tag: $(myDraftReleaseVersion)
    isDraft: false
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Install Apple Certificate task
4/10/2020 • 2 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
Use this task to install an Apple certificate that is required to build on a macOS agent. You can use this task to
install an Apple certificate that is stored as a secure file on the server.
Demands
xcode
YAML snippet
# Install Apple certificate
# Install an Apple certificate required to build on a macOS agent machine
- task: InstallAppleCertificate@2
  inputs:
    certSecureFile:
    #certPwd: # Optional
    #keychain: 'temp' # Options: default, temp, custom
    #keychainPassword: # Required when keychain == Custom || Keychain == Default
    #customKeychainPath: # Required when keychain == Custom
    #deleteCert: # Optional
    #deleteCustomKeychain: # Optional
    #signingIdentity: # Optional
Arguments
ARGUMENT | DESCRIPTION
Certificate (P12) Select the certificate (.p12) that was uploaded to Secure
Files to install on the macOS agent.
Certificate (P12) Password Password to the Apple certificate (.p12). Use a new build
variable with its lock enabled on the Variables tab to encrypt
this value.
Advanced - Keychain Select the keychain in which to install the Apple certificate.
You can choose to install the certificate in a temporary
keychain (default), the default keychain or a custom keychain.
A temporary keychain will always be deleted after the build or
release is complete.
Advanced - Keychain Password Password to unlock the keychain. Use a new build variable
with its lock enabled on the Variables tab to encrypt this
value. A password is generated for the temporary keychain if
not specified.
Advanced - Delete Certificate from Keychain Select to delete the certificate from the keychain after the
build or release is complete. This option is visible when
custom keychain or default keychain are selected.
Advanced - Custom Keychain Path Full path to a custom keychain file. The keychain will be
created if it does not exist. This option is visible when a
custom keychain is selected.
Advanced - Delete Custom Keychain Select to delete the custom keychain from the agent after the
build or release is complete. This option is visible when a
custom keychain is selected.
Install Apple Provisioning Profile task
4/10/2020 • 2 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
Use this task to install an Apple provisioning profile that is required to build on a macOS agent. You can use this
task to install provisioning profiles needed to build iOS Apps, Apple WatchKit Apps and App Extensions.
You can install an Apple provisioning profile that is:
Stored as a secure file on the server.
(Azure Pipelines) Committed to the source repository or copied to a local path on the macOS agent. We
recommend encrypting the provisioning profiles if you are committing them to the source repository. The
Decrypt File task can be used to decrypt them during a build or release.
Demands
xcode
YAML snippet
# Install Apple provisioning profile
# Install an Apple provisioning profile required to build on a macOS agent machine
- task: InstallAppleProvisioningProfile@1
  inputs:
    #provisioningProfileLocation: 'secureFiles' # Options: secureFiles, sourceRepository
    #provProfileSecureFile: # Required when provisioningProfileLocation == SecureFiles
    #provProfileSourceRepository: # Required when provisioningProfileLocation == SourceRepository
    #removeProfile: true # Optional
Arguments
ARGUMENT | DESCRIPTION
Provisioning Profile Location (Azure Pipelines) Select the location of the provisioning profile to install. The
provisioning profile can be uploaded to Secure Files or
stored in your source repository or a local path on the agent.
Provisioning Profile Select the provisioning profile that was uploaded to Secure
Files to install on the macOS agent (or) Select the
provisioning profile from the source repository or specify the
local path to a provisioning profile on the macOS agent.
Remove Profile After Build Select to specify that the provisioning profile should be
removed from the agent after the build or release is complete.
Install SSH Key task
11/7/2020 • 3 minutes to read • Edit Online
Azure Pipelines
Use this task in a pipeline to install an SSH key prior to a build or release step.
YAML snippet
# Install SSH key
# Install an SSH key prior to a build or deployment
- task: InstallSSHKey@0
  inputs:
    knownHostsEntry:
    sshPublicKey:
    #sshPassphrase: # Optional
    sshKeySecureFile:
Arguments
ARGUMENT | DESCRIPTION
Known Hosts Entry (Required) The entry for this SSH key for the known_hosts file.
SSH Public Key (Optional) The contents of the public SSH key.
SSH Passphrase (Optional) The passphrase for the SSH key, if any.
SSH Key (Secure File) (Required) Select the SSH key that was uploaded to
Secure Files to install on the agent.
CONTROL OPTIONS
Prerequisites
GitBash for Windows
b. Enter a name for the SSH key pair. In our example, we use myKey .
c. (Optional) You can enter a passphrase to encrypt your private key. This step is optional. Using a
passphrase is more secure than not using one.
The SSH key pairs are created and the following success message appears:
2. Add the public key to the GitHub repository. (The public key ends in ".pub"). To do this, go the following URL
in your browser: https://github.com/(organization-name)/(repository-name)/settings/keys .
a. Select Add deploy key .
b. In the Add new dialog box, enter a title, and then copy and paste the SSH key:
c. Select Add key .
3. Upload your private key to Azure DevOps:
a. In Azure DevOps, in the left menu, select Pipelines > Library.
4. Recover your "Known Hosts Entry". In GitBash, enter the following command:
ssh-keyscan github.com
Your "Known Hosts Entry" is the displayed value that doesn't begin with # in the GitBash results:
- task: InstallSSHKey@0
  inputs:
    knownHostsEntry: #{Enter your Known Hosts Entry Here}
    sshPublicKey: #{Enter your Public key Here}
    sshKeySecureFile: #{Enter the name of your key in "Secure Files" Here}
Now, the SSH keys are installed and you can proceed with the script to connect by using SSH, and not the default
HTTPS.
steps:
- task: InstallSSHKey@0
  displayName: 'Install an SSH key'
  inputs:
    knownHostsEntry: 'SHA256:1Hyr55tsxGifESBMc0s+2NtutnR/4+LOkVwrOGrIp8U johndoe@contoso'
    sshPublicKey: '$(myPubKey)'
    sshKeySecureFile: 'id_rsa'
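Once the key is installed, later steps in the same job can connect over SSH. A minimal sketch (the repository URL is illustrative, not from the original article):
- script: git clone git@github.com:contoso/private-repo.git
  displayName: 'Clone a private repository over SSH'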
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Invoke Azure Function task
11/2/2020 • 2 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use this task in an agentless job of a release pipeline to invoke an HTTP triggered function in an Azure function
app and parse the response.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
Demands
Can be used in only an agentless job of a release pipeline.
YAML snippet
# Invoke Azure Function
# Invoke an Azure Function
- task: AzureFunction@1
  inputs:
    function:
    key:
    #method: 'POST' # Options: OPTIONS, GET, HEAD, POST, PUT, DELETE, TRACE, PATCH
    #headers: '{Content-Type:application/json, PlanUrl: $(system.CollectionUri), ProjectId: $(system.TeamProjectId), HubName: $(system.HostType), PlanId: $(system.PlanId), JobId: $(system.JobId), TimelineId: $(system.TimelineId), TaskInstanceId: $(system.TaskInstanceId), AuthToken: $(system.AccessToken)}'
    #queryParameters: # Optional
    #body: # Required when method != GET && Method != HEAD
    #waitForCompletion: 'false' # Options: true, false
    #successCriteria: # Optional
Arguments
PARAMETER | COMMENTS
Azure function URL Required. The URL of the Azure function to be invoked.
Function key Required. The value of the available function or the host key
for the function to be invoked. Should be secured by using a
hidden variable.
Method Required. The HTTP method with which the function will be
invoked.
Body Optional. The request body for the Azure function call in
JSON format.
Completion Event Required. How the task reports completion. Can be API
response (the default) - completion is when function returns
success and success criteria evaluates to true, or Callback -
the Azure function makes a callback to update the timeline
record.
Success criteria Optional. How to parse the response body for success.
Succeeds if the function returns success and the response body parsing is successful, or when the function
updates the timeline record with success.
For more information about using this task, see Approvals and gates overview.
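For reference, a minimal sketch of invoking a function from an agentless (server) job in YAML (the function URL, key variable, and request body are illustrative, not from the original article):
jobs:
- job: invokeFunction
  pool: server
  steps:
  - task: AzureFunction@1
    inputs:
      function: 'https://myapp.azurewebsites.net/api/HttpTrigger1'   # hypothetical HTTP-triggered function
      key: '$(functionKey)'                                          # keep the function key in a secret variable
      method: 'POST'
      body: '{"buildId": "$(Build.BuildId)"}'
      waitForCompletion: 'false'                                     # use the callback event for long-running functions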
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Where should a task signal completion when Callback is chosen as the completion event?
To signal completion, the Azure function should POST completion data to the following pipelines REST endpoint.
{planUri}/{projectId}/_apis/distributedtask/hubs/{hubName}/plans/{planId}/events?api-version=2.0-preview.1
Request body:
{ "name": "TaskCompleted", "taskId": "taskInstanceId", "jobId": "jobId", "result": "succeeded" }
See this simple cmdline application for specifics. In addition, a C# helper library is available to enable live logging
and managing task status for agentless tasks. Learn more
Why does the task fail within 1 minute when the timeout is longer?
If the Azure function executes for more than 1 minute, you'll need to use the Callback completion
event. The API Response completion option is supported only for requests that complete within 60 seconds.
Invoke REST API task
11/2/2020 • 2 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
Use this task to invoke an HTTP API and parse the response.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
This task is available in both builds and releases in TFS 2018.2. In TFS 2018 RTM, this task is available only in
releases.
Demands
This task can be used in only an agentless job.
YAML snippet
# Invoke REST API
# Invoke a REST API as a part of your pipeline.
- task: InvokeRESTAPI@1
  inputs:
    #connectionType: 'connectedServiceName' # Options: connectedServiceName, connectedServiceNameARM
    #serviceConnection: # Required when connectionType == ConnectedServiceName
    #azureServiceConnection: # Required when connectionType == ConnectedServiceNameARM
    #method: 'POST' # Options: OPTIONS, GET, HEAD, POST, PUT, DELETE, TRACE, PATCH
    #headers: '{Content-Type:application/json, PlanUrl: $(system.CollectionUri), ProjectId: $(system.TeamProjectId), HubName: $(system.HostType), PlanId: $(system.PlanId), JobId: $(system.JobId), TimelineId: $(system.TimelineId), TaskInstanceId: $(system.TaskInstanceId), AuthToken: $(system.AccessToken)}'
    #body: # Required when method != GET && Method != HEAD
    #urlSuffix: # Optional
    #waitForCompletion: 'false' # Options: true, false
    #successCriteria: # Optional
Arguments
PARAMETER | COMMENTS
Generic service connection | Required. Generic service connection that provides the
baseUrl for the call and the authorization to use.
Method Required. The HTTP method with which the API will be
invoked; for example, GET , PUT , or UPDATE .
Body Optional. The request body for the function call in JSON
format.
URL suffix and parameters The string to append to the baseUrl from the Generic service
connection while making the HTTP call
Wait for completion Required. How the task reports completion. Can be API
response (the default) - completion is when the function
returns success within 20 seconds and the success criteria
evaluates to true, or Callback - the external service makes a
callback to update the timeline record.
Success criteria Optional. How to parse the response body for success. By
default, the task passes when 200 OK is returned from the
call. Additionally, the success criteria - if specified - is
evaluated.
Succeeds if the API returns success and the response body parsing is successful, or when the API updates the
timeline record with success.
The Invoke REST API task does not perform deployment actions directly. Instead, it allows you to invoke any
generic HTTP REST API as part of the automated pipeline and, optionally, wait for it to be completed.
For more information about using this task, see Approvals and gates overview.
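For reference, a minimal sketch calling a REST endpoint through a Generic service connection (the connection name and URL suffix are illustrative, not from the original article):
- task: InvokeRESTAPI@1
  inputs:
    connectionType: 'connectedServiceName'
    serviceConnection: 'my-generic-connection'   # hypothetical Generic service connection providing the base URL and auth
    method: 'GET'
    urlSuffix: '/api/health'
    waitForCompletion: 'false'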
Open source
Also see this task on GitHub.
FAQ
What base URLs are used when invoking Azure Management APIs?
Azure management APIs are invoked using ResourceManagerEndpoint of the selected environment. For example
https://management.Azure.com is used when the subscription is in AzureCloud environment.
Where should a task signal completion when Callback is chosen as the completion event?
To signal completion, the external service should POST completion data to the following pipelines REST endpoint.
{planUri}/{projectId}/_apis/distributedtask/hubs/{hubName}/plans/{planId}/events?api-version=2.0-preview.1
Request body:
{ "name": "TaskCompleted", "taskId": "taskInstanceId", "jobId": "jobId", "result": "succeeded" }
Jenkins Download Artifacts task
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use this task to download artifacts produced by a Jenkins job.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
YAML snippet
# Jenkins download artifacts
# Download artifacts produced by a Jenkins job
- task: JenkinsDownloadArtifacts@1
  inputs:
    jenkinsServerConnection:
    jobName:
    #jenkinsJobType: # Optional
    #saveTo: 'jenkinsArtifacts'
    #jenkinsBuild: 'LastSuccessfulBuild' # Options: lastSuccessfulBuild, buildNumber
    #jenkinsBuildNumber: '1' # Required when jenkinsBuild == BuildNumber
    #itemPattern: '**' # Optional
    #downloadCommitsAndWorkItems: # Optional
    #startJenkinsBuildNumber: # Optional
    #artifactDetailsFileNameSuffix: # Optional
    #propagatedArtifacts: false # Optional
    #artifactProvider: 'azureStorage' # Required when propagatedArtifacts == NotValid. Options: azureStorage
    #connectedServiceNameARM: # Required when propagatedArtifacts == True
    #storageAccountName: # Required when propagatedArtifacts == True
    #containerName: # Required when propagatedArtifacts == True
    #commonVirtualPath: # Optional
Arguments
ARGUMENT | DESCRIPTION
Jenkins service connection (Required) Select the service connection for your Jenkins
instance. To create one, click the Manage link and create a new
Jenkins service connection.
Job name (Required) The name of the Jenkins job to download artifacts
from. This must exactly match the job name on the Jenkins
server.
Download artifacts produced by (Required) Download artifacts produced by the last successful
build, or from a specific build instance.
Download Commits and WorkItems (Optional) Enables downloading the commits and workitem
details associated with the Jenkins Job
Download commits and workitems from (Optional) Optional start build number for downloading
commits and work items. If provided, all commits and work
items between start build number and build number given as
input to download artifacts will be downloaded.
Commit and WorkItem FileName | (Optional) Optional file name suffix for the commits and work item
attachments. Attachments will be created as commits_{suffix}.json and workitem_{suffix}.json. If this input is
not provided, attachments are created with the names commits.json and workitems.json.
Artifacts are propagated to Azure (Optional) Check this if Jenkins artifacts were propagated to
Azure. To upload Jenkins artifacts to azure, refer to this Jenkins
plugin
Storage Account Name (Required) Azure Classic and Resource Manager storage
accounts are listed. Select the Storage account name in which
the artifacts are propagated.
Common Virtual Path (Optional) Path to the artifacts inside the Azure storage
container.
CONTROL OPTIONS
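For reference, a minimal sketch that downloads the latest successful artifacts from a Jenkins job (the connection and job names are illustrative, not from the original article):
- task: JenkinsDownloadArtifacts@1
  inputs:
    jenkinsServerConnection: 'my-jenkins-connection'   # hypothetical Jenkins service connection
    jobName: 'MyJenkinsJob'
    jenkinsBuild: 'LastSuccessfulBuild'
    saveTo: 'jenkinsArtifacts'
    itemPattern: '**'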
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Manual Intervention task
2/26/2020 • 2 minutes to read • Edit Online
Azure Pipelines
Use this task in a release pipeline to pause an active deployment within a stage, typically to perform some manual
steps or actions, and then continue the automated deployment tasks.
Demands
Can be used in only an agentless job of a release pipeline. This task is supported only in classic release pipelines.
Arguments
PA RA M ET ER C O M M EN T S
Instructions Optional. The instruction text to display to the user when the
task is activated.
Notify users Optional. The list of users that will be notified that the task
has been activated.
The Manual Intervention task does not perform deployment actions directly. Instead, it allows you to pause an
active deployment within a stage, typically to perform some manual steps or actions, and then continue the
automated deployment tasks. For example, the user may need to edit the details of the current release before
continuing; perhaps by entering the values for custom variables used by the tasks in the release.
The Manual Intervention task configuration includes an Instructions parameter that can be used to provide
related information, or to specify the manual steps the user should execute during the agentless job. You can
configure the task to send email notifications to users and user groups when it is awaiting intervention, and
specify the automatic response (reject or resume the deployment) after a configurable timeout occurs.
You can use built-in and custom variables to generate portions of your instructions.
When the Manual Intervention task is activated during a deployment, it sets the deployment state to IN
PROGRESS and displays a message bar containing a link that opens the Manual Intervention dialog containing
the instructions. After carrying out the manual steps, the administrator or user can choose to resume the
deployment, or reject it. Users with Manage deployment permission on the stage can resume or reject the
manual intervention.
For more information about using this task, see Approvals and gates overview.
PowerShell task
11/2/2020 • 5 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to run a PowerShell script.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
Demands
DotNetFramework
YAML snippet
# PowerShell
# Run a PowerShell script on Linux, macOS, or Windows
- task: PowerShell@2
  inputs:
    #targetType: 'filePath' # Optional. Options: filePath, inline
    #filePath: # Required when targetType == FilePath
    #arguments: # Optional
    #script: '# Write your PowerShell commands here. Write-Host Hello World' # Required when targetType == Inline
    #errorActionPreference: 'stop' # Optional. Options: stop, continue, silentlyContinue
    #failOnStderr: false # Optional
    #ignoreLASTEXITCODE: false # Optional
    #pwsh: false # Optional
    #workingDirectory: # Optional
The powershell and pwsh keyword shortcuts in YAML both resolve to the PowerShell@2 task. powershell runs Windows PowerShell and will only work on a
Windows agent. pwsh runs PowerShell Core, which must be installed on the agent or container.
NOTE
Each PowerShell session lasts only for the duration of the job in which it runs. Tasks that depend on what has been
bootstrapped must be in the same job as the bootstrap.
Arguments
ARGUMENT | DESCRIPTION
failOnStderr (Fail on Standard Error) | (Optional) If this is true, this task will fail if any errors are written to the error pipeline, or if any data is written to the Standard Error stream. Otherwise the task will rely on the exit code to determine failure.
Default value: false
pwsh (Use PowerShell Core) | (Optional) If this is true, then on Windows the task will use pwsh.exe from your PATH instead of powershell.exe.
Default value: false
Examples
Hello World
Create test.ps1 at the root of your repo:
TASK | ARGUMENTS
Run test.ps1.
Utility: PowerShell | Script filename: test.ps1
Write a warning
Add the PowerShell task, set the Type to inline, and paste in a script that logs a warning (the script body is not included in this extract; see the sketch below).
Write an error
Add the PowerShell task, set the Type to inline, and paste in a script that logs an error. To make the error fail the build, end the script with:
exit 1
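The inline script bodies did not survive extraction. A minimal sketch using the Azure Pipelines task.logissue logging command (the messages are illustrative):
- task: PowerShell@2
  displayName: 'Write a warning'
  inputs:
    targetType: 'inline'
    script: |
      # Logs a warning to the run without failing the step
      Write-Host "##vso[task.logissue type=warning]This is a sample warning"

- task: PowerShell@2
  displayName: 'Write an error'
  inputs:
    targetType: 'inline'
    script: |
      # Logs an error; exit 1 makes the step (and build) fail
      Write-Host "##vso[task.logissue type=error]This is a sample error"
      exit 1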
ApplyVersionToAssemblies.ps1
Use a script to customize your build pipeline
Call PowerShell script with multiple arguments
Create PowerShell script test2.ps1 that accepts the two named parameters passed below:
param ($input1, $input2)
Write-Host "$input1 $input2"
- task: PowerShell@2
  inputs:
    targetType: 'filePath'
    filePath: $(System.DefaultWorkingDirectory)\test2.ps1
    arguments: > # Use this to avoid newline characters in multiline string
      -input1 "Hello"
      -input2 "World"
  displayName: 'Print Hello World'
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Where can I learn about PowerShell scripts?
Scripting with Windows PowerShell
Microsoft Script Center (the Scripting Guys)
Windows PowerShell Tutorial
PowerShell.org
How do I set a variable so that it can be read by subsequent scripts and tasks?
Define and modify your build variables in a script
Define and modify your release variables in a script
Q: I'm having problems. How can I troubleshoot them?
A: Try this:
1. On the variables tab, add system.debug and set it to true . Select to allow at queue time.
2. In the explorer tab, view your completed build and click the build step to view its output.
The control options arguments described above can also be useful when you're trying to isolate a problem.
Q: How do variables work? What variables are available for me to use in the arguments?
A: $(Build.SourcesDirectory) and $(Agent.BuildDirectory) are just a few of the variables you can use. Variables
are available in expressions as well as scripts; see variables to learn more about how to use them. There are
some predefined build and release variables you can also rely on.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
Publish Build Artifacts task
11/2/2020 • 2 minutes to read • Edit Online
TIP
Looking to get started working with build artifacts? See Artifacts in Azure Pipelines.
Demands
None
YAML snippet
# Publish build artifacts
# Publish build artifacts to Azure Pipelines or a Windows file share
- task: PublishBuildArtifacts@1
  inputs:
    #pathToPublish: '$(Build.ArtifactStagingDirectory)'
    #artifactName: 'drop'
    #publishLocation: 'Container' # Options: container, filePath
    #targetPath: # Required when publishLocation == FilePath
    #parallel: false # Optional
    #parallelCount: # Optional
    #fileCopyOptions: # Optional
Arguments
ARGUMENT | DESCRIPTION
ArtifactName (Artifact name) | Specify the name of the artifact that you want to create. It can be whatever you want. For example: drop
TargetPath (File share path) | Specify the path to the file share where you want to copy the files. The path must be a fully-qualified path or a valid path relative to the root directory of your repository. Publishing artifacts from a Linux or macOS agent to a file share is not supported.
Control options
NOTE
You cannot use Bin , App_Data and other folder names reserved by IIS as an artifact name because this content is not
served in response to Web requests. Please see ASP.NET Web Project Folder Structure for more details.
Usage
A typical pattern for using this task is:
Build something
Copy build outputs to a staging directory
Publish staged artifacts
For example:
steps:
- script: ./buildSomething.sh
- task: CopyFiles@2
  inputs:
    contents: '_buildOutput/**'
    targetFolder: $(Build.ArtifactStagingDirectory)
- task: PublishBuildArtifacts@1
  inputs:
    pathToPublish: $(Build.ArtifactStagingDirectory)
    artifactName: MyBuildOutputs
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Q: I'm having problems. How can I troubleshoot them?
A: Try this:
1. On the variables tab, add system.debug and set it to true . Select to allow at queue time.
2. In the explorer tab, view your completed build and click the build step to view its output.
The control options arguments described above can also be useful when you're trying to isolate a problem.
Q: How do variables work? What variables are available for me to use in the arguments?
A: $(Build.SourcesDirectory) and $(Agent.BuildDirectory) are just a few of the variables you can use.
Variables are available in expressions as well as scripts; see variables to learn more about how to use them.
There are some predefined build and release variables you can also rely on.
Publish Pipeline Artifacts task
6/2/2020 • 2 minutes to read • Edit Online
Azure Pipelines
Use this task in a pipeline to publish artifacts for Azure Pipelines. (Note that publishing is not supported in
classic release pipelines; it is supported in build pipelines, multi-stage pipelines, and YAML pipelines.)
TIP
Looking to get started working with build artifacts? See Artifacts in Azure Pipelines.
Demand
None
YAML snippet
# Publish pipeline artifacts
# Publish (upload) a file or directory as a named artifact for the current run
- task: PublishPipelineArtifact@1
  inputs:
    #targetPath: '$(Pipeline.Workspace)'
    #artifactName: # 'drop'
Arguments
ARGUMENT | DESCRIPTION
targetPath Path to the folder or file you want to publish. The path must
be a fully-qualified path or a valid path relative to the root
directory of your repository. See Artifacts in Azure Pipelines.
artifactName Specify the name of the artifact that you want to create. It can
be whatever you want. For example: drop
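For reference, a minimal sketch that publishes a staging folder as a named artifact (the path and artifact name are illustrative, not from the original article):
steps:
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: '$(Build.ArtifactStagingDirectory)'
    artifactName: 'drop'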
Control options
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Q: I'm having problems. How can I troubleshoot them?
A: Try this:
1. On the variables tab, add system.debug and set it to true . Select to allow at queue time.
2. In the explorer tab, view your completed build and click the build step to view its output.
The control options arguments described above can also be useful when you're trying to isolate a problem.
Q: How do variables work? What variables are available for me to use in the arguments?
A: $(Build.SourcesDirectory) and $(Agent.BuildDirectory) are just a few of the variables you can use. Variables
are available in expressions as well as scripts; see variables to learn more about how to use them. There are some
predefined build and release variables you can also rely on.
Publish To Azure Service Bus task
11/2/2020 • 2 minutes to read • Edit Online
Azure Pipelines
Use this task in an agentless job of a release pipeline to send a message to an Azure Service Bus using a service
connection and without using an agent.
Demands
Can be used in only an agentless job of a release pipeline.
YAML snippet
# Publish To Azure Service Bus
# Sends a message to Azure Service Bus using a service connection (no agent is required)
- task: PublishToAzureServiceBus@1
  inputs:
    azureSubscription:
    #messageBody: # Optional
    #sessionId: # Optional
    #signPayload: false
    #certificateString: # Required when signPayload == True
    #signatureKey: 'signature' # Optional
    #waitForCompletion: false
Arguments
PARAMETER | COMMENTS
Azure Service Bus Connection | Required. An existing service connection to an Azure Service Bus.
Message body Required. The text of the message body to send to the Service
Bus.
Wait for Task Completion Optional. Set this option to force the task to halt until a
response is received.
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Do I need an agent?
You do not need an agent to run this task. This task can be used in only an agentless job of a release pipeline.
Where should a task signal completion?
To signal completion, the external service should POST completion data to the following pipelines REST endpoint.
{planUri}/{projectId}/_apis/distributedtask/hubs/{hubName}/plans/{planId}/events?api-version=2.0-preview.1
Request body:
{ "name": "TaskCompleted", "taskId": "taskInstanceId", "jobId": "jobId", "result": "succeeded" }
Python Script task
Azure Pipelines
Use this task to run a Python script.
YAML snippet
# Python script
# Run a Python file or inline script
- task: PythonScript@0
  inputs:
    #scriptSource: 'filePath' # Options: filePath, inline
    #scriptPath: # Required when scriptSource == filePath
    #script: # Required when scriptSource == inline
    #arguments: # Optional
    #pythonInterpreter: # Optional
    #workingDirectory: # Optional
    #failOnStderr: false # Optional
Arguments
ARGUMENT | DESCRIPTION
workingDirectory (Working directory) | (Optional)
failOnStderr (Fail on standard error) | (Optional) If true, this task will fail if any text is written to stderr.
Control options
Remarks
By default, this task invokes python from the system path. Run the Use Python Version task first to put the version you want
in the system path.
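For reference, a minimal sketch combining the two tasks (the Python version, script path, and arguments are illustrative, not from the original article):
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.8'        # puts this Python version on the system path
- task: PythonScript@0
  inputs:
    scriptSource: 'filePath'
    scriptPath: 'scripts/build.py'   # hypothetical script in the repo
    arguments: '--verbose'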
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Query Azure Monitor Alerts task
4/10/2020 • 2 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use this task in an agentless job of a release pipeline to observe the configured Azure monitor rules for active
alerts.
Can be used in only an agentless job of a release pipeline.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
Demands
None
YAML snippet
# Query Azure Monitor alerts
# Observe the configured Azure Monitor rules for active alerts
- task: AzureMonitor@1
  inputs:
    connectedServiceNameARM:
    resourceGroupName:
    #filterType: 'none' # Options: resource, alertrule, none
    #resource: # Required when filterType == Resource
    #alertRule: # Required when filterType == Alertrule
    #severity: 'Sev0,Sev1,Sev2,Sev3,Sev4' # Optional. Options: sev0, sev1, sev2, sev3, sev4
    #timeRange: '1h' # Optional. Options: 1h, 1d, 7d, 30d
    #alertState: 'Acknowledged,New' # Optional. Options: new, acknowledged, closed
    #monitorCondition: 'Fired' # Optional. Options: fired, resolved
Arguments
PARAMETER | COMMENTS
Resource type Required. Select the resource type in the selected group.
Resource name Required. Select the resources of the chosen types in the
selected group.
Alert rules | Required. Select from the currently configured alert rules to query for status.
Succeeds if none of the alert rules are activated at the time of sampling.
For more information about using this task, see Approvals and gates overview.
Also see this task on GitHub.
Query Work Items task
4/10/2020 • 2 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use this task in an agentless job of a release pipeline to ensure the number of matching items returned by a work
item query is within the configured thresholds.
Can be used in only an agentless job of a release pipeline.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
Demands
None
YAML snippet
# Query work items
# Execute a work item query and check the number of items returned
- task: queryWorkItems@0
  inputs:
    queryId:
    #maxThreshold: '0'
    #minThreshold: '0'
Arguments
PARAMETER | COMMENTS
Query | Required. Select a work item query within the current project. Can be a built-in or custom query.
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Service Fabric PowerShell task
6/2/2020 • 2 minutes to read • Edit Online
Azure Pipelines
Use this task to run a PowerShell script within the context of an Azure Service Fabric cluster connection. Runs any
PowerShell command or script in a PowerShell session that has a Service Fabric cluster connection initialized.
Prerequisites
Service Fabric
This task uses a Service Fabric installation to connect and deploy to a Service Fabric cluster.
Azure Service Fabric Core SDK on the build agent.
YAML snippet
# Service Fabric PowerShell
# Run a PowerShell script in the context of an Azure Service Fabric cluster connection
- task: ServiceFabricPowerShell@1
  inputs:
    clusterConnection:
    #scriptType: 'FilePath' # Options: filePath, inlineScript
    #scriptPath: # Optional
    #inline: '# You can write your PowerShell scripts inline here. # You can also pass predefined and custom variables to this script using arguments' # Optional
    #scriptArguments: # Optional
Arguments
ARGUMENT | DESCRIPTION
Cluster Connection The Azure Service Fabric service connection to use to connect and
authenticate to the cluster.
Script Type Specify whether the script is provided as a file or inline in the task.
Script Path | Path to the PowerShell script to run. Can include wildcards and variables. Example: $(system.defaultworkingdirectory)/**/drop/projectartifacts/**/docker-compose.yml. Note: combining compose files is not supported as part of this task.
Inline Script The PowerShell commands to run on the build agent. More
information
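For reference, a minimal sketch running an inline Service Fabric cmdlet against the cluster connection (the connection name and command are illustrative, not from the original article):
- task: ServiceFabricPowerShell@1
  inputs:
    clusterConnection: 'my-servicefabric-connection'   # hypothetical Service Fabric service connection
    scriptType: 'inlineScript'
    inline: 'Get-ServiceFabricClusterHealth'           # runs with the cluster connection already initialized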
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
Shell Script task
11/2/2020 • 2 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to run a shell script using bash.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
Demands
sh
YAML snippet
- task: ShellScript@2
  inputs:
    scriptPath:
    #args: '' # Optional
    #disableAutoCwd: false # Optional
    #cwd: '' # Optional
    #failOnStandardError: false
Arguments
ARGUMENT | DESCRIPTION
Script Path Relative path from the repo root to the shell script file that
you want to run.
ADVANCED
Working Directory | Working directory in which you want to run the script. If you
leave it empty, it defaults to the folder where the script is located.
Fail on Standard Error Select if you want this task to fail if any errors are written to
the StandardError stream.
CONTROL OPTIONS
Example
Create test.sh at the root of your repo. We recommend creating this file from a Linux environment (such as a
real Linux machine or Windows Subsystem for Linux) so that line endings are correct. Also, don't forget to
chmod +x test.sh before you commit it.
#!/bin/bash
echo "Hello World"
echo "AGENT_WORKFOLDER is $AGENT_WORKFOLDER"
echo "AGENT_WORKFOLDER contents:"
ls -1 $AGENT_WORKFOLDER
echo "AGENT_BUILDDIRECTORY is $AGENT_BUILDDIRECTORY"
echo "AGENT_BUILDDIRECTORY contents:"
ls -1 $AGENT_BUILDDIRECTORY
echo "SYSTEM_HOSTTYPE is $SYSTEM_HOSTTYPE"
echo "Over and out."
Run test.sh.
Script Path: test.sh
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Where can I learn about Bash scripts?
Beginners/BashScripting to get started.
Awesome Bash to go deeper.
How do I set a variable so that it can be read by subsequent scripts and tasks?
Define and modify your build variables in a script
Define and modify your release variables in a script
Q: I'm having problems. How can I troubleshoot them?
A: Try this:
1. On the variables tab, add system.debug and set it to true . Select to allow at queue time.
2. In the explorer tab, view your completed build and click the build step to view its output.
The control options arguments described above can also be useful when you're trying to isolate a problem.
Q: How do variables work? What variables are available for me to use in the arguments?
A: $(Build.SourcesDirectory) and $(Agent.BuildDirectory) are just a few of the variables you can use. Variables
are available in expressions as well as scripts; see variables to learn more about how to use them. There are some
predefined build and release variables you can also rely on.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
Update Service Fabric Manifests task
11/2/2020 • 5 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
NOTE
In TFS 2017 this task is named Update Service Fabric App Versions task.
Use this task in a build pipeline to automatically update the versions of a packaged Service Fabric app. This task
appends a version suffix to all service and app versions, specified in the manifest files, in an Azure Service Fabric
app package.
NOTE
This task is not yet available in release pipelines.
Demands
None
YAML snippet
# Update Service Fabric manifests
# Automatically update portions of application and service manifests in a packaged Azure Service Fabric application
- task: ServiceFabricUpdateManifests@2
  inputs:
    #updateType: 'Manifest versions' # Options: manifest Versions, docker Image Settings
    applicationPackagePath:
    #versionSuffix: '.$(Build.BuildNumber)' # Required when updateType == Manifest Versions
    #versionBehavior: 'Append' # Optional. Options: append, replace
    #updateOnlyChanged: false # Required when updateType == Manifest Versions
    #pkgArtifactName: # Required when updateType == Manifest versions && updateOnlyChanged == true
    #logAllChanges: true # Optional
    #compareType: 'LastSuccessful' # Optional. Options: lastSuccessful, specific
    #buildNumber: # Optional
    #overwriteExistingPkgArtifact: true # Optional
    #imageNamesPath: # Optional
    #imageDigestsPath: # Required when updateType == Docker Image Settings
Arguments
ARGUMENT | DESCRIPTION
Version Value | The value appended to the versions in the manifest files. Default is .$(Build.BuildNumber).
Tip: You can modify the build number format directly or use a logging command (https://go.microsoft.com/fwlink/?LinkId=821347) to dynamically set a variable in any format. For example, you can use $(VersionSuffix) defined in a PowerShell task:
$versionSuffix = ".$([DateTimeOffset]::UtcNow.ToString('yyyyMMdd.HHmmss'))"
Write-Host "##vso[task.setvariable variable=VersionSuffix;]$versionSuffix"
Update only if changed | Select this check box if you want to append the new version suffix to only the packages that have changed from a previous build. If no changes are found, the version suffix from the previous build will be appended.
Note: By default, the compiler will create different outputs even if you made no changes. Use the deterministic compiler flag (https://go.microsoft.com/fwlink/?LinkId=808668) to ensure builds with the same inputs produce the same outputs.
Package Artifact Name | The name of the artifact containing the application package from the previous build.
Log all changes | Select this check box to compare all files in every package and log if the file was added, removed, or if its content changed. Otherwise, compare files in a package only until the first change is found, for potentially faster performance.
CONTROL OPTIONS
Task Inputs
PARAMETER | DESCRIPTION
logAllChanges (Log all changes) | Compare all files in every package and log if the file was added, removed, or if its content changed. Otherwise, compare files in a package only until the first change is found, for faster performance.
Default value: true
imageNamesPath (Image Names Path) | Path to a text file that contains the names of the Docker images associated with the Service Fabric application that should be updated with digests. Each image name must be on its own line and must be in the same order as the digests in the Image Digests file. If the images are created by the Service Fabric project, this file is generated as part of the Package target and its output location is controlled by the property BuiltDockerImagesFilePath.
imageDigestsPath (Image Digests Path) | (Required) Path to a text file that contains the digest values of the Docker images associated with the Service Fabric application. This file can be output by the Docker task when using the push action. The file should contain lines of text in the format 'registry/image_name@digest_value'.
Example
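The original example did not survive extraction. A minimal sketch that appends the build number to every version in a packaged application (the package path is illustrative, not from the original article):
- task: ServiceFabricUpdateManifests@2
  inputs:
    updateType: 'Manifest versions'
    applicationPackagePath: '$(Build.ArtifactStagingDirectory)/applicationpackage'   # hypothetical package location
    versionSuffix: '.$(Build.BuildNumber)'
    versionBehavior: 'Append'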
Also see: Service Fabric Application Deployment task
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
App Center Test task
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
This task lets you run test suites against an application binary ( .apk or .ipa file) using App Center Test.
Sign up with App Center first.
For details about using this task, see the App Center documentation article Using Azure DevOps for UI Testing.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
YAML snippet
# App Center test
# Test app packages with Visual Studio App Center
- task: AppCenterTest@1
inputs:
appFile:
#artifactsDirectory: '$(Build.ArtifactStagingDirectory)/AppCenterTest'
#prepareTests: true # Optional
#frameworkOption: 'appium' # Required when prepareTests == True. Options: appium, espresso, calabash, uitest, xcuitest
#appiumBuildDirectory: # Required when prepareTests == True && Framework == Appium
#espressoBuildDirectory: # Optional
#espressoTestApkFile: # Optional
#calabashProjectDirectory: # Required when prepareTests == True && Framework == Calabash
#calabashConfigFile: # Optional
#calabashProfile: # Optional
#calabashSkipConfigCheck: # Optional
#uiTestBuildDirectory: # Required when prepareTests == True && Framework == Uitest
#uitestStorePath: # Optional
#uiTestStorePassword: # Optional
#uitestKeyAlias: # Optional
#uiTestKeyPassword: # Optional
#uiTestToolsDirectory: # Optional
#signInfo: # Optional
#xcUITestBuildDirectory: # Optional
#xcUITestIpaFile: # Optional
#prepareOptions: # Optional
#runTests: true # Optional
#credentialsOption: 'serviceEndpoint' # Required when runTests == True. Options: serviceEndpoint, inputs
#serverEndpoint: # Required when runTests == True && CredsType == ServiceEndpoint
#username: # Required when runTests == True && CredsType == Inputs
#password: # Required when runTests == True && CredsType == Inputs
#appSlug: # Required when runTests == True
#devices: # Required when runTests == True
#series: 'master' # Optional
#dsymDirectory: # Optional
#localeOption: 'en_US' # Required when runTests == True. Options: da_DK, nl_NL, en_GB, en_US, fr_FR, de_DE, ja_JP, ru_RU, es_MX, es_ES, user
#userDefinedLocale: # Optional
#loginOptions: # Optional
#runOptions: # Optional
#skipWaitingForResults: # Optional
#cliFile: # Optional
#showDebugOutput: # Optional
Arguments
ARGUMENT | DESCRIPTION
app (Binary application file path) | (Required) Relative path from the repo root to the APK or IPA file that you want to test. Argument alias: appFile
espressoTestApkPath (Test APK path (Espresso)) | (Optional) Path to APK file with Espresso tests. If not set, build-dir is used to discover it. Wildcard is allowed. Argument alias: espressoTestApkFile
uitestStorePath (Store file (Xamarin UI Test)) | (Optional) Path to the store file used to sign the app.
uitestStorePass (Store password (Xamarin UI Test)) | (Optional) Password of the store file used to sign the app. Use a new variable with its lock enabled on the Variables tab to encrypt this value. Argument alias: uiTestStorePassword
uitestKeyAlias (Key alias (Xamarin UI Test)) | (Optional) Enter the alias that identifies the public/private key pair used in the store file.
uitestKeyPass (Key password (Xamarin UI Test)) | (Optional) Enter the key password for the alias and store file. Use a new variable with its lock enabled on the Variables tab to encrypt this value. Argument alias: uiTestKeyPassword
signInfo (Signing information (Calabash/Xamarin UI Test)) | (Optional) Use Signing Information for signing the test server.
xcuitestTestIpaPath (Test IPA path (XCUITest)) | (Optional) Path to the *.ipa file with the XCUITest tests. Argument alias: xcUITestIpaFile
devices (Devices) | (Required) String to identify what devices this test will run against. Copy and paste this string when you define a new test run from App Center Test beacon.
series (Test series) | (Optional) The series name for organizing test runs (e.g. master, production, beta). Default value: master
cliLocationOverride (App Center CLI location) | (Optional) Path to the App Center CLI on the build or release agent. Argument alias: cliFile
debug (Enable debug output) | (Optional) Add --debug to the App Center CLI for verbose output. Argument alias: showDebugOutput
Example
This example runs Espresso tests on an Android app using the App Center Test task.
steps:
- task: AppCenterTest@1
displayName: 'Espresso Test - Synchronous'
inputs:
appFile: 'Espresso/espresso-app.apk'
artifactsDirectory: '$(Build.ArtifactStagingDirectory)/AppCenterTest'
frameworkOption: espresso
espressoBuildDirectory: Espresso
serverEndpoint: 'myAppCenterServiceConnection'
appSlug: 'xplatbg1/EspressoTests'
devices: a84c93af
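The arguments above also cover the other frameworks; as a second, hedged sketch, the same task configured for XCUITest might look like the following. The file paths, service connection name, app slug, and devices string are placeholders.

steps:
- task: AppCenterTest@1
  displayName: 'XCUITest - Synchronous'
  inputs:
    appFile: 'XCUITest/myapp.ipa'                  # hypothetical path to the signed app
    frameworkOption: xcuitest
    xcUITestIpaFile: 'XCUITest/myapp-uitests.ipa'  # hypothetical path to the test bundle
    serverEndpoint: 'myAppCenterServiceConnection'
    appSlug: 'myorg/MyiOSApp'                      # hypothetical {owner}/{app} slug
    devices: 'myorg/ios-devices'                   # placeholder device selection string from App Center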
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Cloud-based Apache JMeter Load Test task
Azure Pipelines
Caution
The cloud-based load testing service is deprecated. More information about the deprecation, the service
availability, and alternative services can be found here.
Use this task to run Apache JMeter load tests in the cloud.
Demands
The agent must have the following capability:
Azure PowerShell
YAML snippet
# Cloud-based Apache JMeter load test
# Run an Apache JMeter load test in the cloud
- task: ApacheJMeterLoadTest@1
inputs:
#connectedServiceName: # Optional
testDrop:
#loadTest: 'jmeter.jmx'
#agentCount: '1' # Options: 1, 2, 3, 4, 5
#runDuration: '60' # Options: 60, 120, 180, 240, 300
#geoLocation: 'Default' # Optional. Options: default, australia East, australia Southeast, brazil South, central India, central US, east Asia, east US 2, east US, japan East, japan West, north Central US, north Europe, south Central US, south India, southeast Asia, west Europe, west US
#machineType: '0' # Optional. Options: 0, 2
Arguments
ARGUMENT | DESCRIPTION
Apache JMeter test files folder | (Required) Relative path from repo root where the load test files are available.
Apache JMeter file | (Required) The Apache JMeter test filename to be used under the load test folder specified above.
Agent Count | (Required) Number of test agents (dual-core) used in the run.
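To show how these inputs fit together (bearing in mind the service itself is deprecated), here is a minimal sketch; the testDrop folder name is a placeholder.

steps:
- task: ApacheJMeterLoadTest@1
  inputs:
    testDrop: 'LoadTest'       # hypothetical repo folder containing the JMeter test files
    loadTest: 'jmeter.jmx'
    agentCount: '2'
    runDuration: '120'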
Control options
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Cloud-based Load Test task
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Caution
The cloud-based load testing service is deprecated. More information about the deprecation, the service
availability, and alternative services can be found here.
Use this task to run a load test in the cloud, to understand, test, and validate your app's performance. The task uses
the Cloud-based Load Test Service based in Microsoft Azure and can be used to test your app's performance by
generating load on it.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
Demands
The agent must have the following capability:
Azure PowerShell
YAML snippet
# Cloud-based load test
# Run a load test in the cloud with Azure Pipelines
- task: CloudLoadTest@1
inputs:
#connectedServiceName: # Optional
#testDrop: '$(System.DefaultWorkingDirectory)'
loadTest:
#activeRunSettings: 'useFile' # Optional. Options: useFile, changeActive
#runSettingName: # Required when activeRunSettings == ChangeActive
#testContextParameters: # Optional
#testSettings: # Optional
#thresholdLimit: # Optional
#machineType: '0' # Options: 0, 2
#resourceGroupName: 'default' # Optional
#numOfSelfProvisionedAgents: # Optional
Arguments
ARGUMENT | DESCRIPTION
Azure Pipelines connection | The name of a Generic service connection that references the Azure DevOps organization you will be running the load test from and publishing the results to. Required for builds and releases on TFS, and must specify a connection to the Azure DevOps organization where the load test will run. Optional for builds and releases on Azure Pipelines; in this case, if not provided, the current Azure Pipelines connection is used. See Generic service connection.
Test settings file | Required. The path relative to the repository root of the test settings file that specifies the files and data required for the load test, such as the test settings, any deployment items, and setup/clean-up scripts. The task will search this path and any subfolders.
Load test files folder | Required. The path of the load test project. The task looks here for the files required for the load test, such as the load test file, any deployment items, and setup/clean-up scripts. The task will search this path and any subfolders.
Load test file | Required. The name of the load test file (such as myfile.loadtest) to be executed as part of this task. This allows you to have more than one load test file and choose the one to execute based on the deployment environment or other factors.
Number of permissible threshold violations | Optional. The number of critical violations that must occur for the load test to be deemed unsuccessful, aborted, and marked as failed.
Examples
Scheduling Load Test Execution
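As a hedged sketch (the service is deprecated, and the folder and file names below are placeholders), a minimal configuration of this task might look like:

steps:
- task: CloudLoadTest@1
  inputs:
    testDrop: '$(System.DefaultWorkingDirectory)/LoadTest'   # hypothetical load test project folder
    loadTest: 'myfile.loadtest'
    testSettings: 'load.testsettings'                        # hypothetical test settings file
    thresholdLimit: 10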
More Information
Cloud-based Load Testing
Source code for this task
Build your Visual Studio solution
Cloud-based Load Testing Knowledge Base
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
How do I use a Test Settings file?
The Test settings file references any setup and cleanup scripts required to execute the load test. For more details
see: Using Setup and Cleanup Script in Cloud Load Test
When should I specify the number of permissible threshold violations?
Use the Number of permissible threshold violations setting if your load test is not already configured with
information about how many violations will cause a failure to be reported. For more details, see: How to: Analyze
Threshold Violations Using the Counters Panel in Load Test Analyzer.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Cloud-based Web Performance Test task
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Caution
The cloud-based load testing service is deprecated. More information about the deprecation, the service
availability, and alternative services can be found here.
Use this task to run the Quick Web Performance Test to easily verify your web application exists and is responsive.
The task generates load against an application URL using the Azure Pipelines Cloud-based Load Test Service based
in Microsoft Azure.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
Demands
The agent must have the following capability:
Azure PowerShell
YAML snippet
# Cloud-based web performance test
# Run a quick web performance test in the cloud with Azure Pipelines
- task: QuickPerfTest@1
inputs:
#connectedServiceName: # Optional
websiteUrl:
testName:
#vuLoad: '25' # Options: 25, 50, 100, 250
#runDuration: '60' # Options: 60, 120, 180, 240, 300
#geoLocation: 'Default' # Optional. Options: default, australia East, australia Southeast, brazil South, central India, central US, east Asia, east US 2, east US, japan East, japan West, north Central US, north Europe, south Central US, south India, southeast Asia, west Europe, west US
#machineType: '0' # Options: 0, 2
#resourceGroupName: 'default' # Optional
#numOfSelfProvisionedAgents: # Optional
#avgResponseTimeThreshold: '0' # Optional
Arguments
ARGUMENT | DESCRIPTION
Azure Pipelines connection | The name of a Generic service connection that references the Azure DevOps organization you will be running the load test from and publishing the results to. Required for builds and releases on TFS, and must specify a connection to the Azure DevOps organization where the load test will run. Optional for builds and releases on Azure Pipelines; in this case, if not provided, the current Azure Pipelines connection is used. See Generic service connection.
Test Name | Required. A name for this load test, used to identify it for reporting and for comparison with other test runs.
Run Duration (sec) | Required. The duration of this test in seconds. Select a value from the drop-down list.
Load Location | The location from which the load will be generated. Select a global Azure location, or Default to generate the load from the location associated with your Azure DevOps organization.
Run load test using | Select Automatically provisioned agents if you want the cloud-based load testing service to automatically provision agents for running the load tests; the application URL must be accessible from the Internet. Select Self-provisioned agents if you want to test apps behind the firewall; you must provision agents and register them against your Azure DevOps organization when using this option. See Testing private/intranet applications using Cloud-based load testing.
Fail test if Avg. Response Time (ms) exceeds | Specify a threshold for the average response time in milliseconds. If the observed response time during the load test exceeds this threshold, the task will fail.
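As a hedged sketch (the service is deprecated, and the URL and test name below are placeholders), a minimal configuration might look like:

steps:
- task: QuickPerfTest@1
  inputs:
    websiteUrl: 'https://www.example.com'   # placeholder URL of the app to probe
    testName: 'HomePageQuickTest'           # hypothetical test run name
    vuLoad: '25'
    runDuration: '60'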
More Information
Cloud-based Load Testing
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Help and support
See our troubleshooting page
Get advice on Stack Overflow, and get support via our Support page
Container Structure Tests
The Container Structure Tests provide a powerful framework to validate the structure of a container image. These
tests can be used to check the output of commands in an image, as well as verify metadata and contents of the
filesystem. Tests can be run either through a standalone binary, or through a Docker image.
Tests within this framework are specified through a YAML or JSON config file. Multiple config files may be specified
in a single test run. The config file will be loaded in by the test runner, which will execute the tests in order. Within
this config file, four types of tests can be written (a sample config is sketched after the list below):
Command Tests (testing output/error of a specific command issued)
File Existence Tests (making sure a file is, or isn't, present in the file system of the image)
File Content Tests (making sure files in the file system of the image contain, or do not contain, specific contents)
Metadata Test, singular (making sure certain container metadata is correct)
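A minimal sketch of such a config file, assuming the schema used by the open-source container-structure-test tool (schemaVersion 2.0.0); the test names, paths, and expected values are placeholders.

schemaVersion: 2.0.0
commandTests:
  - name: 'python version'              # check the output of a command run in the image
    command: 'python'
    args: ['--version']
    expectedOutput: ['Python 3.*']
fileExistenceTests:
  - name: 'app entry point exists'      # verify a file is present in the image file system
    path: '/app/main.py'
    shouldExist: true
metadataTest:
  exposedPorts: ['8080']                # verify container metadata such as exposed ports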
NOTE
This is an early preview feature. More features will be rolled out in upcoming sprints.
Arguments
ARGUMENT | DESCRIPTION
tag (Tag) | The tag is used in pulling the image from the Docker registry service connection. Default value: $(Build.BuildId)
failTaskOnFailedTests (Fail task if there are test failures) | (Optional) Fail the task if there are any test failures. Check this option to fail the task if test failures are detected.
Build, Test and Publish Test
The container structure test task can be added in the classic pipeline as well as in unified pipeline (multi-stage) &
YAML based pipelines.
YAML
Classic
In the new YAML-based unified pipeline, you can search for the task in the task picker window.
Once the task is added, set the config file path, Docker registry service connection, container repository, and tag, if required. The resulting task input in the YAML-based pipeline is shown below.
YAML file
Sample YAML
steps:
- task: ContainerStructureTest@0
displayName: 'Container Structure Test '
inputs:
dockerRegistryServiceConnection: 'Container_dockerHub'
repository: adma/hellodocker
tag: v1
configFile: /home/user/cstfiles/fileexisttest.yaml
Publish Code Coverage Results task
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task in a build pipeline to publish code coverage results produced when running tests to Azure Pipelines or TFS in
order to obtain coverage reporting. The task supports popular coverage result formats such as Cobertura and JaCoCo.
This task can only be used in Build pipelines and is not supported in Release pipelines.
Tasks such as Visual Studio Test, .NET Core, Ant, Maven, Gulp, Grunt also provide the option to publish code coverage data
to the pipeline. If you are using these tasks, you do not need a separate Publish Code Coverage Results task in the pipeline.
Demands
To generate the HTML code coverage report you need dotnet 2.0.0 or later on the agent. The dotnet folder needs to be in
the environment path. If there are multiple folders containing dotnet, the one with version 2.0.0 must be before any others
in the path list.
YAML snippet
# Publish code coverage results
# Publish Cobertura or JaCoCo code coverage results from a build
- task: PublishCodeCoverageResults@1
inputs:
#codeCoverageTool: 'JaCoCo' # Options: cobertura, jaCoCo
summaryFileLocation:
#pathToSources: # Optional
#reportDirectory: # Optional
#additionalCodeCoverageFiles: # Optional
#failIfCoverageEmpty: false # Optional
Arguments
ARGUMENT | DESCRIPTION
failIfCoverageEmpty (Fail if code coverage results are missing) | (Optional) Fail the task if code coverage did not produce any results to publish.
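A minimal sketch of publishing Cobertura results; the summary file pattern and report directory are placeholders that depend on how your test step emits coverage data.

steps:
- task: PublishCodeCoverageResults@1
  displayName: 'Publish code coverage'
  inputs:
    codeCoverageTool: 'Cobertura'
    # Hypothetical location of the Cobertura summary produced by an earlier test step
    summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/coverage.cobertura.xml'
    reportDirectory: '$(System.DefaultWorkingDirectory)/coveragereport'   # optional, hypothetical HTML report folder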
Docker
For apps using Docker, the build and tests may run inside the container, generating code coverage results within the container.
In order to publish the results to the pipeline, the resulting artifacts must be made available to the Publish Code
Coverage Results task. For reference, you can see a similar example for publishing test results in the Build, test, and
publish results with a Docker file section for Docker.
View results
In order to view the code coverage results in the pipeline, see Review code coverage results
Related tasks
Publish Test Results
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Is code coverage data merged when multiple files are provided as input to the task or multiple tasks are used in the
pipeline?
At present, the code coverage reporting functionality provided by this task is limited and it does not merge coverage data.
If you provide multiple files as input to the task, only the first match is considered. If you use multiple publish code
coverage tasks in the pipeline, the summary and report is shown for the last task. Any previously uploaded data is ignored.
Publish Test Results task
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called
definitions, runs are called builds, service connections are called service endpoints, stages are called environments, and
jobs are called phases.
This task publishes test results to Azure Pipelines or TFS when tests are executed to provide a comprehensive
test reporting and analytics experience. You can use the test runner of your choice that supports the results
format you require. Supported results formats include CTest, JUnit (including PHPUnit), NUnit 2, NUnit 3,
Visual Studio Test (TRX), and xUnit 2.
Other built-in tasks such as the Visual Studio Test task and the .NET Core CLI task automatically publish test results
to the pipeline, while tasks such as Ant, Maven, Gulp, Grunt, .NET Core and Xcode provide publishing results
as an option within the task, or build libraries such as Cobertura and JaCoCo. If you are using any of these
tasks, you do not need a separate Publish Test Results task in the pipeline.
The published test results are displayed in the Tests tab in the pipeline summary and help you to measure
pipeline quality, review traceability, troubleshoot failures, and drive failure ownership.
The following example shows the task configured to publish test results.
You can also use this task in a build pipeline to publish code coverage results produced when running
tests to Azure Pipelines or TFS in order to obtain coverage reporting.
Check prerequisites
If you're using a Windows self-hosted agent, be sure that your machine has this prerequisite installed:
.NET Framework 4.6.2 or a later version
Demands
[none]
YAML snippet
# Publish Test Results
# Publish test results to Azure Pipelines
- task: PublishTestResults@2
inputs:
#testResultsFormat: 'JUnit' # Options: JUnit, NUnit, VSTest, xUnit, cTest
#testResultsFiles: '**/TEST-*.xml'
#searchFolder: '$(System.DefaultWorkingDirectory)' # Optional
#mergeTestResults: false # Optional
#failTaskOnFailedTests: false # Optional
#testRunTitle: # Optional
#buildPlatform: # Optional
#buildConfiguration: # Optional
#publishRunAttachments: true # Optional
The default option uses JUnit format to publish test results. When using VSTest as the testRunner , the
testResultsFiles option should be changed to **/TEST-*.trx .
testResultsFormat is an alias for the testRunner input name. The results files can be produced by multiple
runners, not just a specific runner. For example, jUnit results format is supported by many runners and not
just jUnit.
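For instance, a hedged sketch of publishing TRX files produced by VSTest (adjust the search pattern to wherever your .trx files actually land):

steps:
- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'VSTest'
    testResultsFiles: '**/TEST-*.trx'
    mergeTestResults: true
    testRunTitle: 'VSTest results'   # hypothetical run title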
To publish test results for Python using YAML, see Python in the Ecosystems section of these topics, which
also includes examples for other languages.
Arguments
NOTE
Options specified below are applicable to the latest version of the task.
ARGUMENT | DESCRIPTION
testRunner (Test result format) | (Required) Specify the format of the results files you want to publish. The following formats are supported: CTest, JUnit, NUnit 2, NUnit 3, Visual Studio Test (TRX) and xUnit 2. Default value: JUnit. Argument alias: testResultsFormat
testResultsFiles (Test results files) | (Required) Use this to specify one or more test results files. You can use a single-folder wildcard ( * ) and recursive wildcards ( ** ). For example, **/TEST-*.xml searches for all the XML files whose names start with TEST- in all subdirectories. If using VSTest as the test result format, the file type should be changed to .trx, e.g. **/TEST-*.trx. Multiple paths can be specified, separated by a semicolon. Additionally accepts minimatch patterns; for example, !TEST[1-3].xml excludes files named TEST1.xml, TEST2.xml, or TEST3.xml. Default value: **/TEST-*.xml
mergeTestResults (Merge test results) | When this option is selected, test results from all the files will be reported against a single test run. If this option is not selected, a separate test run will be created for each test result file. Note: Use merge test results to combine files from the same test framework to ensure results mapping and duration are calculated correctly. Default value: false
failTaskOnFailedTests (Fail if there are test failures) | (Optional) When selected, the task will fail if any of the tests in the results file is marked as failed. The default is false, which will simply publish the results from the results file. Default value: false
testRunTitle (Test run title) | (Optional) Use this option to provide a name for the test run against which the results will be reported. Variable names declared in the build or release pipeline can be used.
platform (Build Platform) | (Optional) Build platform against which the test run should be reported. For example, x64 or x86. If you have defined a variable for the platform in your build task, use that here. Argument alias: buildPlatform
publishRunAttachments (Upload test results files) | (Optional) When selected, the task will upload all the test result files as attachments to the test run. Default value: true
FIELD | VISUAL STUDIO TEST (TRX)
Duration 1 | /TestRun/Results/UnitTestResult.Attributes["duration"].Value or /TestRun/Results/WebTestResult.Attributes["duration"].Value or /TestRun/Results/TestResultAggregation.Attributes["duration"].Value
Owner | /TestRun/TestDefinitions/UnitTest/Owners/Owner.Attributes["name"].Value
Outcome | /TestRun/Results/UnitTestResult.Attributes["outcome"].Value or /TestRun/Results/WebTestResult.Attributes["outcome"].Value or /TestRun/Results/TestResultAggregation.Attributes["outcome"].Value
Priority | /TestRun/TestDefinitions/UnitTest.Attributes["priority"].Value
1 Duration is used only when Date started and Date completed are not available.
2 The fully qualified name format is Namespace.Testclass.Methodname with a character limit of 512. If the
test is data-driven and has parameters, the character limit will include the parameters.
Docker
For Docker based apps there are many ways to build your application and run tests:
Build and test in a build pipeline: build and tests execute in the pipeline and test results are published using
the Publish Test Results task.
Build and test with a multi-stage Dockerfile: build and tests execute inside the container using a multi-stage Dockerfile; as such, test results are not published back to the pipeline.
Build, test, and publish results with a Dockerfile: build and tests execute inside the container and results are
published back to the pipeline. See the example below.
Build, test, and publish results with a Docker file
In this approach, you build your code and run tests inside the container using a Docker file. The test results
are then copied to the host to be published to the pipeline. To publish the test results to Azure Pipelines, you
can use the Publish Test Results task. The final image will be published to Docker Hub or Azure Container Registry.
Get the code
1. Import into Azure DevOps or fork into GitHub the following repository. This sample code includes a
Dockerfile file at the root of the repository along with .vsts-ci.docker.yml file.
https://github.com/MicrosoftDocs/pipelines-dotnet-core
2. Create a Dockerfile.build file at the root of the directory with the following:
This file contains the instructions to build code and run tests. The tests are then copied to a file
testresults.trx inside the container.
3. To make the final image as small as possible, containing only the runtime and deployment artifacts,
replace the contents of the existing Dockerfile with the following:
- task: PublishTestResults@2
inputs:
testRunner: VSTest
testResultsFiles: '**/*.trx'
failTaskOnFailedTests: true
- script: |
docker build -f Dockerfile -t $(dockerId)/dotnetcore-sample:$BUILD_BUILDID .
docker login -u $(dockerId) -p $pswd
docker push $(dockerId)/dotnetcore-sample:$BUILD_BUILDID
env:
pswd: $(dockerPassword)
Alternatively, if you configure an Azure Container Registry and want to push the image to that registry,
replace the contents of the .vsts-ci.yml file with the following:
# Build Docker image for this app to be published to Azure Container Registry
pool:
vmImage: 'ubuntu-16.04'
variables:
buildConfiguration: 'Release'
steps:
- script: |
docker build -f Dockerfile.build -t $(dockerId)/dotnetcore-build:$BUILD_BUILDID .
docker run --name dotnetcoreapp --rm -d $(dockerId)/dotnetcore-build:$BUILD_BUILDID
docker cp dotnetcoreapp:app/dotnetcore-tests/TestResults $(System.DefaultWorkingDirectory)
docker cp dotnetcoreapp:app/dotnetcore-sample/out $(System.DefaultWorkingDirectory)
docker stop dotnetcoreapp
- task: PublishTestResults@2
inputs:
testRunner: VSTest
testResultsFiles: '**/*.trx'
failTaskOnFailedTests: true
- script: |
docker build -f Dockerfile -t $(dockerId).azurecr.io/dotnetcore-sample:$BUILD_BUILDID .
docker login -u $(dockerId) -p $pswd $(dockerId).azurecr.io
docker push $(dockerId).azurecr.io/dotnetcore-sample:$BUILD_BUILDID
env:
pswd: $(dockerPassword)
Attachments support
The Publish Test Results task provides support for attachments for both test run and test results for the
following formats. For public projects, we support 2GB of total attachments.
Visual Studio Test (TRX)
SCOPE | TYPE | PATH
NUnit 3
SCOPE | PATH
NOTE
The option to upload the test results file as an attachment is a default option in the task, applicable to all formats.
Related tasks
Visual Studio Test
Publish Code Coverage Results
FAQ
What is the maximum permissible limit of FQN?
The maximum FQN limit is 512 characters.
Does the FQN Character limit also include properties and their values in case of data driven tests?
Yes, the FQN character limit includes properties and their values.
Will the FQN be different for sub-results?
Currently, sub-results from data-driven tests will not show up in the corresponding data.
Example: a test case "Add product to cart" has Data 1: Product = Apparel and Data 2: Product = Footwear.
All published test sub-results will only have the test case name and the data of the first row.
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Run Functional Tests task
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs are
called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
This task is deprecated in Azure Pipelines and TFS 2018 and later. Use version 2.x or higher of the Visual Studio Test task
together with jobs to run unit and functional tests on the universal agent.
For more details, see Testing with unified agents and jobs.
YAML snippet
# Run functional tests
# Deprecated: This task and its companion task (Visual Studio Test Agent Deployment) are deprecated. Use the 'Visual Studio Test' task instead. The VSTest task can run unit as well as functional tests. Run tests on one or more agents using the multi-agent job setting. Use the 'Visual Studio Test Platform' task to run tests without needing Visual Studio on the agent. The VSTest task also brings new capabilities such as automatically rerunning failed tests.
- task: RunVisualStudioTestsusingTestAgent@1
inputs:
testMachineGroup:
dropLocation:
#testSelection: 'testAssembly' # Options: testAssembly, testPlan
#testPlan: # Required when testSelection == TestPlan
#testSuite: # Required when testSelection == TestPlan
#testConfiguration: # Required when testSelection == TestPlan
#sourcefilters: '**\*test*.dll' # Required when testSelection == TestAssembly
#testFilterCriteria: # Optional
#runSettingsFile: # Optional
#overrideRunParams: # Optional
#codeCoverageEnabled: false # Optional
#customSlicingEnabled: false # Optional
#testRunTitle: # Optional
#platform: # Optional
#configuration: # Optional
#testConfigurations: # Optional
#autMachineGroup: # Optional
Arguments
ARGUMENT | DESCRIPTION
Test Drop Location | Required. The location on the test machine(s) where the test binaries have been copied by a Windows Machine File Copy or Azure File Copy task. System stage variables from the test agent machines can be used to specify the drop location. Examples: c:\tests and %systemdrive%\Tests
Test Selection | Required. Whether the tests are to be selected from test assemblies or from a test plan.
Test Assembly | Required when Test Selection is set to Test Assembly. The test assemblies from which the tests should be executed. Paths are relative to the sources directory. Separate multiple paths with a semicolon. Default is **\*test*.dll. For JavaScript tests, enter the path and name of the .js files containing the tests. Wildcards can be used. Example: **\commontests\*test*.dll; **\frontendtests\*test*.dll
Test Filter criteria | Optional when Test Selection is set to Test Assembly. A filter to specify the tests to execute within the test assembly files. Works the same way as the /TestCaseFilter option of vstest.console.exe. Example: Priority=1 | Name=MyTestMethod
Test Plan | Required if Test Suite is not specified when Test Selection is set to Test Plan. Select a test plan already configured for this organization.
Test Suite | Required if Test Plan is not specified when Test Selection is set to Test Plan. Select a test suite from the selected test plan.
Test Configuration | Optional when Test Selection is set to Test Plan. Select a test configuration from the selected test plan.
Override Test Run Parameters | Optional. A string containing parameter overrides for parameters defined in the TestRunParameters section of the .runsettings file. Example: Platform=$(platform);Port=8080
Code Coverage Enabled | When set, the task will collect code coverage information during the run and upload the results to the server. Supported for .NET and C++ projects only.
Distribute tests by number of machines | When checked, distributes tests based on the number of machines, instead of distributing tests at the assembly level, irrespective of the container assemblies passed to the task.
Test Run Title | Optional. A name for this test run, used to identify it for reporting and in comparison with other test runs.
Platform | Optional. The build platform against which the test run should be reported. Used only for reporting. If you are using the Build - Visual Studio template, this is automatically defined, such as x64 or x86. If you have defined a variable for platform in your build task, use that here.
Configuration | Optional. The build configuration against which the test run should be reported. Used only for reporting. If you are using the Build - Visual Studio template, this is automatically defined, such as Debug or Release. If you have defined a variable for configuration in your build task, use that here.
Test Configurations | Optional. A string that contains the filter(s) to report the configuration on which the test case was run. Used only for reporting with Microsoft Test Manager. Syntax: {expression for test method name(s)} : {configuration ID from Microsoft Test Manager}. Example: FullyQualifiedName~Chrome:12 to report all test methods that have Chrome in the Fully Qualified Name and map them to configuration ID 12 defined in Microsoft Test Manager. Use DefaultTestConfiguration:{Id} as a catch-all.
Application Under Test Machines | A list of the machines on which the Application Under Test (AUT) is deployed, or on which a specific process such as W3WP.exe is running. Used to collect code coverage data from these machines. Use this in conjunction with the Code Coverage Enabled setting. The list can be a comma-delimited list of machine names or an output variable from an earlier task.
Scenarios
Typical scenarios include:
Tests that require additional installations on the test machines, such as different browsers for Selenium tests
Coded UI tests
Tests that require a specific operating system configuration
To execute a large number of unit tests more quickly by using multiple test machines
Use this task to:
Run automated tests against on-premises standard environments
Run automated tests against existing Azure environments
Run automated tests against newly provisioned Azure environments
You can run unit tests, integration tests, functional tests - in fact any test that you can execute using the Visual Studio test
runner (vstest).
Using multiple machines in a Machine Group enables the task to run parallel distributed execution of tests. Parallelism is at
the test assembly level, not at individual test level.
These scenarios are supported for:
TFS on-premises and Azure Pipelines
Build agents
Hosted and on-premises agents.
The build agent must be able to communicate with all test machines. If the test machines are on-premises
behind a firewall, the hosted build agents cannot be used.
The build agent must have access to the Internet to download test agents. If this is not the case, the test agent
must be manually downloaded and deployed to a network location that is accessible by the build agent, and a
Visual Studio Test Agent Deployment task used with an appropriate path for the Test Agent Location
parameter. Automatic checking for new test agent versions is not supported in this topology.
CI/CD workflow
The Build-Deploy-Test (BDT) tasks are supported in both build and release pipelines.
Machine group configuration
Only Windows machines are supported when using BDT tasks inside a Machine Group. Using Linux, macOS, or
other platforms inside a Machine Group with BDT tasks is not supported.
Installing any version or release of Visual Studio on any of the test machines is not supported.
Installing an older version of the test agent on any of the test machines is not supported.
Test machine topologies
Azure-based test machines are fully supported, both existing test machines and newly provisioned machines.
Domain-joined test machines are supported.
Workgroup-joined test machines must have HTTPS authentication enabled and configured during creation of
the Machine Group.
Test agent machines must have network access to the Team Foundation Server instance. Test machines isolated
on the network are not supported.
Usage Error Conditions
Running tests across different Machine Groups, and running builds (with any BDT tasks) in parallel against these
Machine Groups is not supported.
Cancelling an in-progress build or release with BDT tasks is not supported. If you do so, subsequent builds may
not behave as expected.
Cancelling an in-progress test run queued through BDT tasks is not supported.
Configuring a test agent and running tests under a non-administrative account or under a service account is not
supported.
More information
Using the Visual Studio Agent Deployment task on machines not connected to the internet
Run continuous tests with your builds
Testing in Continuous Integration and Continuous Deployment Workflows
Related tasks
Deploy Azure Resource Group
Azure File Copy
Windows Machine File Copy
PowerShell on Target Machines
Visual Studio Test Agent Deployment
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
How do I create an Azure Resource Group for testing?
See Using the Azure Portal to manage your Azure resources and Azure Resource Manager - Creating a Resource Group
and a VNET.
Where can I get more information about the Run Settings file?
See Configure unit tests by using a .runsettings file
Where can I get more information about overriding settings in the Run Settings file?
See Supplying Run Time Parameters to Tests
How can I customize code coverage analysis and manage inclusions and exclusions
See Customize Code Coverage Analysis
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Visual Studio Test task
Azure Pipelines
Use this task to run unit and functional tests (Selenium, Appium, Coded UI test, and more) using the Visual
Studio Test Runner. In addition to MSTest-based tests, test frameworks that have a Visual Studio test adapter,
such as xUnit, NUnit, and Chutzpah, can also be executed.
Tests that target the .NET core framework can be executed by specifying the appropriate target framework
value in the .runsettings file.
Tests can be distributed on multiple agents using version 2 of this task. For more information, see Run tests
in parallel using the Visual Studio Test task.
Check prerequisites
If you're using a Windows self-hosted agent, be sure that your machine has this prerequisite installed:
.NET Framework 4.6.2 or a later version
Demands
The agent must have the following capability:
vstest
The vstest demand can be satisfied in two ways:
1. Visual Studio is installed on the agent machine.
2. By using the Visual Studio Test Platform Installer task in the pipeline definition, as sketched below.
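A minimal sketch of that second approach; the test file patterns below are the task defaults, and the display names are placeholders.

steps:
- task: VisualStudioTestPlatformInstaller@1
  displayName: 'Install Visual Studio Test Platform'
- task: VSTest@2
  displayName: 'Run tests with the installed test platform'
  inputs:
    testSelector: 'testAssemblies'
    testAssemblyVer2: |
      **\*test*.dll
      !**\*TestAdapter.dll
      !**\obj\**
    vsTestVersion: 'toolsInstaller'   # use the test platform acquired by the installer task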
YAML snippet
# Visual Studio Test
# Run unit and functional tests (Selenium, Appium, Coded UI test, etc.) using the Visual Studio Test (VsTest) runner. Test frameworks that have a Visual Studio test adapter such as MsTest, xUnit, NUnit, Chutzpah (for JavaScript tests using QUnit, Mocha and Jasmine), etc. can be run. Tests can be distributed on multiple agents using this task (version 2).
- task: VSTest@2
inputs:
#testSelector: 'testAssemblies' # Options: testAssemblies, testPlan, testRun
#testAssemblyVer2: | # Required when testSelector == TestAssemblies
# **\*test*.dll
# !**\*TestAdapter.dll
# !**\obj\**
#testPlan: # Required when testSelector == TestPlan
#testSuite: # Required when testSelector == TestPlan
#testConfiguration: # Required when testSelector == TestPlan
#tcmTestRun: '$(test.RunId)' # Optional
#searchFolder: '$(System.DefaultWorkingDirectory)'
#testFiltercriteria: # Optional
#runOnlyImpactedTests: False # Optional
#runAllTestsAfterXBuilds: '50' # Optional
#uiTests: false # Optional
#vstestLocationMethod: 'version' # Optional. Options: version, location
#vsTestVersion: 'latest' # Optional. Options: latest, 16.0, 15.0, 14.0, toolsInstaller
#vstestLocation: # Optional
#runSettingsFile: # Optional
#overrideTestrunParameters: # Optional
#pathtoCustomTestAdapters: # Optional
#runInParallel: False # Optional
#runTestsInIsolation: False # Optional
#codeCoverageEnabled: False # Optional
#otherConsoleOptions: # Optional
#distributionBatchType: 'basedOnTestCases' # Optional. Options: basedOnTestCases, basedOnExecutionTime, basedOnAssembly
#batchingBasedOnAgentsOption: 'autoBatchSize' # Optional. Options: autoBatchSize, customBatchSize
#customBatchSizeValue: '10' # Required when distributionBatchType == BasedOnTestCases && BatchingBasedOnAgentsOption == CustomBatchSize
#batchingBasedOnExecutionTimeOption: 'autoBatchSize' # Optional. Options: autoBatchSize, customTimeBatchSize
#customRunTimePerBatchValue: '60' # Required when distributionBatchType == BasedOnExecutionTime && BatchingBasedOnExecutionTimeOption == CustomTimeBatchSize
#dontDistribute: False # Optional
#testRunTitle: # Optional
#platform: # Optional
#configuration: # Optional
#publishRunAttachments: true # Optional
#failOnMinTestsNotRun: false # Optional
#minimumExpectedTests: '1' # Optional
#diagnosticsEnabled: false # Optional
#collectDumpOn: 'onAbortOnly' # Optional. Options: onAbortOnly, always, never
#rerunFailedTests: False # Optional
#rerunType: 'basedOnTestFailurePercentage' # Optional. Options: basedOnTestFailurePercentage, basedOnTestFailureCount
#rerunFailedThreshold: '30' # Optional
#rerunFailedTestCasesMaxLimit: '5' # Optional
#rerunMaxAttempts: '3' # Optional
Arguments
ARGUMENT | DESCRIPTION
testAssemblyVer2 (Test files) | (Required) Run tests from the specified files. Ordered tests and webtests can be run by specifying the .orderedtest and .webtest files respectively. To run .webtest, Visual Studio 2017 Update 4 or higher is needed. The file paths are relative to the search folder. Supports multiple lines of minimatch patterns. More Information. Default value: **\\*test*.dll\n!**\\*TestAdapter.dll\n!**\\obj\\**
rerunFailedTests (Rerun failed tests) | (Optional) Selecting this option will rerun any failed tests until they pass or the maximum # of attempts is reached. Default value: False
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
How can I run tests that use TestCase as a data source?
To run automated tests that use TestCase as a data source, the following is needed:
1. You must have Visual Studio 2017.6 or higher on the agent machine. Visual Studio Test Platform
Installer task cannot be used to run tests that use TestCase as a data source.
2. Create a PAT that is authorized for the scope “Work Items (full)”.
3. Add a secure Build or Release variable called Test.TestCaseAccessToken with the value set to the PAT
created in the previous step.
I am running into issues when running data-driven xUnit and NUnit tests with some of the task options.
Are there known limitations?
Data-driven tests that use xUnit and NUnit test frameworks have some known limitations and cannot be
used with the following task options:
1. Rerun failed tests.
2. Distributing tests on multiple agents and batching options.
3. Test Impact Analysis
The above limitations are because of how the adapters for these test frameworks discover and report data-driven tests.
Visual Studio Test Agent Deployment task
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs are
called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
This task is deprecated in Azure Pipelines and TFS 2018 and later. Use version 2.x or higher of the Visual Studio Test task
together with jobs to run unit and functional tests on the universal agent. For more details, see Testing with unified agents
and jobs.
YAML snippet
# Visual Studio test agent deployment
# Deprecated: Instead, use the 'Visual Studio Test' task to run unit and functional tests
- task: DeployVisualStudioTestAgent@2
inputs:
testMachines:
adminUserName:
adminPassword:
#winRmProtocol: 'Http' # Options: http, https
#testCertificate: true # Optional
machineUserName:
machinePassword:
#runAsProcess: false # Optional
#isDataCollectionOnly: false # Optional
#testPlatform: '14.0' # Optional. Options: 15.0, 14.0
#agentLocation: # Optional
#updateTestAgent: false # Optional
Arguments
ARGUMENT | DESCRIPTION
Password | The password for the administrative account specified above. This parameter is required when used with a list of machines. It is optional when specifying a machine group and, if specified, overrides the credential settings defined for the machine group. Consider using a secret variable global to the build or release pipeline to hide the password. Example: $(passwordVariable)
Protocol | The protocol that will be used to connect to the target host, either HTTP or HTTPS.
Agent Configuration - Username | Required. The username that the test agent will use. Must be an account on the test machines that has administrative permissions. Formats such as username, domain\username, machine-name\username, and .\username are supported. UPN formats such as [email protected] and built-in system accounts such as NT Authority\System are not supported.
Agent Configuration - Password | Required. The password for the Username for the test agent. To protect the password, create a variable and use the "padlock" icon to hide it.
Agent Configuration - Run UI tests | When set, the test agent will run as an interactive process. This is required when interacting with UI elements or starting applications during the tests. For example, Coded UI or Selenium tests that are running on full fidelity browsers will require this option to be set.
Agent Configuration - Enable data collection only | When set, the test agent will return previously collected data and not re-run the tests. At present this is only available for Code Coverage. Also see the FAQ section below.
Advanced - Test agent version | The version of the test agent to use.
Advanced - Test agent location | Optional. The path to the test agent (vstf_testagent.exe) if different from the default path. If you use a copy of the test agent located on your local computer or network, specify the path to that instance. The location must be accessible by either the build agent (using the identity it is running under) or the test agent (using the identity configured above). For Azure test machines, the web location can be used.
Advanced - Update test agent | If set, and the test agent is already installed on the test machines, the task will check if a new version of the test agent is available.
Supported scenarios
Use this task for:
Running automated tests against on-premises standard environments
Running automated tests against existing Azure environments
Running automated tests against newly provisioned Azure environments
The supported options for these scenarios are:
TFS
On-premises and Azure Pipelines
Build and release agents
Hosted and on-premises agents are supported.
The agent must be able to communicate with all test machines. If the test machines are on-premises behind a
firewall, an Azure Pipelines Microsoft-hosted agent cannot be used because it will not be able to communicate
with the test machines.
The agent must have Internet access to download test agents. If this is not the case, the test agent must be
manually downloaded, uploaded to a network location accessible to the agent, and the Test Agent Location
parameter used to specify the location. The user must manually check for new versions of the agent and update
the test machines.
Continuous integration/continuous deployment workflows
Build/deploy/test tasks are supported in both build and release workflows.
Machine group configuration
Only Windows-based machines are supported inside a machine group for build/deploy/test tasks. Linux,
macOS, or other platforms are not supported inside a machine group.
Installing any version of Visual Studio on any of the test machines is not supported.
Installing any older version of the test agent on any of the test machines is not supported.
Test machine topologies
Azure-based test machines are fully supported, both existing test machines and newly provisioned test
machines.
Machines with the test agent installed must have network access to the TFS instance in use. Network-isolated
test machines are not supported.
Domain-joined test machines are supported.
Workgroup-joined test machines must use HTTPS authentication configured during machine group creation.
Usage Error Conditions
Using the same test machines across different machine groups, and running builds (with any build/deploy/test
tasks) in parallel against those machine groups is not supported.
Cancelling an in-progress build or release that contains any build/deploy/test tasks is not supported. If you do
cancel, behavior of subsequent builds may be unpredictable.
Cancelling an ongoing test run queued through build/deploy/test tasks is not supported.
Configuring the test agent and running tests as a non-administrator, or by using a service account, is not
supported.
Running tests for Universal Windows Platform apps is not supported. Use the Visual Studio Test task to run
these tests.
Example
Testing in Continuous Integration and Continuous Deployment Workflows
More information
Using the Visual Studio Agent Deployment task on machines not connected to the internet
Set up automated testing for your builds
Source code for this task
Related tasks
Visual Studio Test
Azure File Copy
Windows Machine File Copy
PowerShell on Target Machines
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
When would I use the Enable Data Collection Only option?
An example would be in a client-server application model, where you deploy the test agent on the servers and use another
task to deploy the test agent to test machines. This enables you to collect data from both server and client machines
without triggering the execution of tests on the server machines.
How do I create an Azure Resource Group for testing?
See Using the Azure Portal to manage your Azure resources and Azure Resource Manager - Creating a Resource Group
and a VNET.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
CocoaPods task
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to run CocoaPods pod install.
CocoaPods is the dependency manager for Swift and Objective-C Cocoa projects. This task optionally runs
pod repo update and then runs pod install .
Demands
None
YAML snippet
# CocoaPods
# Install CocoaPods dependencies for Swift and Objective-C Cocoa projects
- task: CocoaPods@0
inputs:
#workingDirectory: # Optional
forceRepoUpdate:
#projectDirectory: # Optional
Arguments
ARGUMENT | DESCRIPTION
forceRepoUpdate (Force repo update) | (Required) Selecting this option will force running 'pod repo update' before install. Default value: false
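A minimal sketch of the task in a pipeline; the project directory below is a placeholder for wherever your Podfile lives.

steps:
- task: CocoaPods@0
  displayName: 'Install CocoaPods dependencies'
  inputs:
    forceRepoUpdate: false
    projectDirectory: 'MyApp'   # hypothetical folder containing the Podfile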
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
What other kinds of apps can I build?
Build and deploy your app examples
What other kinds of build tasks are available?
Build and release tasks catalog
How do we protect our codebase from build breaks?
Git: Improve code quality with branch policies with an option to require that code builds before it can be
merged to a branch. For GitHub repositories, similar policies are available in GitHub's repository settings
under Branches.
TFVC: Use gated check-in.
How do I modify other parts of my build pipeline?
Build and release tasks to run tests, scripts, and a wide range of other processes.
Specify build options such as how completed builds are named, building multiple configurations, and
creating work items on failure.
Supported source repositories to pick the source of the build and modify options such as how the agent
workspace is cleaned.
Set build triggers to modify how your CI builds run and to specify scheduled builds.
Specify build retention policies to automatically delete old builds.
I selected parallel multi-configuration, but only one build is running at a time.
If you're using Azure Pipelines, you might need more parallel jobs. See Parallel jobs in Azure Pipelines.
How do I see what has changed in my build pipeline?
View the change history of your build pipeline
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Conda Environment task
Azure Pipelines
Use this task to create and activate a Conda environment.
NOTE
This task has been deprecated. Use conda directly in the bash task or batch script task as an alternative.
This task will create a Conda environment and activate it for subsequent build tasks.
If the task finds an existing environment with the same name, the task will simply reactivate it. This is possible on
self-hosted agents. To recreate the environment and reinstall any of its packages, set the "Clean the environment"
option.
Running with the "Update to the latest Conda" option will attempt to update Conda before creating or activating
the environment. If you are running a self-hosted agent and have configured a Conda installation to work with the
task, this may result in your Conda installation being updated.
NOTE
Microsoft-hosted agents won't have Conda in their PATH by default. You will need to run this task in order to use Conda.
After running this task, PATH will contain the binary directory for the activated environment, followed by the
binary directories for the Conda installation itself. You can run scripts as subsequent build tasks that run Python,
Conda, or the command-line utilities from other packages you install. For example, you can run tests with pytest or
upload a package to Anaconda Cloud with the Anaconda client.
TIP
After running this task, the environment will be "activated," and packages you install by calling conda install will get
installed to this environment.
Demands
None
Prerequisites
A Microsoft-hosted agent, or a self-hosted agent with Anaconda or Miniconda installed.
If using a self-hosted agent, you must either add the conda executable to PATH or set the CONDA environment
variable to the root of the Conda installation.
YAML snippet
# Conda environment
# This task is deprecated. Use `conda` directly in script to work with Anaconda environments.
- task: CondaEnvironment@1
inputs:
#createCustomEnvironment: # Optional
#environmentName: # Required when createCustomEnvironment == True
#packageSpecs: 'python=3' # Optional
#updateConda: true # Optional
#installOptions: # Optional
#createOptions: # Optional
#cleanEnvironment: # Optional
Arguments
Argument | Description
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
How can I configure a self-hosted agent to use this task?
You can use this task either with a full Anaconda installation or a Miniconda installation. If using a self-hosted agent,
you must add the conda executable to PATH . Alternatively, you can set the CONDA environment variable to the root
of the Conda installation -- that is, the directory you specify as the "prefix" when installing Conda.
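Because the task is deprecated, the recommended alternative is to call conda directly from script steps. The following is a minimal sketch; it assumes conda is already on PATH (for example, a self-hosted agent with Miniconda installed), and the environment name myEnv and the package list are placeholders. Depending on your Conda version you may prefer conda activate or conda run instead of source activate.
# Create, activate, and use a Conda environment from plain script steps (sketch)
- bash: conda create --yes --quiet --name myEnv python=3.8
  displayName: 'Create Conda environment'
- bash: |
    source activate myEnv
    conda install --yes --quiet --name myEnv numpy
    python -c "import numpy; print(numpy.__version__)"
  displayName: 'Install packages and verify'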
Package: Maven Authenticate
11/2/2020 • 2 minutes to read • Edit Online
Provides credentials for Azure Artifacts feeds and external Maven repositories in the current user's settings.xml file.
YAML snippet
# Provides credentials for Azure Artifacts feeds and external Maven repositories.
- task: MavenAuthenticate@0
#inputs:
#artifactsFeeds: MyFeedInOrg1, MyFeedInOrg2 # Optional
#mavenServiceConnections: serviceConnection1, serviceConnection2 # Optional
Arguments
Argument | Description
Examples
Authenticate Maven feeds inside your organization
In this example, we authenticate two Azure Artifacts feeds within our organization.
Task definition
- task: MavenAuthenticate@0
displayName: 'Maven Authenticate'
inputs:
artifactsFeeds: MyFeedInOrg1,MyFeedInOrg2
The MavenAuthenticate task updates the settings.xml file present in the agent user's .m2 directory located at
{user.home}/.m2/settings.xml to add two entries inside the <servers> element.
settings.xml
<servers>
<server>
<id>MyFeedInOrg1</id>
<username>AzureDevOps</username>
<password>****</password>
</server>
<server>
<id>MyFeedInOrg2</id>
<username>AzureDevOps</username>
<password>****</password>
</server>
</servers>
You should set the repositories in your project's pom.xml to have the same <id> as the feed names specified in the task so that Maven can authenticate correctly.
pom.xml
Project scoped feed
<repository>
<id>MyFeedInOrg1</id>
<url>https://pkgs.dev.azure.com/OrganizationName/ProjectName/_packaging/MyProjectScopedFeed1/Maven/v1</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
Organization scoped feed
<repository>
<id>MyFeedInOrg1</id>
<url>https://pkgs.dev.azure.com/OrganizationName/_packaging/MyOrgScopedFeed1/Maven/v1</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
The Artifacts feed URL may or may not contain the project. A URL for a project-scoped feed must contain the
project, and a URL for an organization-scoped feed must not contain the project. Learn more.
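To show how the task fits into a pipeline, here is a hedged sketch that authenticates a feed and then publishes with Maven; the feed name is a placeholder, and the pom.xml is assumed to define a distributionManagement repository whose <id> matches it.
# Authenticate the feed, then publish with Maven (sketch)
- task: MavenAuthenticate@0
  displayName: 'Maven Authenticate'
  inputs:
    artifactsFeeds: MyFeedInOrg1
# pom.xml must define a distributionManagement repository whose <id> matches the feed name above
- script: mvn deploy
  displayName: 'Publish to the Azure Artifacts feed'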
Authenticate Maven feeds outside your organization.
In this example, we authenticate two external Maven repositories.
Task definition
- task: MavenAuthenticate@0
displayName: 'Maven Authenticate'
inputs:
MavenServiceConnections: central,MavenOrg
The MavenAuthenticate task updates the settings.xml file present in the agent user's .m2 directory located at
{user.home}/.m2/settings.xml to add two entries inside the <servers> element.
settings.xml
<servers>
<server>
<id>central</id>
<username>centralUsername</username>
<password>****</password>
</server>
<server>
<id>MavenOrg</id>
<username>mavenOrgUsername</username>
<password>****</password>
</server>
</servers>
You should set the repositories in your project's pom.xml to have the same <id> as the names specified in the task so that Maven can authenticate correctly.
pom.xml
<repository>
<id>central</id>
<url>https://repo1.maven.org/maven2/</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Where is the settings.xml file which contains the authenticated repositories located?
The Maven Authenticate task searches for the settings.xml in the current user's home directory. For Linux and macOS, the path is $HOME/.m2/settings.xml; for Windows, the path is %USERPROFILE%\.m2\settings.xml. If the settings.xml file doesn't exist, a new one will be created at that path.
We use the mvn -s switch to specify our own settings.xml file. How do we authenticate Azure Artifacts feeds there?
The Maven Authenticate task doesn't have access to a custom settings.xml file specified using the -s switch. To add Azure Artifacts authentication to your custom settings.xml, add a server element inside your settings.xml like this:
<server>
<id>feedName</id> <!-- Set this to the id of the <repository> element inside your pom.xml file. -->
<username>AzureDevOps</username>
<password>${env.SYSTEM_ACCESSTOKEN}</password>
</server>
The access token variable can be set in your pipelines using these instructions.
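As a sketch of that setup, the step that runs Maven with your custom settings.xml can map the build's access token into the SYSTEM_ACCESSTOKEN environment variable so that ${env.SYSTEM_ACCESSTOKEN} resolves at run time; the settings.xml path is a placeholder.
# Expose the build's access token to the Maven step (sketch)
- script: mvn -s $(Build.SourcesDirectory)/maven/settings.xml package
  displayName: 'Build with custom settings.xml'
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)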
npm task
11/2/2020 • 4 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to install and publish npm packages.
NOTE
Moving forward, the npm Authenticate task is the recommended way to use authenticated feeds within a pipeline.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are
called phases.
YAML snippet
# npm
# Install and publish npm packages, or run an npm command. Supports npmjs.com and authenticated registries like Azure Artifacts.
- task: Npm@1
inputs:
#command: 'install' # Options: install, publish, custom
#workingDir: # Optional
#verbose: # Optional
#customCommand: # Required when command == Custom
#customRegistry: 'useNpmrc' # Optional. Options: useNpmrc, useFeed
#customFeed: # Required when customRegistry == UseFeed
#customEndpoint: # Optional
#publishRegistry: 'useExternalRegistry' # Optional. Options: useExternalRegistry, useFeed
#publishFeed: # Required when publishRegistry == UseFeed
#publishPackageMetadata: true # Optional
#publishEndpoint: # Required when publishRegistry == UseExternalRegistry
Arguments
Argument | Description
customRegistries (Registries to use) | You can either commit a .npmrc file to your source code repository and set its path or select a registry from Azure Artifacts.
useNpmrc: Select this option to use feeds specified in a .npmrc file you've checked into source control. If no .npmrc file is present, the task will default to using packages directly from npmjs. Credentials for registries outside this organization/collection that you've provided as an npm service connection can be injected into your .npmrc as the build runs.
useFeed: Select this option to use one Azure Artifacts feed in the same organization/collection as the build.
Examples
Build: gulp
Build your Node.js app with gulp
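In addition, here is a hedged sketch of installing from a single Azure Artifacts feed using the useFeed option described above; the feed name is a placeholder.
# Install packages from an Azure Artifacts feed in this organization (sketch)
- task: Npm@1
  displayName: 'npm install from feed'
  inputs:
    command: 'install'
    customRegistry: 'useFeed'
    customFeed: 'MyProject/MyFeed'   # hypothetical project/feed name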
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Where can I learn npm commands and arguments?
npm docs
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
Package: npm Authenticate task (for task runners)
6/2/2020 • 4 minutes to read • Edit Online
Azure Pipelines
Use this task to provide npm credentials to an .npmrc file in your repository for the scope of the build. This
enables npm, as well as npm task runners like gulp and Grunt, to authenticate with private registries.
YAML snippet
# npm authenticate
# Don't use this task if you're also using the npm task. Provides npm credentials to an .npmrc file in your repository for the scope of the build. This enables npm and npm task runners like gulp and Grunt to authenticate with private registries.
- task: npmAuthenticate@0
inputs:
#workingFile:
#customEndpoint: # Optional
Arguments
Argument | Description
workingFile (.npmrc file to authenticate) | Path to the .npmrc file that specifies the registries you want to work with. Select the file, not the folder. For example: /packages/mypackage.npmrc
Examples
Restore npm packages for your project from a registry within your organization
If the only authenticated registries you use are Azure Artifacts registries in your organization, you only need to
specify the path to an .npmrc file to the npmAuthenticate task.
.npmrc
registry=https://pkgs.dev.azure.com/{organization}/_packaging/{feed}/npm/registry/
always-auth=true
npm
- task: npmAuthenticate@0
inputs:
workingFile: .npmrc
- script: npm ci
# ...
- script: npm publish
Restore and publish npm packages outside your organization
If you also use registries outside your organization or third-party registries, list them in the .npmrc file and provide npm service connections for them through the customEndpoint input, as shown below.
.npmrc
registry=https://pkgs.dev.azure.com/{organization}/{project}/_packaging/{feed}/npm/registry/
@{scope}:registry=https://pkgs.dev.azure.com/{otherorganization}/_packaging/{feed}/npm/registry/
@{otherscope}:registry=https://{thirdPartyRepository}/npm/registry/
always-auth=true
The registry URL pointing to an Azure Artifacts feed may or may not contain the project. A URL for a project-scoped feed must contain the project, and the URL for an organization-scoped feed must not contain the project. Learn more.
npm
- task: npmAuthenticate@0
inputs:
workingFile: .npmrc
customEndpoint: OtherOrganizationNpmConnection, ThirdPartyRepositoryNpmConnection
- script: npm ci
# ...
- script: npm publish --registry https://pkgs.dev.azure.com/{otherorganization}/_packaging/{feed}/npm/registry/
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
How does this task work?
This task searches the specified .npmrc file for registry entries, then appends authentication details for the
discovered registries to the end of the file. For all registries in the current organization/collection, the build's
credentials are used. For registries in a different organization or hosted by a third-party, the registry URIs will be
compared to the URIs of the npm service connections specified by the customEndpoint input, and the
corresponding credentials will be used. The .npmrc file will be reverted to its original state at the end of the
pipeline execution.
When in my pipeline should I run this task?
This task must run before you use npm, or an npm task runner, to install or push packages to an authenticated
npm repository such as Azure Artifacts. There are no other ordering requirements.
I have multiple npm projects. Do I need to run this task for each .npmrc file?
This task will only add authentication details to one .npmrc file at a time. If you need authentication for multiple
.npmrc files, you can run the task multiple times, once for each .npmrc file. Alternately, consider creating an
.npmrc file that specifies all registries used by your projects, running npmAuthenticate on this .npmrc file, then
setting an environment variable to designate this .npmrc file as the npm per-user configuration file.
- task: npmAuthenticate@0
inputs:
workingFile: $(agent.tempdirectory)/.npmrc
- script: echo ##vso[task.setvariable variable=NPM_CONFIG_USERCONFIG]$(agent.tempdirectory)/.npmrc
- script: npm ci
workingDirectory: project1
- script: npm ci
workingDirectory: project2
My agent is behind a web proxy. Will npmAuthenticate set up npm/gulp/Grunt to use my proxy?
The answer is no. While this task itself will work behind a web proxy your agent has been configured to use, it
does not configure npm or npm task runners to use the proxy.
To do so, you can either:
Set the environment variables http_proxy / https_proxy and optionally no_proxy to your proxy settings.
See npm config for details. Note that these are commonly used variables which other non-npm tools (e.g.
curl) may also use.
Add the proxy settings to the npm configuration, either manually, by using npm config set, or by setting
environment variables prefixed with NPM_CONFIG_ .
Caution:
npm task runners may not be compatible with all methods of proxy configuration supported by npm.
Specify the proxy with a command line flag when calling npm
If your proxy requires authentication, you may need to add an additional build step to construct an authenticated
proxy uri.
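As a sketch of the npm configuration approach, the proxy can be set in the same script step that runs npm; the proxy URL is a placeholder.
# Configure npm's proxy before installing (sketch)
- script: |
    npm config set proxy http://proxy.example.com:8080
    npm config set https-proxy http://proxy.example.com:8080
    npm ci
  displayName: 'npm ci behind a proxy'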
NuGet task version 2.*
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
NOTE
The NuGet Authenticate task is the new recommended way to authenticate with Azure Artifacts and other NuGet
repositories.
Use this task to install and update NuGet package dependencies, or package and publish NuGet packages. Uses
NuGet.exe and works with .NET Framework apps. For .NET Core and .NET Standard apps, use the .NET Core task.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
If your code depends on NuGet packages, make sure to add this step before your Visual Studio Build step. Also
make sure to clear the deprecated Restore NuGet Packages checkbox in that step.
If you are working with .NET Core or .NET Standard, use the .NET Core task, which has full support for all package
scenarios and is currently supported by dotnet.
TIP
This version of the NuGet task uses NuGet 4.1.0 by default. To select a different version of NuGet, use the Tool Installer.
YAML snippet
# NuGet
# Restore, pack, or push NuGet packages, or run a NuGet command. Supports NuGet.org and authenticated feeds like Azure Artifacts and MyGet. Uses NuGet.exe and works with .NET Framework apps. For .NET Core and .NET Standard apps, use the .NET Core task.
- task: NuGetCommand@2
inputs:
#command: 'restore' # Options: restore, pack, push, custom
#restoreSolution: '**/*.sln' # Required when command == Restore
#feedsToUse: 'select' # Options: select, config
#vstsFeed: # Required when feedsToUse == Select
#includeNuGetOrg: true # Required when feedsToUse == Select
#nugetConfigPath: # Required when feedsToUse == Config
#externalFeedCredentials: # Optional
#noCache: false
#disableParallelProcessing: false
restoreDirectory:
#verbosityRestore: 'Detailed' # Options: quiet, normal, detailed
#packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg;!$(Build.ArtifactStagingDirectory)/**/*.symbols.nupkg' # Required when command == Push
#nuGetFeedType: 'internal' # Required when command == Push. Options: internal, external
#publishVstsFeed: # Required when command == Push && NuGetFeedType == Internal
#publishPackageMetadata: true # Optional
#allowPackageConflicts: # Optional
#publishFeedCredentials: # Required when command == Push && NuGetFeedType == External
#verbosityPush: 'Detailed' # Options: quiet, normal, detailed
#packagesToPack: '**/*.csproj' # Required when command == Pack
#configuration: '$(BuildConfiguration)' # Optional
#packDestination: '$(Build.ArtifactStagingDirectory)' # Optional
#versioningScheme: 'off' # Options: off, byPrereleaseNumber, byEnvVar, byBuildNumber
#includeReferencedProjects: false # Optional
#versionEnvVar: # Required when versioningScheme == ByEnvVar
#majorVersion: '1' # Required when versioningScheme == ByPrereleaseNumber
#minorVersion: '0' # Required when versioningScheme == ByPrereleaseNumber
#patchVersion: '0' # Required when versioningScheme == ByPrereleaseNumber
#packTimezone: 'utc' # Required when versioningScheme == ByPrereleaseNumber. Options: utc, local
#includeSymbols: false # Optional
#toolPackage: # Optional
#buildProperties: # Optional
#basePath: # Optional, specify path to nuspec files
#verbosityPack: 'Detailed' # Options: quiet, normal, detailed
#arguments: # Required when command == Custom
Arguments
Argument | Description
feedsToUse (Feeds to use) | You can either select a feed from Azure Artifacts and/or NuGet.org, or commit a nuget.config file to your source code repository and set its path here. Options: select, config
publishVstsFeed (Target feed) | Select a feed hosted in this account. You must have Azure Artifacts installed and licensed to select a feed here.
packTimezone (Time zone) | Specifies the desired time zone used to produce the version of the package. Selecting UTC is recommended if you're using hosted build agents as their date and time might differ. Options: utc, local
basePath (Base path) | The base path of the files defined in the nuspec file.
Control options
Versioning schemes
For byPrereleaseNumber, the version will be set to whatever you choose for major, minor, and patch, plus the date and time in the format yyyymmdd-hhmmss.
For byEnvVar, the version will be set to the value of the environment variable whose name you provide, e.g. MyVersion (no $, just the environment variable name). Make sure the environment variable is set to a proper SemVer, e.g. 1.2.3 or 1.2.3-beta1.
For byBuildNumber, the version will be set to the build number; ensure that your build number is a proper SemVer, e.g. 1.0.$(Rev:r). If you select byBuildNumber, the task will extract a dotted version, e.g. 1.2.3.4, and use only that, dropping any label. To use the build number as is, you should use byEnvVar as described above, and set the environment variable to BUILD_BUILDNUMBER.
Examples
Restore
Restore all your solutions with packages from a selected feed.
Package
Create a NuGet package in the destination folder.
# Package a project
- task: NuGetCommand@2
inputs:
command: 'pack'
packagesToPack: '**/*.csproj'
packDestination: '$(Build.ArtifactStagingDirectory)'
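To combine packing with the byEnvVar versioning scheme described above, a hedged sketch follows; the variable name and value are placeholders.
# Package a project, taking the version from an environment variable (sketch)
variables:
  MY_PACKAGE_VERSION: '1.2.3'
steps:
- task: NuGetCommand@2
  inputs:
    command: 'pack'
    packagesToPack: '**/*.csproj'
    versioningScheme: 'byEnvVar'
    versionEnvVar: 'MY_PACKAGE_VERSION'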
Push
NOTE
Pipeline artifacts are downloaded to System.ArtifactsDirectory directory. packagesToPush value can be set to
$(System.ArtifactsDirectory)/**/*.nupkg in your release pipeline.
# Push a project
- task: NuGetCommand@2
inputs:
command: 'push'
packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg'
feedsToUse: 'config'
nugetConfigPath: '$(Build.WorkingDirectory)/NuGet.config'
# Push a project
- task: NuGetCommand@2
inputs:
command: 'push'
feedsToUse: 'select'
vstsFeed: 'my-project/my-project-scoped-feed'
publishVstsFeed: 'myTestFeed'
Custom
Run any other NuGet command besides the default ones: pack, push and restore.
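For example, a hedged sketch that runs the nuget locals command to list the local caches; the arguments string is whatever NuGet verb and flags you need.
# Run an arbitrary NuGet command (sketch)
- task: NuGetCommand@2
  inputs:
    command: 'custom'
    arguments: 'locals all -list'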
Open source
Check out the Azure Pipelines and Team Foundation Server out-of-the-box tasks on GitHub. Feedback and
contributions are welcome.
FAQ
Why should I check in a NuGet.Config?
Checking a NuGet.Config into source control ensures that a key piece of information needed to build your project,
the location of its packages, is available to every developer that checks out your code.
However, for situations where a team of developers works on a large range of projects, it's also possible to add an
Azure Artifacts feed to the global NuGet.Config on each developer's machine. In these situations, using the "Feeds I
select here" option in the NuGet task replicates this configuration.
Where can I learn about Azure Artifacts?
Azure Artifacts Documentation
Where can I learn more about NuGet?
NuGet Docs Overview
NuGet Create Packaging and publishing
NuGet Consume Setting up a solution to get dependencies
What other kinds of apps can I build?
Build and deploy your app examples
What other kinds of build tasks are available?
Build and release tasks catalog
How do we protect our codebase from build breaks?
Git: Improve code quality with branch policies with an option to require that code builds before it can be
merged to a branch. For GitHub repositories, similar policies are available in GitHub's repository settings
under Branches.
TFVC: Use gated check-in.
How do I modify other parts of my build pipeline?
Build and release tasks to run tests, scripts, and a wide range of other processes.
Specify build options such as specifying how completed builds are named, building multiple configurations,
creating work items on failure.
Supported source repositories to pick the source of the build and modify options such as how the agent
workspace is cleaned.
Set build triggers to modify how your CI builds run and to specify scheduled builds.
Specify build retention policies to automatically delete old builds.
I selected parallel multi-configuration, but only one build is running at a time.
If you're using Azure Pipelines, you might need more parallel jobs. See Parallel jobs in Azure Pipelines.
How do I see what has changed in my build pipeline?
View the change history of your build pipeline
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Package: NuGet Authenticate
11/2/2020 • 7 minutes to read • Edit Online
Azure Pipelines
Configure NuGet tools to authenticate with Azure Artifacts and other NuGet repositories.
IMPORTANT
This task is only compatible with NuGet >= 4.8.0.5385, dotnet >= 2.1.400, or MSBuild >= 15.8.166.59604
YAML snippet
# Authenticate nuget.exe, dotnet, and MSBuild with Azure Artifacts and optionally other repositories
- task: NuGetAuthenticate@0
#inputs:
#nuGetServiceConnections: MyOtherOrganizationFeed, MyExternalPackageRepository # Optional
#forceReinstallCredentialProvider: false # Optional
Arguments
Argument | Description
Control options
Examples
Restore and push NuGet packages within your organization
If all of the Azure Artifacts feeds you use are in the same organization as your pipeline, you can use the
NuGetAuthenticate task without specifying any inputs. For project-scoped feeds that are in a different project than the one the pipeline runs in, you must manually give the project and the feed access to the pipeline project's build service.
nuget.config
<configuration>
<packageSources>
<!--
Any Azure Artifacts feeds within your organization will automatically be authenticated. Both
dev.azure.com and visualstudio.com domains are supported.
Project scoped feed URL includes the project, organization scoped feed URL does not.
-->
<add key="MyProjectFeed1"
value="https://pkgs.dev.azure.com/{organization}/{project}/_packaging/{feed}/nuget/v3/index.json" />
<add key="MyProjectFeed2"
value="https://{organization}.pkgs.visualstudio.com/{project}/_packaging/{feed}/nuget/v3/index.json" />
<add key="MyOtherProjectFeed1"
value="https://pkgs.dev.azure.com/{organization}/{project}/_packaging/{feed@view}/nuget/v3/index.json" />
<add key="MyOrganizationFeed1"
value="https://pkgs.dev.azure.com/{organization}/_packaging/{feed}/nuget/v3/index.json" />
</packageSources>
</configuration>
nuget.exe
- task: NuGetAuthenticate@0
inputs:
nuGetServiceConnections: OtherOrganizationFeedConnection, ThirdPartyRepositoryConnection
- task: NuGetToolInstaller@1 # Optional if nuget.exe >= 4.8.5385 is already on the path
inputs:
versionSpec: '*'
checkLatest: true
- script: nuget restore
# ...
- script: nuget push -ApiKey AzureArtifacts -Source "MyProjectFeed1" MyProject.*.nupkg
dotnet
- task: NuGetAuthenticate@0
inputs:
nuGetServiceConnections: OtherOrganizationFeedConnection, ThirdPartyRepositoryConnection
- task: UseDotNet@2 # Optional if the .NET Core SDK is already installed
- script: dotnet restore
# ...
- script: dotnet nuget push --api-key AzureArtifacts --source https://pkgs.dev.azure.com/{organization}/_packaging/{feed1}/nuget/v3/index.json MyProject.*.nupkg
nuget.exe
- task: NuGetAuthenticate@0
inputs:
nuGetServiceConnections: OtherOrganizationFeedConnection, ThirdPartyRepositoryConnection
- task: NuGetToolInstaller@1 # Optional if nuget.exe >= 4.8.5385 is already on the path
inputs:
versionSpec: '*'
checkLatest: true
- script: nuget restore
# ...
- script: nuget push -ApiKey AzureArtifacts -Source "MyProjectFeed1" MyProject.*.nupkg
dotnet
- task: NuGetAuthenticate@0
inputs:
nuGetServiceConnections: OtherOrganizationFeedConnection, ThirdPartyRepositoryConnection
- task: UseDotNet@2 # Optional if the .NET Core SDK is already installed
- script: dotnet restore
# ...
- script: dotnet nuget push --api-key AzureArtifacts --source "MyProjectFeed1" MyProject.*.nupkg
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
What tools are compatible with this task?
This task will configure tools that support NuGet cross platform plugins. Today, that includes nuget.exe, dotnet, and
recent versions of MSBuild with built-in support for restoring NuGet packages.
Specifically, this task will configure:
nuget.exe, version 4.8.5385 or higher
dotnet / .NET Core SDK, version 2.1.400 or higher
MSBuild, version 15.8.166.59604 or higher
However, upgrading to the latest stable version is recommended if you encounter any issues.
I get "A task was canceled" errors during a package restore. What should I do?
Known issues in NuGet and in the Azure Artifacts Credential Provider can cause this type of error and updating to
the latest nuget may help.
A known issue in some versions of nuget/dotnet can cause this error, especially during large restores on resource
constrained machines. This issue is fixed in NuGet 5.2, as well as .NET Core SDK 2.1.80X and 2.2.40X. If you are
using an older version, try upgrading your version of NuGet or dotnet. The .NET Core Tool Installer task can be
used to install a newer version of the .NET Core SDK.
There are also known issues with the Azure Artifacts Credential Provider (installed by this task), including artifacts-
credprovider/#77 and artifacts-credprovider/#108. If you experience these issues, ensure you have the latest
credential provider by setting the input forceReinstallCredentialProvider to true in the NuGet Authenticate task.
This will also ensure your credential provider is automatically updated as issues are resolved.
If neither of the above resolves the issue, please enable Plugin Diagnostic Logging and report the issue to NuGet
and the Azure Artifacts Credential Provider.
How is this task different than the NuGetCommand and DotNetCoreCLI tasks?
This task configures nuget.exe, dotnet, and MSBuild to authenticate with Azure Artifacts or other repositories that
require authentication. After this task runs, you can then invoke the tools in a later step (either directly or via a
script) to restore or push packages.
The NuGetCommand and DotNetCoreCLI tasks require using the task to restore or push packages, as
authentication to Azure Artifacts is only configured within the lifetime of the task. This can prevent you from
restoring or pushing packages within your own script. It may also prevent you from passing specific command
line arguments to the tool.
The NuGetAuthenticate task is the recommended way to use authenticated feeds within a pipeline.
When in my pipeline should I run this task?
This task must run before you use a NuGet tool to restore or push packages to an authenticated package source
such as Azure Artifacts. There are no other ordering requirements. For example, this task can safely run either
before or after a NuGet or .NET Core tool installer task.
How do I configure a NuGet package source that uses ApiKey ("NuGet API keys"), such as nuget.org?
Some package sources such as nuget.org use API keys for authentication when pushing packages, rather than
username/password credentials. Due to limitations in NuGet, this task cannot be used to set up a NuGet service
connection that uses an API key.
Instead:
1. Configure a secret variable containing the ApiKey
2. Perform the package push using nuget push -ApiKey $(myNuGetApiKey) or
dotnet nuget push --api-key $(myNuGetApiKey) , assuming you named the variable myNuGetApiKey
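A hedged sketch of that approach, assuming a secret pipeline variable named myNuGetApiKey and a package already produced in the artifact staging directory; the package file name is a placeholder and the source URL targets nuget.org.
# Push with an API key stored in a secret variable (sketch)
- script: dotnet nuget push --api-key $(myNuGetApiKey) --source https://api.nuget.org/v3/index.json $(Build.ArtifactStagingDirectory)/MyPackage.1.0.0.nupkg
  displayName: 'Push package with API key'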
My agent is behind a web proxy. Will NuGetAuthenticate set up nuget.exe, dotnet, and MSBuild to use my
proxy?
No. While this task itself will work behind a web proxy your agent has been configured to use, it does not
configure NuGet tools to use the proxy.
To do so, you can either:
Set the environment variable http_proxy and optionally no_proxy to your proxy settings. See NuGet CLI
environment variables for details. Please understand that these are commonly used variables which other
non-NuGet tools (e.g. curl) may also use.
Caution:
The http_proxy and no_proxy variables are case-sensitive on Linux and Mac operating systems and
must be lowercase. Attempting to use an Azure Pipelines variable to set the environment variable will
not work, as it will be converted to uppercase. Instead, set the environment variables on the self-hosted
agent's machine and restart the agent.
Add the proxy settings to the user-level nuget.config file, either manually or using nuget config -set as
described in the nuget.config reference documentation.
Caution:
The proxy settings (such as http_proxy ) must be added to the user-level config. They will be ignored if
specified in a different nuget.config file.
PyPI Publisher task
Azure Pipelines
Use this task to create and upload an sdist or wheel to a PyPI-compatible index using Twine.
This task builds an sdist package by running python setup.py sdist using the Python instance in PATH . It can
optionally build a universal wheel in addition to the sdist. Then, it will upload the package to a PyPI index using
twine . The task will install the wheel and twine packages with python -m pip install --user .
Deprecated
WARNING
The PyPI Publisher task has been deprecated. You can now publish PyPI packages using twine authentication and custom
scripts.
Demands
None
Prerequisites
A generic service connection for a PyPI index.
TIP
To configure a new generic service connection, go to Settings -> Services -> New service connection -> Generic.
Connection Name : A friendly connection name of your choice
Ser ver URL : PyPI package server (for example: https://upload.pypi.org/legacy/)
User name : username for your PyPI account
Password/Token Key : password for your PyPI account
YAML snippet
# PyPI publisher
# Create and upload an sdist or wheel to a PyPI-compatible index using Twine
- task: PyPIPublisher@0
inputs:
pypiConnection:
packageDirectory:
#alsoPublishWheel: false # Optional
Arguments
Argument | Description
Python package directory | The directory of the Python package to be created and published, where setup.py is present.
Also publish a wheel | Select whether to create and publish a universal wheel package (platform independent) in addition to an sdist package.
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
Package: Python Pip Authenticate
11/2/2020 • 4 minutes to read • Edit Online
YAML snippet
# Python pip authenticate V1
# Authentication task for the pip client used for installing Python distributions
- task: PipAuthenticate@1
inputs:
#artifactFeeds: 'MyFeed, MyTestFeed' # Optional
#pythonDownloadServiceConnections: pypiOrgFeed, OtherOrganizationFeed # Optional
#onlyAddExtraIndex: false # Optional
Arguments
Argument | Description
onlyAddExtraIndex (Don't set primary index URL) | (Optional) Boolean value; if set to true, it will force pip to get distributions from the official python registry first. Default value: false
Control options
Examples
Download python distributions from Azure Artifacts feeds without consulting official python registry
In this example, we are setting authentication for downloading from private Azure Artifacts feeds. The authenticate task creates the environment variables PIP_INDEX_URL and PIP_EXTRA_INDEX_URL, which are required to download the distributions, and populates them with the auth credentials it generates for the provided Artifacts feeds. 'HelloTestPackage' has to be present in either 'myTestFeed1' or 'myTestFeed2'; otherwise, the install will fail.
For project-scoped feeds that are in a different project than the one the pipeline runs in, you must manually give the project and the feed access to the pipeline project's build service.
- task: PipAuthenticate@1
displayName: 'Pip Authenticate'
inputs:
# Provide list of feed names which you want to authenticate.
# Project scoped feeds must include the project name in addition to the feed name.
artifactFeeds: 'project1/myTestFeed1, myTestFeed2'
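A subsequent script step can then install straight from the authenticated feeds; HelloTestPackage is the sample package named above.
# pip resolves the package through PIP_INDEX_URL / PIP_EXTRA_INDEX_URL set by the task above
- script: pip install HelloTestPackage
  displayName: 'Install from the authenticated feeds'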
Download python distributions from Azure Artifacts feeds consulting official python registry first
In this example, we are setting authentication for downloading from private Azure Artifacts feeds, but PyPI is consulted first. The authenticate task creates an environment variable PIP_EXTRA_INDEX_URL, which contains the auth credentials required to download the distributions. 'HelloTestPackage' will be downloaded from the authenticated feeds only if it's not present in PyPI.
For project-scoped feeds that are in a different project than the one the pipeline runs in, you must manually give the project and the feed access to the pipeline project's build service.
- task: PipAuthenticate@1
displayName: 'Pip Authenticate'
inputs:
# Provide list of feed names which you want to authenticate.
# Project scoped feeds must include the project name in addition to the feed name.
artifactFeeds: 'project1/myTestFeed1, myTestFeed2'
# Setting this variable to "true" will force pip to get distributions from the official python registry first
# and fall back to the feeds mentioned above if distributions are not found there.
onlyAddExtraIndex: true
Download python distributions using a service connection
- task: PipAuthenticate@1
displayName: 'Pip Authenticate'
inputs:
# In this case, name of the service connection is "pypitest".
pythonDownloadServiceConnections: pypitest
Task versions
Task: Pip Authenticate
Task version | Azure Pipelines | TFS
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
When in my pipeline should I run this task?
This task must run before you use pip to download python distributions from an authenticated package source such as Azure Artifacts. There are no other ordering requirements. Multiple invocations of this task will not stack credentials; every run of the task will erase any previously stored credentials.
My agent is behind a web proxy. Will PipAuthenticate set up pip to use my proxy?
No. While this task itself will work behind a web proxy your agent has been configured to use, it does not configure
pip to use the proxy.
To do so, you can either:
Set the environment variables http_proxy, https_proxy, and optionally no_proxy to your proxy settings. See pip's official guidelines for details. These are commonly used variables which other non-Python tools (e.g. curl) may also use.
Caution: The http_proxy and no_proxy variables are case-sensitive on Linux and Mac operating
systems and must be lowercase. Attempting to use an Azure Pipelines variable to set the environment
variable will not work, as it will be converted to uppercase. Instead, set the environment variables on the
self-hosted agent's machine and restart the agent.
Add the proxy settings to the pip config file using the proxy key.
Use the --proxy command-line option to specify proxy in the form [user:passwd@]proxy.server:port .
Package: Python Twine Upload Authenticate
11/2/2020 • 3 minutes to read • Edit Online
YAML snippet
# Python twine upload authenticate V1
# Authenticate for uploading Python distributions using twine. Add '-r FeedName/EndpointName --config-file $(PYPIRC_PATH)' to your twine upload command. For feeds present in this organization, use the feed name as the repository (-r). Otherwise, use the endpoint name defined in the service connection.
- task: TwineAuthenticate@1
inputs:
#artifactFeed: MyTestFeed # Optional
#pythonUploadServiceConnection: OtherOrganizationFeed # Optional
Arguments
Argument | Description
Control options
Examples
Publish python distribution to Azure Artifacts feed
In this example, we are setting authentication for publishing to a private Azure Artifacts Feed. The authenticate task
creates a .pypirc file which contains the auth credentials required to publish a distribution to the feed.
# Install python distributions like wheel, twine etc
- script: |
pip install wheel
pip install twine
- task: TwineAuthenticate@1
displayName: 'Twine Authenticate'
inputs:
# In this case, name of the feed is 'myTestFeed' in the project 'myTestProject'. Project is needed because
# the feed is project scoped.
artifactFeed: myTestProject/myTestFeed
# Use command line script to 'twine upload', use -r to pass the repository name and --config-file to pass the
# environment variable set by the authenticate task.
- script: |
python -m twine upload -r myTestFeed --config-file $(PYPIRC_PATH) dist/*.whl
The 'artifactFeed' input will contain the project and the feed name if the feed is project scoped. If the feed is
organization scoped, only the feed name must be provided. Learn more.
Publish python distribution to the official python registry
In this example, we are setting authentication for publishing to the official python registry. Create a twine service connection entry for PyPI. The authenticate task uses that service connection to create a .pypirc file which contains the auth credentials required to publish the distribution.
- task: TwineAuthenticate@1
displayName: 'Twine Authenticate'
inputs:
# In this case, name of the service connection is "pypitest".
pythonUploadServiceConnection: pypitest
# Use command line script to 'twine upload', use -r to pass the repository name and --config-file to pass the
# environment variable set by the authenticate task.
- script: |
python -m twine upload -r "pypitest" --config-file $(PYPIRC_PATH) dist/*.whl
Task versions
Task: Twine Authenticate
Task version | Azure Pipelines | TFS
FAQ
When in my pipeline should I run this task?
This task must run before you use twine to upload python distributions to an authenticated package source such
as Azure Artifacts. There are no other ordering requirements. Multiple invocations of this task will not stack credentials; every run of the task will erase any previously stored credentials.
My agent is behind a web proxy. Will TwineAuthenticate set up twine to use my proxy?
No. While this task itself will work behind a web proxy your agent has been configured to use, it does not
configure twine to use the proxy.
Universal Package task
6/2/2020 • 6 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
Use this task to download, or package and publish Universal Packages.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
YAML snippet
# Universal packages
# Download or publish Universal Packages
- task: UniversalPackages@0
inputs:
#command: 'download' # Options: download, publish
#downloadDirectory: '$(System.DefaultWorkingDirectory)' # Required when command == Download
#feedsToUse: 'internal' # Options: internal, external
#externalFeedCredentials: # Optional
#vstsFeed: # Required when feedsToUse == Internal
#vstsFeedPackage: # Required when feedsToUse == Internal
#vstsPackageVersion: # Required when feedsToUse == Internal
#feedDownloadExternal: # Required when feedsToUse == External
#packageDownloadExternal: # Required when feedsToUse == External
#versionDownloadExternal: # Required when feedsToUse == External
#publishDirectory: '$(Build.ArtifactStagingDirectory)' # Required when command == Publish
#feedsToUsePublish: 'internal' # Options: internal, external
#publishFeedCredentials: # Required when feedsToUsePublish == External
#vstsFeedPublish: # Required when feedsToUsePublish == Internal
#publishPackageMetadata: true # Optional
#vstsFeedPackagePublish: # Required when feedsToUsePublish == Internal
#feedPublishExternal: # Required when feedsToUsePublish == External
#packagePublishExternal: # Required when feedsToUsePublish == External
#versionOption: 'patch' # Options: major, minor, patch, custom
#versionPublish: # Required when versionOption == Custom
packagePublishDescription:
#verbosity: 'None' # Options: none, trace, debug, information, warning, error, critical
#publishedPackageVar: # Optional
Arguments
Argument | Description
feedsToUse (Feed location) | You can select a feed from either this collection or any other collection in Azure Artifacts. Options: internal, external
vstsFeed (Use packages from this Azure Artifacts/TFS feed) | Include the selected feed. You must have Azure Artifacts installed and licensed to select a feed here.
feedsToUsePublish (Feed location) | You can select a feed from either this collection or any other collection in Azure Artifacts. Options: internal, external
publishedPackageVar (Package Output Variable) | Provide a name for the variable that contains the published package name and version.
Control options
Example
The simplest way to get started with the Universal Package task is to use the Pipelines task editor to generate the
YAML. You can then copy the generated code into your project's azure-pipelines.yml file. In this example, the
sample demonstrates how to quickly generate the YAML using a pipeline that builds a GatsbyJS progressive web
app (PWA).
Universal Packages are a useful way to both encapsulate and version a web app. Packaging a web app into a
Universal Package enables quick rollbacks to a specific version of your site and eliminates the need to build the site
in the deployment pipeline.
This example pipeline demonstrates how to fetch a tool from a feed within your project. It uses the Universal Package task to download the tool, runs a build, and then uses the Universal Package task again to publish the entire compiled GatsbyJS PWA to a feed as a versioned Universal Package.
Download a package with the Universal Package task
The second task in the sample project uses the Universal Package task to fetch a tool, imagemagick, from a feed in a different project within the same organization. The tool is required by the subsequent build step to resize images.
1. Add the Universal Package task by clicking the plus icon, typing "universal" in the search box, and clicking the
"Add" button to add the task to your pipeline.
2. Click the newly added Universal Package task and set the Command to Download.
3. Choose the Destination directory to use for the tool download.
4. Select a source Feed that contains the tool, set the Package name, and choose the Version of the imagemagick tool from the source feed.
5. After completing the fields, click View YAML to see the generated YAML.
6. The Universal Package task builder generates simplified YAML that contains non-default values. Copy the
generated YAML into your azure-pipelines.yml file at the root of your project's git repo as defined here.
7. Add another Universal Package task to the end of the pipeline by clicking the plus icon, typing "universal" in the search box, and clicking the "Add" button to add the task to your pipeline. This task gathers all of the production-ready assets produced by the Run gatsby build step, produces a versioned Universal Package, and publishes the package to a feed.
This example demonstrated how to use the Pipelines task builder to quickly generate the YAML for the Universal
Package task, which can then be placed into your azure-pipelines.yml file. The Universal Package task builder
supports all of the advanced configurations that can be created with Universal Package task's arguments.
NOTE
All feeds created through the classic user interface are project-scoped feeds. For the vstsFeedPublish parameter, you can
also use the project and feed's names instead of their GUIDs like the following: '<projectName>/<feedName>' . See Publish
your Universal packages for more details.
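For orientation, a hedged sketch of the download and publish steps the example describes; the feed, package names, versions, and paths are placeholders.
# Download a tool package, then publish the built site as a Universal Package (sketch)
- task: UniversalPackages@0
  displayName: 'Download imagemagick'
  inputs:
    command: 'download'
    vstsFeed: 'SampleProject/SampleFeed'          # hypothetical project/feed
    vstsFeedPackage: 'imagemagick'
    vstsPackageVersion: '1.0.0'                   # hypothetical version
    downloadDirectory: '$(Build.SourcesDirectory)/tools'
- task: UniversalPackages@0
  displayName: 'Publish GatsbyJS PWA'
  inputs:
    command: 'publish'
    publishDirectory: 'public'                    # Gatsby build output folder
    vstsFeedPublish: 'SampleProject/SampleFeed'   # hypothetical project/feed
    vstsFeedPackagePublish: 'gatsby-site'         # hypothetical package name
    versionOption: 'patch'
    packagePublishDescription: 'Compiled GatsbyJS PWA'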
Open-source on GitHub
These tasks are open source on GitHub. Feedback and contributions are welcome.
NuGet Installer task version 0.*
4/10/2020 • 2 minutes to read • Edit Online
Demands
If your code depends on NuGet packages, make sure to add this task before your Visual Studio Build task. Also
make sure to clear the deprecated Restore NuGet Packages checkbox in that task.
Arguments
Argument | Description
Path to Solution | Copy the value from the Solution argument in your Visual Studio Build task and paste it here.
Path to NuGet.config | If you are using a package source other than NuGet.org, you must check in a NuGet.config file and specify the path to it here.
Disable local cache | Equivalent to nuget restore with the -NoCache option.
Advanced
Control options
Examples
Install NuGet dependencies
You're building a Visual Studio solution that depends on a NuGet feed.
`-- ConsoleApplication1
|-- ConsoleApplication1.sln
|-- NuGet.config
`-- ConsoleApplication1
|-- ConsoleApplication1.csproj
Build tasks
Install your NuGet package dependencies.
Package: NuGet Installer
Path to Solution: *.sln
Path to NuGet.config: ConsoleApplication1/NuGet.config
NuGet Packager task version 0.*
4/10/2020 • 4 minutes to read • Edit Online
Azure Pipelines (deprecated) | TFS 2017 Update 2 and below (deprecated in TFS 2018)
Use this task to create a NuGet package from either a .csproj or .nuspec file.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs are
called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
Demands
None
YAML snippet
# NuGet packager
# Deprecated: use the "NuGet" task instead. It works with the new Tool Installer framework so you can easily use new versions of NuGet without waiting for a task update, provides better support for authenticated feeds outside this organization/collection, and uses NuGet 4 by default.
- task: NuGetPackager@0
  inputs:
    #searchPattern: '**\*.csproj'
    #outputdir: # Optional
    #includeReferencedProjects: false # Optional
    #versionByBuild: 'false' # Options: false, byPrereleaseNumber, byEnvVar, true
    #versionEnvVar: # Required when versionByBuild == ByEnvVar
    #requestedMajorVersion: '1' # Required when versionByBuild == ByPrereleaseNumber
    #requestedMinorVersion: '0' # Required when versionByBuild == ByPrereleaseNumber
    #requestedPatchVersion: '0' # Required when versionByBuild == ByPrereleaseNumber
    #configurationToPack: '$(BuildConfiguration)' # Optional
    #buildProperties: # Optional
    #nuGetAdditionalArgs: # Optional
    #nuGetPath: # Optional
Arguments
ARGUMENT | DESCRIPTION
Path/Pattern to nuspec files: Specify .csproj files (for example, **\*.csproj) for simple projects. In this case:
The packager compiles the .csproj files for packaging.
You must specify Configuration to Package (see below).
You do not have to check in a .nuspec file. If you do check one in, the packager honors its settings and replaces tokens such as $id$ and $description$.
Specify .nuspec files (for example, **\*.nuspec) for more complex projects, such as multi-platform scenarios in which you need to compile and package in separate steps. In this case:
The packager does not compile the .csproj files for packaging.
Each project is packaged only if it has a .nuspec file checked in.
The packager does not replace tokens in the .nuspec file (except the <version/> element; see Use build number to version package, below). You must supply values for elements such as <id/> and <description/>. The most common way to do this is to hardcode the values in the .nuspec file.
To package a single file, click the ... button and select the file. To package multiple files, use single-folder wildcards (*) and recursive wildcards (**). For example, specify **\*.csproj to package all .csproj files in all subdirectories in the repo.
You can use multiple patterns separated by a semicolon to create more complex queries. You can negate a pattern by prefixing it with "-:". For example, specify **\*.csproj;-:**\*Tests.csproj to package all .csproj files except those ending in 'Tests' in all subdirectories in the repo.
Use build number to version package: Select if you want to use the build number to version your package. If you select this option, for the pipeline options, set the build number format to something like $(BuildDefinitionName)_$(Year:yyyy).$(Month).$(DayOfMonth)$(Rev:.r)
The build number format must be {some_characters}_0.0.0.0. The characters and the underscore character are omitted from the output. The version number at the end must be a unique number in a format such as 0.0.0.0 that is higher than the last published number. The version number is passed to nuget pack with the -Version option.
Package Folder: (Optional) Specify the folder where you want to put the packages. You can use a variable such as $(Build.StagingDirectory)\packages. If you leave it empty, the package will be created in the same directory that contains the .csproj or .nuspec file.
ADVANCED
ARGUMENT | DESCRIPTION
Configuration to Package: If you are packaging a .csproj file, you must specify a configuration that you are building and that you want to package. For example: Release
Additional build properties: Semicolon-delimited list of properties used to build the package. For example, you could replace <description>$description$</description> in the .nuspec file this way: Description="This is a great package". Using this argument is equivalent to supplying properties to nuget pack with the -Properties option.
Path to NuGet.exe: (Optional) Path to your own instance of NuGet.exe. If you specify this argument, you must have your own strategy to handle authentication.
CONTROL OPTIONS
Examples
You want to package and publish some projects in a C# class library to your Azure Artifacts feed.
`-- Message
    |-- Message.sln
    |-- ShortGreeting
    |   |-- ShortGreeting.csproj
    |   |-- Class1.cs
    |   `-- Properties
    |       `-- AssemblyInfo.cs
    `-- LongGreeting
        |-- LongGreeting.csproj
        |-- Class1.cs
        `-- Properties
            `-- AssemblyInfo.cs
Prepare
AssemblyInfo.cs
Make sure your AssemblyInfo.cs files contain the information you want shown in your packages. For example,
AssemblyCompanyAttribute will be shown as the author, and AssemblyDescriptionAttribute will be shown as the
description.
Variables tab
NAME | VALUE
$(BuildConfiguration) | release
Options
SETTING | VALUE
Publish to NuGet.org
Make sure you've prepared the build as described above.
Register with NuGet.org
If you haven't already, register with NuGet.org.
Build tasks
NuGet Publisher task version 0.*
Azure Pipelines (deprecated) | TFS 2017 Update 2 and below (deprecated in TFS 2018)
Use this task to publish your NuGet package to a server and update your feed.
Demands
None
YAML snippet
# NuGet publisher
# Deprecated: use the "NuGet" task instead. It works with the new Tool Installer framework so you can easily use new versions of NuGet without waiting for a task update, provides better support for authenticated feeds outside this organization/collection, and uses NuGet 4 by default.
- task: NuGetPublisher@0
  inputs:
    #searchPattern: '**/*.nupkg;-:**/packages/**/*.nupkg;-:**/*.symbols.nupkg'
    #nuGetFeedType: 'external' # Options: external, internal
    #connectedServiceName: # Required when nuGetFeedType == External
    #feedName: # Required when nuGetFeedType == Internal
    #nuGetAdditionalArgs: # Optional
    #verbosity: '-' # Options: -, quiet, normal, detailed
    #nuGetVersion: '3.3.0' # Options: 3.3.0, 3.5.0.1829, 4.0.0.2283, custom
    #nuGetPath: # Optional
    #continueOnEmptyNupkgMatch: # Optional
Arguments
ARGUMENT | DESCRIPTION
ADVANCED
Path to NuGet.exe: (Optional) Path to your own instance of NuGet.exe. If you specify this argument, you must have your own strategy to handle authentication.
CONTROL OPTIONS
Examples
You want to package and publish some projects in a C# class library to your Azure Artifacts feed.
`-- Message
    |-- Message.sln
    |-- ShortGreeting
    |   |-- ShortGreeting.csproj
    |   |-- Class1.cs
    |   `-- Properties
    |       `-- AssemblyInfo.cs
    `-- LongGreeting
        |-- LongGreeting.csproj
        |-- Class1.cs
        `-- Properties
            `-- AssemblyInfo.cs
Prepare
AssemblyInfo.cs
Make sure your AssemblyInfo.cs files contain the information you want shown in your packages. For example,
AssemblyCompanyAttribute will be shown as the author, and AssemblyDescriptionAttribute will be shown as the
description.
Variables tab
NAME | VALUE
$(BuildConfiguration) | release
Options
SETTING | VALUE
Publish to NuGet.org
Make sure you've prepared the build as described above.
Register with NuGet.org
If you haven't already, register with NuGet.org.
Build tasks
NuGet task version 2.*
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018
NOTE
The NuGet Authenticate task is the new recommended way to authenticate with Azure Artifacts and other NuGet
repositories.
Use this task to install and update NuGet package dependencies, or package and publish NuGet packages. Uses
NuGet.exe and works with .NET Framework apps. For .NET Core and .NET Standard apps, use the .NET Core task.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
If your code depends on NuGet packages, make sure to add this step before your Visual Studio Build step. Also
make sure to clear the deprecated Restore NuGet Packages checkbox in that step.
If you are working with .NET Core or .NET Standard, use the .NET Core task, which has full support for all package
scenarios and it's currently supported by dotnet.
TIP
This version of the NuGet task uses NuGet 4.1.0 by default. To select a different version of NuGet, use the Tool Installer.
YAML snippet
# NuGet
# Restore, pack, or push NuGet packages, or run a NuGet command. Supports NuGet.org and authenticated feeds like Azure Artifacts and MyGet. Uses NuGet.exe and works with .NET Framework apps. For .NET Core and .NET Standard apps, use the .NET Core task.
- task: NuGetCommand@2
  inputs:
    #command: 'restore' # Options: restore, pack, push, custom
    #restoreSolution: '**/*.sln' # Required when command == Restore
    #feedsToUse: 'select' # Options: select, config
    #vstsFeed: # Required when feedsToUse == Select
    #includeNuGetOrg: true # Required when feedsToUse == Select
    #nugetConfigPath: # Required when feedsToUse == Config
    #externalFeedCredentials: # Optional
    #noCache: false
    #disableParallelProcessing: false
    restoreDirectory:
    #verbosityRestore: 'Detailed' # Options: quiet, normal, detailed
    #packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg;!$(Build.ArtifactStagingDirectory)/**/*.symbols.nupkg' # Required when command == Push
    #nuGetFeedType: 'internal' # Required when command == Push # Options: internal, external
    #publishVstsFeed: # Required when command == Push && NuGetFeedType == Internal
    #publishPackageMetadata: true # Optional
    #allowPackageConflicts: # Optional
    #publishFeedCredentials: # Required when command == Push && NuGetFeedType == External
    #verbosityPush: 'Detailed' # Options: quiet, normal, detailed
    #packagesToPack: '**/*.csproj' # Required when command == Pack
    #configuration: '$(BuildConfiguration)' # Optional
    #packDestination: '$(Build.ArtifactStagingDirectory)' # Optional
    #versioningScheme: 'off' # Options: off, byPrereleaseNumber, byEnvVar, byBuildNumber
    #includeReferencedProjects: false # Optional
    #versionEnvVar: # Required when versioningScheme == ByEnvVar
    #majorVersion: '1' # Required when versioningScheme == ByPrereleaseNumber
    #minorVersion: '0' # Required when versioningScheme == ByPrereleaseNumber
    #patchVersion: '0' # Required when versioningScheme == ByPrereleaseNumber
    #packTimezone: 'utc' # Required when versioningScheme == ByPrereleaseNumber # Options: utc, local
    #includeSymbols: false # Optional
    #toolPackage: # Optional
    #buildProperties: # Optional
    #basePath: # Optional, specify path to nuspec files
    #verbosityPack: 'Detailed' # Options: quiet, normal, detailed
    #arguments: # Required when command == Custom
Arguments
ARGUMENT | DESCRIPTION
feedsToUse (Feeds to use): You can either select a feed from Azure Artifacts and/or NuGet.org, or commit a nuget.config file to your source code repository and set its path here. Options: select, config.
publishVstsFeed (Target feed): Select a feed hosted in this account. You must have Azure Artifacts installed and licensed to select a feed here.
packTimezone (Time zone): Specifies the desired time zone used to produce the version of the package. Selecting UTC is recommended if you're using hosted build agents as their date and time might differ. Options: utc, local.
basePath (Base path): The base path of the files defined in the nuspec file.
Control options
Versioning schemes
For byPrereleaseNumber, the version will be set to whatever you choose for major, minor, and patch, plus the date and time in the format yyyymmdd-hhmmss.
For byEnvVar, the version will be set to the value of the environment variable you name (for example MyVersion; no $, just the environment variable name). Make sure the environment variable is set to a proper SemVer value, e.g. 1.2.3 or 1.2.3-beta1.
For byBuildNumber, the version will be set to the build number, so ensure that your build number is a proper SemVer value, e.g. 1.0.$(Rev:r). If you select byBuildNumber, the task will extract a dotted version, such as 1.2.3.4, and use only that, dropping any label. To use the build number as is, use byEnvVar as described above, and set the environment variable to BUILD_BUILDNUMBER.
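For illustration, a minimal pack step using byPrereleaseNumber, assembled only from the inputs listed in the snippet above (the version values are placeholders), might look like this:

# Sketch: pack with an auto-generated prerelease version such as 1.0.0-CI-<date>-<time>
- task: NuGetCommand@2
  inputs:
    command: 'pack'
    packagesToPack: '**/*.csproj'
    versioningScheme: 'byPrereleaseNumber'
    majorVersion: '1'
    minorVersion: '0'
    patchVersion: '0'
    packTimezone: 'utc'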
Examples
Restore
Restore all your solutions with packages from a selected feed.
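For example, restoring every solution in the repository from a project-scoped feed could be sketched as follows (the feed name is a placeholder):

# Restore solutions from an Azure Artifacts feed in this organization
- task: NuGetCommand@2
  inputs:
    command: 'restore'
    restoreSolution: '**/*.sln'
    feedsToUse: 'select'
    vstsFeed: 'my-project/my-feed'   # placeholder project/feed name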
Package
Create a NuGet package in the destination folder.
# Package a project
- task: NuGetCommand@2
  inputs:
    command: 'pack'
    packagesToPack: '**/*.csproj'
    packDestination: '$(Build.ArtifactStagingDirectory)'
Push
NOTE
Pipeline artifacts are downloaded to the System.ArtifactsDirectory directory. The packagesToPush value can be set to $(System.ArtifactsDirectory)/**/*.nupkg in your release pipeline.
# Push a project
- task: NuGetCommand@2
  inputs:
    command: 'push'
    packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg'
    feedsToUse: 'config'
    nugetConfigPath: '$(Build.WorkingDirectory)/NuGet.config'

# Push a project
- task: NuGetCommand@2
  inputs:
    command: 'push'
    feedsToUse: 'select'
    vstsFeed: 'my-project/my-project-scoped-feed'
    publishVstsFeed: 'myTestFeed'
Custom
Run any other NuGet command besides the default ones: pack, push and restore.
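For example, a custom invocation could run nuget locals to list the package caches on the agent (a sketch; any other NuGet command-line arguments can be passed the same way through the arguments input shown in the snippet above):

# Run an arbitrary NuGet command (here: list local caches)
- task: NuGetCommand@2
  inputs:
    command: 'custom'
    arguments: 'locals all -list'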
Open source
Check out the Azure Pipelines and Team Foundation Server out-of-the-box tasks on GitHub. Feedback and
contributions are welcome.
FAQ
Why should I check in a NuGet.Config?
Checking a NuGet.Config into source control ensures that a key piece of information needed to build your
project, the location of its packages, is available to every developer that checks out your code.
However, for situations where a team of developers works on a large range of projects, it's also possible to add an
Azure Artifacts feed to the global NuGet.Config on each developer's machine. In these situations, using the "Feeds
I select here" option in the NuGet task replicates this configuration.
Where can I learn about Azure Artifacts?
Azure Artifacts Documentation
Where can I learn more about NuGet?
NuGet Docs Overview
NuGet Create Packaging and publishing
NuGet Consume Setting up a solution to get dependencies
What other kinds of apps can I build?
Build and deploy your app examples
What other kinds of build tasks are available?
Build and release tasks catalog
How do we protect our codebase from build breaks?
Git: Improve code quality with branch policies with an option to require that code builds before it can be
merged to a branch. For GitHub repositories, similar policies are available in GitHub's repository settings
under Branches.
TFVC: Use gated check-in.
How do I modify other parts of my build pipeline?
Build and release tasks to run tests, scripts, and a wide range of other processes.
Specify build options such as specifying how completed builds are named, building multiple
configurations, creating work items on failure.
Supported source repositories to pick the source of the build and modify options such as how the agent
workspace is cleaned.
Set build triggers to modify how your CI builds run and to specify scheduled builds.
Specify build retention policies to automatically delete old builds.
I selected parallel multi-configuration, but only one build is running at a time.
If you're using Azure Pipelines, you might need more parallel jobs. See Parallel jobs in Azure Pipelines.
How do I see what has changed in my build pipeline?
View the change history of your build pipeline
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
Package: Python Pip Authenticate version 0.*
11/2/2020
Azure Pipelines
Provides authentication for the pip client that can be used to install Python distributions.
YAML snippet
# Python pip authenticate
# Authentication task for the pip client used for installing Python distributions
- task: PipAuthenticate@0
  inputs:
    artifactFeeds:
    #externalFeeds: # Optional
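A minimal usage sketch: authenticate against a feed in this organization, then let pip resolve packages from it in a later step (the feed name is a placeholder):

- task: PipAuthenticate@0
  inputs:
    artifactFeeds: 'my-feed'   # placeholder feed name
- script: pip install -r requirements.txt
  displayName: Install dependencies from the authenticated feed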
Arguments
ARGUMENT | DESCRIPTION
CONTROL OPTIONS
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Package: Python Twine Upload Authenticate version 0.*
11/2/2020
Azure Pipelines
Provides twine credentials to a PYPIRC_PATH environment variable for the scope of the build. This enables you to
publish Python packages to feeds with twine from your build.
YAML snippet
# Python twine upload authenticate
# Authenticate for uploading Python distributions using twine. Add '-r FeedName/EndpointName --config-file $(PYPIRC_PATH)' to your twine upload command. For feeds present in this organization, use the feed name as the repository (-r). Otherwise, use the endpoint name defined in the service connection.
- task: TwineAuthenticate@0
  inputs:
    artifactFeeds:
    #externalFeeds: # Optional
    #publishPackageMetadata: true # Optional
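A usage sketch following the guidance in the snippet's comment (the feed name and dist path are placeholders):

- task: TwineAuthenticate@0
  inputs:
    artifactFeeds: 'my-feed'   # placeholder feed name
- script: |
    pip install twine
    twine upload -r my-feed --config-file $(PYPIRC_PATH) dist/*
  displayName: Upload package with twine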
Arguments
ARGUMENT | DESCRIPTION
CONTROL OPTIONS
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
App Center Distribute task
4/22/2020
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use this task to distribute app builds to testers and users through App Center.
Sign up with App Center first.
For details about using this task, see the App Center documentation article Deploy Azure DevOps Builds with
App Center.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
YAML snippet
# App Center distribute
# Distribute app builds to testers and users via Visual Studio App Center
- task: AppCenterDistribute@3
  inputs:
    serverEndpoint:
    appSlug:
    appFile:
    #symbolsOption: 'Apple' # Optional. Options: apple, android
    #symbolsPath: # Optional
    #symbolsPdbFiles: '**/*.pdb' # Optional
    #symbolsDsymFiles: # Optional
    #symbolsIncludeParentDirectory: # Optional
    #releaseNotesOption: 'input' # Options: input, file
    #releaseNotesInput: # Required when releaseNotesOption == Input
    #releaseNotesFile: # Required when releaseNotesOption == File
    #isMandatory: false # Optional
    #destinationType: 'groups' # Options: groups, store
    #distributionGroupId: # Optional
    #destinationStoreId: # Required when destinationType == store
    #isSilent: # Optional
Arguments
ARGUMENT | DESCRIPTION
app (Binary file path): (Required) Relative path from the repo root to the APK or IPA file you want to publish. Argument alias: appFile
symbolsPath (Symbols path): (Optional) Relative path from the repo root to the symbols folder.
appxsymPath (Symbols path): (Optional) Relative path from the repo root to PDB symbols (*.appxsym) files. Path may contain wildcards.
dsymPath (dSYM path): (Optional) Relative path from the repo root to the dSYM folder. Path may contain wildcards. Argument alias: symbolsDsymFiles
nativeLibrariesPath (Native Library File Path): (Optional) Relative path from the repo root to the additional native libraries you want to publish (e.g. .so files).
packParentFolder (Include all items in parent folder): (Optional) Upload the selected symbols file or folder and all other items inside the same parent folder. This is required for React Native apps. Argument alias: symbolsIncludeParentDirectory
releaseNotesFile (Release notes file): (Required) Select a UTF-8 encoded text file which contains the release notes for this version.
isSilent (Do not notify testers. Release will still be available to install.): (Optional) Testers will not receive an email for new releases.
Example
This example pipeline builds an Android app, runs tests, and publishes the app using App Center Distribute.
# Android
# Build your Android project with Gradle.
# Add steps that test, sign, and distribute the APK, save build artifacts, and more:
# https://docs.microsoft.com/azure/devops/pipelines/ecosystems/android

pool:
  vmImage: 'macOS-latest'

steps:
- task: Gradle@2
  inputs:
    workingDirectory: ''
    gradleWrapperFile: 'gradlew'
    gradleOptions: '-Xmx3072m'
    publishJUnitResults: false
    testResultsFiles: '**/TEST-*.xml'
    tasks: 'build'

- task: CopyFiles@2
  inputs:
    contents: '**/*.apk'
    targetFolder: '$(build.artifactStagingDirectory)'

- task: PublishBuildArtifacts@1
  inputs:
    pathToPublish: '$(build.artifactStagingDirectory)'
    artifactName: 'outputs'
    artifactType: 'container'
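The snippet above ends with publishing the APK as a build artifact. A hedged sketch of the distribute step itself, using only inputs listed in the YAML snippet earlier (the service connection, app slug, APK pattern, and release notes are placeholders), would be:

- task: AppCenterDistribute@3
  inputs:
    serverEndpoint: 'MyAppCenterServiceConnection'       # placeholder service connection
    appSlug: 'my-org/my-android-app'                     # placeholder {owner}/{app} slug
    appFile: '$(build.artifactStagingDirectory)/**/*.apk' # placeholder path/pattern to the built APK
    releaseNotesOption: 'input'
    releaseNotesInput: 'New build from Azure Pipelines.'
    destinationType: 'groups'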
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure App Service Deploy task
11/2/2020
Azure Pipelines
Use this task to deploy to a range of App Services on Azure. The task works on cross-platform agents running
Windows, Linux, or Mac and uses several different underlying deployment technologies.
The task works for ASP.NET, ASP.NET Core, PHP, Java, Python, Go, and Node.js based web applications.
The task can be used to deploy to a range of Azure App Services such as:
Web Apps on both Windows and Linux
Web Apps for Containers
Function Apps on both Windows and Linux
Function Apps for Containers
WebJobs
Apps configured under Azure App Service Environments
DockerImageTag (Tag): (Optional) Tags are optional, but are the mechanism that registries use to apply version information to Docker images. Note: the fully-qualified image name will be of the format {registry or namespace}/{repository}:{tag}. For example, myregistry.azurecr.io/nginx:latest
PARAMETERS | DESCRIPTION
VirtualApplication (Virtual application): (Optional) Specify the name of the Virtual Application that has been configured in the Azure portal. This option is not required for deployments to the website root. The Virtual Application must have been configured before deployment of the web project.
ScriptPath (Deployment script path): (Required if ScriptType == File Path) The path and name of the script to execute.
TakeAppOfflineFlag (Take App Offline): (Optional) Select this option to take the Azure App Service offline by placing an app_offline.htm file in the root directory before the synchronization operation begins. The file will be removed after the synchronization completes successfully. Default value: true
RemoveAdditionalFilesFlag (Remove additional files at destination): (Optional) Select the option to delete files on the Azure App Service that have no matching files in the App Service package or folder. Note: This will also remove all files related to any extensions installed on this Azure App Service. To prevent this, set the Exclude files from App_Data folder checkbox. Default value: false
{
  "Data": {
    "DefaultConnection": {
      "ConnectionString": "Server=(localdb)\\SQLEXPRESS;Database=MyDB;Trusted_Connection=True"
    }
  }
}
variables:
  azureSubscriptionEndpoint: Contoso
  DockerNamespace: contoso.azurecr.io
  DockerRepository: aspnetcore
  WebAppName: containersdemoapp

steps:
- task: AzureRMWebAppDeployment@4
  displayName: Azure App Service Deploy
  inputs:
    appType: webAppContainer
    ConnectedServiceName: $(azureSubscriptionEndpoint)
    WebAppName: $(WebAppName)
    DockerNamespace: $(DockerNamespace)
    DockerRepository: $(DockerRepository)
    DockerImageTag: $(Build.BuildId)
To deploy to a specific app type, set appType to any of the following accepted values: webApp (Web App on
Windows), webAppLinux (Web App on Linux), webAppContainer (Web App for Containers - Linux),
functionApp (Function App on Windows), functionAppLinux (Function App on Linux),
functionAppContainer (Function App for Containers - Linux), apiApp (API App), mobileApp (Mobile App). If
not mentioned, webApp is taken as the default value.
To enable any advanced deployment options, add the parameter enableCustomDeployment: true and include the following parameters as needed, as in the sketch below.
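For instance, a sketch of a Linux web app deployment with an explicit deployment method; the service connection and app name are placeholders, and the package and deployment-method input names are assumptions that should be checked against the task reference for your task version:

# Sketch: deploy a zip package to a Web App on Linux
- task: AzureRmWebAppDeployment@4
  inputs:
    ConnectedServiceName: 'MyAzureServiceConnection'   # placeholder service connection
    appType: webAppLinux
    WebAppName: 'my-linux-webapp'                      # placeholder app name
    packageForLinux: '$(System.DefaultWorkingDirectory)/**/*.zip'
    enableCustomDeployment: true
    DeploymentType: 'zipDeploy'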
Output Variables
Web App Hosted URL: Provide a name, such as FabrikamWebAppURL, for the variable populated with the Azure App Service hosted URL. The variable can be used as $(variableName.AppServiceApplicationUrl), for example $(FabrikamWebAppURL.AppServiceApplicationUrl), to refer to the hosted URL of the Azure App Service in subsequent tasks.
Usage notes
The task works with the Azure Resource Manager APIs only.
To ignore SSL errors, define a variable named VSTS_ARM_REST_IGNORE_SSL_ERRORS with value true in the release
pipeline.
For .NET apps targeting Web App on Windows, avoid deployment failure with the error ERROR_FILE_IN_USE by
ensuring that Rename locked files and Take App Offline settings are enabled. For zero downtime
deployment, use the slot swap option.
When deploying to an App Service that has Application Insights configured, and you have enabled Remove
additional files at destination , ensure you also enable Exclude files from the App_Data folder in order
to maintain the Application insights extension in a safe state. This is required because the Application Insights
continuous web job is installed into the App_Data folder.
Sample Post deployment script
The task provides an option to customize the deployment by providing a script that will run on the Azure App
Service after the app's artifacts have been successfully copied to the App Service. You can choose to provide either
an inline deployment script or the path and name of a script file in your artifact folder.
This is very useful when you want to restore your application dependencies directly on the App Service. Restoring
packages for Node, PHP, and Python apps helps to avoid timeouts when the application dependency results in a
large artifact being copied over from the agent to the Azure App Service.
An example of a deployment script is:
@echo off
if NOT exist requirements.txt (
echo No Requirements.txt found.
EXIT /b 0
)
if NOT exist "$(PYTHON_EXT)/python.exe" (
echo Python extension not available >&2
EXIT /b 1
)
echo Installing dependencies
call "$(PYTHON_EXT)/python.exe" -m pip install -U setuptools
if %errorlevel% NEQ 0 (
echo Failed to install setuptools >&2
EXIT /b 1
)
call "$(PYTHON_EXT)/python.exe" -m pip install -r requirements.txt
if %errorlevel% NEQ 0 (
echo Failed to install dependencies>&2
EXIT /b 1
)
Deployment methods
Several deployment methods are available in this task. Web Deploy (msdeploy.exe) is the default. To change the
deployment option, expand Additional Deployment Options and enable Select deployment method to
choose from additional package-based deployment options.
Based on the type of Azure App Service and agent, the task chooses a suitable deployment technology. The
different deployment technologies used by the task are:
Web Deploy
Kudu REST APIs
Container Registry
Zip Deploy
Run From Package
War Deploy
By default, the task tries to select the appropriate deployment technology based on the input package type, App
Service type, and agent operating system.
On non-Windows agents (for any App Service type), the task relies on Kudu REST APIs to deploy the app.
Web Deploy
Web Deploy (msdeploy.exe) can be used to deploy a Web App on Windows or a Function App to the Azure App
Service using a Windows agent. Web Deploy is feature-rich and offers options such as:
Rename locked files: Rename any file that is still in use by the web server by enabling the msdeploy flag
MSDEPLOY_RENAME_LOCKED_FILES=1 in the Azure App Service settings. This option, if set, enables
msdeploy to rename files that are locked during app deployment.
Remove additional files at destination: Deletes files in the Azure App Service that have no matching
files in the App Service artifact package or folder being deployed.
Exclude files from the App_Data folder: Prevent files in the App_Data folder (in the artifact package/folder being deployed) from being deployed to the Azure App Service.
Additional Web Deploy arguments: Arguments that will be applied when deploying the Azure App
Service. Example: -disableLink:AppPoolExtension -disableLink:ContentExtension . For more examples of Web
Deploy operation settings, see Web Deploy Operation Settings.
Install Web Deploy on the agent using the Microsoft Web Platform Installer. Web Deploy 3.5 must be installed
without the bundled SQL support. There is no need to choose any custom settings when installing Web Deploy.
Web Deploy is installed at C:\Program Files (x86)\IIS\Microsoft Web Deploy V3.
Kudu REST APIs
Kudu REST APIs work on both Windows and Linux automation agents when the target is a Web App on Windows,
Web App on Linux (built-in source), or Function App. The task uses Kudu to copy files to the Azure App service.
Container Registry
Works on both Windows and Linux automation agents when the target is a Web App for Containers. The task
updates the app by setting the appropriate container registry, repository, image name, and tag information. You
can also use the task to pass a startup command for the container image.
Zip Deploy
Expects a .zip deployment package and deploys the file contents to the wwwroot folder of the App Service or
Function App in Azure. This option overwrites all existing contents in the wwwroot folder. For more information,
see Zip deployment for Azure Functions.
Run From Package
Expects the same deployment package as Zip Deploy. However, instead of deploying files to the wwwroot folder,
the entire package is mounted by the Functions runtime and files in the wwwroot folder become read-only. For
more information, see Run your Azure Functions from a package file.
War Deploy
Expects a .war deployment package and deploys the file content to the wwwroot folder or webapps folder of the
App Service in Azure.
Troubleshooting
Error: Could not fetch access token for Azure. Verify if the Service Principal used is valid and not expired.
The task uses the service principal in the service connection to authenticate with Azure. If the service principal has
expired or does not have permissions to the App Service, the task fails with the specified error. Verify validity of the
service principal used and that it is present in the app registration. For more details, see Use Role-Based Access
Control to manage access to your Azure subscription resources. This blog post also contains more information
about using service principal authentication.
SSL error
To use a certificate in App Service, the certificate must be signed by a trusted certificate authority. If your web app
gives you certificate validation errors, you're probably using a self-signed certificate. Set a variable named
VSTS_ARM_REST_IGNORE_SSL_ERRORS to the value true in the build or release pipeline to resolve the error.
A release hangs for long time and then fails
This may be because there is insufficient capacity on your App Service Plan. To resolve this, you can scale up the
App Service instance to increase available CPU, RAM, and disk space or try with a different App Service plan.
5xx Error Codes
If you are seeing a 5xx error, then check the status of your Azure service.
Error: No package found with specified pattern
Check if the package mentioned in the task is published as an artifact in the build or a previous stage and
downloaded in the current job.
Error: Publish using zip deploy option is not supported for msBuild package type
Web packages created using MSBuild task (with default arguments) have a nested folder structure that can only be
deployed correctly by Web Deploy. Publish to zip deploy option can not be used to deploy those packages. To
convert the packaging structure, follow the below steps.
In Build Solution task, change the MSBuild Arguments to /p:DeployOnBuild=true
/p:DeployDefaultTarget=WebPublish /p:WebPublishMethod=FileSystem /p:DeleteExistingFiles=True
/p:publishUrl="$(System.DefaultWorkingDirectory)\WebAppContent"
Add Archive Task and change the inputs as follows:
Change Root folder or file to archive to $(System.DefaultWorkingDirectory)\WebAppContent
Web app deployment on Windows is successful but the app is not working
This may be because web.config is not present in your app. You can either add a web.config file to your source or auto-generate one using the Application and Configuration Settings of the task.
Click on the more button Generate web.config parameters for Python, Node.js, Go and Java apps to edit the parameters.
Select your application type from the drop down.
Click on OK. This will populate the web.config parameters required to generate web.config.
ERROR_FILE_IN_USE
When deploying .NET apps to Web App on Windows, deployment may fail with error code ERROR_FILE_IN_USE.
To resolve the error, ensure Rename locked files and Take App Offline options are enabled in the task. For zero
downtime deployments, use slot swap.
You can also use Run From Package deployment method to avoid resource locking.
Web Deploy Error
If you are using web deploy to deploy your app, in some error scenarios Web Deploy will show an error code in
the log. To troubleshoot a web deploy error see this.
Web app deployment on App Service Environment (ASE) is not working
Ensure that the Azure DevOps build agent is on the same VNET (subnet can be different) as the Internal Load
Balancer (ILB) of ASE. This will enable the agent to pull code from Azure DevOps and deploy to ASE.
If you are using Azure DevOps, the agent needn't be accessible from the internet; it needs only outbound access to connect to the Azure DevOps service.
If you are using TFS/Azure DevOps Server deployed in a Virtual Network, the agent can be completely isolated.
Build agent must be configured with the DNS configuration of the Web App it needs to deploy to. Since the
private resources in the Virtual Network don't have entries in Azure DNS, this needs to be added to the hosts
file on the agent machine.
If a self-signed certificate is used for the ASE configuration, the "-allowUntrusted" option needs to be set in the deploy task for MSDeploy. It is also recommended to set the variable VSTS_ARM_REST_IGNORE_SSL_ERRORS to true. If a certificate from a certificate authority is used for ASE configuration, this should not be necessary.
FAQs
How should I configure my service connection?
This task requires an Azure Resource Manager service connection.
How should I configure Web Job Deployment with Azure Application Insights?
When deploying to an App Service with Application Insights configured and you have enabled “Remove additional
files at destination”, then you also need to enable “Exclude files from the App_Data folder” in order to keep the app
insights extension in a safe state. This is required because App Insights continuous web job gets installed into the
App_Data folder.
How should I configure my agent if it is behind a proxy while deploying to App Service?
When your self-hosted agent requires a web proxy, you can inform the agent about the proxy during
configuration. This allows your agent to connect to Azure Pipelines or TFS through the proxy. Learn more about
running a self-hosted agent behind a web proxy
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure App Service Manage task
11/2/2020
Azure Pipelines
Use this task to start, stop, restart, slot swap, Swap with Preview, install site extensions, or enable continuous
monitoring for an Azure App Service.
YAML snippet
# Azure App Service manage
# Start, stop, restart, slot swap, slot delete, install site extensions or enable continuous monitoring for an Azure App Service
- task: AzureAppServiceManage@0
  inputs:
    azureSubscription:
    #action: 'Swap Slots' # Optional. Options: swap Slots, start Azure App Service, stop Azure App Service, restart Azure App Service, delete Slot, install Extensions, enable Continuous Monitoring, start All Continuous Webjobs, stop All Continuous Webjobs
    webAppName:
    #specifySlotOrASE: false # Optional
    #resourceGroupName: # Required when action == Swap Slots || Action == Delete Slot || SpecifySlot == True
    #sourceSlot: # Required when action == Swap Slots
    #swapWithProduction: true # Optional
    #targetSlot: # Required when action == Swap Slots && SwapWithProduction == False
    #preserveVnet: false # Optional
    #slot: 'production' # Required when action == Delete Slot || SpecifySlot == True
    #extensionsList: # Required when action == Install Extensions
    #outputVariable: # Optional
    #appInsightsResourceGroupName: # Required when action == Enable Continuous Monitoring
    #applicationInsightsResourceName: # Required when action == Enable Continuous Monitoring
    #applicationInsightsWebTestName: # Optional
Arguments
ARGUMENT | DESCRIPTION
SwapWithProduction (Swap with Production): (Optional) Select the option to swap the traffic of the source slot with production. If this option is not selected, then you will have to provide source and target slot names. Default value: true
Slot (Slot): (Required) Default value: production
OutputVariable (Output variable): (Optional) Provide the variable name for the local installation path for the selected extension. This field is now deprecated and will be removed. Use the LocalPathsForInstalledExtensions variable from the Output Variables section in subsequent tasks.
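As a sketch, swapping a staging slot into production with this task could look like the following (the service connection, app, and resource group names are placeholders):

- task: AzureAppServiceManage@0
  inputs:
    azureSubscription: 'MyAzureServiceConnection'   # placeholder service connection
    action: 'Swap Slots'
    webAppName: 'my-webapp'                          # placeholder app name
    resourceGroupName: 'my-resource-group'           # placeholder resource group
    sourceSlot: 'staging'
    swapWithProduction: true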
Azure App Service Settings task
Azure Pipelines
Use this task to configure App settings, connection strings and other general settings in bulk using JSON syntax on
your web app or any of its deployment slots. The task works on cross platform Azure Pipelines agents running
Windows, Linux or Mac. The task works for ASP.NET, ASP.NET Core, PHP, Java, Python, Go and Node.js based web
applications.
Arguments
PARAMETERS | DESCRIPTION
The following example YAML snippet deploys a web application to an Azure Web App running on Windows.
Example
variables:
  azureSubscription: Contoso
  WebApp_Name: sampleWebApp
  # To ignore SSL error uncomment the below variable
  # VSTS_ARM_REST_IGNORE_SSL_ERRORS: true

steps:
- task: AzureWebApp@1
  displayName: Azure Web App Deploy
  inputs:
    azureSubscription: $(azureSubscription)
    appName: $(WebApp_Name)
    package: $(System.DefaultWorkingDirectory)/**/*.zip

- task: AzureAppServiceSettings@1
  displayName: Azure App Service Settings
  inputs:
    azureSubscription: $(azureSubscription)
    appName: $(WebApp_Name)
    # To deploy the settings on a slot, provide the slot name as below. By default, the settings are applied to the actual Web App (Production slot)
    # slotName: staging
    appSettings: |
      [
        {
          "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
          "value": "$(Key)",
          "slotSetting": false
        },
        {
          "name": "MYSQL_DATABASE_NAME",
          "value": "$(DB_Name)",
          "slotSetting": false
        }
      ]
    generalSettings: |
      [
        {
          "name": "WEBAPP_NAME",
          "value": "$(WebApp_Name)",
          "slotSetting": false
        },
        {
          "name": "WEBAPP_PLAN_NAME",
          "value": "$(WebApp_PlanName)",
          "slotSetting": false
        }
      ]
    connectionStrings: |
      [
        {
          "name": "MysqlCredentials",
          "value": "$(MySQl_ConnectionString)",
          "type": "MySql",
          "slotSetting": false
        }
      ]
Open Source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure CLI task
11/2/2020
Azure Pipelines
Use this task to run a shell or batch script containing Azure CLI commands against an Azure subscription.
This task is used to run Azure CLI commands on cross-platform agents running on Linux, macOS, or Windows
operating systems.
What's new in Version 2.0
Supports running PowerShell and PowerShell Core scripts
PowerShell Core scripts work with cross-platform agents (Windows, Linux, or macOS); make sure the agent has PowerShell version 6 or higher
PowerShell scripts work only with Windows agents; make sure the agent has PowerShell version 5 or lower
Prerequisites
A Microsoft Azure subscription
Azure Resource Manager service connection to your Azure account
Microsoft hosted agents have Azure CLI pre-installed. However if you are using private agents, install Azure
CLI on the computer(s) that run the build and release agent. If an agent is already running on the machine
on which the Azure CLI is installed, restart the agent to ensure all the relevant stage variables are updated.
Task Inputs
PARAMETERS | DESCRIPTION
inlineScript (Inline Script): (Required) You can write your scripts inline here. When using a Windows agent, use PowerShell, PowerShell Core, or batch scripting; use PowerShell Core or shell scripting when using Linux-based agents. For batch files, use the prefix "call" before every azure command. You can also pass predefined and custom variables to this script using arguments.
Example for PowerShell/PowerShell Core/shell: az --version, az account show
Example for batch: call az --version, call az account show
useGlobalConfig (Use global Azure CLI configuration): (Optional) If this is false, this task will use its own separate Azure CLI configuration directory. This can be used to run Azure CLI tasks in parallel releases. Default value: false
failOnStandardError (Fail on Standard Error): (Optional) If this is true, this task will fail when any errors are written to the StandardError stream. Unselect the checkbox to ignore standard errors and rely on exit codes to determine the status. Default value: false
Example
The following YAML snippet lists the version of Azure CLI and gets the details of the subscription.
- task: AzureCLI@2
  displayName: Azure CLI
  inputs:
    azureSubscription: <Name of the Azure Resource Manager service connection>
    scriptType: ps
    scriptLocation: inlineScript
    inlineScript: |
      az --version
      az account show
Related tasks
Azure Resource Group Deployment
Azure Cloud Service Deployment
Azure Web App Deployment
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
Azure Cloud Service Deployment task
4/22/2020
Azure Pipelines
Use this task to deploy an Azure Cloud Service.
YAML snippet
# Azure Cloud Service deployment
# Deploy an Azure Cloud Service
- task: AzureCloudPowerShellDeployment@1
  inputs:
    azureClassicSubscription:
    #storageAccount: # Required when enableAdvancedStorageOptions == False
    serviceName:
    serviceLocation:
    csPkg:
    csCfg:
    #slotName: 'Production'
    #deploymentLabel: '$(Build.BuildNumber)' # Optional
    #appendDateTimeToLabel: false # Optional
    #allowUpgrade: true
    #simultaneousUpgrade: false # Optional
    #forceUpgrade: false # Optional
    #verifyRoleInstanceStatus: false # Optional
    #diagnosticStorageAccountKeys: # Optional
    #newServiceCustomCertificates: # Optional
    #newServiceAdditionalArguments: # Optional
    #newServiceAffinityGroup: # Optional
    #enableAdvancedStorageOptions: false
    #aRMConnectedServiceName: # Required when enableAdvancedStorageOptions == True
    #aRMStorageAccount: # Required when enableAdvancedStorageOptions == True
Arguments
ARGUMENT | DESCRIPTION
DeploymentLabel (Deployment label): (Optional) Specifies the label name for the new deployment. If not specified, a Globally Unique Identifier (GUID) is used. Default value: $(Build.BuildNumber)
VerifyRoleInstanceStatus (Verify role instance status): When selected, the task will wait until role instances are in the ready state.
NewServiceAffinityGroup (Affinity group): (Optional) When creating a new service, this affinity group will be used instead of the service location.
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure File Copy task
11/2/2020
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015.3
Use this task to copy files to Microsoft Azure storage blobs or virtual machines (VMs).
NOTE
This task is written in PowerShell and thus works only when run on Windows agents. If your pipelines require Linux agents
and need to copy files to an Azure Storage Account, consider running az storage blob commands in the Azure CLI task
as an alternative.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
The task is used to copy application files and other artifacts that are required in order to install the app, such as PowerShell scripts, PowerShell-DSC modules, and more.
NOTE
If you are using Azure File copy task version 3 or below refer to this.
When the target is Azure VMs, the files are first copied to an automatically generated Azure blob container and
then downloaded into the VMs. The container is deleted after the files have been successfully copied to the VMs.
The task uses AzCopy, the command-line utility built for fast copying of data from and into Azure storage accounts. Version 4 of the Azure File Copy task uses AzCopy V10.
To dynamically deploy Azure Resource Groups that contain virtual machines, use the Azure Resource Group
Deployment task. This task has a sample template that can perform the required operations to set up the WinRM
HTTPS protocol on the virtual machines, open the 5986 port in the firewall, and install the test certificate.
NOTE
If you are deploying to Azure Static Websites as a container in blob storage, you must use Version 2 or higher of the task
in order to preserve the $web container name.
The task supports authentication based on Azure Active Directory. Authentication using a service principal and
managed identity are available. For managed identities, only system-wide managed identity is supported.
NOTE
For authorization, you will have to provide access to the security principal. The level of authorization required is described here.
YAML snippet
# Azure file copy
# Copy files to Azure Blob Storage or virtual machines
- task: AzureFileCopy@4
  inputs:
    sourcePath:
    azureSubscription:
    destination: # Options: azureBlob, azureVMs
    storage:
    #containerName: # Required when destination == AzureBlob
    #blobPrefix: # Optional
    #resourceGroup: # Required when destination == AzureVMs
    #resourceFilteringMethod: 'machineNames' # Optional. Options: machineNames, tags
    #machineNames: # Optional
    #vmsAdminUserName: # Required when destination == AzureVMs
    #vmsAdminPassword: # Required when destination == AzureVMs
    #targetPath: # Required when destination == AzureVMs
    #additionalArgumentsForBlobCopy: # Optional
    #additionalArgumentsForVMCopy: # Optional
    #enableCopyPrerequisites: false # Optional
    #copyFilesInParallel: true # Optional
    #cleanTargetBeforeCopy: false # Optional
    #skipCACheck: true # Optional
    #sasTokenTimeOutInMinutes: # Optional
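A minimal blob-copy sketch using the inputs above (the storage account, container, and service connection names are placeholders):

- task: AzureFileCopy@4
  inputs:
    sourcePath: '$(Build.ArtifactStagingDirectory)/drop'
    azureSubscription: 'MyAzureServiceConnection'   # placeholder service connection
    destination: 'azureBlob'
    storage: 'mystorageaccount'                      # placeholder storage account
    containerName: 'releases'                        # placeholder container name
    blobPrefix: '$(Build.BuildNumber)'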
Arguments
ARGUMENT | DESCRIPTION
Related tasks
Azure Resource Group Deployment
Azure Cloud Service Deployment
Azure Web App Deployment
FAQ
What are the Azure PowerShell prerequisites for using this task?
The task requires Azure PowerShell to be installed on the machine running the automation agent. The
recommended version is 1.0.2, but the task will work with version 0.9.8 and higher. You can use the Azure
PowerShell Installer v1.0.2 to obtain this.
What are the WinRM prerequisites for this task?
The task uses Windows Remote Management (WinRM) HTTPS protocol to copy the files from the storage blob
container to the Azure VMs. This requires the WinRM HTTPS service to be configured on the VMs, and a suitable
certificate installed. Configure WinRM after virtual machine creation
If the VMs have been created without opening the WinRM HTTPS ports, follow these steps:
1. Configure an inbound access rule to allow HTTPS on port 5986 of each VM.
2. Disable UAC remote restrictions.
3. Specify the credentials for the task to access the VMs using an administrator-level login in the simple form
username without any domain part.
4. Install a certificate on the machine that runs the automation agent.
5. Set the Test Cer tificate parameter of the task if you are using a self-signed certificate.
What type of service connection should I choose?
For Azure Resource Manager storage accounts and Azure Resource Manager VMs, use an Azure Resource
Manager service connection type. See more details at Automating Azure Resource Group deployment
using a Service Principal.
While using an Azure Resource Manager service connection type, the task automatically filters
appropriate newer Azure Resource Manager storage accounts, and other fields. For example, the Resource
Group or cloud service, and the virtual machines.
How do I create a school or work account for use with this task?
A suitable account can be easily created for use in a service connection:
1. Use the Azure portal to create a new user account in Azure Active Directory.
2. Add the Azure Active Directory user account to the co-administrators group in your Azure subscription.
3. Sign into the Azure portal with this user account and change the password.
4. Use the username and password of this account in the service connection. Deployments will be processed
using this account.
If the task fails, will the copy resume?
Since AzCopy V10 does not support journal files, the task cannot resume the copy. You will have to run the task
again to copy all the files.
Are the log files and plan files cleaned after the copy?
The log and plan files are not deleted by the task. To explicitly clean up the files you can add a CLI step in the
workflow using this command.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure Function App task
11/2/2020
Azure Pipelines
Use the Azure Function App task to deploy Functions to Azure.
Arguments
PARAMETERS | DESCRIPTION
Example
variables:
  azureSubscription: Contoso
  # To ignore SSL error uncomment the below variable
  # VSTS_ARM_REST_IGNORE_SSL_ERRORS: true

steps:
- task: AzureFunctionApp@1
  displayName: Azure Function App Deploy
  inputs:
    azureSubscription: $(azureSubscription)
    appName: samplefunctionapp
    package: $(System.DefaultWorkingDirectory)/**/*.zip
To deploy Function on Linux, add the appType parameter and set it to appType: functionAppLinux . If not mentioned,
functionApp is taken as the default value.
To explicitly specify the deployment method as Zip Deploy, add the parameter deploymentMethod: zipDeploy. The other supported value for this parameter is runFromPackage. If not specified, auto is taken as the default value.
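Putting both parameters together, a sketch for a Linux function app deployed with Run From Package (the app name is a placeholder) would be:

- task: AzureFunctionApp@1
  inputs:
    azureSubscription: $(azureSubscription)
    appType: functionAppLinux
    appName: 'samplefunctionapp-linux'   # placeholder app name
    package: $(System.DefaultWorkingDirectory)/**/*.zip
    deploymentMethod: runFromPackage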
For an end-to-end walkthrough, see Build and deploy Java to Azure Functions for End-to-end CI/CD.
Deployment methods
Several deployment methods are available in this task. Auto is the default option.
To change the deployment option in designer task, expand Additional Deployment Options and enable Select
deployment method to choose from additional package-based deployment options.
Based on the type of Azure App Service and Azure Pipelines agent, the task chooses a suitable deployment
technology. The different deployment technologies used by the task are:
Kudu REST APIs
Zip Deploy
RunFromPackage
By default, the task tries to select the appropriate deployment technology given the input package, App Service type, and agent OS:
When a post-deployment script is provided, use Zip Deploy
When the App Service type is Web App on Linux, use Zip Deploy
If a War file is provided, use War Deploy
If a Jar file is provided, use Run From Zip
For all others, use Run From Package (via Zip Deploy)
On non-Windows agents (for any App Service type), the task relies on Kudu REST APIs to deploy the app.
Kudu REST APIs
Works on Windows as well as Linux automation agent when the target is a Web App on Windows or Web App on
Linux (built-in source) or Function App. The task uses Kudu to copy over files to the Azure App service.
Zip Deploy
Creates a .zip deployment package of the chosen Package or folder and deploys the file contents to the wwwroot
folder of the App Service name function app in Azure. This option overwrites all existing contents in the wwwroot
folder. For more information, see Zip deployment for Azure Functions.
RunFromPackage
Creates the same deployment package as Zip Deploy. However, instead of deploying files to the wwwroot folder,
the entire package is mounted by the Functions runtime. With this option, files in the wwwroot folder become
read-only. For more information, see Run your Azure Functions from a package file.
Troubleshooting
Error: Could not fetch access token for Azure. Verify if the Service Principal used is valid and not expired.
The task uses the service principal in the service connection to authenticate with Azure. If the service principal has
expired or does not have permissions to the App Service, the task fails with the specified error. Verify validity of the
service principal used and that it is present in the app registration. For more details, see Use Role-Based Access
Control to manage access to your Azure subscription resources. This blog post also contains more information
about using service principal authentication.
SSL error
To use a certificate in App Service, the certificate must be signed by a trusted certificate authority. If your web app
gives you certificate validation errors, you're probably using a self-signed certificate. Set a variable named
VSTS_ARM_REST_IGNORE_SSL_ERRORS to the value true in the build or release pipeline to resolve the error.
A release hangs for long time and then fails
This may be because there is insufficient capacity on your App Service Plan. To resolve this, you can scale up the
App Service instance to increase available CPU, RAM, and disk space or try with a different App Service plan.
5xx Error Codes
If you are seeing a 5xx error, then check the status of your Azure service.
Error: No package found with specified pattern
Check if the package mentioned in the task is published as an artifact in the build or a previous stage and
downloaded in the current job.
Error: Publish using zip deploy option is not supported for msBuild package type
Web packages created using MSBuild task (with default arguments) have a nested folder structure that can only be
deployed correctly by Web Deploy. Publish to zip deploy option can not be used to deploy those packages. To
convert the packaging structure, follow the below steps.
In Build Solution task, change the MSBuild Arguments to /p:DeployOnBuild=true
/p:DeployDefaultTarget=WebPublish /p:WebPublishMethod=FileSystem /p:DeleteExistingFiles=True
/p:publishUrl="$(System.DefaultWorkingDirectory)\WebAppContent"
Add Archive Task and change the inputs as follows:
Change Root folder or file to archive to $(System.DefaultWorkingDirectory)\WebAppContent
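A hedged YAML sketch of those two steps, using the arguments quoted above (the solution path and archive destination are placeholders):

- task: VSBuild@1
  inputs:
    solution: '**/*.sln'
    msbuildArgs: '/p:DeployOnBuild=true /p:DeployDefaultTarget=WebPublish /p:WebPublishMethod=FileSystem /p:DeleteExistingFiles=True /p:publishUrl="$(System.DefaultWorkingDirectory)\WebAppContent"'
- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: '$(System.DefaultWorkingDirectory)\WebAppContent'
    includeRootFolder: false
    archiveType: 'zip'
    archiveFile: '$(Build.ArtifactStagingDirectory)/WebApp.zip'   # placeholder archive destination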
Function app deployment on Windows is successful but the app is not working
This may be because web.config is not present in your app. You can either add a web.config file to your source or
auto-generate one using the Application and Configuration Settings of the task.
Click on the task and go to Generate web.config parameters for Python, Node.js, Go and Java apps.
Click on the more button Generate web.config parameters for Python, Node.js, Go and Java apps to edit the
parameters.
Select your application type from the drop down.
Click on OK. This will populate web.config parameters required to generate web.config.
FAQs
How should I configure my service connection?
This task requires an Azure Resource Manager service connection.
How should I configure Web Job Deployment with Azure Application Insights?
When deploying to an App Service with Application Insights configured and you have enabled “Remove additional
files at destination”, then you also need to enable “Exclude files from the App_Data folder” in order to keep the app
insights extension in a safe state. This is required because App Insights continuous web job gets installed into the
App_Data folder.
How should I configure my agent if it is behind a proxy while deploying to App Service?
When your self-hosted agent requires a web proxy, you can inform the agent about the proxy during configuration.
This allows your agent to connect to Azure Pipelines or TFS through the proxy. Learn more about running a self-
hosted agent behind a web proxy
Open Source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure Function App for Container task
11/2/2020
Azure Pipelines
Use this task to deploy an Azure Function on Linux using a custom image.
Task Inputs
PARAMETERS DESCRIPTION
Example
This example deploys Azure Functions on Linux using containers:
variables:
imageName: contoso.azurecr.io/azurefunctions-containers:$(build.buildId)
azureSubscription: Contoso
# To ignore SSL error uncomment the following variable
# VSTS_ARM_REST_IGNORE_SSL_ERRORS: true
steps:
- task: AzureFunctionAppContainer@1
displayName: Azure Function App on Container deploy
inputs:
azureSubscription: $(azureSubscription)
appName: functionappcontainers
imageName: $(imageName)
Troubleshooting
Error: Could not fetch access token for Azure. Verify if the Service Principal used is valid and not expired.
The task uses the service principal in the service connection to authenticate with Azure. If the service principal has
expired or does not have permissions to the App Service, the task fails with the specified error. Verify validity of the
service principal used and that it is present in the app registration. For more details, see Use Role-Based Access
Control to manage access to your Azure subscription resources. This blog post also contains more information
about using service principal authentication.
SSL error
To use a certificate in App Service, the certificate must be signed by a trusted certificate authority. If your web app
gives you certificate validation errors, you're probably using a self-signed certificate. Set a variable named
VSTS_ARM_REST_IGNORE_SSL_ERRORS to the value true in the build or release pipeline to resolve the error.
A release hangs for long time and then fails
This may be because there is insufficient capacity on your App Service Plan. To resolve this, you can scale up the
App Service instance to increase available CPU, RAM, and disk space or try with a different App Service plan.
5xx Error Codes
If you are seeing a 5xx error, then check the status of your Azure service.
FAQs
How should I configure my service connection?
This task requires an Azure Resource Manager service connection.
How should I configure Web Job Deployment with Azure Application Insights?
When deploying to an App Service with Application Insights configured and you have enabled “Remove additional
files at destination”, then you also need to enable “Exclude files from the App_Data folder” in order to keep the app
insights extension in a safe state. This is required because App Insights continuous web job gets installed into the
App_Data folder.
How should I configure my agent if it is behind a proxy while deploying to App Service?
When your self-hosted agent requires a web proxy, you can inform the agent about the proxy during configuration.
This allows your agent to connect to Azure Pipelines or TFS through the proxy. Learn more about running a self-
hosted agent behind a web proxy
Open Source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure Key Vault task
11/2/2020 • 3 minutes to read • Edit Online
Azure Pipelines
Overview
Use this task to download secrets such as authentication keys, storage account keys, data encryption keys, .PFX
files, and passwords from an Azure Key Vault instance. The task can be used to fetch the latest values of all or a
subset of secrets from the vault, and set them as variables that can be used in subsequent tasks of a pipeline. The
task is Node-based, and works with agents on Linux, macOS, and Windows.
Prerequisites
The task has the following Prerequisites:
An Azure subscription linked to Azure Pipelines or Team Foundation Server using the Azure Resource
Manager service connection.
An Azure Key Vault containing the secrets.
You can create a key vault:
In the Azure portal
By using Azure PowerShell
By using the Azure CLI
Add secrets to a key vault:
By using the PowerShell cmdlet Set-AzureKeyVaultSecret. If the secret does not exist, this cmdlet creates it. If
the secret already exists, this cmdlet creates a new version of that secret.
By using the Azure CLI. For example, to add a secret named SQLPassword with the value Pa$$w0rd, run
az keyvault secret set --vault-name <key-vault-name> --name SQLPassword --value 'Pa$$w0rd'
YAML snippet
# Azure Key Vault
# Download Azure Key Vault secrets
- task: AzureKeyVault@1
inputs:
azureSubscription:
keyVaultName:
secretsFilter: '*'
runAsPreJob: false # Azure DevOps Services only
Arguments
PARAMETER DESCRIPTION
ConnectedServiceName Azure Subscription (Required) Select the service connection for the Azure
subscription containing the Azure Key Vault instance, or create
a new connection. Learn more
KeyVaultName (Required) Select the name of the Azure Key Vault from which
Key Vault the secrets will be downloaded.
RunAsPreJob (Required) Run the task before job execution begins. Exposes
Make secrets available to whole job secrets to all tasks in the job, not just tasks that follow this
one.
Default value: false
PARAMETER DESCRIPTION
ConnectedServiceName Azure Subscription (Required) Select the service connection for the Azure
subscription containing the Azure Key Vault instance, or create
a new connection. Learn more
KeyVaultName (Required) Select the name of the Azure Key Vault from which
Key Vault the secrets will be downloaded.
NOTE
Values are retrieved as strings. For example, if there is a secret named connectionString , a task variable
connectionString is created with the latest value of the respective secret fetched from Azure key vault. This variable is
then available in subsequent tasks.
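As a minimal sketch of that flow (the service connection name, vault name, and the way the secret is consumed are assumptions for illustration; the script step assumes a bash-capable agent):
steps:
- task: AzureKeyVault@1
  inputs:
    azureSubscription: Contoso          # assumed Azure Resource Manager service connection name
    keyVaultName: contoso-vault         # assumed key vault name
    secretsFilter: connectionString
- script: |
    # The secret is available as the pipeline variable $(connectionString);
    # mapping it to an environment variable avoids echoing the value on the command line.
    echo "Fetched a connection string of length ${#DB_CONNECTION}"
  env:
    DB_CONNECTION: $(connectionString)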
If the value fetched from the vault is a certificate (for example, a PFX file), the task variable will contain the contents
of the PFX in string format. You can use the following PowerShell code to retrieve the PFX file from the task
variable:
# Decode the Base64-encoded PFX content stored in the task variable
$kvSecretBytes = [System.Convert]::FromBase64String($(PfxSecret))
# Import the bytes into an exportable certificate collection
$certCollection = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2Collection
$certCollection.Import($kvSecretBytes, $null, [System.Security.Cryptography.X509Certificates.X509KeyStorageFlags]::Exportable)
If the certificate file will be stored locally on the machine, it is good practice to encrypt it with a password before writing it to disk.
For more details, see Get started with Azure Key Vault certificates.
Contact Information
Contact [email protected] if you discover issues using the task, to share feedback about the
task, or to suggest new features that you would like to see.
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
Azure Monitor Alerts task
4/22/2020 • 2 minutes to read • Edit Online
Azure Pipelines
Use this task to configure alerts on available metrics for an Azure resource.
YAML snippet
# Azure Monitor alerts
# Configure alerts on available metrics for an Azure resource
- task: AzureMonitorAlerts@0
inputs:
azureSubscription:
resourceGroupName:
    #resourceType: 'Microsoft.Insights/components' # Options: microsoft.Insights/Components, microsoft.Web/Sites, microsoft.Storage/StorageAccounts, microsoft.Compute/VirtualMachines
resourceName:
alertRules:
#notifyServiceOwners: # Optional
#notifyEmails: # Optional
Arguments
ARGUMENT DESCRIPTION
ResourceGroupName (Required) Select the Azure Resource Group that contains the
Resource Group Azure resource where you want to configure an alert.
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure Database for MySQL Deployment task
5/3/2020 • 2 minutes to read • Edit Online
Azure Pipelines
Use this task to run your scripts and make changes to your Azure Database for MySQL. Note that this is an early preview
version.
YAML snippet
# Azure Database for MySQL deployment
# Run your scripts and make changes to your Azure Database for MySQL
- task: AzureMysqlDeployment@1
inputs:
azureSubscription:
serverName:
#databaseName: # Optional
sqlUsername:
sqlPassword:
#taskNameSelector: 'SqlTaskFile' # Optional. Options: sqlTaskFile, inlineSqlTask
#sqlFile: # Required when taskNameSelector == SqlTaskFile
#sqlInline: # Required when taskNameSelector == InlineSqlTask
#sqlAdditionalArguments: # Optional
#ipDetectionMethod: 'AutoDetect' # Options: autoDetect, iPAddressRange
#startIpAddress: # Required when ipDetectionMethod == IPAddressRange
#endIpAddress: # Required when ipDetectionMethod == IPAddressRange
#deleteFirewallRule: true # Optional
Arguments
ARGUMENT DESCRIPTION
TaskNameSelector (Optional) Select one of the options between Script File &
Type Inline Script.
SqlFile (Required) Full path of the script file on the automation agent
MySQL Script or on a UNC path accessible to the automation agent like,
\BudgetIT\DeployBuilds\script.sql . Also, predefined
system variables like, $(agent.releaseDirectory) can also
be used here. A file containing SQL statements can be used
here
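A filled-in sketch of running a script file against the database (the service connection, server, database, and file names below are placeholder assumptions):
- task: AzureMysqlDeployment@1
  inputs:
    azureSubscription: Contoso                              # assumed service connection name
    serverName: contoso-mysql.mysql.database.azure.com      # assumed server
    databaseName: inventory                                 # assumed database
    sqlUsername: mysqladmin@contoso-mysql
    sqlPassword: $(mysqlPassword)                           # secret pipeline variable
    taskNameSelector: SqlTaskFile
    sqlFile: $(System.DefaultWorkingDirectory)/scripts/schema.sql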
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Security and Compliance Assessment task
11/2/2020 • 2 minutes to read • Edit Online
Azure Policy allows you to assess and enforce resource compliance against defined IT policies. Use this task in a
gate to identify, analyze and evaluate the security risks, and determine the mitigation measures required to reduce
the risks.
Demands
This task can be used only in a gate. It is not supported as a step in a build or release pipeline.
YAML snippet
# Check Azure Policy compliance
# Security and compliance assessment for Azure Policy
- task: AzurePolicyCheckGate@0
inputs:
azureSubscription:
#resourceGroupName: # Optional
#resources: # Optional
Arguments
PARAMETERS DESCRIPTION
Resource name Select the name of the Azure resources for which you want to
check policy compliance.
Azure PowerShell task
11/2/2020 • 3 minutes to read • Edit Online
Azure Pipelines
Use this task to run a PowerShell script within an Azure environment. The Azure context is authenticated with the
provided Azure Resource Manager service connection.
YAML snippet
# Azure PowerShell
# Run a PowerShell script within an Azure environment
- task: AzurePowerShell@4
inputs:
#azureSubscription: Required. Name of Azure Resource Manager service connection
#scriptType: 'FilePath' # Optional. Options: filePath, inlineScript
#scriptPath: # Optional
    #inline: '# You can write your Azure PowerShell scripts inline here. # You can also pass predefined and custom variables to this script using arguments' # Optional
#scriptArguments: # Optional
#errorActionPreference: 'stop' # Optional. Options: stop, continue, silentlyContinue
#failOnStandardError: false # Optional
#azurePowerShellVersion: 'OtherVersion' # Required. Options: latestVersion, otherVersion
#preferredAzurePowerShellVersion: # Required when azurePowerShellVersion == OtherVersion
Arguments
ARGUMENT DESCRIPTION
FailOnStandardError (Optional) If this is true, this task will fail if any errors are
Fail on Standard Error written to the error pipeline, or if any data is written to the
Standard Error stream.
Default value: false
Samples
- task: AzurePowerShell@4
inputs:
azureSubscription: my-arm-service-connection
scriptType: filePath
scriptPath: $(Build.SourcesDirectory)\myscript.ps1
scriptArguments:
-Arg1 val1 `
-Arg2 val2 `
-Arg3 val3
azurePowerShellVersion: latestVersion
Troubleshooting
Script worked locally, but failed in the pipeline
This typically occurs when the service connection used in the pipeline has insufficient permissions to run the
script. Locally, the script runs with your credentials and would succeed as you may have the required access.
To resolve this issue, ensure that the service principal or authentication credentials have the required permissions. For
more information, see Use Role-Based Access Control to manage access to your Azure subscription resources.
Error: Could not find the modules: '' with Version: ''. If the module was recently installed, retry after restarting
the Azure Pipelines task agent
The Azure PowerShell task uses the Azure/AzureRM/Az PowerShell module to interact with the Azure subscription. This issue
occurs when the PowerShell module is not available on the hosted agent. For a particular task version, specify the
preferred Azure PowerShell version in the Azure PowerShell version options.
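For example, pinning a specific module version (the version number and inline script below are assumptions; use a version that is installed on your agent):
- task: AzurePowerShell@4
  inputs:
    azureSubscription: my-arm-service-connection
    scriptType: inlineScript
    inline: Get-AzResourceGroup                  # minimal script for illustration
    azurePowerShellVersion: OtherVersion
    preferredAzurePowerShellVersion: '3.1.0'     # assumption: a version available on the agent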
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure Resource Group Deployment task
11/2/2020 • 8 minutes to read • Edit Online
Azure Pipelines
Use this task to deploy, start, stop, and delete Azure Resource Groups.
YAML snippet
# Azure resource group deployment
# Deploy an Azure Resource Manager (ARM) template to a resource group and manage virtual machines
- task: AzureResourceGroupDeployment@2
inputs:
azureSubscription:
    #action: 'Create Or Update Resource Group' # Options: create Or Update Resource Group, select Resource Group, start, stop, stopWithDeallocate, restart, delete, deleteRG
resourceGroupName:
#location: # Required when action == Create Or Update Resource Group
#templateLocation: 'Linked artifact' # Options: linked Artifact, uRL Of The File
#csmFileLink: # Required when templateLocation == URL Of The File
#csmParametersFileLink: # Optional
#csmFile: # Required when TemplateLocation == Linked Artifact
#csmParametersFile: # Optional
#overrideParameters: # Optional
#deploymentMode: 'Incremental' # Options: Incremental, Complete, Validation
    #enableDeploymentPrerequisites: 'None' # Optional. Options: none, configureVMwithWinRM, configureVMWithDGAgent
#teamServicesConnection: # Required when enableDeploymentPrerequisites == ConfigureVMWithDGAgent
#teamProject: # Required when enableDeploymentPrerequisites == ConfigureVMWithDGAgent
#deploymentGroupName: # Required when enableDeploymentPrerequisites == ConfigureVMWithDGAgent
#copyAzureVMTags: true # Optional
#runAgentServiceAsUser: # Optional
    #userName: # Required when enableDeploymentPrerequisites == ConfigureVMWithDGAgent && RunAgentServiceAsUser == True
#password: # Optional
#outputVariable: # Optional
#deploymentName: # Optional
#deploymentOutputs: # Optional
#addSpnToEnvironment: false # Optional
Arguments
ARGUMENT DESCRIPTION
project (Required) Specify the Team project which has the Deployment
Team project Group defined in it.
Argument aliases: teamProject
password The password for the user to run the agent service on the
Password Windows VMs.
It is assumed that the password is the same for the specified user
on all the VMs.
It can accept variable defined in build or release pipelines as
$(passwordVariable) . You may mark variable as secret to
secure it.
For Linux VMs, a password is not required and will be ignored.
outputVariable (Optional) Provide a name for the variable for the resource
VM details for WinRM group. The variable can be used as $(variableName) to refer
to the resource group in subsequent tasks like in the
PowerShell on Target Machines task for deploying
applications. Valid only when the selected action is Create,
Update or Select , and required when an existing resource
group is selected.
deploymentOutputs (Optional) Provide a name for the variable for the output
Deployment outputs variable which will contain the outputs section of the current
deployment object in string format. You can use the
ConvertFrom-Json PowerShell cmdlet to parse the JSON
object and access the individual output values (see the
example after this table).
addSpnToEnvironment Adds service principal ID and key of the Azure endpoint you
Access service principal details in override parameters chose to the script's execution environment. You can use
these variables: $servicePrincipalId and
$servicePrincipalKey in your override parameters like -key
$servicePrincipalKey
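A minimal sketch of consuming deployment outputs as described above (the output variable name, the template output key storageAccountName, and the other input values are assumptions for illustration):
- task: AzureResourceGroupDeployment@2
  inputs:
    azureSubscription: Contoso                    # assumed service connection
    action: Create Or Update Resource Group
    resourceGroupName: contoso-rg
    location: West US 2
    templateLocation: Linked artifact
    csmFile: templates/azuredeploy.json           # assumed template path
    deploymentOutputs: armOutputs
- pwsh: |
    # Parse the JSON string captured from the ARM deployment outputs
    $outputs = $env:ARM_OUTPUTS | ConvertFrom-Json
    Write-Host "Storage account: $($outputs.storageAccountName.value)"
  env:
    ARM_OUTPUTS: $(armOutputs)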
Troubleshooting
Error: Internal Server Error
These issues are mostly transient in nature. There are multiple reasons why it could be happening:
One of the Azure services you're trying to deploy is undergoing maintenance in the region you're trying to
deploy to. Keep an eye on https://status.azure.com/ to check for downtimes of Azure services.
Azure Pipelines service itself is going through maintenance. Keep an eye out on https://status.dev.azure.com/
for downtimes.
However, we've seen some instances where this is due to an error in the ARM template, such as the Azure service
you're trying to deploy doesn't support the region you've chosen for the resource.
Error: Timeout
Timeout issues could be coming from two places:
Azure Pipelines Agent
Portal Deployment
You can identify if the timeout is from portal, by checking for the portal deployment link that'll be in the task logs.
If there's no link, this is likely due to Azure Pipelines agent. If there's a link, follow the link to see if there's a timeout
that has happened in the portal deployment.
Azure Pipelines Agent
If the issue is coming from the Azure Pipelines agent, you can increase the timeout by setting the timeoutInMinutes key
in the YAML to 0. Check out this article for more details:
https://docs.microsoft.com/azure/devops/pipelines/process/phases?tabs=yaml.
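For example, at the job level (0 lets the job run as long as the agent allows; the job name and task inputs are placeholder assumptions):
jobs:
- job: deploy_infrastructure
  timeoutInMinutes: 0
  steps:
  - task: AzureResourceGroupDeployment@2
    inputs:
      azureSubscription: Contoso           # assumed service connection
      action: Create Or Update Resource Group
      resourceGroupName: contoso-rg
      location: West US 2
      csmFile: templates/azuredeploy.json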
Portal Deployment
Check out this doc on how to identify if the error came from the Azure portal:
https://docs.microsoft.com/azure/azure-resource-manager/templates/deployment-history?tabs=azure-portal.
In case of portal deployment, try setting "timeoutInMinutes" in the ARM template to "0". If not specified, the value
assumed is 60 minutes. 0 makes sure the deployment will run for as long as it can to succeed.
This could also be happening because of transient issues in the system. Keep an eye on
https://status.dev.azure.com/ to check if there's a downtime in Azure Pipelines service.
Error: Azure Resource Manager (ARM) template failed validation
This issue happens mostly because of an invalid parameter in the ARM Template, such as an unsupported SKU or
Region. If the validation has failed, please check the error message. It should point you to the resource and
parameter that is invalid.
In addition, refer to this article regarding structure and syntax of ARM Templates:
https://docs.microsoft.com/azure/azure-resource-manager/templates/template-syntax.
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure SQL Database Deployment task
11/2/2020 • 5 minutes to read • Edit Online
Azure Pipelines
Use this task to deploy to Azure SQL DB using a DACPAC or run scripts using SQLCMD.
IMPORTANT
This task is supported only in a Windows environment. If you are trying to use Azure Active Directory (Azure AD) integrated
authentication, you must create a private agent. Azure AD integrated authentication is not supported for hosted agents.
YAML snippet
# Azure SQL Database deployment
# Deploy an Azure SQL Database using DACPAC or run scripts using SQLCMD
- task: SqlAzureDacpacDeployment@1
inputs:
    #azureConnectionType: 'ConnectedServiceNameARM' # Optional. Options: connectedServiceName, connectedServiceNameARM
    #azureClassicSubscription: # Required when azureConnectionType == ConnectedServiceName
    #azureSubscription: # Required when azureConnectionType == ConnectedServiceNameARM
    #authenticationType: 'server' # Options: server, aadAuthenticationPassword, aadAuthenticationIntegrated, connectionString
    #serverName: # Required when authenticationType == Server || AuthenticationType == AadAuthenticationPassword || AuthenticationType == AadAuthenticationIntegrated
    #databaseName: # Required when authenticationType == Server || AuthenticationType == AadAuthenticationPassword || AuthenticationType == AadAuthenticationIntegrated
    #sqlUsername: # Required when authenticationType == Server
    #sqlPassword: # Required when authenticationType == Server
    #aadSqlUsername: # Required when authenticationType == AadAuthenticationPassword
    #aadSqlPassword: # Required when authenticationType == AadAuthenticationPassword
    #connectionString: # Required when authenticationType == ConnectionString
    #deployType: 'DacpacTask' # Options: dacpacTask, sqlTask, inlineSqlTask
    #deploymentAction: 'Publish' # Required when deployType == DacpacTask. Options: publish, extract, export, import, script, driftReport, deployReport
    #dacpacFile: # Required when deploymentAction == Publish || DeploymentAction == Script || DeploymentAction == DeployReport
#bacpacFile: # Required when deploymentAction == Import
#sqlFile: # Required when deployType == SqlTask
#sqlInline: # Required when deployType == InlineSqlTask
#publishProfile: # Optional
#additionalArguments: # Optional
#sqlAdditionalArguments: # Optional
#inlineAdditionalArguments: # Optional
#ipDetectionMethod: 'AutoDetect' # Options: autoDetect, iPAddressRange
#startIpAddress: # Required when ipDetectionMethod == IPAddressRange
#endIpAddress: # Required when ipDetectionMethod == IPAddressRange
#deleteFirewallRule: true # Optional
Arguments
ARGUMENT DESCRIPTION
DatabaseName (Required) Name of the Azure SQL Database, where the files
Database will be deployed.
TaskNameSelector (Optional) Specify the type of artifact, SQL DACPAC file, SQL
Deploy Type Script file, or Inline SQL Script.
Argument alias: deployType
Default value: DacpacTask
DeploymentAction (Required) Choose one of the SQL Actions from the list.
Action Publish, Extract, Export, Import, Script, Drift Report, Deploy
Report.
For more details, refer to the linked documentation.
Default value: Publish
ARGUMENT DESCRIPTION
SqlFile (Required when Deploy Type is SQL Script file) Location of the
SQL Script SQL script file on the automation agent or on a UNC path
accessible to the automation agent like,
\BudgetIT\Web\Deploy\FabrikamDB.sql . Predefined system
variables like, $(agent.releaseDirectory) can also be used
here.
SqlInline (Required when Deploy Type is Inline SQL Script) Enter the
Inline SQL Script SQL script to execute on the Database selected above.
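A filled-in sketch of publishing a DACPAC with SQL Server authentication (the connection, server, database, and file names below are placeholder assumptions):
- task: SqlAzureDacpacDeployment@1
  inputs:
    azureSubscription: Contoso                      # assumed Azure Resource Manager service connection
    authenticationType: server
    serverName: contoso-sql.database.windows.net    # assumed server
    databaseName: FabrikamDB                        # assumed database
    sqlUsername: sqladmin
    sqlPassword: $(sqlPassword)                     # secret pipeline variable
    deployType: DacpacTask
    deploymentAction: Publish
    dacpacFile: $(Build.ArtifactStagingDirectory)/FabrikamDB.dacpac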
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure Web App task
11/2/2020 • 8 minutes to read • Edit Online
Azure Pipelines
Use this task to deploy web applications to Azure App Service.
Arguments
PARAMETERS DESCRIPTION
The following example YAML snippet deploys a web application to the Azure Web App service running on
Windows.
Example
variables:
azureSubscription: Contoso
# To ignore SSL error uncomment the below variable
# VSTS_ARM_REST_IGNORE_SSL_ERRORS: true
steps:
- task: AzureWebApp@1
displayName: Azure Web App Deploy
inputs:
azureSubscription: $(azureSubscription)
appName: samplewebapp
package: $(System.DefaultWorkingDirectory)/**/*.zip
To deploy a Web App on Linux, add the appType parameter and set it to appType: webAppLinux .
To specify the deployment method as Zip Deploy, add the parameter deploymentMethod: zipDeploy . The other
supported value for this parameter is runFromPackage . If not specified, auto is used as the default value.
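Putting those options together, a sketch for a Linux web app (the app name and package path are assumptions for illustration):
- task: AzureWebApp@1
  displayName: Azure Web App Deploy (Linux)
  inputs:
    azureSubscription: $(azureSubscription)
    appName: samplewebapp-linux                 # assumed app name
    appType: webAppLinux
    deploymentMethod: zipDeploy
    package: $(System.DefaultWorkingDirectory)/**/*.zip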
Deployment methods
Several deployment methods are available in this task. Auto is the default option.
To change the deployment option in designer task, expand Additional Deployment Options and enable Select
deployment method to choose from additional package-based deployment options.
Based on the type of Azure App Service and Azure Pipelines agent, the task chooses a suitable deployment
technology. The different deployment technologies used by the task are:
Kudu REST APIs
Zip Deploy
RunFromPackage
By default the task tries to select the appropriate deployment technology given the input package, app service type
and agent OS.
When the App Service type is Web App on Linux App, use Zip Deploy
If War file is provided, use War Deploy
If Jar file is provided, use Run From package
For all others, use Run From Zip (via Zip Deploy)
On non-Windows agent (for any App service type), the task relies on Kudu REST APIs to deploy the Web App.
Kudu REST APIs
Works on Windows as well as Linux automation agent when the target is Web App on Windows or Web App on
Linux (built-in source) or Function App. The task uses Kudu to copy files to the Azure App service.
Zip Deploy
Creates a .zip deployment package of the chosen package or folder and deploys the file contents to the wwwroot
folder of the App Service in Azure. This option overwrites all existing contents in the wwwroot
folder. For more information, see Zip deployment for Azure Functions.
RunFromPackage
Creates the same deployment package as Zip Deploy. However, instead of deploying files to the wwwroot folder,
the entire package is mounted by the Functions runtime. With this option, files in the wwwroot folder become
read-only. For more information, see Run your Azure Functions from a package file.
Troubleshooting
Error: Could not fetch access token for Azure. Verify if the Service Principal used is valid and not expired.
The task uses the service principal in the service connection to authenticate with Azure. If the service principal has
expired or does not have permissions to the App Service, the task fails with the specified error. Verify validity of the
service principal used and that it is present in the app registration. For more details, see Use Role-Based Access
Control to manage access to your Azure subscription resources. This blog post also contains more information
about using service principal authentication.
SSL error
To use a certificate in App Service, the certificate must be signed by a trusted certificate authority. If your web app
gives you certificate validation errors, you're probably using a self-signed certificate. Set a variable named
VSTS_ARM_REST_IGNORE_SSL_ERRORS to the value true in the build or release pipeline to resolve the error.
A release hangs for long time and then fails
This may be because there is insufficient capacity on your App Service Plan. To resolve this, you can scale up the
App Service instance to increase available CPU, RAM, and disk space or try with a different App Service plan.
5xx Error Codes
If you are seeing a 5xx error, then check the status of your Azure service.
Error: No package found with specified pattern
Check if the package mentioned in the task is published as an artifact in the build or a previous stage and
downloaded in the current job.
Error: Publish using zip deploy option is not supported for msBuild package type
Web packages created by the MSBuild task (with default arguments) have a nested folder structure that can only be
deployed correctly by Web Deploy. The zip deploy publish option cannot be used to deploy those packages. To
convert the packaging structure, follow these steps:
In Build Solution task, change the MSBuild Arguments to /p:DeployOnBuild=true
/p:DeployDefaultTarget=WebPublish /p:WebPublishMethod=FileSystem /p:DeleteExistingFiles=True
/p:publishUrl="$(System.DefaultWorkingDirectory)\WebAppContent"
Add Archive Task and change the inputs as follows:
Change Root folder or file to archive to $(System.DefaultWorkingDirectory)\WebAppContent
Web app deployment on Windows is successful but the app is not working
This may be because web.config is not present in your app. You can either add a web.config file to your source or
auto-generate one using the Application and Configuration Settings of the task.
Select the task and go to Generate web.config parameters for Python, Node.js, Go and Java apps.
Select the more button next to Generate web.config parameters for Python, Node.js, Go and Java apps to edit the
parameters.
FAQs
How should I configure my service connection?
This task requires an Azure Resource Manager service connection.
How should I configure Web Job Deployment with Azure Application Insights?
When deploying to an App Service with Application Insights configured and you have enabled “Remove additional
files at destination”, then you also need to enable “Exclude files from the App_Data folder” in order to keep the app
insights extension in a safe state. This is required because App Insights continuous web job gets installed into the
App_Data folder.
How should I configure my agent if it is behind a proxy while deploying to App Service?
When your self-hosted agent requires a web proxy, you can inform the agent about the proxy during
configuration. This allows your agent to connect to Azure Pipelines or TFS through the proxy. Learn more about
running a self-hosted agent behind a web proxy
Open Source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure virtual machine scale set Deployment task
4/10/2020 • 2 minutes to read • Edit Online
Azure Pipelines
Use this task to deploy a virtual machine scale set image.
YAML snippet
# Azure VM scale set deployment
# Deploy a virtual machine scale set image
- task: AzureVmssDeployment@0
inputs:
azureSubscription:
#action: 'Update image' # Options: update Image, configure Application Startup
vmssName:
vmssOsType: # Options: windows, linux
imageUrl:
#customScriptsDirectory: # Optional
#customScript: # Optional
#customScriptArguments: # Optional
#customScriptsStorageAccount: # Optional
#skipArchivingCustomScripts: # Optional
Arguments
ARGUMENT DESCRIPTION
Azure subscription (Required) Select the Azure Resource Manager subscription for
the scale set.
Virtual machine scale set name (Required) Name of virtual machine scale set which you want
to update by using either a VHD image or by using Custom
script VM extension.
Custom script directory (Optional) Path to directory containing custom script(s) that
will be run by using Custom Script VM extension. The
extension approach is useful for post deployment
configuration, application/software installation, or any other
application configuration/management task. For example: the
script can set a machine level stage variable which the
application uses, like database connection string.
Command (Optional) The script that will be run by using Custom Script
VM extension. This script can invoke other scripts in the
directory. The script will be invoked with arguments passed
below.
This script, in conjunction with such arguments, can be used to
execute commands. For example:
1. Update-DatabaseConnectionStrings.ps1 -clusterType dev -
user $(dbUser) -password $(dbUserPwd) will update
connection string in web.config of web application.
2. install-secrets.sh --key-vault-type prod -key
serviceprincipalkey will create an encrypted file containing
service principal key.
Azure storage account where custom scripts will be uploaded (Optional) The Custom Script Extension downloads and
executes scripts provided by you on each virtual machine in
the virtual machine scale set. These scripts will be stored in the
storage account specified here. Specify a pre-existing ARM
storage account.
Skip Archiving custom scripts (Optional) By default, this task creates a compressed archive of
directory containing custom scripts. This improves
performance and reliability while uploading to Azure storage. If
not selected, archiving will not be done and all files will be
individually uploaded.
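A filled-in sketch that updates a scale set image and runs a post-deployment script (every name, URL, and argument here is a placeholder assumption, not a value from this article):
- task: AzureVmssDeployment@0
  inputs:
    azureSubscription: Contoso                       # assumed service connection
    action: Update image
    vmssName: contoso-vmss
    vmssOsType: linux
    imageUrl: https://contosostorage.blob.core.windows.net/vhds/contoso-image.vhd
    customScriptsDirectory: $(System.DefaultWorkingDirectory)/scripts
    customScript: install-secrets.sh
    customScriptArguments: --key-vault-type prod -key serviceprincipalkey
    customScriptsStorageAccount: contosostorage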
CONTROL OPTIONS
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Azure Web App for Container task
11/2/2020 • 3 minutes to read • Edit Online
Azure Pipelines
Use this task to deploy Web Apps, Azure Functions, and WebJobs to Azure App Services using a custom Docker
image.
Task Inputs
PARAMETERS DESCRIPTION
Example
This example deploys a Web App to Azure App Service using a custom container image:
variables:
imageName: contoso.azurecr.io/aspnetcore:$(build.buildId)
azureSubscription: Contoso
# To ignore SSL error uncomment the following variable
# VSTS_ARM_REST_IGNORE_SSL_ERRORS: true
steps:
- task: AzureWebAppContainer@1
displayName: Azure Web App on Container Deploy
inputs:
appName: webappforcontainers
azureSubscription: $(azureSubscription)
imageName: $(imageName)
Troubleshooting
Error: Could not fetch access token for Azure. Verify if the Service Principal used is valid and not expired.
The task uses the service principal in the service connection to authenticate with Azure. If the service principal has
expired or does not have permissions to the App Service, the task fails with the specified error. Verify validity of the
service principal used and that it is present in the app registration. For more details, see Use Role-Based Access
Control to manage access to your Azure subscription resources. This blog post also contains more information
about using service principal authentication.
SSL error
To use a certificate in App Service, the certificate must be signed by a trusted certificate authority. If your web app
gives you certificate validation errors, you're probably using a self-signed certificate. Set a variable named
VSTS_ARM_REST_IGNORE_SSL_ERRORS to the value true in the build or release pipeline to resolve the error.
A release hangs for long time and then fails
This may be because there is insufficient capacity on your App Service Plan. To resolve this, you can scale up the
App Service instance to increase available CPU, RAM, and disk space or try with a different App Service plan.
5xx Error Codes
If you are seeing a 5xx error, then check the status of your Azure service.
FAQs
How should I configure my service connection?
This task requires an Azure Resource Manager service connection.
How should I configure Web Job Deployment with Azure Application Insights?
When deploying to an App Service with Application Insights configured and you have enabled “Remove additional
files at destination”, then you also need to enable “Exclude files from the App_Data folder” in order to keep the app
insights extension in a safe state. This is required because App Insights continuous web job gets installed into the
App_Data folder.
How should I configure my agent if it is behind a proxy while deploying to App Service?
When your self-hosted agent requires a web proxy, you can inform the agent about the proxy during configuration.
This allows your agent to connect to Azure Pipelines or TFS through the proxy. Learn more about running a self-
hosted agent behind a web proxy
Open Source
This task is open source on GitHub. Feedback and contributions are welcome.
Build Machine Image task
4/10/2020 • 3 minutes to read • Edit Online
Azure Pipelines
Use this task to build a machine image using Packer. This image can be used for Azure Virtual machine scale set
deployment.
YAML snippet
# Build machine image
# Build a machine image using Packer, which may be used for Azure Virtual machine scale set deployment
- task: PackerBuild@1
inputs:
#templateType: 'builtin' # Options: builtin, custom
#customTemplateLocation: # Required when templateType == Custom
#customTemplateParameters: '{}' # Optional
connectedServiceName:
#isManagedImage: true
#managedImageName: # Required when isManagedImage == True
location:
storageAccountName:
azureResourceGroup:
#baseImageSource: 'default' # Options: default, customVhd
    #baseImage: 'MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:windows' # Required when baseImageSource == Default. Options: microsoftWindowsServer:WindowsServer:2012-R2-Datacenter:Windows, microsoftWindowsServer:WindowsServer:2016-Datacenter:Windows, microsoftWindowsServer:WindowsServer:2012-Datacenter:Windows, microsoftWindowsServer:WindowsServer:2008-R2-SP1:Windows, canonical:UbuntuServer:14.04.4-LTS:Linux, canonical:UbuntuServer:16.04-LTS:Linux, redHat:RHEL:7.2:Linux, redHat:RHEL:6.8:Linux, openLogic:CentOS:7.2:Linux, openLogic:CentOS:6.8:Linux, credativ:Debian:8:Linux, credativ:Debian:7:Linux, sUSE:OpenSUSE-Leap:42.2:Linux, sUSE:SLES:12-SP2:Linux, sUSE:SLES:11-SP4:Linux
    #customImageUrl: # Required when baseImageSource == CustomVhd
    #customImageOSType: 'windows' # Required when baseImageSource == CustomVhd. Options: windows, linux
packagePath:
deployScriptPath:
#deployScriptArguments: # Optional
#additionalBuilderParameters: '{vm_size:Standard_D3_v2}' # Optional
#skipTempFileCleanupDuringVMDeprovision: true # Optional
#packerVersion: # Optional
#imageUri: # Optional
#imageId: # Optional
Arguments
ARGUMENT DESCRIPTION
Packer template (Required) Select whether you want the task to auto generate
Packer template or use custom template provided by you.
Azure subscription (Required) Select the Azure Resource Manager subscription for
baking and storing the machine image.
Storage location (Required) Location for storing the built machine image. This
location will also be used to create a temporary VM for the
purpose of building image.
Storage account (Required) Storage account for storing the built machine
image. This storage account must be pre-existing in the
location selected.
Resource group (Required) Azure Resource group that contains the selected
storage account.
Base image source (Required) Select the source of base image. You can either
choose from a curated gallery of OS images or provide url of
your custom image.
Base image (Required) Choose from curated list of OS images. This will be
used for installing pre-requisite(s) and application(s) before
capturing machine image.
Base image URL (Required) Specify url of base image. This will be used for
installing pre-requisite(s) and application(s) before capturing
machine image.
Deployment Package (Required) Specify the path for deployment package directory
relative to $(System.DefaultWorkingDirectory). Supports
minimatch pattern. Example path:
FrontendWebApp//GalleryApp
Additional Builder parameters (Optional) In auto generated Packer template mode the task
creates a Packer template with an Azure builder. This builder is
used to generate a machine image. You can add keys to the
Azure builder to customize the generated Packer template. For
example setting ssh_tty=true in case you are using a CentOS
base image and you need to have a tty to run sudo.
To view/edit the additional parameters in a grid, click on “…”
next to text box.
Skip temporary file cleanup during deprovision (Optional) During deprovisioning of VM, skip clean-up of
temporary files uploaded to VM. Refer here
Image URL (Optional) Provide a name for the output variable which will
store generated machine image url.
CONTROL OPTIONS
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Chef task
4/10/2020 • 2 minutes to read • Edit Online
Azure Pipelines
Use this task to deploy to Chef environments by editing environment attributes.
YAML snippet
# Chef
# Deploy to Chef environments by editing environment attributes
- task: Chef@1
inputs:
connectedServiceName:
environment:
attributes:
#chefWaitTime: '30'
Arguments
ARGUMENT DESCRIPTION
Environment Attributes (Required) Specify the value of the leaf node attribute(s) to be
updated. Example. { "default_attributes.connectionString" :
"$(connectionString)", "override_attributes.buildLocation" :
"https:\//sample.blob.core.windows.net/build" }. Task fails if the
leaf node does not exist.
Wait Time (Required) The amount of time (in minutes) to wait for this
task to complete. Default value: 30 minutes
CONTROL OPTIONS
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Chef Knife task
4/10/2020 • 2 minutes to read • Edit Online
Azure Pipelines
Use this task to run scripts with Knife commands on your Chef workstation.
YAML snippet
# Chef Knife
# Run scripts with Knife commands on your Chef workstation
- task: ChefKnife@1
inputs:
connectedServiceName:
scriptPath:
#scriptArguments: # Optional
Arguments
ARGUMENT DESCRIPTION
Script Path (Required) Path of the script. Should be fully qualified path or
relative to the default working directory.
CONTROL OPTIONS
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Copy Files Over SSH task
11/2/2020 • 3 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use this task to copy files from a source folder to a target folder on a remote machine over SSH.
This task allows you to connect to a remote machine using SSH and copy files matching a set of minimatch
patterns from specified source folder to target folder on the remote machine. Supported protocols for file transfer
are SFTP and SCP via SFTP. In addition to Linux, macOS is partially supported (see FAQ).
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
Prerequisites
The task supports use of an SSH key pair to connect to the remote machine(s).
The public key must be pre-installed or copied to the remote machine(s).
YAML snippet
# Copy files over SSH
# Copy files or build artifacts to a remote machine over SSH
- task: CopyFilesOverSSH@0
inputs:
sshEndpoint:
#sourceFolder: # Optional
#contents: '**'
#targetFolder: # Optional
#cleanTargetFolder: false # Optional
#overwrite: true # Optional
#failOnEmptySource: false # Optional
#flattenFolders: false # Optional
Arguments
ARGUMENT DESCRIPTION
Source folder The source folder for the files to copy to the remote machine.
If omitted, the root of the repository is used. Names
containing wildcards such as *.zip are not supported. Use
variables if files are not in the repository. Example:
$(Agent.BuildDirectory)
Target folder Target folder on the remote machine to where files will be
copied. Example: /home/user/MySite . Preface with a tilde (~ )
to specify the user's home directory.
Advanced - Clean target folder If this option is selected, all existing files in the target folder
will be deleted before copying.
Advanced - Overwrite If this option is selected (the default), existing files in the
target folder will be replaced.
Advanced - Flatten folders If this option is selected, the folder structure is not preserved
and all the files will be copied into the specified target folder
on the remote machine.
Supported algorithms
Key pair algorithms
RSA
DSA
Encryption algorithms
aes256-cbc
aes192-cbc
aes128-cbc
blowfish-cbc
3des-cbc
arcfour256
arcfour128
cast128-cbc
arcfour
For OpenSSL v1.0.1 and higher (on agent):
aes256-ctr
aes192-ctr
aes128-ctr
For OpenSSL v1.0.1 and higher, NodeJS v0.11.12 and higher (on agent):
aes128-gcm
[email protected]
aes256-gcm
[email protected]
See also
Install SSH Key task
SSH task
Blog post SSH build task
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
What key formats are supported for the SSH tasks?
The Azure Pipelines SSH tasks use the Node.js ssh2 package for SSH connections. Ensure that you are using the
latest version of the SSH tasks. Older versions may not support the OpenSSH key format.
If you run into an "Unsupported key format" error, then you may need to add the -m PEM flag to your ssh-keygen
command so that the key is in a supported format.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
Is this task supported for target machines running operating systems other than Linux?
This task is intended for target machines running Linux.
For copying files to a macOS machine, this task may be used, but authenticating with a password is not
supported.
For copying files to a Windows machine, consider using Windows Machine File Copy.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Docker task
11/7/2020 • 4 minutes to read • Edit Online
Use this task to build and push Docker images to any container registry using a Docker registry service
connection.
Overview
The following are the key benefits of using the Docker task compared to invoking the docker client binary directly in a script:
Integration with Docker registry service connection - The task makes it easy to use a Docker
registry service connection for connecting to any container registry. Once logged in, the user can author
follow-up tasks to execute any tasks/scripts by leveraging the login already done by the Docker task. For
example, you can use the Docker task to sign in to any Azure Container Registry and then use a
subsequent task/script to build and push an image to this registry.
Metadata added as labels - The task adds traceability-related metadata to the image in the form of the
following labels -
com.azure.dev.image.build.buildnumber
com.azure.dev.image.build.builduri
com.azure.dev.image.build.definitionname
com.azure.dev.image.build.repository.name
com.azure.dev.image.build.repository.uri
com.azure.dev.image.build.sourcebranchname
com.azure.dev.image.build.sourceversion
com.azure.dev.image.release.definitionname
com.azure.dev.image.release.releaseid
com.azure.dev.image.release.releaseweburl
com.azure.dev.image.system.teamfoundationcollectionuri
com.azure.dev.image.system.teamproject
Task Inputs
PARAMETERS DESCRIPTION
Dockerfile (Optional) Path to the Dockerfile. The task will use the first
Dockerfile dockerfile it finds to build the image.
Default value: **/Dockerfile
Login
The following YAML snippet shows a container registry login using a Docker registry service connection:
- task: Docker@2
displayName: Login to ACR
inputs:
command: login
containerRegistry: dockerRegistryServiceConnection1
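A buildAndPush step that omits containerRegistry pushes to every registry that earlier login steps have authenticated. A minimal sketch of that flow (the second service connection name, the repository, and the tags are assumptions for illustration):
steps:
- task: Docker@2
  displayName: Login to ACR
  inputs:
    command: login
    containerRegistry: dockerRegistryServiceConnection1
- task: Docker@2
  displayName: Login to second registry
  inputs:
    command: login
    containerRegistry: dockerRegistryServiceConnection2
- task: Docker@2
  displayName: Build and push
  inputs:
    command: buildAndPush
    repository: contosoRepository
    tags: |
      tag1
      tag2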
In the above snippet, the images contosoRepository:tag1 and contosoRepository:tag2 are built and pushed to
the container registries corresponding to dockerRegistryServiceConnection1 and
dockerRegistryServiceConnection2 .
If one wants to build and push to a specific authenticated container registry instead of building and pushing to all
authenticated container registries at once, the containerRegistry input can be explicitly specified along with
command: buildAndPush as shown below -
steps:
- task: Docker@2
displayName: Build and Push
inputs:
command: buildAndPush
containerRegistry: dockerRegistryServiceConnection1
repository: contosoRepository
tags: |
tag1
tag2
Logout
The following YAML snippet shows a container registry logout using a Docker registry service connection:
- task: Docker@2
displayName: Logout of ACR
inputs:
command: logout
containerRegistry: dockerRegistryServiceConnection1
Start/stop
This task can also be used to control job and service containers. This usage is uncommon, but occasionally used
for unique circumstances.
resources:
containers:
- container: builder
image: ubuntu:18.04
steps:
- script: echo "I can run inside the container (it starts by default)"
target:
container: builder
- task: Docker@2
inputs:
command: stop
container: builder
# any task beyond this point would not be able to target the builder container
# because it's been stopped
steps:
- task: Docker@2
displayName: Login to ACR
inputs:
command: login
containerRegistry: dockerRegistryServiceConnection1
- task: Docker@2
displayName: Build
inputs:
command: build
repository: contosoRepository
tags: tag1
arguments: --secret id=mysecret,src=mysecret.txt
NOTE
The arguments input is evaluated for all commands except buildAndPush . As buildAndPush is a convenience command
( build followed by push ), arguments input is ignored for this command.
Troubleshooting
Why does Docker task ignore arguments passed to buildAndPush command?
Docker task configured with buildAndPush command ignores the arguments passed since they become
ambiguous to the build and push commands that are run internally. You can split your command into separate
build and push steps and pass the suitable arguments. See this stackoverflow post for example.
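A sketch of that split (the registry connection, repository, tag, and the extra build argument are assumptions for illustration):
steps:
- task: Docker@2
  displayName: Build
  inputs:
    command: build
    containerRegistry: dockerRegistryServiceConnection1
    repository: contosoRepository
    tags: tag1
    arguments: --build-arg HTTP_PROXY=http://proxy.example.com   # example build-only argument
- task: Docker@2
  displayName: Push
  inputs:
    command: push
    containerRegistry: dockerRegistryServiceConnection1
    repository: contosoRepository
    tags: tag1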
DockerV2 only supports a Docker registry service connection; it does not support an ARM service connection. How
can I use an existing Azure service principal (SPN) for authentication in the Docker task?
You can create a Docker registry service connection using your Azure SPN credentials. Choose Others as the
registry type, and supply the registry's login server URL as the Docker Registry, the service principal's client ID as
the Docker ID, and the service principal key as the Password.
Docker Compose task
Azure Pipelines
Use this task to build, push or run multi-container Docker applications. This task can be used with a Docker registry
or an Azure Container Registry.
This YAML example specifies the inputs for Azure Container Registry:
variables:
azureContainerRegistry: Contoso.azurecr.io
azureSubscriptionEndpoint: Contoso
steps:
- task: DockerCompose@0
displayName: Container registry login
inputs:
containerregistrytype: Azure Container Registry
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
This YAML example specifies a container registry other than ACR where Contoso is the name of the Docker
registry service connection for the container registry:
- task: DockerCompose@0
displayName: Container registry login
inputs:
containerregistrytype: Container Registry
dockerRegistryEndpoint: Contoso
qualifyImageNames (Optional) Qualify image names for built services with the
(Qualify Image Names) Docker registry service connection's hostname if not
otherwise specified.
Default value: true
additionalImageTags (Optional) Additional tags for the Docker images being built
(Additional Image Tags) or pushed.
This YAML example builds the image where the image name is qualified on the basis of the inputs related to Azure
Container Registry:
- task: DockerCompose@0
displayName: Build services
inputs:
action: Build services
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
additionalImageTags: $(Build.BuildId)
qualifyImageNames (Optional) Qualify image names for built services with the
(Qualify Image Names) Docker registry service connection's hostname if not
otherwise specified.
Default value: true
PARAMETERS DESCRIPTION
additionalImageTags (Optional) Additional tags for the Docker images being built
(Additional Image Tags) or pushed.
- task: DockerCompose@0
displayName: Push services
inputs:
action: Push services
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
additionalImageTags: $(Build.BuildId)
qualifyImageNames (Optional) Qualify image names for built services with the
(Qualify Image Names) Docker registry service connection's hostname if not
otherwise specified.
Default value: true
- task: DockerCompose@0
displayName: Run services
inputs:
action: Run services
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.ci.build.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
buildImages: true
abortOnContainerExit: true
detached: false
qualifyImageNames (Optional) Qualify image names for built services with the
(Qualify Image Names) Docker registry service connection's hostname if not
otherwise specified.
Default value: true
entrypoint (Optional) Override the default entry point for the specific
(Entry Point Override) service container.
- task: DockerCompose@0
displayName: Run a specific service
inputs:
action: Run a specific service
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
serviceName: myhealth.web
ports: 80
detached: true
qualifyImageNames (Optional) Qualify image names for built services with the
(Qualify Image Names) Docker registry service connection's hostname if not
otherwise specified.
Default value: true
removeBuildOptions (Optional) Remove the build options from the output Docker
(Remove Build Options) Compose file.
Default value: false
baseResolveDirectory (Optional) The base directory from which relative paths in the
(Base Resolve Directory) output Docker Compose file should be resolved.
- task: DockerCompose@0
displayName: Lock services
inputs:
action: Lock services
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
outputDockerComposeFile: $(Build.StagingDirectory)/docker-compose.yml
qualifyImageNames (Optional) Qualify image names for built services with the
(Qualify Image Names) Docker registry service connection's hostname if not
otherwise specified.
Default value: true
- task: DockerCompose@0
displayName: Write service image digests
inputs:
action: Write service image digests
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
imageDigestComposeFile: $(Build.StagingDirectory)/docker-compose.images.yml
Combine configuration
PARAMETERS DESCRIPTION
qualifyImageNames (Optional) Qualify image names for built services with the
(Qualify Image Names) Docker registry service connection's hostname if not
otherwise specified.
Default value: true
removeBuildOptions (Optional) Remove the build options from the output Docker
(Remove Build Options) Compose file.
Default value: false
baseResolveDirectory (Optional) The base directory from which relative paths in the
(Base Resolve Directory) output Docker Compose file should be resolved.
- task: DockerCompose@0
displayName: Combine configuration
inputs:
action: Combine configuration
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
additionalDockerComposeFiles: docker-compose.override.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
outputDockerComposeFile: $(Build.StagingDirectory)/docker-compose.yml
dockerComposeFile (Docker Compose File) (Required) Path to the primary Docker Compose file to use.
Default value: **/docker-compose.yml
qualifyImageNames (Optional) Qualify image names for built services with the
(Qualify Image Names) Docker registry service connection's hostname if not
otherwise specified.
Default value: true
- task: DockerCompose@0
displayName: Run a Docker Compose command
inputs:
action: Run a Docker Compose command
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
dockerComposeFile: docker-compose.yml
projectName: $(Build.Repository.Name)
qualifyImageNames: true
dockerComposeCommand: rm
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Package and Deploy Helm Charts task
11/7/2020 • 7 minutes to read • Edit Online
Azure Pipelines
Use this task to deploy, configure, or update a Kubernetes cluster in Azure Container Service by running Helm
commands. Helm is a tool that streamlines deploying and managing Kubernetes apps using a packaging format
called charts.
You can define, version, share, install, and upgrade even the most complex Kubernetes app by using Helm.
Helm helps you combine multiple Kubernetes manifests (yaml) such as service, deployments, configmaps, and
more into a single unit called Helm Charts. You don't need to either invent or use a tokenization or a templating
tool.
Helm Charts help you manage application dependencies and deploy as well as rollback as a unit. They are also
easy to create, version, publish, and share with other partner teams.
Azure Pipelines has built-in support for Helm charts:
The Helm Tool installer task can be used to install the correct version of Helm onto the agents.
The Helm package and deploy task can be used to package the app and deploy it to a Kubernetes cluster. You
can use the task to install or update Tiller to a Kubernetes namespace, to securely connect to Tiller over TLS for
deploying charts, or to run any Helm command such as lint .
The Helm task supports connecting to an Azure Kubernetes Service by using an Azure service connection. You
can connect to any Kubernetes cluster by using kubeconfig or a service account.
Helm deployments can be supplemented by using the Kubectl task; for example, create/update,
imagepullsecret, and others.
Service Connection
The task works with two service connection types: Azure Resource Manager and Kubernetes Ser vice
Connection .
NOTE
A service connection isn't required if an environment resource that points to a Kubernetes cluster has already been specified
in the pipeline's stage.
This YAML example shows how an Azure Resource Manager service connection is used to refer to the Kubernetes cluster. This is
used with one of the helm commands and the appropriate values required for the command:
variables:
azureSubscriptionEndpoint: Contoso
azureContainerRegistry: contoso.azurecr.io
azureResourceGroup: Contoso
kubernetesCluster: Contoso
- task: HelmDeploy@0
displayName: Helm deploy
inputs:
connectionType: Azure Resource Manager
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureResourceGroup: $(azureResourceGroup)
kubernetesCluster: $(kubernetesCluster)
This YAML example shows how a Kubernetes service connection is used to refer to the Kubernetes cluster. It is
used with one of the helm commands and the appropriate values required for that command:
- task: HelmDeploy@0
displayName: Helm deploy
inputs:
connectionType: Kubernetes Service Connection
kubernetesServiceEndpoint: Contoso
Command values
The command input accepts one of the following helm commands:
create/delete/expose/get/init/install/login/logout/ls/package/rollback/upgrade.
- task: HelmDeploy@0
displayName: Helm list
inputs:
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureResourceGroup: $(azureResourceGroup)
kubernetesCluster: $(kubernetesCluster)
command: ls
arguments: --all
init command
PARAMETERS DESCRIPTION
canaryimage Use the canary Tiller image, the latest pre-release version of
(Use canary image version) Tiller.
Default value: false
- task: HelmDeploy@0
displayName: Helm init
inputs:
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureResourceGroup: $(azureResourceGroup)
kubernetesCluster: $(kubernetesCluster)
command: init
upgradetiller: true
waitForExecution: true
arguments: --client-only
install command
PARAMETERS DESCRIPTION
overrideValues (Optional) Set values on the command line. You can specify
(Set Values) multiple values by separating values with commas. For
example, key1=val1,key2=val2 . You can also specify
multiple values by delimiting them with newline as so:
key1=val1
key2=val2
Please note that if you have a value which itself contains
newlines, use the valueFile option, else the task will treat
the newline as a delimiter. The task will construct the helm
command by using these set values. For example, helm
install --set key1=val1 ./redis
- task: HelmDeploy@0
displayName: Helm install
inputs:
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureResourceGroup: $(azureResourceGroup)
kubernetesCluster: $(kubernetesCluster)
command: install
chartType: FilePath
chartPath: Application/charts/sampleapp
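The overrideValues input described above can be combined with the install command. The following is a hedged sketch; the chart path and the key names (image.tag, replicaCount) are placeholders that depend on your chart:
- task: HelmDeploy@0
  displayName: Helm install with set values
  inputs:
    azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
    azureResourceGroup: $(azureResourceGroup)
    kubernetesCluster: $(kubernetesCluster)
    command: install
    chartType: FilePath
    chartPath: Application/charts/sampleapp
    overrideValues: |
      image.tag=$(Build.BuildId)
      replicaCount=2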
package command
- task: HelmDeploy@0
displayName: Helm package
inputs:
command: package
chartPath: Application/charts/sampleapp
destination: $(Build.ArtifactStagingDirectory)
upgrade command
PARAMETERS DESCRIPTION
overrideValues (Optional) Set values on the command line. You can specify
(Set Values) multiple values by separating values with commas. For
example, key1=val1,key2=val2 . You can also specify
multiple values by delimiting them with newline as so:
key1=val1
key2=val2
Please note that if you have a value which itself contains
newlines, use the valueFile option, else the task will treat
the newline as a delimiter. The task will construct the helm
command by using these set values. For example, helm
install --set key1=val1 ./redis
resetValues (Optional) Reset the values to the ones built into the chart.
(Reset Values) Default value: false
Troubleshooting
HelmDeploy task throws error 'unknown flag: --wait' while running 'helm init --wait --client-only' on Helm 3.0.2
version.
There are some breaking changes between Helm 2 and Helm 3. One of them is the removal of Tiller, so the
helm init command is no longer supported. Remove command: init when you use Helm 3.0 or later.
When using Helm 3, if System.debug is set to true and helm upgrade is the command being used, the pipeline
fails even though the upgrade succeeded.
This is a known issue with Helm 3, which writes some logs to stderr. The Helm Deploy task is marked as failed if
there are logs written to stderr or the exit code is nonzero. Set the task input failOnStderr: false to ignore the logs
printed to stderr.
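As a minimal sketch that addresses both issues above, a Helm 3 pipeline omits the init step and sets failOnStderr: false on the upgrade step; the chart path and release name are placeholders:
- task: HelmDeploy@0
  displayName: Helm upgrade (Helm 3)
  inputs:
    connectionType: Azure Resource Manager
    azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
    azureResourceGroup: $(azureResourceGroup)
    kubernetesCluster: $(kubernetesCluster)
    command: upgrade
    chartType: FilePath
    chartPath: Application/charts/sampleapp   # placeholder chart path
    releaseName: sampleapp                    # placeholder release name
    failOnStderr: false                       # ignore Helm 3 logs written to stderr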
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
IIS Web App Deploy task
4/10/2020 • 2 minutes to read
Azure Pipelines
Use this task to deploy a website or web app using WebDeploy.
YAML snippet
# IIS web app deploy
# Deploy a website or web application using Web Deploy
- task: IISWebAppDeploymentOnMachineGroup@0
inputs:
webSiteName:
#virtualApplication: # Optional
#package: '$(System.DefaultWorkingDirectory)\**\*.zip'
#setParametersFile: # Optional
#removeAdditionalFilesFlag: false # Optional
#excludeFilesFromAppDataFlag: false # Optional
#takeAppOfflineFlag: false # Optional
#additionalArguments: # Optional
#xmlTransformation: # Optional
#xmlVariableSubstitution: # Optional
#jSONFiles: # Optional
Arguments
ARGUMENT DESCRIPTION
Remove Additional Files at Destination (Optional) Select the option to delete files on the Web App
that have no matching files in the Web App zip package.
Exclude Files from the App_Data Folder (Optional) Select the option to prevent files in the App_Data
folder from being deployed to the Web App.
Take App Offline (Optional) Select the option to take the Web App offline by
placing an app_offline.htm file in the root directory of the Web
App before the sync operation begins. The file will be removed
after the sync operation completes successfully.
XML variable substitution (Optional) Variables defined in the Build or Release Pipeline will
be matched against the 'key' or 'name' entries in the
appSettings, applicationSettings, and connectionStrings
sections of any config file and parameters.xml. Variable
Substitution is run after config transforms.
JSON variable substitution (Optional) Provide new line separated list of JSON files to
substitute the variable values. Files names are to be provided
relative to the root folder.
To substitute JSON variables that are nested or hierarchical,
specify them using JSONPath expressions.
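For illustration, a hedged example of the task with several of these arguments set might look like the following; the site name, virtual application, and file names are placeholders:
- task: IISWebAppDeploymentOnMachineGroup@0
  displayName: Deploy Fabrikam web app
  inputs:
    webSiteName: 'Default Web Site'            # placeholder site name
    virtualApplication: 'FabrikamFibre'        # placeholder virtual application
    package: '$(System.DefaultWorkingDirectory)\**\*.zip'
    removeAdditionalFilesFlag: true
    takeAppOfflineFlag: true
    xmlVariableSubstitution: true
    jSONFiles: 'appsettings.json'              # placeholder JSON file for substitution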
CONTROL OPTIONS
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
IIS Web App Manage task
11/2/2020 • 9 minutes to read
Azure Pipelines
Use this task to create or update a Website, Web App, Virtual Directory, or Application Pool.
YAML snippet
# IIS web app manage
# Create or update websites, web apps, virtual directories, or application pools
- task: IISWebAppManagementOnMachineGroup@0
inputs:
#enableIIS: false # Optional
#iISDeploymentType: 'IISWebsite' # Options: iISWebsite, iISWebApplication, iISVirtualDirectory,
iISApplicationPool
#actionIISWebsite: 'CreateOrUpdateWebsite' # Required when iISDeploymentType == IISWebsite# Options:
createOrUpdateWebsite, startWebsite, stopWebsite
#actionIISApplicationPool: 'CreateOrUpdateAppPool' # Required when iISDeploymentType ==
IISApplicationPool# Options: createOrUpdateAppPool, startAppPool, stopAppPool, recycleAppPool
#startStopWebsiteName: # Required when actionIISWebsite == StartWebsite || ActionIISWebsite == StopWebsite
websiteName:
#websitePhysicalPath: '%SystemDrive%\inetpub\wwwroot'
#websitePhysicalPathAuth: 'WebsiteUserPassThrough' # Options: websiteUserPassThrough, websiteWindowsAuth
#websiteAuthUserName: # Required when websitePhysicalPathAuth == WebsiteWindowsAuth
#websiteAuthUserPassword: # Optional
#addBinding: false # Optional
#protocol: 'http' # Required when iISDeploymentType == RandomDeployment# Options: https, http
#iPAddress: 'All Unassigned' # Required when iISDeploymentType == RandomDeployment
#port: '80' # Required when iISDeploymentType == RandomDeployment
#serverNameIndication: false # Optional
#hostNameWithOutSNI: # Optional
#hostNameWithHttp: # Optional
#hostNameWithSNI: # Required when iISDeploymentType == RandomDeployment
#sSLCertThumbPrint: # Required when iISDeploymentType == RandomDeployment
bindings:
#createOrUpdateAppPoolForWebsite: false # Optional
#configureAuthenticationForWebsite: false # Optional
appPoolNameForWebsite:
#dotNetVersionForWebsite: 'v4.0' # Options: v4.0, v2.0, no Managed Code
#pipeLineModeForWebsite: 'Integrated' # Options: integrated, classic
#appPoolIdentityForWebsite: 'ApplicationPoolIdentity' # Options: applicationPoolIdentity, localService,
localSystem, networkService, specificUser
#appPoolUsernameForWebsite: # Required when appPoolIdentityForWebsite == SpecificUser
#appPoolPasswordForWebsite: # Optional
#anonymousAuthenticationForWebsite: false # Optional
#basicAuthenticationForWebsite: false # Optional
#windowsAuthenticationForWebsite: true # Optional
parentWebsiteNameForVD:
virtualPathForVD:
#physicalPathForVD: '%SystemDrive%\inetpub\wwwroot'
#vDPhysicalPathAuth: 'VDUserPassThrough' # Optional. Options: vDUserPassThrough, vDWindowsAuth
#vDAuthUserName: # Required when vDPhysicalPathAuth == VDWindowsAuth
#vDAuthUserPassword: # Optional
parentWebsiteNameForApplication:
virtualPathForApplication:
#physicalPathForApplication: '%SystemDrive%\inetpub\wwwroot'
#applicationPhysicalPathAuth: 'ApplicationUserPassThrough' # Optional. Options:
applicationUserPassThrough, applicationWindowsAuth
#applicationAuthUserName: # Required when applicationPhysicalPathAuth == ApplicationWindowsAuth
#applicationAuthUserPassword: # Optional
#createOrUpdateAppPoolForApplication: false # Optional
appPoolNameForApplication:
#dotNetVersionForApplication: 'v4.0' # Options: v4.0, v2.0, no Managed Code
#pipeLineModeForApplication: 'Integrated' # Options: integrated, classic
#appPoolIdentityForApplication: 'ApplicationPoolIdentity' # Options: applicationPoolIdentity,
localService, localSystem, networkService, specificUser
#appPoolUsernameForApplication: # Required when appPoolIdentityForApplication == SpecificUser
#appPoolPasswordForApplication: # Optional
appPoolName:
#dotNetVersion: 'v4.0' # Options: v4.0, v2.0, no Managed Code
#pipeLineMode: 'Integrated' # Options: integrated, classic
#appPoolIdentity: 'ApplicationPoolIdentity' # Options: applicationPoolIdentity, localService, localSystem,
networkService, specificUser
#appPoolUsername: # Required when appPoolIdentity == SpecificUser
#appPoolPassword: # Optional
#startStopRecycleAppPoolName: # Required when actionIISApplicationPool == StartAppPool ||
ActionIISApplicationPool == StopAppPool || ActionIISApplicationPool == RecycleAppPool
#appCmdCommands: # Optional
Arguments
ARGUMENT DESCRIPTION
Enable IIS (Optional) Check this if you want to install IIS on the machine.
Configuration type (Required) You can create or update sites, applications, virtual
directories, and application pools.
Website name (Required) Provide the name of the IIS website to create or
update.
Physical path (Required) Provide the physical path where the website
content will be stored. The content can reside on the local
Computer, or in a remote directory, or on a network share, like
C:\Fabrikam or \\ContentShare\Fabrikam.
Physical path authentication (Required) Select the authentication mechanism that will be
used to access the physical path of the website.
Username (Required) Provide the user name that will be used to access
the website's physical path.
Add binding (Optional) Select the option to add port binding for the
website.
Server Name Indication required (Optional) Select the option to set the Server Name Indication
(SNI) for the website.
SNI extends the SSL and TLS protocols to indicate the host
name that the clients are attempting to connect to. It allows
multiple secure websites with different certificates to use the
same IP address.
Host name (Optional) Enter a host name (or domain name) for the
website.
If a host name is specified, then the clients must use the host
name instead of the IP address to access the website.
Host name (Optional) Enter a host name (or domain name) for the
website.
If a host name is specified, then the clients must use the host
name instead of the IP address to access the website.
Host name (Required) Enter a host name (or domain name) for the
website.
If a host name is specified, then the clients must use the host
name instead of the IP address to access the website.
SSL certificate thumbprint (Required) Provide the thumbprint of the Secure Sockets Layer
certificate that the website will use for HTTPS
communication, as a 40-character hexadecimal string. The
SSL certificate should already be installed on the computer, in the
Local Computer, Personal store.
Add bindings (Required) Click on the extension [...] button to add bindings
for the website.
Create or update app pool (Optional) Select the option to create or update an application
pool. If checked, the website will be created in the specified
app pool.
.NET version (Required) Select the version of the .NET Framework that is
loaded by the application pool.
If the applications assigned to this application pool do not
contain managed code, then select the 'No Managed Code'
option from the list.
Managed pipeline mode (Required) Select the managed pipeline mode that specifies
how IIS processes requests for managed content. Use classic
mode only when the applications in the application pool
cannot run in the Integrated mode.
Basic authentication (Optional) Select the option to enable basic authentication for
website.
Parent website name (Required) Provide the name of the parent Website of the
virtual directory.
Virtual path (Required) Provide the virtual path of the virtual directory.
Example: To create a virtual directory Site/Application/VDir
enter /Application/Vdir. The parent website and application
should be already existing.
Physical path (Required) Provide the physical path where the virtual
directory's content will be stored. The content can reside on
the local Computer, or in a remote directory, or on a network
share, like C:\Fabrikam or \\ContentShare\Fabrikam.
Physical path authentication (Optional) Select the authentication mechanism that will be
used to access the physical path of the virtual directory.
Username (Required) Provide the user name that will be used to access
the virtual directory's physical path.
Parent website name (Required) Provide the name of the parent Website under
which the application will be created or updated.
Physical path (Required) Provide the physical path where the application's
content will be stored. The content can reside on the local
Computer, or in a remote directory, or on a network share, like
C:\Fabrikam or \\ContentShare\Fabrikam.
Physical path authentication (Optional) Select the authentication mechanism that will be
used to access the physical path of the application.
Username (Required) Provide the user name that will be used to access
the application's physical path.
Create or update app pool (Optional) Select the option to create or update an application
pool. If checked, the application will be created in the specified
app pool.
.NET version (Required) Select the version of the .NET Framework that is
loaded by the application pool.
If the applications assigned to this application pool do not
contain managed code, then select the 'No Managed Code'
option from the list.
Managed pipeline mode (Required) Select the managed pipeline mode that specifies
how IIS processes requests for managed content. Use classic
mode only when the applications in the application pool
cannot run in the Integrated mode.
.NET version (Required) Select the version of the .NET Framework that is
loaded by the application pool.
If the applications assigned to this application pool do not
contain managed code, then select the 'No Managed Code'
option from the list.
Managed pipeline mode (Required) Select the managed pipeline mode that specifies
how IIS processes requests for managed content. Use classic
mode only when the applications in the application pool
cannot run in the Integrated mode.
Application pool name (Required) Provide the name of the IIS application pool.
Additional appcmd.exe commands (Optional) Enter additional AppCmd.exe commands. For more
than one command use a line separator, like
list apppools
list sites
recycle apppool /apppool.name:ExampleAppPoolName
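The following hedged example creates or updates a website together with its application pool and an HTTP binding; all names, paths, and the port are placeholders:
- task: IISWebAppManagementOnMachineGroup@0
  displayName: Create or update Fabrikam website
  inputs:
    iISDeploymentType: 'IISWebsite'
    actionIISWebsite: 'CreateOrUpdateWebsite'
    websiteName: 'FabrikamFibre'                            # placeholder site name
    websitePhysicalPath: 'C:\inetpub\wwwroot\FabrikamFibre' # placeholder content path
    websitePhysicalPathAuth: 'WebsiteUserPassThrough'
    addBinding: true
    protocol: 'http'
    iPAddress: 'All Unassigned'
    port: '8080'                                            # placeholder port
    createOrUpdateAppPoolForWebsite: true
    appPoolNameForWebsite: 'FabrikamAppPool'                # placeholder app pool name
    dotNetVersionForWebsite: 'v4.0'
    pipeLineModeForWebsite: 'Integrated'
    appPoolIdentityForWebsite: 'ApplicationPoolIdentity'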
CONTROL OPTIONS
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Kubectl task
11/7/2020 • 6 minutes to read
Azure Pipelines
Use this task to deploy, configure, or update a Kubernetes cluster by running kubectl commands.
Service Connection
The task works with two service connection types: Azure Resource Manager and Kubernetes Service
Connection, described below.
Azure Resource Manager
This YAML example shows how Azure Resource Manager is used to refer to the Kubernetes cluster. This is to be
used with one of the kubectl commands and the appropriate values required by the command.
variables:
azureSubscriptionEndpoint: Contoso
azureContainerRegistry: contoso.azurecr.io
azureResourceGroup: Contoso
kubernetesCluster: Contoso
useClusterAdmin: false
steps:
- task: Kubernetes@1
displayName: kubectl apply
inputs:
connectionType: Azure Resource Manager
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureResourceGroup: $(azureResourceGroup)
kubernetesCluster: $(kubernetesCluster)
useClusterAdmin: $(useClusterAdmin)
This YAML example shows how a Kubernetes Service Connection is used to refer to the Kubernetes cluster. This is
to be used with one of the kubectl commands and the appropriate values required by the command.
- task: Kubernetes@1
displayName: kubectl apply
inputs:
connectionType: Kubernetes Service Connection
kubernetesServiceEndpoint: Contoso
Commands
The command input accepts one of the following kubectl commands:
apply, create, delete, exec, expose, get, login, logout, logs, run, set, or top.
This YAML example demonstrates the use of a configuration file with the apply command:
- task: Kubernetes@1
displayName: kubectl apply using configFile
inputs:
connectionType: Azure Resource Manager
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureResourceGroup: $(azureResourceGroup)
kubernetesCluster: $(kubernetesCluster)
command: apply
useConfigurationFile: true
configuration: mhc-aks.yaml
Secrets
Kubernetes objects of type secret are intended to hold sensitive information such as passwords, OAuth tokens,
and ssh keys. Putting this information in a secret is safer and more flexible than putting it verbatim in a pod
definition or in a Docker image. Azure Pipelines simplifies the addition of ImagePullSecrets to a service account,
or setting up of any generic secret, as described below.
ImagePullSecret
PARAMETERS DESCRIPTION
forceUpdate (Optional) Delete the secret if it exists and create a new one
Force update secret with updated values.
Default value: true
- task: Kubernetes@1
displayName: kubectl apply for secretType dockerRegistry
inputs:
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureResourceGroup: $(azureResourceGroup)
kubernetesCluster: $(kubernetesCluster)
command: apply
arguments: -f mhc-aks.yaml
secretType: dockerRegistry
containerRegistryType: Azure Container Registry
azureSubscriptionEndpointForSecrets: $(azureSubscriptionEndpoint)
azureContainerRegistry: $(azureContainerRegistry)
secretName: mysecretkey2
forceUpdate: true
Generic Secrets
This YAML example creates generic secrets from literal values specified for the secretArguments input:
- task: Kubernetes@1
displayName: secretType generic with literal values
inputs:
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureResourceGroup: $(azureResourceGroup)
kubernetesCluster: $(kubernetesCluster)
command: apply
arguments: -f mhc-aks.yaml
secretType: generic
secretArguments: --from-literal=contoso=5678
secretName: mysecretkey
Pipeline variables can be used to pass arguments for specifying literal values, as shown here:
- task: Kubernetes@1
displayName: secretType generic with pipeline variables
inputs:
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureResourceGroup: $(azureResourceGroup)
kubernetesCluster: $(kubernetesCluster)
command: apply
arguments: -f mhc-aks.yaml
secretType: generic
secretArguments: --from-literal=contoso=$(contosovalue)
secretName: mysecretkey
ConfigMap
ConfigMaps allow you to decouple configuration artifacts from image content to maintain portability for
containerized applications.
This YAML example creates a ConfigMap by pointing to a ConfigMap file:
- task: Kubernetes@1
displayName: kubectl apply
inputs:
configMapName: myconfig
useConfigMapFile: true
configMapFile: src/configmap
This YAML example creates a ConfigMap by specifying the literal values directly as the configMapArguments
input, and setting forceUpdate to true:
- task: Kubernetes@1
displayName: configMap with literal values
inputs:
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureResourceGroup: $(azureResourceGroup)
kubernetesCluster: $(kubernetesCluster)
command: apply
arguments: -f mhc-aks.yaml
secretType: generic
secretArguments: --from-literal=contoso=$(contosovalue)
secretName: mysecretkey4
configMapName: myconfig
forceUpdateConfigMap: true
configMapArguments: --from-literal=myname=contoso
You can use pipeline variables to pass literal values when creating ConfigMap, as shown here:
- task: Kubernetes@1
displayName: configMap with pipeline variables
inputs:
azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
azureResourceGroup: $(azureResourceGroup)
kubernetesCluster: $(kubernetesCluster)
command: apply
arguments: -f mhc-aks.yaml
secretType: generic
secretArguments: --from-literal=contoso=$(contosovalue)
secretName: mysecretkey4
configMapName: myconfig
forceUpdateConfigMap: true
configMapArguments: --from-literal=myname=$(contosovalue)
Advanced
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Kubernetes manifest task
11/2/2020 • 12 minutes to read
Use a Kubernetes manifest task in a build or release pipeline to bake and deploy manifests to Kubernetes clusters.
Overview
The following list shows the key benefits of this task:
Artifact substitution: The deployment action takes as input a list of container images that you can
specify along with their tags and digests. The same input is substituted into the nontemplatized manifest
files before application to the cluster. This substitution ensures that the cluster nodes pull the right version
of the image.
Manifest stability: The rollout status of the deployed Kubernetes objects is checked. The stability checks
are incorporated to determine whether the task status is a success or a failure.
Traceability annotations: Annotations are added to the deployed Kubernetes objects to superimpose
traceability information. The following annotations are supported:
azure-pipelines/org
azure-pipelines/project
azure-pipelines/pipeline
azure-pipelines/pipelineId
azure-pipelines/execution
azure-pipelines/executionuri
azure-pipelines/jobName
Secret handling: The createSecret action lets Docker registry secrets be created using Docker registry
service connections. It also lets generic secrets be created using either plain-text variables or secret
variables. Before deployment to the cluster, you can use the secrets input along with the deploy action to
augment the input manifest files with the appropriate imagePullSecrets value.
Bake manifest: The bake action of the task allows for baking templates into Kubernetes manifest files.
The action uses tools such as Helm, Compose, and kustomize. With baking, these Kubernetes manifest files
are usable for deployments to the cluster.
Deployment strategy: Choosing the canary strategy with the deploy action leads to creation of
workloads having names suffixed with "-baseline" and "-canary". The task supports two methods of traffic
splitting:
Service Mesh Interface: Service Mesh Interface (SMI) abstraction allows configuration with
service mesh providers like Linkerd and Istio. The Kubernetes Manifest task maps SMI TrafficSplit
objects to the stable, baseline, and canary services during the life cycle of the deployment strategy.
Canary deployments that are based on a service mesh and use this task are more accurate. This
accuracy comes because service mesh providers enable the granular percentage-based split of
traffic. The service mesh uses the service registry and sidecar containers that are injected into pods.
This injection occurs alongside application containers to achieve the granular traffic split.
Kubernetes with no service mesh: In the absence of a service mesh, you might not get the exact
percentage split you want at the request level. But you can possibly do canary deployments by
using baseline and canary variants next to the stable variant.
The service sends requests to pods of all three workload variants as the selector-label constraints
are met. Kubernetes Manifest honors these requests when creating baseline and canary variants.
This routing behavior achieves the intended effect of routing only a portion of total requests to the
canary.
Compare the baseline and canary workloads by using either a Manual Intervention task in release
pipelines or a Delay task in YAML pipelines. Do the comparison before using the promote or reject action
of the task.
Deploy action
PARAMETER DESCRIPTION
action (Required)
Action
Acceptable values are deploy, promote, reject, bake,
createSecret, scale, patch, and delete.
manifests (Required)
Manifests
The path to the manifest files to be used for deployment. A
file-matching pattern is an acceptable value for this input.
containers (Optional)
Containers
The fully qualified URL of the image to be used for
substitutions on the manifest files. This input accepts the
specification of multiple artifact substitutions in newline-
separated form. Here's an example:
containers: |
contosodemo.azurecr.io/foo:test1
contosodemo.azurecr.io/bar:test2
imagePullSecrets (Optional)
Image pull secrets
Multiline input where each line contains the name of a
Docker registry secret that has already been set up within
the cluster. Each secret name is added under
imagePullSecrets for the workloads that are found in the
input manifest files.
strategy (Optional)
Strategy
The deployment strategy used while manifest files are applied
on the cluster. Currently, canary is the only acceptable
deployment strategy.
trafficSplitMethod (Optional)
Traffic split method
Acceptable values are pod and smi. The default value is pod.
For the value smi, the percentage traffic split is done at the
request level by using a service mesh. A service mesh must
be set up by a cluster admin. This task handles orchestration
of SMI TrafficSplit objects.
For the value pod, the percentage split isn't possible at the
request level in the absence of a service mesh. Instead, the
percentage input is used to calculate the replicas for baseline
and canary. The calculation is a percentage of replicas that
are specified in the input manifests for the stable variant.
replicas: 4
strategy: canary
percentage: 25
strategy: canary
trafficSplitMethod: smi
percentage: 20
baselineAndCanaryReplicas: 1
The following YAML code is an example of deploying to a Kubernetes namespace by using manifest files:
steps:
- task: KubernetesManifest@0
displayName: Deploy
inputs:
kubernetesServiceConnection: someK8sSC1
namespace: default
manifests: manifests/deployment.yml|manifests/service.yml
containers: |
foo/demo:$(tagVariable1)
bar/demo:$(tagVariable2)
imagePullSecrets: |
some-secret
some-other-secret
In the above example, the task tries to find matches for the images foo/demo and bar/demo in the image fields of
manifest files. For each match found, the value of either tagVariable1 or tagVariable2 is appended as a tag to
the image name. You can also specify digests in the containers input for artifact substitution.
NOTE
While you can author deploy, promote, and reject actions with YAML input related to deployment strategy, support for a
Manual Intervention task is currently unavailable for build pipelines.
For release pipelines, we advise you to use actions and input related to deployment strategy in the following sequence:
1. A deploy action specified with strategy: canary and percentage: $(someValue) .
2. A Manual Intervention task so that you can pause the pipeline and compare the baseline variant with the canary
variant.
3. A promote action that runs if a Manual Intervention task is resumed and a reject action that runs if a Manual
Intervention task is rejected.
Promote and reject actions
PARAMETER DESCRIPTION
action (Required)
Action
Acceptable values are deploy, promote, reject, bake,
createSecret, scale, patch, and delete.
namespace (Required)
Namespace
The namespace within the cluster to deploy to.
manifests (Required)
Manifests
The path to the manifest files to be used for deployment. A
file-matching pattern is an acceptable value for this input.
containers (Optional)
Containers
The fully qualified resource URL of the image to be used for
substitutions on the manifest files. The URL
contosodemo.azurecr.io/helloworld:test is an example.
imagePullSecrets (Optional)
Image pull secrets
Multiline input where each line contains the name of a
Docker registry secret that is already set up within the cluster.
Each secret name is added under the imagePullSecrets field
for the workloads that are found in the input manifest files.
strategy (Optional)
Strategy
The deployment strategy used in the deploy action before a
promote action or reject action. Currently, canary is the only
acceptable deployment strategy.
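The following is a hedged sketch of promote and reject steps that follow a canary deploy action; the service connection, namespace, and manifest paths are placeholders:
steps:
- task: KubernetesManifest@0
  displayName: Promote canary
  condition: succeeded()
  inputs:
    action: promote
    strategy: canary
    kubernetesServiceConnection: someK8sSC1   # placeholder service connection
    namespace: default
    manifests: manifests/deployment.yml|manifests/service.yml
    containers: |
      foo/demo:$(tagVariable1)

- task: KubernetesManifest@0
  displayName: Reject canary
  condition: failed()
  inputs:
    action: reject
    strategy: canary
    kubernetesServiceConnection: someK8sSC1   # placeholder service connection
    namespace: default
    manifests: manifests/deployment.yml|manifests/service.yml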
createSecret action
PARAMETER DESCRIPTION
action (Required)
Action
Acceptable values are deploy, promote, reject, bake,
createSecret, scale, patch, and delete.
secretName (Required)
Secret name
The name of the secret to be created or updated.
namespace (Required)
Namespace
The cluster namespace within which to create a secret.
The following YAML code shows a sample creation of Docker registry secrets by using Docker Registry service
connection:
steps:
- task: KubernetesManifest@0
displayName: Create secret
inputs:
action: createSecret
secretType: dockerRegistry
secretName: foobar
dockerRegistryEndpoint: demoACR
kubernetesServiceConnection: someK8sSC
namespace: default
steps:
- task: KubernetesManifest@0
displayName: Create secret
inputs:
action: createSecret
secretType: generic
secretName: some-secret
secretArguments: --from-literal=key1=value1
kubernetesServiceConnection: someK8sSC
namespace: default
Bake action
PARAMETER DESCRIPTION
action (Required)
Action
Acceptable values are deploy, promote, reject, bake,
createSecret, scale, patch, and delete.
Multiline input that accepts the path to the override files. The
files are used when manifest files from Helm charts are
baked.
The following YAML code is an example of baking manifest files from Helm charts. Note the usage of name input
in the first task. This name is later referenced from the deploy step for specifying the path to the manifests that
were produced by the bake step.
steps:
- task: KubernetesManifest@0
name: bake
displayName: Bake K8s manifests from Helm chart
inputs:
action: bake
helmChart: charts/sample
overrides: 'image.repository:nginx'
- task: KubernetesManifest@0
displayName: Deploy K8s manifests
inputs:
kubernetesServiceConnection: someK8sSC
namespace: default
manifests: $(bake.manifestsBundle)
containers: |
nginx: 1.7.9
Scale action
PARAMETER DESCRIPTION
action (Required)
Action
Acceptable values are deploy, promote, reject, bake,
createSecret, scale, patch, and delete.
kind (Required)
Kind
The kind of Kubernetes object to be scaled up or down.
Examples include ReplicaSet and StatefulSet.
name (Required)
Name
The name of the Kubernetes object to be scaled up or down.
replicas (Required)
Replica count
The number of replicas to scale to.
namespace (Required)
Namespace
The namespace within the cluster to deploy to.
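A hedged example of the scale action; the object name, replica count, and service connection are placeholders:
steps:
- task: KubernetesManifest@0
  displayName: Scale deployment
  inputs:
    action: scale
    kind: deployment
    name: expressapp                          # placeholder object name
    replicas: 3                               # placeholder replica count
    kubernetesServiceConnection: someK8sSC    # placeholder service connection
    namespace: default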
Patch action
PARAMETER DESCRIPTION
action (Required)
Action
Acceptable values are deploy, promote, reject, bake,
createSecret, scale, patch, and delete.
resourceToPatch (Required)
Resource to patch
Indicates one of the following patch methods:
A manifest file identifies the objects to be patched.
An individual object is identified by kind and name as
the patch target.
Acceptable values are file and name. The default value is
file.
mergeStrategy (Required)
Merge strategy
The strategy to be used for applying the patch.
patch (Required)
Patch
The contents of the patch.
namespace (Required)
Namespace
The namespace within the cluster to deploy to.
steps:
- task: KubernetesManifest@0
displayName: Patch
inputs:
action: patch
kind: pod
name: demo-5fbc4d6cd9-pgxn4
mergeStrategy: strategic
patch: '{"spec":{"containers":[{"name":"demo","image":"foobar/demo:2239"}]}}'
kubernetesServiceConnection: someK8sSC
namespace: default
Delete action
PARAMETER DESCRIPTION
action (Required)
Action
Acceptable values are deploy, promote, reject, bake,
createSecret, scale, patch, and delete.
arguments (Required)
Arguments
Arguments to be passed on to kubectl for deleting the
necessary objects. An example is:
arguments: deployment hello-world foo-bar
namespace (Required)
Namespace
The namespace within the cluster to deploy to.
steps:
- task: KubernetesManifest@0
displayName: Delete
inputs:
action: delete
arguments: deployment expressapp
kubernetesServiceConnection: someK8sSC
namespace: default
Troubleshooting
My Kubernetes cluster is behind a firewall and I am using hosted agents. How can I deploy to this cluster?
You can grant hosted agents access through your firewall by allowing the IP addresses for the hosted agents. For
more details, see Agent IP ranges
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
PowerShell on Target Machines task
11/2/2020 • 5 minutes to read
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to execute PowerShell scripts on remote machine(s).
This task can run both PowerShell scripts and PowerShell-DSC scripts:
For PowerShell scripts, the computers must have PowerShell 2.0 or higher installed.
For PowerShell-DSC scripts, the computers must have the latest version of the Windows Management Framework
installed. This is installed by default on Windows 8.1, Windows Server 2012 R2, and later.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs are
called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
Prerequisites
This task uses Windows Remote Management (WinRM) to access on-premises physical computers or virtual computers
that are domain-joined or workgroup-joined.
To set up WinRM for on-premises physical computers or virtual machines
Follow the steps described in domain-joined
To set up WinRM for Microsoft Azure Virtual Machines
Azure Virtual Machines require WinRM to use the HTTPS protocol. You can use a self-signed Test Certificate. In this case,
the automation agent will not validate the authenticity of the certificate as being issued by a trusted certification authority.
Azure Classic Virtual Machines. When you create a classic virtual machine from the Azure portal, the virtual
machine is already set up for WinRM over HTTPS, with the default port 5986 already opened in the firewall and a
self-signed certificate installed on the machine. These virtual machines can be accessed with no further
configuration required. Existing classic virtual machines can also be selected by using the Azure Resource Group
Deployment task.
Azure Resource Group. If you have an Azure Resource Group
already defined in the Azure portal, you must configure it to use the WinRM HTTPS protocol. You need to open port 5986
in the firewall, and install a self-signed certificate.
To dynamically deploy Azure Resource Groups that contain virtual machines, use the Azure Resource Group Deployment
task. This task has a checkbox named Enable Deployment Prerequisites. Select this to automatically set up the WinRM
HTTPS protocol on the virtual machines, open port 5986 in the firewall, and install a test certificate. The virtual machines
are then ready for use in the deployment task.
YAML snippet
# PowerShell on target machines
# Execute PowerShell scripts on remote machines using PSSession and Invoke-Command for remoting
- task: PowerShellOnTargetMachines@3
inputs:
machines:
#userName: # Optional
#userPassword: # Optional
#scriptType: 'Inline' # Optional. Options: filePath, inline
#scriptPath: # Required when scriptType == FilePath
#inlineScript: '# Write your powershell commands here.Write-Output Hello World' # Required when scriptType ==
Inline
#scriptArguments: # Optional
#initializationScript: # Optional
#sessionVariables: # Optional
#communicationProtocol: 'Https' # Optional. Options: http, https
#authenticationMechanism: 'Default' # Optional. Options: default, credssp
#newPsSessionOptionArguments: '-SkipCACheck -IdleTimeout 7200000 -OperationTimeout 0 -OutputBufferingMode Block'
# Optional
#errorActionPreference: 'stop' # Optional. Options: stop, continue, silentlyContinue
#failOnStderr: false # Optional
#ignoreLASTEXITCODE: false # Optional
#workingDirectory: # Optional
#runPowershellInParallel: true # Optional
Arguments
ARGUMENT DESCRIPTION
Protocol The protocol that will be used to connect to the target host, either
HTTP or HTTPS.
Test Certificate If you choose the HTTPS option, set this checkbox to skip
validating the authenticity of the machine's certificate by a trusted
certification authority.
Deployment - PowerShell Script The location of the PowerShell script on the target machine. Can
include environment variables such as $env:windir and
$env:systemroot Example:
C:\FabrikamFibre\Web\deploy.ps1
Deployment - Script Arguments The arguments required by the script, if any. Example:
-applicationPath $(applicationPath) -username
$(vmusername) -password $(vmpassword)
Deployment - Initialization Script The location on the target machine(s) of the data script used by
PowerShell-DSC. It is recommended to use arguments instead of
an initialization script.
Deployment - Session Variables Used to set up the session variables for the PowerShell scripts. A
comma-separated list such as $varx=valuex, $vary=valuey
Most commonly used for backward compatibility with earlier
versions of the release service. It is recommended to use
arguments instead of session variables.
Advanced - Run PowerShell in Parallel Set this option to execute the PowerShell scripts in parallel on all
the target machines
Advanced - Select Machines By Depending on how you want to specify the machines in the group
when using the Filter Criteria parameter, choose Machine
Names or Tags .
Advanced - Filter Criteria Optional. A list of machine names or tag names that identifies the
machines that the task will target. The filter criteria can be:
- The name of an Azure Resource Group.
- An output variable from a previous task.
- A comma-delimited list of tag names or machine names.
Format when using machine names is a comma-separated list of
the machine FQDNs or IP addresses.
Specify tag names for a filter as {TagName}: {Value} Example:
Role:DB;OS:Win8.1
Version 3.x of the task includes the Inline script setting where you can enter your PowerShell script code.
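For example, a hedged sketch of version 3.x of the task running an inline script over HTTPS on a set of target machines; the machine list and credential variables are placeholders:
- task: PowerShellOnTargetMachines@3
  displayName: Run inline script on target machines
  inputs:
    machines: '$(targetMachines)'        # placeholder comma-separated FQDNs or IPs
    userName: '$(vmAdminUser)'           # placeholder credential variables
    userPassword: '$(vmAdminPassword)'
    scriptType: 'Inline'
    inlineScript: |
      Write-Output "Deploying on $env:COMPUTERNAME"
      Restart-Service -Name W3SVC
    communicationProtocol: 'Https'
    runPowershellInParallel: true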
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Service Fabric Application Deployment task
11/2/2020 • 7 minutes to read
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use this task to deploy a Service Fabric application to a cluster. This task deploys an Azure Service Fabric application to a cluster according to
the settings defined in the publish profile.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs are called builds, service
connections are called service endpoints, stages are called environments, and jobs are called phases.
Prerequisites
Service Fabric
This task uses a Service Fabric installation to connect and deploy to a Service Fabric cluster.
Download and install Service Fabric on the build agent.
YAML snippet
# Service Fabric application deployment
# Deploy an Azure Service Fabric application to a cluster
- task: ServiceFabricDeploy@1
inputs:
applicationPackagePath:
serviceConnectionName:
#publishProfilePath: # Optional
#applicationParameterPath: # Optional
#overrideApplicationParameter: false # Optional
#compressPackage: false # Optional
#copyPackageTimeoutSec: # Optional
#registerPackageTimeoutSec: # Optional
#overwriteBehavior: 'SameAppTypeAndVersion' # Options: always, never, sameAppTypeAndVersion
#skipUpgradeSameTypeAndVersion: false # Optional
#skipPackageValidation: false # Optional
#useDiffPackage: false # Optional
#overridePublishProfileSettings: false # Optional
#isUpgrade: true # Optional
#unregisterUnusedVersions: true # Optional
#upgradeMode: 'Monitored' # Required when overridePublishProfileSettings == True && IsUpgrade == True# Options: monitored,
unmonitoredAuto, unmonitoredManual
#failureAction: 'Rollback' # Required when overridePublishProfileSettings == True && IsUpgrade == True && UpgradeMode == Monitored#
Options: rollback, manual
#upgradeReplicaSetCheckTimeoutSec: # Optional
#timeoutSec: # Optional
#forceRestart: false # Optional
#healthCheckRetryTimeoutSec: # Optional
#healthCheckWaitDurationSec: # Optional
#healthCheckStableDurationSec: # Optional
#upgradeDomainTimeoutSec: # Optional
#considerWarningAsError: false # Optional
#defaultServiceTypeHealthPolicy: # Optional
#maxPercentUnhealthyDeployedApplications: # Optional
#upgradeTimeoutSec: # Optional
#serviceTypeHealthPolicyMap: # Optional
#configureDockerSettings: false # Optional
#registryCredentials: 'AzureResourceManagerEndpoint' # Required when configureDockerSettings == True# Options:
azureResourceManagerEndpoint, containerRegistryEndpoint, usernamePassword
#dockerRegistryConnection: # Required when configureDockerSettings == True && RegistryCredentials == ContainerRegistryEndpoint
#azureSubscription: # Required when configureDockerSettings == True && RegistryCredentials == AzureResourceManagerEndpoint
#registryUserName: # Optional
#registryPassword: # Optional
#passwordEncrypted: true # Optional
Task Inputs
PARAMETERS DESCRIPTION
publishProfilePath (Optional) Path to the publish profile file that defines the settings to use.
Publish Profile Variables and wildcards can be used in the path.
Publish profiles can be created in Visual Studio as shown here
overrideApplicationParameter (Optional) Variables defined in the build or release pipeline will be matched
Override Application Parameters against the 'Parameter Name' entries in the application manifest file.
Application parameters file can be created in Visual Studio as shown here
Example: If your application has a parameter defined as below-
<Parameters>
<Parameter Name = "SampleApp_PartitionCount" Value ="1"/>
<Parameter Name = "SampleApp_InstanceCount" DefaultValue ="-
1"/>
</Parameters>
and you want to change the partition count to 2, you can define a
release pipeline or an environment variable "SampleApp_PartitionCount"
and its value as "2".
Note: If the same variable is defined in both the release pipeline and the
environment, the environment variable supersedes the release
pipeline variable.
Default value: false
skipPackageValidation (Optional) Indicates whether the package should be validated or not before
Skip package validation deployment. More information about package validation can be found here
Default value: false
overridePublishProfileSettings (Optional) This will override all upgrade settings with either the values
Override All Publish Profile Upgrade Settings specified below or the default value if not specified. More information about
upgrade settings can be found here
Default value: false
unregisterUnusedVersions (Optional) Indicates whether all unused versions of the application type will
Unregister Unused Versions be removed after an upgrade
Default value: true
upgradeMode (Required)
Upgrade Mode Default value: Monitored
FailureAction (Required)
FailureAction Default value: Rollback
UpgradeReplicaSetCheckTimeoutSec (Optional)
UpgradeReplicaSetCheckTimeoutSec
TimeoutSec (Optional)
TimeoutSec
ForceRestart (Optional)
ForceRestart Default value: false
HealthCheckRetryTimeoutSec (Optional)
HealthCheckRetryTimeoutSec
HealthCheckWaitDurationSec (Optional)
HealthCheckWaitDurationSec
HealthCheckStableDurationSec (Optional)
HealthCheckStableDurationSec
UpgradeDomainTimeoutSec (Optional)
UpgradeDomainTimeoutSec
ConsiderWarningAsError (Optional)
ConsiderWarningAsError Default value: false
DefaultServiceTypeHealthPolicy (Optional)
DefaultServiceTypeHealthPolicy
MaxPercentUnhealthyDeployedApplications (Optional)
MaxPercentUnhealthyDeployedApplications
UpgradeTimeoutSec (Optional)
UpgradeTimeoutSec
ServiceTypeHealthPolicyMap (Optional)
ServiceTypeHealthPolicyMap
registryCredentials (Required) Choose how credentials for the Docker registry will be provided
Registry Credentials Source Default value: AzureResourceManagerEndpoint
registryPassword (Optional) Password for the Docker registry. If the password is not
Registry Password encrypted, it is recommended that you use a custom release pipeline secret
variable to store it
Arguments
ARGUMENT DESCRIPTION
Publish Profile The location of the publish profile that specifies the settings to use for
deployment, including the location of the target Service Fabric cluster. Can
include wildcards and variables. Example:
$(system.defaultworkingdirectory)/**/drop/projectartifacts/**/PublishProfiles/Cloud
Application Package The location of the Service Fabric application package to be deployed to the
cluster. Can include wildcards and variables. Example:
$(system.defaultworkingdirectory)/**/drop/applicationpackage
Cluster Connection The name of the Azure Service Fabric service connection defined in the
TS/TFS project that describes the connection to the cluster.
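To illustrate the overrideApplicationParameter behavior described above, the following hedged example defines a pipeline variable that overrides a parameter from the application parameters file; the connection name and paths are placeholders:
variables:
  SampleApp_PartitionCount: 2   # overrides the value defined in the application parameters file

steps:
- task: ServiceFabricDeploy@1
  displayName: Deploy Service Fabric application
  inputs:
    applicationPackagePath: '$(System.DefaultWorkingDirectory)/**/drop/applicationpackage'
    serviceConnectionName: 'FabrikamClusterConnection'   # placeholder service connection
    publishProfilePath: '$(System.DefaultWorkingDirectory)/**/drop/projectartifacts/**/PublishProfiles/Cloud.xml'   # placeholder profile path
    overrideApplicationParameter: true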
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Service Fabric Compose Deploy task
6/2/2020 • 2 minutes to read
Azure Pipelines
Use this task to deploy a Docker-compose application to a Service Fabric cluster. This task deploys an Azure Service Fabric
application to a cluster according to the settings defined in the compose file.
Prerequisites
NOTE: This task is currently in preview and requires a preview version of Service Fabric that supports compose deploy. See
Docker Compose deployment support in Azure Service Fabric.
Service Fabric
This task uses a Service Fabric installation to connect and deploy to a Service Fabric cluster.
Download and install Azure Service Fabric Core SDK on the build agent.
YAML snippet
# Service Fabric Compose deploy
# Deploy a Docker Compose application to an Azure Service Fabric cluster
- task: ServiceFabricComposeDeploy@0
inputs:
clusterConnection:
#composeFilePath: '**/docker-compose.yml'
#applicationName: 'fabric:/Application1'
#registryCredentials: 'AzureResourceManagerEndpoint' # Options: azureResourceManagerEndpoint,
containerRegistryEndpoint, usernamePassword, none
#dockerRegistryConnection: # Optional
#azureSubscription: # Required when registryCredentials == AzureResourceManagerEndpoint
#registryUserName: # Optional
#registryPassword: # Optional
#passwordEncrypted: true # Optional
#upgrade: # Optional
#deployTimeoutSec: # Optional
#removeTimeoutSec: # Optional
#getStatusTimeoutSec: # Optional
Arguments
ARGUMENT DESCRIPTION
Cluster Connection The Azure Service Fabric service connection to use to connect and
authenticate to the cluster.
Compose File Path Path to the compose file that is to be deployed. Can include
wildcards and variables. Example:
$(System.DefaultWorkingDirectory)/**/drop/projectartifacts/**/docker-compose.yml
Note: combining compose files is not supported as part of this
task.
Application Name The Service Fabric Application Name of the application being
deployed. Use fabric:/ as a prefix. Application Names within a
Service Fabric cluster must be unique.
Registry Credentials Source Specifies how credentials for the Docker container registry will be
provided to the deployment task:
Azure Resource Manager Endpoint: An Azure Resource
Manager service connection and Azure subscription to be used to
obtain a service principal ID and key for an Azure Container
Registry.
Container Registry Endpoint: A Docker registry service
connection. If a certificate matching the Server Certificate
Thumbprint in the Cluster Connection is installed on the build
agent, it will be used to encrypt the password; otherwise the
password will not be encrypted and sent in clear text.
Username and Password: Username and password to be used.
We recommend you encrypt your password using
Invoke-ServiceFabricEncryptText (check Password
Encrypted). If you do not, and a certificate matching the Server
Certificate Thumbprint in the Cluster Connection is installed on the
build agent, it will be used to encrypt the password; otherwise the
password will not be encrypted and sent in clear text.
None: No registry credentials are provided (used for accessing
public container registries).
Get Status Timeout (s) Timeout in seconds for getting the status of an existing application.
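A hedged example of the task; the cluster connection, registry connection, and application name are placeholders:
- task: ServiceFabricComposeDeploy@0
  displayName: Deploy compose application
  inputs:
    clusterConnection: 'FabrikamClusterConnection'    # placeholder service connection
    composeFilePath: '$(System.DefaultWorkingDirectory)/**/drop/projectartifacts/**/docker-compose.yml'
    applicationName: 'fabric:/FabrikamApp'            # placeholder application name
    registryCredentials: 'ContainerRegistryEndpoint'
    dockerRegistryConnection: 'FabrikamACR'           # placeholder Docker registry connection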
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
SSH Deployment task
11/2/2020 • 3 minutes to read
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
Use this task to run shell commands or a script on a remote machine using SSH. This task enables you to connect
to a remote machine using SSH and run commands or a script.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
Prerequisites
The task supports use of an SSH key pair to connect to the remote machine(s).
The public key must be pre-installed or copied to the remote machine(s).
YAML snippet
# SSH
# Run shell commands or a script on a remote machine using SSH
- task: SSH@0
inputs:
sshEndpoint:
#runOptions: 'commands' # Options: commands, script, inline
#commands: # Required when runOptions == Commands
#scriptPath: # Required when runOptions == Script
#inline: # Required when runOptions == Inline
#interpreterCommand: # Used when runOptions == Inline
#args: # Optional
#failOnStdErr: true # Optional
Arguments
ARGUMENT DESCRIPTION
Shell script path Path to the shell script file to run on the remote machine. This
parameter is available only when Shell script is selected for
the Run option.
Interpreter command Path to the command interpreter used to execute the script.
Used when Run option = Inline. Adds a shebang line to the
beginning of the script. Relevant only for UNIX-like operating
systems. Please use empty string for Windows-based remote
hosts. See more about shebang (#!)
Advanced - Fail on STDERR If this option is selected (the default), the build will fail if the
remote commands or script write to STDERR.
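For example, a hedged sketch of running inline commands on a remote host; the service connection name and the commands themselves are placeholders:
- task: SSH@0
  displayName: Restart service on remote host
  inputs:
    sshEndpoint: 'FabrikamSshConnection'   # placeholder SSH service connection
    runOptions: 'inline'
    inline: |
      sudo systemctl restart myapp
      systemctl status myapp --no-pager
    failOnStdErr: false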
Supported algorithms
Key pair algorithms
RSA
DSA
Encryption algorithms
aes256-cbc
aes192-cbc
aes128-cbc
blowfish-cbc
3des-cbc
arcfour256
arcfour128
cast128-cbc
arcfour
For OpenSSL v1.0.1 and higher (on agent):
aes256-ctr
aes192-ctr
aes128-ctr
For OpenSSL v1.0.1 and higher, NodeJS v0.11.12 and higher (on agent):
aes128-gcm
[email protected]
aes256-gcm
[email protected]
See also
Install SSH Key task
Copy Files Over SSH
Blog post SSH build task
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
What key formats are supported for the SSH tasks?
The Azure Pipelines SSH tasks use the Node.js ssh2 package for SSH connections. Ensure that you are using the
latest version of the SSH tasks. Older versions may not support the OpenSSH key format.
If you run into an "Unsupported key format" error, then you may need to add the -m PEM flag to your ssh-keygen
command so that the key is in a supported format.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features are
available on-premises if you have upgraded to the latest version of TFS.
Windows Machine File Copy task
11/2/2020 • 3 minutes to read
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Use this task to copy application files and other artifacts such as PowerShell scripts and PowerShell-DSC modules
that are required to install the application on Windows Machines. It uses RoboCopy, the command-line utility built
for fast copying of data.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are called
phases.
YAML snippet
# Windows machine file copy
# Copy files to remote Windows machines
- task: WindowsMachineFileCopy@2
inputs:
sourcePath:
#machineNames: # Optional
#adminUserName: # Optional
#adminPassword: # Optional
targetPath:
#cleanTargetBeforeCopy: false # Optional
#copyFilesInParallel: true # Optional
#additionalArguments: # Optional
Arguments
ARGUMENT DESCRIPTION
Source The path to the files to copy. Can be a local physical path
such as c:\files or a UNC path such as
\\myserver\fileshare\files . You can use pre-defined
system variables such as $(Build.Repository.LocalPath)
(the working folder on the agent computer), which makes it
easy to specify the location of the build artifacts on the
computer that hosts the automation agent.
Destination Folder The folder on the Windows machine(s) to which the files will
be copied. Example: C:\FabrikamFibre\Web
Advanced - Clean Target Set this option to delete all the files in the destination folder
before copying the new files to it.
Advanced - Copy Files in Parallel Set this option to copy files to all the target machines in
parallel, which can speed up the copying process.
Select Machines By Depending on how you want to specify the machines in the
group when using the Filter Criteria parameter, choose
Machine Names or Tags .
Filter Criteria Optional. A list of machine names or tag names that identifies
the machines that the task will target. The filter criteria can
be:
- The name of an Azure Resource Group.
- An output variable from a previous task.
- A comma-delimited list of tag names or machine names.
Format when using machine names is a comma-separated list
of the machine FQDNs or IP addresses.
Specify tag names for a filter as {TagName}: {Value} Example:
Role:DB;OS:Win8.1
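A hedged example of copying build output to a set of Windows machines; the machine list, credentials, and paths are placeholders:
- task: WindowsMachineFileCopy@2
  displayName: Copy web artifacts to target machines
  inputs:
    sourcePath: '$(Build.Repository.LocalPath)\FabrikamFibre\Web'   # placeholder source path
    machineNames: '$(targetMachines)'                               # placeholder machine list
    adminUserName: '$(vmAdminUser)'
    adminPassword: '$(vmAdminPassword)'
    targetPath: 'C:\FabrikamFibre\Web'                              # placeholder destination folder
    cleanTargetBeforeCopy: true
    copyFilesInParallel: true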
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
I get a system error 53 when using this task. Why?
Typically this occurs when the specified path cannot be located. This may be due to a firewall blocking the
necessary ports for file and printer sharing, or an invalid path specification. For more details, see Error 53 on
TechNet.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
I use TFS on-premises and I don't see some of these features. Why not?
Some of these features are available only on Azure Pipelines and not yet available on-premises. Some features
are available on-premises if you have upgraded to the latest version of TFS.
WinRM SQL Server DB Deployment task
4/10/2020 • 3 minutes to read • Edit Online
Azure Pipelines
Use this task to deploy to SQL Server Database using a DACPAC or SQL script.
YAML snippet
# SQL Server database deploy
# Deploy a SQL Server database using DACPAC or SQL scripts
- task: SqlDacpacDeploymentOnMachineGroup@0
  inputs:
    #taskType: 'dacpac' # Options: dacpac, sqlQuery, sqlInline
    #dacpacFile: # Required when taskType == Dacpac
    #sqlFile: # Required when taskType == SqlQuery
    #executeInTransaction: false # Optional
    #exclusiveLock: false # Optional
    #appLockName: # Required when exclusiveLock == True
    #inlineSql: # Required when taskType == SqlInline
    #targetMethod: 'server' # Required when taskType == Dacpac # Options: server, connectionString, publishProfile
    #serverName: 'localhost' # Required when targetMethod == Server || TaskType == SqlQuery || TaskType == SqlInline
    #databaseName: # Required when targetMethod == Server || TaskType == SqlQuery || TaskType == SqlInline
    #authScheme: 'windowsAuthentication' # Required when targetMethod == Server || TaskType == SqlQuery || TaskType == SqlInline # Options: windowsAuthentication, sqlServerAuthentication
    #sqlUsername: # Required when authScheme == SqlServerAuthentication
    #sqlPassword: # Required when authScheme == SqlServerAuthentication
    #connectionString: # Required when targetMethod == ConnectionString
    #publishProfile: # Optional
    #additionalArguments: # Optional
    #additionalArgumentsSql: # Optional
Arguments
Deploy SQL Using: (Required) Specify the way in which you want to deploy the database, either by using a DACPAC or by using a SQL script.

DACPAC File: (Required) Location of the DACPAC file on the target machines or on a UNC path like \BudgetIT\Web\Deploy\FabrikamDB.dacpac. The UNC path should be accessible to the machine's administrator account. Environment variables are also supported, such as $env:windir, $env:systemroot, $env:windir\FabrikamFibre\DB. Wildcards can be used, for example /*.dacpac for a DACPAC file present in all subfolders.

Sql File: (Required) Location of the SQL file on the target. Provide a semicolon-separated list of SQL script files to execute multiple files. The SQL scripts will be executed in the order given. The location can also be a UNC path like \BudgetIT\Web\Deploy\FabrikamDB.sql. The UNC path should be accessible to the machine's administrator account. Environment variables are also supported, such as $env:windir, $env:systemroot, $env:windir\FabrikamFibre\DB. Wildcards can be used, for example /*.sql for a SQL file present in all subfolders.

Acquire an exclusive app lock while executing script(s): (Optional) Acquires an exclusive app lock while executing the script(s).

Specify SQL Using: (Required) Specify the option to connect to the target SQL Server database. The options are either to provide the SQL Server database details, the SQL Server connection string, or the publish profile XML file.

Database Name: (Required) Provide the name of the SQL Server database.

SQL User name: (Required) Provide the SQL login to connect to the SQL Server. This option is only available if SQL Server Authentication mode has been selected.

SQL Password: (Required) Provide the password of the SQL login. This option is only available if SQL Server Authentication mode has been selected.

Connection String: (Required) Specify the SQL Server connection string, for example "Server=localhost;Database=Fabrikam;User ID=sqluser;Password=placeholderpassword;"
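For illustration, here is a sketch of deploying a DACPAC with this task; the file path, server, and database names are placeholders:

- task: SqlDacpacDeploymentOnMachineGroup@0
  displayName: 'Deploy Fabrikam database from DACPAC'
  inputs:
    taskType: 'dacpac'
    dacpacFile: '$(System.DefaultWorkingDirectory)\drop\FabrikamDB.dacpac'   # placeholder path
    targetMethod: 'server'
    serverName: 'localhost'                                                  # placeholder server
    databaseName: 'Fabrikam'                                                 # placeholder database
    authScheme: 'windowsAuthentication'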
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
MySql Database Deployment on Machine Group task
4/10/2020 • 2 minutes to read • Edit Online
Use this task to run your scripts and make changes to your MySQL database. There are two ways to deploy: either using a script file or writing the script in the inline editor. Note that this is an early preview version. Since this task is server based, it appears in deployment group jobs.
Prerequisites
MySQL client on the agent machine
The task expects the MySQL client to be present on the agent machine.
Windows agent: Use this script file to install the MySQL client.
Linux agent: Run the command 'apt-get install mysql-client' to install the MySQL client.
Task Inputs
TaskNameSelector (Deploy MySql Using): Select one of the options between Script File and Inline Script. Default value: SqlTaskFile.

SqlFile (MySQL Script): (Required) Full path of the script file on the automation agent, or on a UNC path accessible to the automation agent like \BudgetIT\DeployBuilds\script.sql. Predefined system variables like $(agent.releaseDirectory) can also be used here. A file containing SQL statements can be used here.

DatabaseName (Database Name): The name of the database, if you already have one, on which the script needs to be run; otherwise the script itself can be used to create the database.
Example
This example creates a sample db in MySQL.
steps:
- task: MysqlDeploymentOnMachineGroup@1
  displayName: 'Deploy Using : InlineSqlTask'
  inputs:
    TaskNameSelector: InlineSqlTask
    SqlInline: |
      CREATE DATABASE IF NOT EXISTS alm;
      use alm;
    ServerName: localhost
    SqlUsername: root
    SqlPassword: P2ssw0rd
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Docker Installer task
4/10/2020 • 2 minutes to read • Edit Online
Azure Pipelines
Use this task to install a specific version of the Docker CLI on the agent machine.
This YAML example installs the Docker CLI on the agent machine:
- task: DockerInstaller@0
  displayName: Docker Installer
  inputs:
    dockerVersion: 17.09.0-ce
    releaseType: stable
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Go Tool Installer task
4/21/2020 • 2 minutes to read • Edit Online
Azure Pipelines
Use this task to find or download a specific version of the Go tool into the tools cache and add it to the PATH. Use the task to change the version of Go used in subsequent tasks.
YAML snippet
# Go tool installer
# Find in cache or download a specific version of Go and add it to the PATH
- task: GoTool@0
  inputs:
    #version: '1.10'
    #goPath: # Optional
    #goBin: # Optional
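For illustration, a minimal sketch that pins the Go version and then builds; the version and build command are examples:

- task: GoTool@0
  displayName: 'Use Go 1.15'
  inputs:
    version: '1.15'        # example version
- script: go build ./...
  displayName: 'Build Go packages'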
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Helm installer task
2/28/2020 • 2 minutes to read • Edit Online
Use this task to install a specific version of the Helm binary on agents.
YAML snippet
# Helm tool installer
# Install Helm on an agent machine
- task: HelmInstaller@1
  inputs:
    #helmVersionToInstall: 'latest' # Optional
The following YAML example installs the latest version of the Helm binary on the agent:
- task: HelmInstaller@1
  displayName: Helm installer
  inputs:
    helmVersionToInstall: latest
The following YAML example demonstrates the use of an explicit version string rather than installing the latest version available at the time of task execution:
- task: HelmInstaller@1
  displayName: Helm installer
  inputs:
    helmVersionToInstall: 2.14.1
Troubleshooting
HelmInstaller task running on a private agent behind a proxy fails to download helm package.
The HelmInstaller task does not use the proxy settings to download the file https://get.helm.sh/helm-v3.1.0-linux-amd64.zip. You can work around this by pre-installing Helm on your private agents.
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Java Tool Installer task
11/2/2020 • 3 minutes to read • Edit Online
Azure Pipelines
Use this task to acquire a specific version of Java from a user supplied Azure blob, from a location in the source or
on the agent, or from the tools cache. The task also sets the JAVA_HOME environment variable. Use this task to
change the version of Java used in Java tasks.
Demands
None
YAML snippet
# Java tool installer
# Acquire a specific version of Java from a user-supplied Azure blob or the tool cache and sets JAVA_HOME
- task: JavaToolInstaller@0
  inputs:
    #versionSpec: '8'
    jdkArchitectureOption: # Options: x64, x86
    jdkSourceOption: # Options: AzureStorage, LocalDirectory
    #jdkFile: # Required when jdkSourceOption == LocalDirectory
    #azureResourceManagerEndpoint: # Required when jdkSourceOption == AzureStorage
    #azureStorageAccountName: # Required when jdkSourceOption == AzureStorage
    #azureContainerName: # Required when jdkSourceOption == AzureStorage
    #azureCommonVirtualFile: # Required when jdkSourceOption == AzureStorage
    jdkDestinationDirectory:
    #cleanDestinationDirectory: true
Arguments
jdkSourceOption (JDK source): (Required) Specify the source for the compressed JDK: either Azure blob storage, a local directory on the agent or source repository, or the pre-installed version of Java (available for Microsoft-hosted agents). See the examples below for how to use a pre-installed version of Java.

jdkDestinationDirectory (Destination directory): (Required) Specify the destination directory into which the JDK should be extracted.
Examples
Here's an example of getting the archive file from a local directory on Linux. The file should be an archive (.zip, .gz)
of the JAVA_HOME directory so that it includes the bin , lib , include , jre , etc. directories.
- task: JavaToolInstaller@0
  inputs:
    versionSpec: "11"
    jdkArchitectureOption: x64
    jdkSourceOption: LocalDirectory
    jdkFile: "/builds/openjdk-11.0.2_linux-x64_bin.tar.gz"
    jdkDestinationDirectory: "/builds/binaries/externals"
    cleanDestinationDirectory: true
Here's an example of downloading the archive file from Azure Storage. The file should be an archive (.zip, .gz) of the
JAVA_HOME directory so that it includes the bin , lib , include , jre , etc. directories.
- task: JavaToolInstaller@0
  inputs:
    versionSpec: '6'
    jdkArchitectureOption: 'x64'
    jdkSourceOption: AzureStorage
    azureResourceManagerEndpoint: myARMServiceConnection
    azureStorageAccountName: myAzureStorageAccountName
    azureContainerName: myAzureStorageContainerName
    azureCommonVirtualFile: 'jdk1.6.0_45.zip'
    jdkDestinationDirectory: '$(agent.toolsDirectory)/jdk6'
    cleanDestinationDirectory: false
Here's an example of using the "pre-installed" feature. This feature allows you to use Java versions that are pre-installed on the Microsoft-hosted agent. You can find the available pre-installed versions of Java in the Software section.
- task: JavaToolInstaller@0
  inputs:
    versionSpec: '8'
    jdkArchitectureOption: 'x86'
    jdkSourceOption: 'PreInstalled'
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Where can I learn more about tool installers?
For an explanation of tool installers and examples, see Tool installers.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
Kubectl installer task
2/26/2020 • 2 minutes to read • Edit Online
Use this task to install a specific version of the kubectl binary on agents.
YAML snippet
# Kubectl tool installer
# Install Kubectl on agent machine
- task: KubectlInstaller@0
  inputs:
    #kubectlVersion: 'latest' # Optional
The following YAML example installs the latest version of the kubectl binary on the agent:
- task: KubectlInstaller@0
  displayName: Kubectl installer
  inputs:
    kubectlVersion: latest
The following YAML example demonstrates the use of an explicit version string rather than installing the latest version available at the time of task execution:
- task: KubectlInstaller@0
  displayName: Kubectl installer
  inputs:
    kubectlVersion: 1.15.0
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Node.js Tool Installer task
11/2/2020 • 2 minutes to read • Edit Online
Azure Pipelines
Build
Use this task to find, download, and cache a specified version of Node.js and add it to the PATH.
Demands
None
YAML snippet
# Node.js tool installer
# Finds or downloads and caches the specified version spec of Node.js and adds it to the PATH
- task: NodeTool@0
  inputs:
    #versionSpec: '6.x'
    #checkLatest: false # Optional
Arguments
checkLatest (Check for Latest Version): (Optional) Select if you want the agent to check for the latest available version that satisfies the version spec. For example, you select this option because you run this build on your self-hosted agent and you want to always use the latest 6.x version.
TIP
If you're using the Microsoft-hosted agents, you should leave this check box cleared. We update the Microsoft-hosted
agents on a regular basis, but they're often slightly behind the latest version. So selecting this box will result in your build
spending a lot of time updating to a newer minor version.
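For illustration, a minimal sketch that pins a Node.js version and then runs npm; the version spec is an example:

- task: NodeTool@0
  displayName: 'Use Node.js 12.x'
  inputs:
    versionSpec: '12.x'    # example version spec
    checkLatest: false
- script: |
    node --version
    npm ci
  displayName: 'Show Node.js version and install packages'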
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Where can I learn more about tool installers?
For an explanation of tool installers and examples, see Tool installers.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
NuGet Tool Installer task
6/2/2020 • 2 minutes to read • Edit Online
Azure Pipelines
Build
Use this task to find, download, and cache a specified version of NuGet and add it to the PATH.
Demands
None
YAML snippet
# NuGet tool installer
# Acquires a specific version of NuGet from the internet or the tools cache and adds it to the PATH. Use this task to change the version of NuGet used in the NuGet tasks.
- task: NuGetToolInstaller@1
  inputs:
    #versionSpec: # Optional
    #checkLatest: false # Optional
Arguments
checkLatest (Always check for new versions): Always check for and download the latest available version of NuGet.exe which satisfies the version spec. Enabling this option could cause unexpected build breaks when a new version of NuGet is released.
TIP
If you're using the Microsoft-hosted agents, you should leave this check box cleared. We update the Microsoft-hosted
agents on a regular basis, but they're often slightly behind the latest version. So selecting this box will result in your build
spending a lot of time updating to a newer minor version.
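For illustration, a minimal sketch that pins a NuGet version before a restore step; the version and solution pattern are examples:

- task: NuGetToolInstaller@1
  displayName: 'Use NuGet 5.8.0'
  inputs:
    versionSpec: '5.8.0'   # example version
    checkLatest: false
- task: NuGetCommand@2
  displayName: 'NuGet restore'
  inputs:
    command: 'restore'
    restoreSolution: '**/*.sln'   # example solution pattern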
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Where can I learn more about tool installers?
For an explanation of tool installers and examples, see Tool installers.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
Use .NET Core task
11/2/2020 • 3 minutes to read • Edit Online
Azure Pipelines
Use this task to acquire a specific version of .NET Core from the Internet or the tools cache and add it to the PATH. You can also use this task to change the version of .NET Core used in subsequent tasks like the .NET Core CLI task. Another reason to use the tool installer is to decouple your pipeline from our update cycles, helping to avoid a pipeline run being broken by a change we make to the agent software.
What's New
Support for installing multiple versions side by side.
Support for patterns in version to fetch latest in minor/major version. For example, you can now specify
2.2.x to get the latest patch.
Perform Multi-level lookup. This input is only applicable to Windows based agents. It configures the .NET
Core's host process behavior for looking for a suitable shared framework on the machine. For more
information, see Multi-level SharedFX Lookup.
Installs NuGet version 4.4.1 and sets up proxy configuration if present in NuGet config.
Task Inputs
useGlobalJson (Use global json): Select this option to install all SDKs from global.json files. These files are searched from system.DefaultWorkingDirectory. You can change the search root path by setting the working directory input.
steps:
- task: UseDotNet@2
  displayName: 'Use .NET Core sdk'
  inputs:
    packageType: sdk
    version: 2.2.203
    installationPath: $(Agent.ToolsDirectory)/dotnet
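For comparison, here is a sketch of installing SDKs from global.json files instead of a pinned version; it assumes a global.json file exists at the repository root:

steps:
- task: UseDotNet@2
  displayName: 'Use .NET Core SDK from global.json'
  inputs:
    packageType: sdk
    useGlobalJson: true   # assumes a global.json at the search root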
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Use Python Version task
11/2/2020 • 3 minutes to read • Edit Online
Azure Pipelines
Use this task to select a version of Python to run on an agent, and optionally add it to PATH.
Demands
None
Prerequisites
A Microsoft-hosted agent with side-by-side versions of Python installed, or a self-hosted agent with
Agent.ToolsDirectory configured (see FAQ).
This task will fail if no Python versions are found in Agent.ToolsDirectory. Available Python versions on Microsoft-
hosted agents can be found here.
NOTE
x86 and x64 versions of Python are available on Microsoft-hosted Windows agents, but not on Linux or macOS agents.
YAML snippet
# Use Python version
# Use the specified version of Python from the tool cache, optionally adding it to the PATH
- task: UsePythonVersion@0
  inputs:
    #versionSpec: '3.x'
    #addToPath: true
    #architecture: 'x64' # Options: x86, x64 (this argument applies only on Windows agents)
Arguments
As of version 0.150 of the task, the version spec will also accept pypy2 or pypy3.
If the task completes successfully, the task's output variable will contain the directory of the Python installation.
Remarks
After running this task with "Add to PATH," the python command in subsequent scripts will be for the highest
available version of the interpreter matching the version spec and architecture.
The versions of Python installed on the Microsoft-hosted Ubuntu and macOS images follow the symlinking
structure for Unix-like systems defined in PEP 394. For example, for Python 3.7, python3.7 is the actual
interpreter. python3 is symlinked to that interpreter, and python is a symlink to that symlink.
On the Microsoft-hosted Windows images, the interpreter is just python .
For Microsoft-hosted agents, x86 is supported only on Windows. This is because Windows can run executables
compiled for the x86 architecture with the WoW64 subsystem. Hosted Ubuntu and Hosted macOS run 64-bit
operating systems and run only 64-bit Python.
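For illustration, a minimal sketch that selects a Python version and then uses it from a script step; the version is an example:

- task: UsePythonVersion@0
  displayName: 'Use Python 3.8'
  inputs:
    versionSpec: '3.8'     # example version
    addToPath: true
- script: |
    python --version
    python -m pip install --upgrade pip
  displayName: 'Show Python version and upgrade pip'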
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Where can I learn more about tool installers?
For an explanation of tool installers and examples, see Tool installers.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
How can I configure a self-hosted agent to use this task?
The desired Python version will have to be added to the tool cache on the self-hosted agent in order for the task to
use it. Normally the tool cache is located under the _work/_tool directory of the agent or the path can be
overridden by the environment variable AGENT_TOOLSDIRECTORY . Under that directory, create the following
directory structure based off of your Python version:
$AGENT_TOOLSDIRECTORY/
    Python/
        {version number}/
            {platform}/
                {tool files}
            {platform}.complete
The version number should follow the format of 1.2.3 . The platform should either be x86 or x64 . The
tool files should be the unzipped Python version files. The {platform}.complete should be a 0 byte file that
looks like x86.complete or x64.complete and just signifies the tool has been installed in the cache properly.
As a complete, concrete example, here is how a completed download of Python 3.6.4 for x64 would look in the
tool cache:
$AGENT_TOOLSDIRECTORY/
    Python/
        3.6.4/
            x64/
                {tool files}
            x64.complete
Use Ruby Version task
Azure Pipelines
Use this task to select a version of Ruby to run on an agent, and optionally add it to PATH.
Demands
None
Prerequisites
A Microsoft-hosted agent with side-by-side versions of Ruby installed, or a self-hosted agent with
Agent.ToolsDirectory configured (see FAQ).
This task will fail if no Ruby versions are found in Agent.ToolsDirectory. Available Ruby versions on Microsoft-
hosted agents can be found here.
YAML snippet
# Use Ruby version
# Use the specified version of Ruby from the tool cache, optionally adding it to the PATH
- task: UseRubyVersion@0
  inputs:
    #versionSpec: '>= 2.4'
    #addToPath: true # Optional
Arguments
If the task completes successfully, the task's output variable will contain the directory of the Ruby installation.
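For illustration, a minimal sketch that selects a Ruby version and then verifies it; the version spec is an example:

- task: UseRubyVersion@0
  displayName: 'Use Ruby >= 2.5'
  inputs:
    versionSpec: '>= 2.5'  # example version spec
    addToPath: true
- script: ruby --version
  displayName: 'Show Ruby version'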
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
FAQ
Where can I learn more about tool installers?
For an explanation of tool installers and examples, see Tool installers.
Do I need an agent?
You need at least one agent to run your build or release.
I'm having problems. How can I troubleshoot them?
See Troubleshoot Build and Release.
I can't select a default agent pool and I can't queue my build or release. How do I fix this?
See Agent pools.
How can I configure a self-hosted agent to use this task?
You can run this task on a self-hosted agent with your own Ruby versions. To run this task on a self-hosted agent,
set up Agent.ToolsDirectory by following the instructions here. The tool name to use is "Ruby."
Visual Studio Test Platform Installer task
11/2/2020 • 2 minutes to read • Edit Online
Demands
[none]
YAML snippet
# Visual Studio test platform installer
# Acquire the test platform from nuget.org or the tool cache. Satisfies the 'vstest' demand and can be used for running tests and collecting diagnostic data using the Visual Studio Test task.
- task: VisualStudioTestPlatformInstaller@1
  inputs:
    #packageFeedSelector: 'nugetOrg' # Options: nugetOrg, customFeed, netShare
    #versionSelector: 'latestPreRelease' # Required when packageFeedSelector == NugetOrg || PackageFeedSelector == CustomFeed # Options: latestPreRelease, latestStable, specificVersion
    #testPlatformVersion: # Required when versionSelector == SpecificVersion
    #customFeed: # Required when packageFeedSelector == CustomFeed
    #username: # Optional
    #password: # Optional
    #netShare: # Required when packageFeedSelector == NetShare
Arguments
username (User Name): (Optional) Specify the user name to authenticate with the feed specified in the Package Source argument. If using a personal access token (PAT) in the password argument, this input is not required.
The Test platform version option in the Visual Studio Test task must be set to Installed by Tools Installer .
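For illustration, a sketch of acquiring the test platform and then consuming it from the Visual Studio Test task; the test assembly pattern is a placeholder, and the Visual Studio Test task inputs shown are assumptions based on that task's options:

- task: VisualStudioTestPlatformInstaller@1
  inputs:
    packageFeedSelector: 'nugetOrg'
    versionSelector: 'latestStable'
- task: VSTest@2
  inputs:
    vsTestVersion: 'toolsInstaller'   # corresponds to "Installed by Tools Installer"
    testAssemblyVer2: |
      **\*test*.dll
      !**\obj\**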
Open source
This task is open source on GitHub. Feedback and contributions are welcome.
Troubleshoot pipeline runs
11/2/2020 • 19 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
This topic provides general troubleshooting guidance. For specific troubleshooting about
.NET Core, see .NET Core troubleshooting.
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release
pipelines are called definitions, runs are called builds, service connections are called service
endpoints, stages are called environments, and jobs are called phases.
You can use the following troubleshooting sections to help diagnose issues with your
pipeline. Most pipeline failures fall into one of these categories.
Pipeline won't trigger
Pipeline queues but never gets an agent
Pipeline fails to complete
NOTE
An additional reason that runs may not start is that your organization goes dormant five
minutes after the last user signs out of Azure DevOps. After that, each of your build pipelines
will run one more time. For example, while your organization is dormant:
A nightly build of code in your organization will run only one night until someone signs in
again.
CI builds of an Other Git repo will stop running until someone signs in again.
IMPORTANT
When you define a YAML PR or CI trigger, only branches explicitly configured to be included will
trigger a run. Includes are processed first, and then excludes are removed from the list. If you
specify an exclude but don't specify any includes, nothing will trigger. For more information, see
Triggers.
When you define a YAML PR or CI trigger, you can specify both include and exclude
clauses for branches, tags, and paths. Ensure that the include clause matches the details
of your commit and that the exclude clause doesn't exclude them. For more information,
see Triggers.
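For illustration, a minimal sketch of a CI trigger with both include and exclude branch filters; the branch names are examples:

trigger:
  branches:
    include:
    - main
    - releases/*
    exclude:
    - releases/old*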
NOTE
If you specify an exclude clause without an include clause, it is equivalent to specifying *
in the include clause.
NOTE
The following scenarios won't consume a parallel job:
If you use release pipelines or multi-stage YAML pipelines, then a run consumes a parallel job
only when it's being actively deployed to a stage. While the release is waiting for an approval
or a manual intervention, it does not consume a parallel job.
When you run a server job or deploy to a deployment group using release pipelines, you
don't consume any parallel jobs.
Learn more: How a parallel job is consumed by a pipeline, Add Pre-deployment approvals,
Server jobs, Deployment groups
Parallel job limits - no available agents or you have hit your free limits
Demands that don't match the capabilities of an agent
TFS agent connection issues
Parallel job limits - no available agents or you have hit your free limits
If you are currently running other pipelines, you may not have any remaining parallel
jobs, or you may have hit your free limits.
To check your limits, navigate to Project settings , Parallel jobs .
After reviewing the limits, check concurrency to see how many jobs are currently running
and how many are available.
You don't have enough concurrency
To check how much concurrency you have:
1. To check your limits, navigate to Project settings , Parallel jobs .
You can also reach this page by navigating to
https://dev.azure.com/{org}/_settings/buildqueue?_a=concurrentJobs , or choosing
manage parallel jobs from the logs.
2. Determine which pool you want to check concurrency on (Microsoft hosted or self
hosted pools), and choose View in-progress jobs .
3. You'll see text that says Currently running X/X jobs . If both numbers are the
same then jobs will wait until currently running jobs complete.
You can view all jobs, including queued jobs, by selecting Agent pools from the
Project settings .
In this example, the concurrent job limit is one, with one job running and one
queued up. When all agents are busy running jobs, as in this example, the
following message is displayed when additional jobs are queued:
The agent request is not running because all potential agents are running other requests. Current position in queue: 1
In this example the job is next in the queue, so its position is one.
Your job may be waiting for approval
Your pipeline may not move to the next stage because it is waiting on approval. For more
information, see Define approvals and checks.
All available agents are in use
Jobs may wait if all your agents are currently busy. To check your agents:
1. Navigate to https://dev.azure.com/{org}/_settings/agentpools
2. Select the agent pool to check, in this example FabrikamPool , and choose
Agents .
This page shows all the agents currently online/offline and in use. You can also add
additional agents to the pool from this page.
Demands that don't match the capabilities of an agent
If your pipeline has demands that don't meet the capabilities of any of your agents, your
pipeline won't start. If only some of your agents have the desired capabilities and they are
currently running other pipelines, your pipeline will be stalled until one of those agents
becomes available.
To check the capabilities and demands specified for your agents and pipelines, see
Capabilities.
NOTE
Capabilities and demands are typically used only with self-hosted agents. If your pipeline has
demands that don't match the system capabilities of the agent, unless you have explicitly
labelled the agents with matching capabilities, your pipelines won't get an agent.
If the above error is received while configuring the agent, log on to your TFS machine.
Start the Internet Information Services (IIS) manager. Make sure Anonymous
Authentication is enabled.
The job has been abandoned because agent did not renew the lock. Ensure agent is
running, not sleeping, and has not lost communication with the service.
This error may indicate the agent lost communication with the server for a span of
several minutes. Check the following to rule out network or other interruptions on the
agent machine:
Verify automatic updates are turned off. A machine reboot from an update will cause a
build or release to fail with the above error. Apply updates in a controlled fashion to
avoid this type of interruption. Before rebooting the agent machine, the agent should
first be marked disabled in the pool administration page and let any running build
finish.
Verify the sleep settings are turned off.
If the agent is running on a virtual machine, avoid any live migration or other VM
maintenance operation that may severely impact the health of the machine for
multiple minutes.
If the agent is running on a virtual machine, the same operating-system-update recommendations and sleep-setting recommendations apply to the host machine, as do any other maintenance operations that severely impact the host machine.
Performance monitor logging or other health metric logging can help to correlate this
type of error to constrained resource availability on the agent machine (disk, memory,
page file, processor, network).
Another way to correlate the error with network problems is to ping a server
indefinitely and dump the output to a file, along with timestamps. Use a healthy
interval, for example 20 or 30 seconds. If you are using Azure Pipelines, then you
would want to ping an internet domain, for example bing.com. If you are using an on-
premises TFS server, then you would want to ping a server on the same network.
Verify the network throughput of the machine is adequate. You can perform an online
speed test to check the throughput.
If you use a proxy, verify the agent is configured to use your proxy. Refer to the agent
deployment topic.
TFS Job Agent not started
This may be characterized by a message in the web console "Waiting for an agent to be
requested". Verify the TFSJobAgent (display name: Visual Studio Team Foundation
Background Job Agent) Windows service is started.
Misconfigured notification URL (1.x agent version)
This may be characterized by a message in the web console "Waiting for console output
from an agent", and the process eventually times out.
A mismatched notification URL may cause the worker process to fail to connect to the server. See Team Foundation Administration Console, Application Tier. The 1.x agent
listens to the message queue using the URL that it was configured with. However, when a
job message is pulled from the queue, the worker process uses the notification URL to
communicate back to the server.
Check Azure DevOps status for a service degradation
Check the Azure DevOps Service Status Portal for any issues that may cause a service
degradation, such as increased queue time for agents. For more information, see Azure
DevOps Service Status.
NOTE
If your Microsoft-hosted agent jobs are timing out, ensure that you haven't specified a pipeline
timeout that is less than the max timeout for a job. To check, see Timeouts.
This may be characterized by a message in the log "All files up to date" from the tf get
command. Verify the built-in service identity has permission to download the sources.
Either the identity Project Collection Build Service or Project Build Service will need
permission to download the sources, depending on the selected authorization scope on
General tab of the build pipeline. In the version control web UI, you can browse the
project files at any level of the folder hierarchy and check the security settings.
Get sources through Team Foundation Proxy
The easiest way to configure the agent to get sources through a Team Foundation Proxy is to set the environment variable TFSPROXY to point to the TFVC proxy server for the agent's run-as user.
Windows:
set TFSPROXY=http://tfvcproxy:8081
setx TFSPROXY=http://tfvcproxy:8081  // If the agent service is running as NETWORKSERVICE or any service account, you can't easily set a user-level environment variable
macOS/Linux:
export TFSPROXY=http://tfvcproxy:8081
Troubleshooting steps:
Detect files and folders in use
Anti-virus exclusion
MSBuild and /nodeReuse:false
MSBuild and /maxcpucount:[n]
Detect files and folders in use
On Windows, tools like Process Monitor can be used to capture a trace of file events under a specific directory. Or, for a snapshot in time, tools like Process Explorer or Handle can be used.
Anti-virus exclusion
Anti-virus software scanning your files can cause file or folder in use errors during a build
or release. Adding an anti-virus exclusion for your agent directory and configured "work
folder" may help to identify anti-virus software as the interfering process.
MSBuild and /nodeReuse:false
If you invoke MSBuild during your build, make sure to pass the argument
/nodeReuse:false (short form /nr:false ). Otherwise MSBuild process(es) will remain
running after the build completes. The process(es) remain for some time in anticipation of
a potential subsequent build.
This feature of MSBuild can interfere with attempts to delete or move a directory - due to
a conflict with the working directory of the MSBuild process(es).
The MSBuild and Visual Studio Build tasks already add /nr:false to the arguments
passed to MSBuild. However, if you invoke MSBuild from your own script, then you would
need to specify the argument.
MSBuild and /maxcpucount:[n]
By default the build tasks such as MSBuild and Visual Studio Build run MSBuild with the
/m switch. In some cases this can cause problems such as multiple process file access
issues.
Try adding the /m:1 argument to your build tasks to force MSBuild to run only one
process at a time.
File-in-use issues may result when leveraging the concurrent-process feature of MSBuild.
Not specifying the argument /maxcpucount:[n] (short form /m:[n] ) instructs MSBuild to
use a single process only. If you are using the MSBuild or Visual Studio Build tasks, you
may need to specify "/m:1" to override the "/m" argument that is added by default.
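For illustration, a sketch of a script step that passes both arguments when invoking MSBuild directly; the solution name is a placeholder:

- script: msbuild FabrikamFibre.sln /nr:false /m:1
  displayName: 'Build with a single, non-reused MSBuild process'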
Intermittent or inconsistent MSBuild failures
If you are experiencing intermittent or inconsistent MSBuild failures, try instructing
MSBuild to use a single-process only. Intermittent or inconsistent errors may indicate that
your target configuration is incompatible with the concurrent-process feature of MSBuild.
See MSBuild and /maxcpucount:[n].
Process stops responding
Process stops responding causes and troubleshooting steps:
Waiting for Input
Process dump
WiX project
Waiting for Input
A process that stops responding may indicate that a process is waiting for input.
Running the agent from the command line of an interactive logged on session may help
to identify whether a process is prompting with a dialog for input.
Running the agent as a service may help to eliminate programs from prompting for
input. For example in .NET, programs may rely on the
System.Environment.UserInteractive Boolean to determine whether to prompt. When
running as a Windows service, the value is false.
Process dump
Analyzing a dump of the process can help to identify what a deadlocked process is
waiting on.
WiX project
Building a WiX project with custom MSBuild loggers enabled can cause WiX to deadlock waiting on the output stream. Adding the MSBuild argument /p:RunWixToolsOutOfProc=true will work around the issue.
If a Bash script in your pipeline runs with set -x enabled and uses the ##vso[task.setvariable] logging command, the variable value can end up with a stray single quote appended. For example:

steps:
- bash: |
    set -x
    echo ##vso[task.setvariable variable=MY_VAR]my_value

The log shows both the original command and the traced echo:

##vso[task.setvariable variable=MY_VAR]my_value
+ echo '##vso[task.setvariable variable=MY_VAR]my_value'

When the agent sees the first line, MY_VAR will be set to the correct value, "my_value". However, when it sees the second line, the agent will process everything to the end of the line. MY_VAR will be set to "my_value'".

As a workaround, turn tracing off around the logging command:

set +x
echo ##vso[task.setvariable variable=MY_VAR]my_value
set -x
Service Connection related issues
To troubleshoot issues related to service connections, see Service connection
troubleshooting.
Pipeline logs provide a powerful tool for determining the cause of pipeline failures.
A typical starting point is to review the logs in your completed build or release. You can view logs by navigating to
the pipeline run summary and selecting the job and task. If a certain task is failing, check the logs for that task.
In addition to viewing logs in the pipeline build summary, you can download complete logs which include
additional diagnostic information, and you can configure more verbose logs to assist with your troubleshooting.
To configure verbose logs for all runs, you can add a variable named system.debug and set its value to
true .
To configure verbose logs for a single run, you can start a new build by choosing Queue build , and setting
the value for the system.debug variable to true .
To configure verbose logs for all runs, edit the build, navigate to the Variables tab, and add a variable
named system.debug , set its value to true , and select to Allow at Queue Time .
To configure verbose logs for a YAML pipeline, add the system.debug variable in the variables section:
variables:
  system.debug: true
To download all logs, navigate to the build results for the run, select ..., and choose Download logs .
To download all logs, navigate to the build results for the run, choose Download all logs as zip .
In addition to the pipeline diagnostic logs, the following specialized log types are available, and may contain
information to help you troubleshoot.
Worker diagnostic logs
Agent diagnostic logs
Other logs
Other logs
Inside the diagnostic logs you will find environment.txt and capabilities.txt .
The environment.txt file has various information about the environment within which your build ran. This includes
information like what tasks are run, whether or not the firewall is enabled, PowerShell version info, and some other
items. We continually add to this data to make it more useful.
The capabilities.txt file provides a clean way to see all capabilities installed on the build machine that ran your
build.
IMPORTANT
HTTP traces and trace files can contain passwords and other secrets. Do not post them on public sites.
Windows:
set VSTS_AGENT_HTTPTRACE=true
macOS/Linux:
export VSTS_AGENT_HTTPTRACE=true
set VSTS_HTTP_PROXY=http://127.0.0.1:8888
5. Run the agent interactively. If you're running it as a service, you can set the environment variable in Control Panel for the account the service is running as.
6. Restart the agent.
Use full HTTP tracing - macOS and Linux
Use Charles Proxy (similar to Fiddler on Windows) to capture the HTTP trace of the agent.
1. Start Charles Proxy.
2. Charles: Proxy > Proxy Settings > SSL Tab. Enable. Add URL.
3. Charles: Proxy > Mac OSX Proxy. Recommend disabling to only see agent traffic.
export VSTS_HTTP_PROXY=http://127.0.0.1:8888
4. Run the agent interactively. If it's running as a service, you can set in the .env file. See nix service
5. Restart the agent.
Classic release and artifacts variables
11/2/2020 • 14 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called
definitions, runs are called builds, service connections are called service endpoints, stages are called environments, and
jobs are called phases.
Classic release and artifacts variables are a convenient way to exchange and transport data throughout your
pipeline. Each variable is stored as a string and its value can change between runs of your pipeline.
Variables are different from Runtime parameters which are only available at template parsing time.
NOTE
This is a reference article that covers the classic release and artifacts variables. To understand variables in YAML
pipelines, see user-defined variables.
As you compose the tasks for deploying your application into each stage in your DevOps CI/CD processes,
variables will help you to:
Define a more generic deployment pipeline once, and then customize it easily for each stage. For
example, a variable can be used to represent the connection string for web deployment, and the value
of this variable can be changed from one stage to another. These are custom variables .
Use information about the context of the particular release, stage, artifacts, or agent in which the
deployment pipeline is being run. For example, your script may need access to the location of the build
to download it, or to the working directory on the agent to create temporary files. These are default
variables .
TIP
You can view the current values of all variables for a release, and use a default variable to run a release in debug mode.
Default variables
Information about the execution context is made available to running tasks through default variables. Your
tasks and scripts can use these variables to find information about the system, release, stage, or agent they
are running in. With the exception of System.Debug , these variables are read-only and their values are
automatically set by the system. Some of the most significant variables are described in the following tables.
To view the full list, see View the current values of all variables.
System.WorkFolder
The working directory for this agent, where subfolders are created for every build or release. Same as Agent.RootDirectory and Agent.WorkFolder.
Example: C:\agent\_work

System.Debug
This is the only system variable that can be set by the users. Set this to true to run the release in debug mode to assist in fault-finding.
Example: true

Release.TriggeringArtifact.Alias
The alias of the artifact which triggered the release. This is empty when the release was scheduled or triggered manually.
Example: fabrikam\_app

Agent.Name
The name of the agent as registered with the agent pool. This is likely to be different from the computer name.
Example: fabrikam-agent

Agent.HomeDirectory
The folder where the agent is installed. This folder contains the code and resources for the agent.
Example: C:\agent

Agent.RootDirectory
The working directory for this agent, where subfolders are created for every build or release. Same as Agent.WorkFolder and System.WorkFolder.
Example: C:\agent\_work

Agent.WorkFolder
The working directory for this agent, where subfolders are created for every build or release. Same as Agent.RootDirectory and System.WorkFolder.
Example: C:\agent\_work

Release.Artifacts.{alias}.SourceBranch
The full path and name of the branch from which the source was built.

Release.Artifacts.{alias}.SourceBranchName
The name only of the branch from which the source was built.

Release.Artifacts.{alias}.Repository.Provider
The type of repository from which the source was built.

Release.Artifacts.{alias}.PullRequest.TargetBranch
The full path and name of the branch that is the target of a pull request. This variable is initialized only if the release is triggered by a pull request flow.

Release.Artifacts.{alias}.PullRequest.TargetBranchName
The name only of the branch that is the target of a pull request. This variable is initialized only if the release is triggered by a pull request flow.
To use a default variable in your script, you must first replace the . in the default variable names with _ .
For example, to print the value of artifact variable Release.Artifacts.{Artifact alias}.DefinitionName for the
artifact source whose alias is ASPNET4.CI in a PowerShell script, you would use
$env:RELEASE_ARTIFACTS_ASPNET4_CI_DEFINITIONNAME .
Note that the original name of the artifact source alias, ASPNET4.CI , is replaced by ASPNET4_CI .
View the current values of all variables
1. Open the pipelines view of the summary for the release, and choose the stage you are interested in. In
the list of steps, choose Initialize job .
2. This opens the log for this step. Scroll down to see the values used by the agent for this job.
TIP
If you get an error related to an Azure RM service connection, see How to: Troubleshoot Azure Resource Manager
service connections.
Custom variables
Custom variables can be defined at various scopes.
Share values across all of the definitions in a project by using variable groups. Choose a variable
group when you need to use the same values across all the definitions, stages, and tasks in a project,
and you want to be able to change the values in a single place. You define and manage variable groups
in the Library tab.
Share values across all of the stages by using release pipeline variables . Choose a release pipeline
variable when you need to use the same value across all the stages and tasks in the release pipeline,
and you want to be able to change the value in a single place. You define and manage these variables
in the Variables tab in a release pipeline. In the Pipeline Variables page, open the Scope drop-down
list and select "Release". By default, when you add a variable, it is set to Release scope.
Share values across all of the tasks within one specific stage by using stage variables. Use a stage-level variable for values that vary from stage to stage (and are the same for all the tasks in a stage).
You define and manage these variables in the Variables tab of a release pipeline. In the Pipeline
Variables page, open the Scope drop-down list and select the required stage. When you add a variable,
set the Scope to the appropriate environment.
Using custom variables at project, release pipeline, and stage scope helps you to:
Avoid duplication of values, making it easier to update all occurrences as one operation.
Store sensitive values in a way that they cannot be seen or changed by users of the release pipelines.
Designate a configuration property to be a secure (secret) variable by selecting the (padlock) icon
next to the variable.
IMPORTANT
The values of the hidden (secret) variables are securely stored on the server and cannot be viewed by users
after they are saved. During a deployment, the Azure Pipelines release service decrypts these values when
referenced by the tasks and passes them to the agent over a secure HTTPS channel.
NOTE
Creating custom variables can overwrite standard variables. For example, the PowerShell Path environment variable. If
you create a custom Path variable on a Windows agent, it will overwrite the $env:Path variable and PowerShell
won't be able to run.
NOTE
At present, variables in different groups that are linked to a pipeline in the same scope (e.g., job or stage) will collide
and the result may be unpredictable. Ensure that you use different names for variables across all your variable groups.
You can use custom variables to prompt for values during the execution of a release. For more information,
see Approvals.
Define and modify your variables in a script
To define or modify a variable from a script, use the task.setvariable logging command. Note that the
updated variable value is scoped to the job being executed, and does not flow across jobs or stages. Variable
names are transformed to uppercase, and the characters "." and " " are replaced by "_".
For example, Agent.WorkFolder becomes AGENT_WORKFOLDER . On Windows, you access this as
%AGENT_WORKFOLDER% or $env:AGENT_WORKFOLDER . On Linux and macOS, you use $AGENT_WORKFOLDER .
TIP
You can run a script on a:
Windows agent using either a Batch script task or PowerShell script task.
macOS or Linux agent using a Shell script task.
Batch script

Arguments:

"$(sauce)" "$(secret.Sauce)"

Script:

@echo off
set sauceArgument=%~1
set secretSauceArgument=%~2
@echo No problem reading %sauceArgument% or %SAUCE%
@echo But I cannot read %SECRET_SAUCE%
@echo But I can read %secretSauceArgument% (but the log is redacted so I do not spoil the secret)
Troubleshoot Azure Resource Manager service connections
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions,
runs are called builds, service connections are called service endpoints, stages are called environments, and jobs are
called phases.
This topic will help you resolve issues you may encounter when creating a connection to Microsoft Azure using an Azure Resource Manager (ARM) service connection for your Azure DevOps CI/CD processes.
WARNING
Users who are assigned to the Global administrator role can read and modify every administrative setting in your Azure
AD organization. As a best practice, we recommend that you assign this role to fewer than five people in your
organization.
1. Sign in to the Azure portal using an administrator account. The account should be an owner, global
administrator, or user account administrator.
2. Select Azure Active Directory in the left navigation bar.
3. Ensure you are editing the appropriate directory corresponding to the user subscription. If not, select Switch directory and log in using the appropriate credentials if required.
4. In the MANAGE section select Users.
5. Use the search box to filter the list and then select the user you want to manage.
6. In the MANAGE section select Directory role and change the role to Global administrator.
7. Save the change.
It typically takes 15 to 20 minutes to apply the changes globally. After this period has elapsed, the user can
retry creating the service connection.
The user is not authorized to add applications in the directory
You must have permissions to add integrated applications in the directory. The directory administrator has
permissions to change this setting.
1. Select Azure Active Directory in the left navigation bar.
2. Ensure you are editing the appropriate directory corresponding to the user subscription. If not, select Switch directory and log in using the appropriate credentials if required.
3. In the MANAGE section select Users .
4. Select User settings .
5. In the App registrations section, change Users can register applications to Yes .
Create the service principal manually with the user already having required permissions in Azure Active Directory
You can also create the service principal with an existing user who already has the required permissions in
Azure Active Directory. For more information, see Create an Azure Resource Manager service connection with
an existing service principal.
Failed to obtain an access token or a valid refresh token was not found
These errors typically occur when your session has expired.
To resolve these issues:
1. Sign out of Azure Pipelines or TFS.
2. Open an InPrivate or incognito browser window and navigate to https://visualstudio.microsoft.com/team-
services/.
3. If you are prompted to sign out, do so.
4. Sign in using the appropriate credentials.
5. Choose the organization you want to use from the list.
6. Select the project you want to add the service connection to.
7. Create the service connection you need by opening the Settings page. Then, select Services > New service connection > Azure Resource Manager.
Failed to assign Contributor role
This error typically occurs when you do not have Write permission for the selected Azure subscription when
the system attempts to assign the Contributor role.
To resolve this issue, ask the subscription administrator to assign you the appropriate role.
Some subscriptions are missing from the list of subscriptions
To fix this issue you will need to modify the supported account types and who can use your application. To do
so, follow the steps below:
1. Sign in to the Azure portal.
2. If you have access to multiple tenants, use the Directory + subscription filter in the top menu to select the tenant in which you want to register an application.
3. Search for and select Azure Active Directory.
4. Under Manage, select App registrations.
5. Select your application from the list of registered applications.
6. Under Essentials, select Supported account types.
7. Under Supported account types, Who can use this application or access this API?, select Accounts in any organizational directory.
8. Select Save.
NOTE
Managed identities are not supported on Microsoft-hosted agents. You will have to set up a self-hosted agent on an Azure VM and configure a managed identity for the virtual machine.
YAML schema reference
Azure Pipelines
This article is a detailed reference guide to Azure Pipelines YAML pipelines. It includes a catalog of all
supported YAML capabilities and the available options.
The best way to get started with YAML pipelines is to read the quickstart guide. After that, to learn how to
configure your YAML pipeline for your needs, see conceptual topics like Build variables and Jobs.
Pipeline structure
A pipeline is one or more stages that describe a CI/CD process. Stages are the major divisions in a pipeline.
The stages "Build this app," "Run these tests," and "Deploy to preproduction" are good examples.
A stage is one or more jobs, which are units of work assignable to the same machine. You can arrange both
stages and jobs into dependency graphs. Examples include "Run this stage before that one" and "This job
depends on the output of that job."
A job is a linear series of steps. Steps can be tasks, scripts, or references to external templates.
This hierarchy is reflected in the structure of a YAML file like:
Pipeline
  Stage A
    Job 1
      Step 1.1
      Step 1.2
      ...
    Job 2
      Step 2.1
      Step 2.2
      ...
  Stage B
    ...
Simple pipelines don't require all of these levels. For example, in a single-job build you can omit the
containers for stages and jobs because there are only steps. And because many options shown in this article
aren't required and have good defaults, your YAML definitions are unlikely to include all of them.
A pipeline is one or more jobs that describe a CI/CD process. A job is a unit of work assignable to the same
machine. You can arrange jobs into dependency graphs like "This job depends on the output of that job."
A job is a linear series of steps. Steps can be tasks, scripts, or references to external templates.
This hierarchy is reflected in the structure of a YAML file like:
Pipeline
  Job 1
    Step 1.1
    Step 1.2
    ...
  Job 2
    Step 2.1
    Step 2.2
    ...
For single-job pipelines, you can omit the jobs container because there are only steps. And because many
options shown in this article aren't required and have good defaults, your YAML definitions are unlikely to
include all of them.
Conventions
Here are the syntax conventions used in this article:
To the left of : is a literal keyword used in pipeline definitions.
To the right of : is a data type. The data type can be a primitive type like string or a reference to a rich
structure defined elsewhere in this article.
The notation [ datatype ] indicates an array of the mentioned data type. For instance, [ string ] is an
array of strings.
The notation { datatype : datatype } indicates a mapping of one data type to another. For instance,
{ string: string } is a mapping of strings to strings.
The symbol | indicates there are multiple data types available for the keyword. For instance,
job | templateReference means either a job definition or a template reference is allowed.
YAML basics
This document covers the schema of an Azure Pipelines YAML file. To learn the basics of YAML, see Learn
YAML in Y Minutes. Azure Pipelines doesn't support all YAML features. Unsupported features include anchors,
complex keys, and sets. Also, unlike standard YAML, Azure Pipelines depends on seeing stage , job , task ,
or a task shortcut like script as the first key in a mapping.
Pipeline
Schema
Example
If you have a single stage, you can omit the stages keyword and directly specify the jobs keyword:
If you have a single job, you can omit the jobs keyword and directly specify the steps keyword:
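For illustration, a minimal single-job pipeline in that shortest form might look like the following sketch (the image name and echo steps are placeholders, not part of the schema):
pool:
  vmImage: ubuntu-latest

steps:
- script: echo Build
- script: echo Test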
Stage
A stage is a collection of related jobs. By default, stages run sequentially. Each stage starts only after the
preceding stage is complete.
Use approval checks to manually control when a stage should run. These checks are commonly used to
control deployments to production environments.
Checks are a mechanism available to the resource owner. They control when a stage in a pipeline consumes a
resource. As an owner of a resource like an environment, you can define checks that are required before a
stage that consumes the resource can start.
Currently, manual approval checks are supported on environments. For more information, see Approvals.
Schema
Example
stages:
- stage: string # name of the stage (A-Z, a-z, 0-9, and underscore)
displayName: string # friendly name to display in the UI
dependsOn: string | [ string ]
condition: string
variables: # several syntaxes, see specific section
jobs: [ job | templateReference]
jobs:
- job: string # name of the job (A-Z, a-z, 0-9, and underscore)
displayName: string # friendly name to display in the UI
dependsOn: string | [ string ]
condition: string
strategy:
parallel: # parallel strategy; see the following "Parallel" topic
matrix: # matrix strategy; see the following "Matrix" topic
maxParallel: number # maximum number of matrix jobs to run simultaneously
continueOnError: boolean # 'true' if future jobs should run even if this job fails; defaults to
'false'
pool: pool # see the following "Pool" schema
workspace:
clean: outputs | resources | all # what to clean up before the job runs
container: containerReference # container to run this job inside of
timeoutInMinutes: number # how long to run the job before automatically cancelling
cancelTimeoutInMinutes: number # how much time to give 'run always even if cancelled tasks' before
killing them
variables: # several syntaxes, see specific section
steps: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
services: { string: string | container } # container resources to run as a service container
For more information about workspaces, including clean options, see the workspace topic in Jobs.
Learn more about variables, steps, pools, and server jobs.
NOTE
If you have only one stage and one job, you can use single-job syntax as a shorter way to describe the steps to run.
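As a hedged illustration of the stage and job schema above, a two-stage pipeline might be sketched as follows (the stage names, job names, and scripts are placeholders):
stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    - script: echo Building
- stage: Test
  dependsOn: Build
  jobs:
  - job: TestJob
    steps:
    - script: echo Testing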
Container reference
A container is supported by jobs.
Schema
Example
container:
image: string # container image name
options: string # arguments to pass to container at startup
endpoint: string # endpoint for a private container registry
env: { string: string } # list of environment variables to add
# you can also use any of the other supported container attributes
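For example, a hedged sketch of a job that runs inside a container (the image name and startup options are illustrative):
jobs:
- job: RunInContainer
  pool:
    vmImage: ubuntu-latest
  container:
    image: ubuntu:20.04
    options: --hostname container-test
  steps:
  - script: cat /etc/os-release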
Strategies
The matrix and parallel keywords specify mutually exclusive strategies for duplicating a job.
Matrix
Use of a matrix generates copies of a job, each with different input. These copies are useful for testing against
different configurations or platform versions.
Schema
Example
strategy:
matrix: { string1: { string2: string3 } }
maxParallel: number
For each occurrence of string1 in the matrix, a copy of the job is generated. The name string1 is the copy's
name and is appended to the name of the job. For each occurrence of string2, a variable called string2 with
the value string3 is available to the job.
NOTE
Matrix configuration names must contain only basic Latin alphabet letters (A-Z and a-z), digits (0-9), and underscores (
_ ). They must start with a letter. Also, their length must be 100 characters or fewer.
The optional maxParallel keyword specifies the maximum number of simultaneous matrix legs to run at
once.
If maxParallel is unspecified or set to 0, no limit is applied.
NOTE
The matrix syntax doesn't support automatic job scaling but you can implement similar functionality using the
each keyword. For an example, see nedrebo/parameterized-azure-jobs.
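For instance, a hedged sketch of a matrix that tests against two Node.js versions (the leg names, variable name, and version numbers are illustrative):
jobs:
- job: Test
  pool:
    vmImage: ubuntu-latest
  strategy:
    matrix:
      node_14:
        nodeVersion: '14.x'
      node_16:
        nodeVersion: '16.x'
    maxParallel: 2
  steps:
  - task: NodeTool@0
    inputs:
      versionSpec: $(nodeVersion)
  - script: node --version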
Parallel
This strategy specifies how many duplicates of a job should run. It's useful for slicing up a large test matrix.
The Visual Studio Test task understands how to divide the test load across the number of scheduled jobs.
Schema
Example
strategy:
parallel: number
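For example, a hedged sketch that slices a job into four copies; each copy can read its slice from the predefined System.JobPositionInPhase and System.TotalJobsInPhase variables (the count of 4 is arbitrary):
jobs:
- job: SlicedTests
  strategy:
    parallel: 4
  steps:
  - script: echo Running slice $(System.JobPositionInPhase) of $(System.TotalJobsInPhase)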
Deployment job
A deployment job is a special type of job. It's a collection of steps to run sequentially against the environment.
In YAML pipelines, we recommend that you put your deployment steps in a deployment job.
Schema
Example
jobs:
- deployment: string # name of the deployment job, A-Z, a-z, 0-9, and underscore. The word "deploy" is
a keyword and is unsupported as the deployment name.
displayName: string # friendly name to display in the UI
pool: # see pool schema
name: string # Use only global level variables for defining a pool name. Stage/job level
variables are not supported to define pool name.
demands: string | [ string ]
workspace:
clean: outputs | resources | all # what to clean up before the job runs
dependsOn: string
condition: string
continueOnError: boolean # 'true' if future jobs should run even if this job fails;
defaults to 'false'
container: containerReference # container to run this job inside
services: { string: string | container } # container resources to run as a service container
timeoutInMinutes: nonEmptyString # how long to run the job before automatically cancelling
cancelTimeoutInMinutes: nonEmptyString # how much time to give 'run always even if cancelled tasks'
before killing them
variables: # several syntaxes, see specific section
environment: string # target environment name and optionally a resource name to record the deployment
history; format: <environment-name>.<resource-name>
strategy:
runOnce: #rolling, canary are the other strategies that are supported
deploy:
steps:
- script: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
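For example, a hedged sketch of a runOnce deployment job that targets an environment (the job name, image, and environment name are placeholders):
jobs:
- deployment: DeployWeb
  displayName: Deploy web app
  pool:
    vmImage: ubuntu-latest
  environment: smarthotel-dev
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo Deploying the web app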
Steps
A step is a linear sequence of operations that make up a job. Each step runs in its own process on an agent
and has access to the pipeline workspace on a local hard drive. This behavior means environment variables
aren't preserved between steps but file system changes are.
Schema
Example
For more information about steps, see the schema references for:
Script
Bash
pwsh
PowerShell
Checkout
Task
Step templates
All steps, regardless of whether they're documented in this article, support the following properties:
displayName
name
condition
continueOnError
enabled
env
timeoutInMinutes
Variables
You can add hard-coded values directly or reference variable groups. Specify variables at the pipeline, stage,
or job level.
Schema
Example
For a simple set of hard-coded variables, use this mapping syntax:
variables:
- name: string # name of a variable
value: string # value of the variable
- group: string # name of a variable group
variables:
- name: myReadOnlyVar
value: myValue
readonly: true
Template references
NOTE
Be sure to see the full template expression syntax, which is all forms of ${{ }} .
You can export reusable sections of your pipeline to a separate file. These separate files are known as
templates. Azure Pipelines supports four kinds of templates:
Stage
Job
Step
Variable
You can also use templates to control what is allowed in a pipeline and to define how parameters can be used.
You can export reusable sections of your pipeline to separate files. These separate files are known as
templates. Azure DevOps Server 2019 supports these two kinds of templates:
Job
Step
Templates themselves can include other templates. Azure Pipelines supports a maximum of 50 unique
template files in a single pipeline.
Stage templates
You can define a set of stages in one file and use it multiple times in other files.
Schema
Example
In the main pipeline:
Job templates
You can define a set of jobs in one file and use it multiple times in other files.
Schema
Example
In the main pipeline:
steps:
- template: string # reference to template
parameters: { string: any } # provided parameters
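For example, assuming a template file named build-steps.yml exists in the same repository (the file name and parameter are illustrative), the main pipeline could consume it like this:
# azure-pipelines.yml
steps:
- template: build-steps.yml   # reference to the template file
  parameters:
    buildConfiguration: Release

# build-steps.yml
parameters:
- name: buildConfiguration
  type: string
  default: Debug

steps:
- script: echo Building in the ${{ parameters.buildConfiguration }} configuration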
NOTE
The variables keyword uses two syntax forms: sequence and mapping. In mapping syntax, all keys are variable
names and their values are variable values. To use variable templates, you must use sequence syntax. Sequence syntax
requires you to specify whether you're mentioning a variable ( name ), a variable group ( group ), or a template (
template ). See the variables topic for more.
Parameters
You can use parameters in templates and pipelines.
Schema
YAML Example
Template Example
The type and name fields are required when defining parameters. See all parameter data types.
parameters:
- name: string # name of the parameter; required
type: enum # data types, see below
default: any # default value; if no default, then the parameter MUST be given by the user at
runtime
values: [ string ] # allowed list of values (for some data types)
Types
DATA TYPE | NOTES
string | string
The step, stepList, job, jobList, deployment, deploymentList, stage, and stageList data types all use standard
YAML schema format. This example includes string, number, boolean, object, step, and stepList.
parameters:
- name: myString
type: string
default: a string
- name: myMultiString
type: string
default: default
values:
- default
- ubuntu
- name: myNumber
type: number
default: 2
values:
- 1
- 2
- 4
- 8
- 16
- name: myBoolean
type: boolean
default: true
- name: myObject
type: object
default:
foo: FOO
bar: BAR
things:
- one
- two
- three
nested:
one: apple
two: pear
count: 3
- name: myStep
type: step
default:
script: echo my step
- name: mySteplist
type: stepList
default:
- script: echo step one
- script: echo step two
trigger: none
jobs:
- job: stepList
steps: ${{ parameters.mySteplist }}
- job: myStep
steps:
- ${{ parameters.myStep }}
Resources
A resource is any external service that is consumed as part of your pipeline. Examples of resources include:
Another CI/CD pipeline, such as Azure Pipelines or Jenkins, that produces artifacts.
Code repositories like GitHub, Azure Repos, or Git.
Container-image registries like Azure Container Registry or Docker Hub.
Resources in YAML represent sources of pipelines, containers, and repositories. For more information, see the
Resources topic.
General schema
resources:
pipelines: [ pipeline ]
repositories: [ repository ]
containers: [ container ]
Pipeline resource
If you have an Azure pipeline that produces artifacts, your pipeline can consume the artifacts by using the
pipeline keyword to define a pipeline resource. You can also enable pipeline-completion triggers.
Schema
Example
resources:
pipelines:
- pipeline: string # identifier for the pipeline resource
project: string # project for the build pipeline; optional input for current project
source: string # source pipeline definition name
branch: string # branch to pick the artifact, optional; defaults to all branches
version: string # pipeline run number to pick artifact, optional; defaults to last successfully
completed run
trigger: # optional; triggers are not enabled by default.
branches:
include: [string] # branches to consider the trigger events, optional; defaults to all branches.
exclude: [string] # branches to discard the trigger events, optional; defaults to none.
IMPORTANT
When you define a resource trigger, if its pipeline resource is from the same repo as the current pipeline, triggering
follows the same branch and commit on which the event is raised. But if the pipeline resource is from a different repo,
the current pipeline is triggered on the branch specified by the Default branch for manual and scheduled builds
setting. For more information, see Branch considerations for pipeline completion triggers.
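For example, a hedged sketch of a pipeline resource that consumes artifacts from another pipeline and triggers on its completion (the alias, source pipeline name, and branch names are placeholders):
resources:
  pipelines:
  - pipeline: securitylib        # alias used to reference the resource later
    source: security-lib-ci      # name of the pipeline that produces the artifacts
    branch: main
    trigger:
      branches:
        include:
        - releases/*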
In each run, the metadata for a pipeline resource is available to all jobs as the following predefined variables:
resources.pipeline.<Alias>.projectName
resources.pipeline.<Alias>.projectID
resources.pipeline.<Alias>.pipelineName
resources.pipeline.<Alias>.pipelineID
resources.pipeline.<Alias>.runName
resources.pipeline.<Alias>.runID
resources.pipeline.<Alias>.runURI
resources.pipeline.<Alias>.sourceBranch
resources.pipeline.<Alias>.sourceCommit
resources.pipeline.<Alias>.sourceProvider
resources.pipeline.<Alias>.requestedFor
resources.pipeline.<Alias>.requestedForID
You can consume artifacts from a pipeline resource by using a download task. See the download keyword.
Container resource
Container jobs let you isolate your tools and dependencies inside a container. The agent launches an instance
of your specified container then runs steps inside it. The container keyword lets you specify your container
images.
Service containers run alongside a job to provide various dependencies like databases.
Schema
Example
resources:
containers:
- container: string # identifier (A-Z, a-z, 0-9, and underscore)
image: string # container image name
options: string # arguments to pass to container at startup
endpoint: string # reference to a service connection for the private registry
env: { string: string } # list of environment variables to add
ports: [ string ] # ports to expose on the container
volumes: [ string ] # volumes to mount on the container
mapDockerSocket: bool # whether to map in the Docker daemon socket; defaults to true
mountReadOnly: # volumes to mount read-only - all default to false
externals: boolean # components required to talk to the agent
tasks: boolean # tasks required by the job
tools: boolean # installable tools like Python and Ruby
work: boolean # the work directory
resources:
containers:
- container: string # identifier (A-Z, a-z, 0-9, and underscore)
image: string # container image name
options: string # arguments to pass to container at startup
endpoint: string # reference to a service connection for the private registry
env: { string: string } # list of environment variables to add
ports: [ string ] # ports to expose on the container
volumes: [ string ] # volumes to mount on the container
mapDockerSocket: bool # whether to map in the Docker daemon socket; defaults to true
resources:
containers:
- container: string # identifier (A-Z, a-z, 0-9, and underscore)
image: string # container image name
options: string # arguments to pass to container at startup
endpoint: string # reference to a service connection for the private registry
env: { string: string } # list of environment variables to add
ports: [ string ] # ports to expose on the container
volumes: [ string ] # volumes to mount on the container
Repository resource
If your pipeline has templates in another repository, you must let the system know about that repository. The
repository keyword lets you specify an external repository.
If your pipeline has templates in another repository, or if you want to use multi-repo checkout with a
repository that requires a service connection, you must let the system know about that repository. The
repository keyword lets you specify an external repository.
Schema
Example
resources:
repositories:
- repository: string # identifier (A-Z, a-z, 0-9, and underscore)
type: enum # see the following "Type" topic
name: string # repository name (format depends on `type`)
ref: string # ref name to use; defaults to 'refs/heads/master'
endpoint: string # name of the service connection to use (for types that aren't Azure Repos)
trigger: # CI trigger for this repository, no CI trigger if skipped (only works for Azure Repos)
branches:
include: [ string ] # branch names which will trigger a build
exclude: [ string ] # branch names which will not
tags:
include: [ string ] # tag names which will trigger a build
exclude: [ string ] # tag names which will not
paths:
include: [ string ] # file paths which must match to trigger a build
exclude: [ string ] # file paths which will not trigger a build
Type
Pipelines support the following values for the repository type: git , github , and bitbucket . The git type
refers to Azure Repos Git repos.
If you specify type: git , the name value refers to another repository in the same project. An example
is name: otherRepo . To refer to a repo in another project within the same organization, prefix the name
with that project's name. An example is name: OtherProject/otherRepo .
If you specify type: github , the name value is the full name of the GitHub repo and includes the user
or organization. An example is name: Microsoft/vscode . GitHub repos require a GitHub service
connection for authorization.
If you specify type: bitbucket , the name value is the full name of the Bitbucket Cloud repo and
includes the user or organization. An example is name: MyBitbucket/vscode . Bitbucket Cloud repos
require a Bitbucket Cloud service connection for authorization.
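For example, a hedged sketch declaring one repository of each kind described above (the repository names and the service connection are placeholders):
resources:
  repositories:
  - repository: common                 # Azure Repos Git repo in another project of the same organization
    type: git
    name: OtherProject/otherRepo
    ref: refs/heads/main
  - repository: vscode                 # GitHub repo; requires a GitHub service connection
    type: github
    name: Microsoft/vscode
    endpoint: MyGitHubServiceConnection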
Triggers
Push trigger
Pull request trigger
Scheduled trigger
Pipeline trigger
NOTE
Trigger blocks can't contain variables or template expressions.
Push trigger
A push trigger specifies which branches cause a continuous integration build to run. If you specify no push
trigger, pushes to any branch trigger a build. Learn more about triggers and how to specify them.
Schema
Example
There are three distinct syntax options for the trigger keyword: a list of branches to include, a way to disable
CI triggers, and the full syntax for complete control.
List syntax:
trigger: [ string ] # list of branch names
Disablement syntax:
trigger: none # will disable CI triggers entirely
Full syntax:
trigger:
batch: boolean # batch changes if true; start a new build for every push if false (default)
branches:
include: [ string ] # branch names which will trigger a build
exclude: [ string ] # branch names which will not
tags:
include: [ string ] # tag names which will trigger a build
exclude: [ string ] # tag names which will not
paths:
include: [ string ] # file paths which must match to trigger a build
exclude: [ string ] # file paths which will not trigger a build
If you specify an exclude clause without an include clause for branches , tags , or paths , it is equivalent to
specifying * in the include clause.
trigger:
batch: boolean # batch changes if true; start a new build for every push if false (default)
branches:
include: [ string ] # branch names which will trigger a build
exclude: [ string ] # branch names which will not
paths:
include: [ string ] # file paths which must match to trigger a build
exclude: [ string ] # file paths which will not trigger a build
IMPORTANT
When you specify a trigger, only branches that you explicitly configure for inclusion trigger a pipeline. Inclusions are
processed first, and then exclusions are removed from that list. If you specify an exclusion but no inclusions, nothing
triggers.
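For example, a CI trigger that batches runs on main and release branches and ignores documentation-only changes might be sketched as follows (the branch and path names are illustrative):
trigger:
  batch: true
  branches:
    include:
    - main
    - releases/*
    exclude:
    - releases/old*
  paths:
    exclude:
    - docs/*
    - README.md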
PR trigger
A pull request trigger specifies which branches cause a pull request build to run. If you specify no pull request
trigger, pull requests to any branch trigger a build. Learn more about pull request triggers and how to specify
them.
IMPORTANT
YAML PR triggers are supported only in GitHub and Bitbucket Cloud. If you use Azure Repos Git, you can configure a
branch policy for build validation to trigger your build pipeline for validation.
IMPORTANT
YAML PR triggers are supported only in GitHub. If you use Azure Repos Git, you can configure a branch policy for build
validation to trigger your build pipeline for validation.
Schema
Example
There are three distinct syntax options for the pr keyword: a list of branches to include, a way to disable PR
triggers, and the full syntax for complete control.
List syntax:
pr: [ string ] # list of branch names
Disablement syntax:
pr: none # will disable PR builds entirely; will not disable CI triggers
Full syntax:
pr:
autoCancel: boolean # indicates whether additional pushes to a PR should cancel in-progress runs for
the same PR. Defaults to true
branches:
include: [ string ] # branch names which will trigger a build
exclude: [ string ] # branch names which will not
paths:
include: [ string ] # file paths which must match to trigger a build
exclude: [ string ] # file paths which will not trigger a build
pr:
autoCancel: boolean # indicates whether additional pushes to a PR should cancel in-progress runs for
the same PR. Defaults to true
branches:
include: [ string ] # branch names which will trigger a build
exclude: [ string ] # branch names which will not
paths:
include: [ string ] # file paths which must match to trigger a build
exclude: [ string ] # file paths which will not trigger a build
drafts: boolean # For GitHub only, whether to build draft PRs, defaults to true
If you specify an exclude clause without an include clause for branches or paths , it is equivalent to
specifying * in the include clause.
IMPORTANT
When you specify a pull request trigger, only branches that you explicitly configure for inclusion trigger a pipeline.
Inclusions are processed first, and then exclusions are removed from that list. If you specify an exclusion but no
inclusions, nothing triggers.
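For example, a pull request trigger that validates PRs targeting main and release branches, skipping draft PRs, might be sketched as follows (the branch names are illustrative; drafts applies to GitHub only):
pr:
  autoCancel: true
  branches:
    include:
    - main
    - releases/*
  paths:
    exclude:
    - docs/*
  drafts: false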
Scheduled trigger
YAML scheduled triggers are unavailable in either this version of Azure DevOps Server or Visual Studio Team
Foundation Server. You can use scheduled triggers in the classic editor.
A scheduled trigger specifies a schedule on which branches are built. If you specify no scheduled trigger, no
scheduled builds occur. Learn more about scheduled triggers and how to specify them.
Schema
Example
schedules:
- cron: string # cron syntax defining a schedule in UTC time
displayName: string # friendly name given to a specific schedule
branches:
include: [ string ] # which branches the schedule applies to
exclude: [ string ] # which branches to exclude from the schedule
always: boolean # whether to always run the pipeline or only if there have been source code changes
since the last successful scheduled run. The default is false.
NOTE
If you specify an exclude clause without an include clause for branches , it is equivalent to specifying * in the
include clause.
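For example, a nightly schedule that builds main at midnight UTC only when there are new changes might be sketched as:
schedules:
- cron: "0 0 * * *"
  displayName: Nightly build
  branches:
    include:
    - main
  always: false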
Pipeline trigger
Pipeline completion triggers are configured using a pipeline resource. For more information, see Pipeline
completion triggers.
Pool
The pool keyword specifies which pool to use for a job of the pipeline. A pool specification also holds
information about the job's strategy for running.
In Azure DevOps Server 2019 you can specify a pool at the job level in YAML, and at the pipeline level in the
pipeline settings UI. In Azure DevOps Server 2019.1 you can also specify a pool at the pipeline level in YAML if
you have a single implicit job.
You can specify a pool at the pipeline, stage, or job level.
The pool specified at the lowest level of the hierarchy is used to run the job.
Schema
Example
The full syntax is:
pool:
name: string # name of the pool to run this job in
demands: string | [ string ] # see the following "Demands" topic
vmImage: string # name of the VM image you want to use; valid only in the Microsoft-hosted pool
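For example, a hedged sketch showing both a Microsoft-hosted and a self-hosted pool (the pool name, image, and demand are placeholders):
# Microsoft-hosted agents
pool:
  vmImage: ubuntu-latest

# Self-hosted pool with a demand
pool:
  name: MyPrivatePool
  demands: npm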
Environment
The environment keyword specifies the environment or its resource that is targeted by a deployment job of
the pipeline. An environment also holds information about the deployment strategy for running the steps
defined inside the job.
Schema
Example
If you specify an environment or one of its resources but don't need to specify other properties, you can
shorten the syntax to:
environment: environmentName.resourceName
strategy: # deployment strategy
runOnce: # default strategy
deploy:
steps:
- script: echo Hello world
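As a hedged sketch of a deployment job that uses this shortened form (the environment and resource names are placeholders):
jobs:
- deployment: DeployBookings
  pool:
    vmImage: ubuntu-latest
  environment: smarthotel-dev.bookings   # environment 'smarthotel-dev', resource 'bookings'
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo Deploying to the bookings resource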
Server
The server value specifies a server job. Only server tasks like invoking an Azure function app can be run in a
server job.
Schema
Example
When you use server , a job runs as a server job rather than an agent job.
pool: server
Script
The script keyword is a shortcut for the command-line task. The task runs a script using cmd.exe on
Windows and Bash on other platforms.
Schema
Example
steps:
- script: string # contents of the script to run
displayName: string # friendly name displayed in the UI
name: string # identifier for this step (A-Z, a-z, 0-9, and underscore)
workingDirectory: string # initial working directory for the step
failOnStderr: boolean # if the script writes to stderr, should that be treated as the step failing?
condition: string
continueOnError: boolean # 'true' if future steps should run even if this step fails; defaults to
'false'
enabled: boolean # whether to run this step; defaults to 'true'
target:
container: string # where this step will run; values are the container name or the word 'host'
commands: enum # whether to process all logging commands from this step; values are `any` (default)
or `restricted`
timeoutInMinutes: number
env: { string: string } # list of environment variables to add
If you don't specify a command mode, you can shorten the target structure to:
- script:
target: string # container name or the word 'host'
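For illustration, a script step sketch that prints predefined variables (the display name and commands are placeholders):
steps:
- script: |
    echo "Build number: $(Build.BuildNumber)"
    echo "Working directory: $(System.DefaultWorkingDirectory)"
  displayName: Print build info
  failOnStderr: false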
Bash
The bash keyword is a shortcut for the shell script task. The task runs a script in Bash on Windows, macOS,
and Linux.
Schema
Example
steps:
- bash: string # contents of the script to run
displayName: string # friendly name displayed in the UI
name: string # identifier for this step (A-Z, a-z, 0-9, and underscore)
workingDirectory: string # initial working directory for the step
failOnStderr: boolean # if the script writes to stderr, should that be treated as the step failing?
condition: string
continueOnError: boolean # 'true' if future steps should run even if this step fails; defaults to
'false'
enabled: boolean # whether to run this step; defaults to 'true'
target:
container: string # where this step will run; values are the container name or the word 'host'
commands: enum # whether to process all logging commands from this step; values are `any` (default)
or `restricted`
timeoutInMinutes: number
env: { string: string } # list of environment variables to add
If you don't specify a command mode, you can shorten the target structure to:
- bash:
target: string # container name or the word 'host'
pwsh
pwsh
The pwsh keyword is a shortcut for the PowerShell task when that task's pwsh value is set to true . The task
runs a script in PowerShell Core on Windows, macOS, and Linux.
Schema
Example
steps:
- pwsh: string # contents of the script to run
displayName: string # friendly name displayed in the UI
name: string # identifier for this step (A-Z, a-z, 0-9, and underscore)
errorActionPreference: enum # see the following "Error action preference" topic
ignoreLASTEXITCODE: boolean # see the following "Ignore last exit code" topic
failOnStderr: boolean # if the script writes to stderr, should that be treated as the step failing?
workingDirectory: string # initial working directory for the step
condition: string
continueOnError: boolean # 'true' if future steps should run even if this step fails; defaults to
'false'
enabled: boolean # whether to run this step; defaults to 'true'
timeoutInMinutes: number
env: { string: string } # list of environment variables to add
NOTE
Each PowerShell session lasts only for the duration of the job in which it runs. Tasks that depend on what has been
bootstrapped must be in the same job as the bootstrap.
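As an illustrative sketch of a pwsh step (the folder name and display name are assumptions):
steps:
- pwsh: |
    Write-Host "Agent OS: $(Agent.OS)"
    New-Item -ItemType Directory -Force -Path ./artifacts | Out-Null
  displayName: Prepare artifacts folder
  errorActionPreference: stop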
PowerShell
The powershell keyword is a shortcut for the PowerShell task. The task runs a script in Windows PowerShell.
Schema
Example
steps:
- powershell: string # contents of the script to run
displayName: string # friendly name displayed in the UI
name: string # identifier for this step (A-Z, a-z, 0-9, and underscore)
errorActionPreference: enum # see the following "Error action preference" topic
ignoreLASTEXITCODE: boolean # see the following "Ignore last exit code" topic
failOnStderr: boolean # if the script writes to stderr, should that be treated as the step failing?
workingDirectory: string # initial working directory for the step
condition: string
continueOnError: boolean # 'true' if future steps should run even if this step fails; defaults to
'false'
enabled: boolean # whether to run this step; defaults to 'true'
timeoutInMinutes: number
env: { string: string } # list of environment variables to add
NOTE
Each PowerShell session lasts only for the duration of the job in which it runs. Tasks that depend on what has been
bootstrapped must be in the same job as the bootstrap.
Learn more about conditions and timeouts.
Error action preference
Unless otherwise specified, the error action preference defaults to the value stop , and the line
$ErrorActionPreference = 'stop' is prepended to the top of your script.
When the error action preference is set to stop, errors cause PowerShell to terminate the task and return a
nonzero exit code. The task is also marked as Failed.
Ignore last exit code
By default, the last exit code returned from your script is checked; a nonzero code marks the step as failed.
Specify ignoreLASTEXITCODE: true if you don't want this behavior.
Schema
Example
ignoreLASTEXITCODE: boolean
Publish
The publish keyword is a shortcut for the Publish Pipeline Artifact task. The task publishes (uploads) a file or
folder as a pipeline artifact that other jobs and pipelines can consume.
Schema
Example
steps:
- publish: string # path to a file or folder
artifact: string # artifact name
displayName: string # friendly name to display in the UI
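For example, a hedged sketch that publishes a build output folder as an artifact (the path and artifact name are assumptions):
steps:
- publish: $(System.DefaultWorkingDirectory)/bin/WebApp
  artifact: WebApp
  displayName: Publish web app artifact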
Download
The download keyword is a shortcut for the Download Pipeline Artifact task. The task downloads artifacts
associated with the current run or from another Azure pipeline that is associated as a pipeline resource.
Schema
Example
steps:
- download: [ current | pipeline resource identifier | none ] # disable automatic download if "none"
artifact: string ## artifact name, optional; downloads all the available artifacts if not specified
patterns: string # patterns representing files to include; optional
displayName: string # friendly name to display in the UI
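For example, a sketch that downloads one artifact from the current run, filtered to .zip files (the artifact name and pattern are assumptions):
steps:
- download: current
  artifact: WebApp
  patterns: '**/*.zip'
  displayName: Download web app artifact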
Checkout
Nondeployment jobs automatically check out source code. Use the checkout keyword to configure or
suppress this behavior.
Schema
Example
steps:
- checkout: self # self represents the repo where the initial Pipelines YAML file was found
clean: boolean # if true, run `git clean -ffdx && git reset --hard HEAD` before fetching
fetchDepth: number # the depth of commits to ask Git to fetch; defaults to no limit
lfs: boolean # whether to download Git-LFS files; defaults to false
submodules: true | recursive # set to 'true' for a single level of submodules or 'recursive' to get
submodules of submodules; defaults to not checking out submodules
path: string # path to check out source code, relative to the agent's build directory (e.g. \_work\1);
defaults to a directory called `s`
persistCredentials: boolean # if 'true', leave the OAuth token in the Git config after the initial
fetch; defaults to false
steps:
- checkout: self | none | repository name # self represents the repo where the initial Pipelines YAML
file was found
clean: boolean # if true, run `git clean -ffdx && git reset --hard HEAD` before fetching
fetchDepth: number # the depth of commits to ask Git to fetch; defaults to no limit
lfs: boolean # whether to download Git-LFS files; defaults to false
submodules: true | recursive # set to 'true' for a single level of submodules or 'recursive' to get
submodules of submodules; defaults to not checking out submodules
path: string # path to check out source code, relative to the agent's build directory (e.g. \_work\1);
defaults to a directory called `s`
persistCredentials: boolean # if 'true', leave the OAuth token in the Git config after the initial
fetch; defaults to false
NOTE
In addition to the cleaning option available using checkout, you can also configure cleaning in a workspace. For
more information about workspaces, including clean options, see the workspace topic in Jobs.
To avoid syncing sources at all:
steps:
- checkout: none
NOTE
If you're running the agent in the Local Service account and want to modify the current repository by using git
operations or loading git submodules, give the proper permissions to the Project Collection Build Service Accounts
user.
- checkout: self
submodules: true
persistCredentials: true
To check out multiple repositories in your pipeline, use multiple checkout steps:
- checkout: self
- checkout: git://MyProject/MyRepo
- checkout: MyGitHubRepo # Repo declared in a repository resource
For more information, see Check out multiple repositories in your pipeline.
Task
Tasks are the building blocks of a pipeline. There's a catalog of tasks available to choose from.
Schema
Example
steps:
- task: string # reference to a task and version, e.g. "VSBuild@1"
displayName: string # friendly name displayed in the UI
name: string # identifier for this step (A-Z, a-z, 0-9, and underscore)
condition: string
continueOnError: boolean # 'true' if future steps should run even if this step fails; defaults to
'false'
enabled: boolean # whether to run this step; defaults to 'true'
target:
container: string # where this step will run; values are the container name or the word 'host'
commands: enum # whether to process all logging commands from this step; values are `any` (default)
or `restricted`
timeoutInMinutes: number
inputs: { string: string } # task-specific inputs
env: { string: string } # list of environment variables to add
If you don't specify a command mode, you can shorten the target structure to:
- task:
target: string # container name or the word 'host'
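For example, a task step invoking the .NET Core CLI task might be sketched as follows (the inputs shown are illustrative, not a complete reference):
steps:
- task: DotNetCoreCLI@2
  displayName: Build solution
  inputs:
    command: build
    projects: '**/*.csproj'
    arguments: '--configuration Release'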
Syntax highlighting
Syntax highlighting is available for the pipeline schema via a Visual Studio Code extension. You can download
Visual Studio Code, install the extension, and check out the project on GitHub. The extension includes a JSON
schema for validation.
You also can obtain a schema that's specific to your organization (that is, it contains installed custom tasks)
from the Azure DevOps REST API yamlschema endpoint.
Expressions
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called
definitions, runs are called builds, service connections are called service endpoints, stages are called environments,
and jobs are called phases.
Expressions can be used in many places where you need to specify a string, boolean, or number value when
authoring a pipeline. The most common use of expressions is in conditions to determine whether a job or
step should run.
Another common use of expressions is in defining variables. Expressions can be evaluated at compile time
or at run time. Compile time expressions can be used anywhere; runtime expressions can be used in
variables and conditions.
The difference between runtime and compile time expression syntaxes is primarily what context is available.
In a compile-time expression ( ${{ <expression> }} ), you have access to parameters and statically defined
variables . In a runtime expression ( $[ <expression> ] ), you have access to more variables but no
parameters.
In this example, a runtime expression sets the value of $(isMain). A compile-time expression sets the value of
$(compileVar) from a statically defined variable.
variables:
staticVar: 'my value' # static variable
compileVar: ${{ variables.staticVar }} # compile time expression
isMain: $[eq(variables['Build.SourceBranch'], 'refs/heads/master')] # runtime expression
steps:
- script: |
echo ${{variables.staticVar}} # outputs my value
echo $(compileVar) # outputs my value
echo $(isMain) # outputs True
Literals
As part of an expression, you can use boolean, null, number, string, or version literals.
# Examples
variables:
someBoolean: ${{ true }} # case insensitive, so True or TRUE also works
someNumber: ${{ -1.2 }}
someString: ${{ 'a b c' }}
someVersion: ${{ 1.2.3 }}
Boolean
True and False are boolean literal expressions.
Null
Null is a special literal expression that's returned from a dictionary miss, e.g. ( variables['noSuch'] ). Null can
be the output of an expression but cannot be called directly within an expression.
Number
Starts with '-', '.', or '0' through '9'.
String
Must be single-quoted. For example: 'this is a string' .
To express a literal single-quote, escape it with a single quote. For example:
'It''s OK if they''re using contractions.' .
You can also write a multiline string by using YAML block scalar syntax. For example:
myKey: |
one
two
three
Version
A version number with up to four segments. Must start with a number and contain two or three period ( . )
characters. For example: 1.2.3.4 .
Variables
As part of an expression, you may access variables using one of two syntaxes:
Index syntax: variables['MyVar']
Property dereference syntax: variables.MyVar
Functions
The following built-in functions can be used in expressions.
and
Evaluates to True if all parameters are True
Min parameters: 2. Max parameters: N
Casts parameters to Boolean for evaluation
Short-circuits after first False
Example: and(eq(variables.letters, 'ABC'), eq(variables.numbers, 123))
coalesce
Evaluates the parameters in order, and returns the first value that does not equal null or empty-string.
Min parameters: 2. Max parameters: N
Example: coalesce(variables.couldBeNull, variables.couldAlsoBeNull, 'literal so it always works')
contains
Evaluates True if left parameter String contains right parameter
Min parameters: 2. Max parameters: 2
Casts parameters to String for evaluation
Performs ordinal ignore-case comparison
Example: contains('ABCDE', 'BCD') (returns True)
containsValue
Evaluates True if the left parameter is an array, and any item equals the right parameter. Also evaluates
True if the left parameter is an object, and the value of any property equals the right parameter.
Min parameters: 2. Max parameters: 2
If the left parameter is an array, convert each item to match the type of the right parameter. If the left
parameter is an object, convert the value of each property to match the type of the right parameter. The
equality comparison for each specific item evaluates False if the conversion fails.
Ordinal ignore-case comparison for Strings
Short-circuits after the first match
NOTE
There is no literal syntax in a YAML pipeline for specifying an array. This function is of limited use in general pipelines.
It's intended for use in the pipeline decorator context with system-provided arrays such as the list of steps.
counter
This function can only be used in an expression that defines a variable. It cannot be used as part of a
condition for a step, job, or stage.
Evaluates a number that is incremented with each run of a pipeline.
Parameters: 2. prefix and seed .
Prefix is a string expression. A separate value of counter is tracked for each unique value of prefix
Seed is the starting value of the counter
You can create a counter that is automatically incremented by one in each execution of your pipeline. When
you define a counter, you provide a prefix and a seed . Here is an example that demonstrates this.
variables:
major: 1
# define minor as a counter with the prefix as variable major, and seed as 100.
minor: $[counter(variables['major'], 100)]
steps:
- bash: echo $(minor)
The value of minor in the above example in the first run of the pipeline will be 100. In the second run it will
be 101, provided the value of major is still 1.
If you edit the YAML file, and update the value of the variable major to be 2, then in the next run of the
pipeline, the value of minor will be 100. Subsequent runs will increment the counter to 101, 102, 103, ...
Later, if you edit the YAML file, and set the value of major back to 1, then the value of the counter resumes
where it left off for that prefix. In this example, it resumes at 102.
Here is another example of setting a variable to act as a counter that starts at 100, gets incremented by 1 for
every run, and gets reset to 100 every day.
NOTE
pipeline.startTime is not available outside of expressions. pipeline.startTime formats
system.pipelineStartTime into a date and time object so that it is available to work with expressions. The default
time zone for pipeline.startTime is UTC. You can change the time zone for your organization.
jobs:
- job:
variables:
a: $[counter(format('{0:yyyyMMdd}', pipeline.startTime), 100)]
steps:
- bash: echo $(a)
Here is an example of having a counter that maintains a separate value for PRs and CI runs.
variables:
patch: $[counter(variables['build.reason'], 0)]
Counters are scoped to a pipeline. In other words, a counter's value is incremented for each run of that pipeline.
There are no project-scoped counters.
endsWith
Evaluates True if left parameter String ends with right parameter
Min parameters: 2. Max parameters: 2
Casts parameters to String for evaluation
Performs ordinal ignore-case comparison
Example: endsWith('ABCDE', 'DE') (returns True)
eq
Evaluates True if parameters are equal
Min parameters: 2. Max parameters: 2
Converts right parameter to match type of left parameter. Returns False if conversion fails.
Ordinal ignore-case comparison for Strings
Example: eq(variables.letters, 'ABC')
format
Evaluates the trailing parameters and inserts them into the leading parameter string
Min parameters: 1. Max parameters: N
Example: format('Hello {0} {1}', 'John', 'Doe')
Uses .NET custom date and time format specifiers for date formatting ( yyyy , yy , MM , M , dd , d , HH ,
H , m , mm , ss , s , f , ff , ffff , K )
Example: format('{0:yyyyMMdd}', pipeline.startTime) . In this case pipeline.startTime is a special date
time object variable.
Escape by doubling braces. For example: format('literal left brace {{ and literal right brace }}')
ge
Evaluates True if left parameter is greater than or equal to the right parameter
Min parameters: 2. Max parameters: 2
Converts right parameter to match type of left parameter. Errors if conversion fails.
Ordinal ignore-case comparison for Strings
Example: ge(5, 5) (returns True)
gt
Evaluates True if left parameter is greater than the right parameter
Min parameters: 2. Max parameters: 2
Converts right parameter to match type of left parameter. Errors if conversion fails.
Ordinal ignore-case comparison for Strings
Example: gt(5, 2) (returns True)
in
Evaluates True if left parameter is equal to any right parameter
Min parameters: 1. Max parameters: N
Converts right parameters to match type of left parameter. Equality comparison evaluates False if
conversion fails.
Ordinal ignore-case comparison for Strings
Short-circuits after first match
Example: in('B', 'A', 'B', 'C') (returns True)
join
Concatenates all elements in the right parameter array, separated by the left parameter string.
Min parameters: 2. Max parameters: 2
Each element in the array is converted to a string. Complex objects are converted to empty string.
If the right parameter is not an array, the result is the right parameter converted to a string.
In this example, a semicolon gets added between each item in the array. The parameter type is an object.
parameters:
- name: myArray
type: object
default:
- FOO
- BAR
- ZOO
variables:
A: ${{ join(';',parameters.myArray) }}
steps:
- script: echo $A # outputs FOO;BAR;ZOO
le
Evaluates True if left parameter is less than or equal to the right parameter
Min parameters: 2. Max parameters: 2
Converts right parameter to match type of left parameter. Errors if conversion fails.
Ordinal ignore-case comparison for Strings
Example: le(2, 2) (returns True)
length
Returns the length of a string or an array, either one that comes from the system or that comes from a
parameter
Min parameters: 1. Max parameters 1
Example: length('fabrikam') returns 8
lower
Converts a string or variable value to all lowercase characters
Min parameters: 1. Max parameters 1
Returns the lowercase equivalent of a string
Example: lower('FOO') returns foo
lt
Evaluates True if left parameter is less than the right parameter
Min parameters: 2. Max parameters: 2
Converts right parameter to match type of left parameter. Errors if conversion fails.
Ordinal ignore-case comparison for Strings
Example: lt(2, 5) (returns True)
ne
Evaluates True if parameters are not equal
Min parameters: 2. Max parameters: 2
Converts right parameter to match type of left parameter. Returns True if conversion fails.
Ordinal ignore-case comparison for Strings
Example: ne(1, 2) (returns True)
not
Evaluates True if parameter is False
Min parameters: 1. Max parameters: 1
Converts value to Boolean for evaluation
Example: not(eq(1, 2)) (returns True)
notIn
Evaluates True if left parameter is not equal to any right parameter
Min parameters: 1. Max parameters: N
Converts right parameters to match type of left parameter. Equality comparison evaluates False if
conversion fails.
Ordinal ignore-case comparison for Strings
Short-circuits after first match
Example: notIn('D', 'A', 'B', 'C') (returns True)
or
Evaluates True if any parameter is true
Min parameters: 2. Max parameters: N
Casts parameters to Boolean for evaluation
Short-circuits after first True
Example: or(eq(1, 1), eq(2, 3)) (returns True, short-circuits)
replace
Returns a new string in which all instances of a string in the current instance are replaced with another
string
Min parameters: 3. Max parameters: 3
replace(a, b, c) : returns a, with all instances of b replaced by c
Example:
replace('https://www.tinfoilsecurity.com/saml/consume','https://www.tinfoilsecurity.com','http://server')
(returns http://server/saml/consume )
startsWith
Evaluates true if left parameter string starts with right parameter
Min parameters: 2. Max parameters: 2
Casts parameters to String for evaluation
Performs ordinal ignore-case comparison
Example: startsWith('ABCDE', 'AB') (returns True)
upper
Converts a string or variable value to all uppercase characters
Min parameters: 1. Max parameters 1
Returns the uppercase equivalent of a string
Example: upper('bah') returns BAH
xor
Evaluates True if exactly one parameter is True
Min parameters: 2. Max parameters: 2
Casts parameters to Boolean for evaluation
Example: xor(True, False) (returns True)
succeededOrFailed
For a job:
With no arguments, evaluates to True regardless of whether any jobs in the dependency graph
succeeded or failed.
With job names as arguments, evaluates to True whether any of those jobs succeeded or failed.
This is like always() , except it will evaluate False when the pipeline is canceled.
Conditional insertion
You can use an if clause to conditionally assign the value of a variable or set inputs for tasks. Conditionals
only work when using template syntax.
For templates, you can use conditional insertion when adding a sequence or mapping. Learn more about
conditional insertion in templates.
Conditionally assign a variable
variables:
${{ if eq(variables['Build.SourceBranchName'], 'master') }}: # only works if you have a master branch
stageName: prod
pool:
vmImage: 'ubuntu-latest'
steps:
- script: echo ${{variables.stageName}}
pool:
vmImage: 'ubuntu-latest'
steps:
- task: PublishPipelineArtifact@1
inputs:
targetPath: '$(Pipeline.Workspace)'
${{ if eq(variables['Build.SourceBranchName'], 'master') }}:
artifact: 'prod'
${{ if ne(variables['Build.SourceBranchName'], 'master') }}:
artifact: 'dev'
publishLocation: 'pipeline'
Dependencies
Expressions can use the dependencies context to reference previous jobs or stages. You can use
dependencies to:
Reference the job status of a previous job
Reference the stage status of a previous stage
Reference output variables in the previous job in the same stage
Reference output variables in the previous stage in a stage
Reference output variables in a job in a previous stage in the following stage
The context is called dependencies for jobs and stages and works much like variables. Inside a job, if you
refer to an output variable from a job in another stage, the context is called stageDependencies .
If you experience issues with output variables having quote characters ( ' or " ) in them, see this
troubleshooting guide.
Stage to stage dependencies
Structurally, the dependencies object is a map of job and stage names to results and outputs . Expressed
as JSON, it would look like:
"dependencies": {
"<STAGE_NAME>" : {
"result": "Succeeded|SucceededWithIssues|Skipped|Failed|Canceled",
"outputs": {
"jobName.stepName.variableName": "value"
}
},
"...": {
// another stage
}
}
Use this form of dependencies to map in variables or check conditions at a stage level. In this example, Stage
B runs whether Stage A is successful or skipped.
stages:
- stage: A
condition: false
jobs:
- job: A1
steps:
- script: echo Job A1
- stage: B
condition: in(dependencies.A.result, 'Succeeded', 'SucceededWithIssues', 'Skipped')
jobs:
- job: B1
steps:
- script: echo Job B1
Stages can also use output variables from another stage. In this example, Stage B depends on a variable in
Stage A.
stages:
- stage: A
jobs:
- job: A1
steps:
- bash: echo "##vso[task.setvariable variable=shouldrun;isOutput=true]true"
# or on Windows:
# - script: echo ##vso[task.setvariable variable=shouldrun;isOutput=true]true
name: printvar
- stage: B
condition: and(succeeded(), eq(dependencies.A.outputs['A1.printvar.shouldrun'], 'true'))
dependsOn: A
jobs:
- job: B1
steps:
- script: echo hello from Stage B
NOTE
By default, each stage in a pipeline depends on the one just before it in the YAML file. If you need to refer to a stage
that isn't immediately prior to the current one, you can override this automatic default by adding a dependsOn
section to the stage.
"dependencies": {
"<JOB_NAME>": {
"result": "Succeeded|SucceededWithIssues|Skipped|Failed|Canceled",
"outputs": {
"stepName.variableName": "value1"
}
},
"...": {
// another job
}
}
In this example, Job A will always be skipped and Job B will run. Job C will run, since all of its dependencies
either succeed or are skipped.
jobs:
- job: a
condition: false
steps:
- script: echo Job A
- job: b
steps:
- script: echo Job B
- job: c
dependsOn:
- a
- b
condition: |
and
(
in(dependencies.a.result, 'Succeeded', 'SucceededWithIssues', 'Skipped'),
in(dependencies.b.result, 'Succeeded', 'SucceededWithIssues', 'Skipped')
)
steps:
- script: echo Job C
jobs:
- job: A
steps:
- bash: echo "##vso[task.setvariable variable=shouldrun;isOutput=true]true"
# or on Windows:
# - script: echo ##vso[task.setvariable variable=shouldrun;isOutput=true]true
name: printvar
- job: B
condition: and(succeeded(), eq(dependencies.A.outputs['printvar.shouldrun'], 'true'))
dependsOn: A
steps:
- script: echo hello from B
"stageDependencies": {
"<STAGE_NAME>" : {
"<JOB_NAME>": {
"result": "Succeeded|SucceededWithIssues|Skipped|Failed|Canceled",
"outputs": {
"stepName.variableName": "value"
}
},
"...": {
// another job
}
},
"...": {
// another stage
}
}
In this example, job B1 will run whether job A1 is successful or skipped. Job B2 will check the value of the
output variable from job A1 to determine whether it should run.
trigger: none
pool:
vmImage: 'ubuntu-latest'
stages:
- stage: A
jobs:
- job: A1
steps:
- bash: echo "##vso[task.setvariable variable=shouldrun;isOutput=true]true"
# or on Windows:
# - script: echo ##vso[task.setvariable variable=shouldrun;isOutput=true]true
name: printvar
- stage: B
dependsOn: A
jobs:
- job: B1
condition: in(stageDependencies.A.A1.result, 'Succeeded', 'SucceededWithIssues', 'Skipped')
steps:
- script: echo hello from Job B1
- job: B2
condition: eq(stageDependencies.A.A1.outputs['printvar.shouldrun'], 'true')
steps:
- script: echo hello from Job B2
Filtered arrays
When operating on a collection of items, you can use the * syntax to apply a filtered array. A filtered array
returns all objects/elements regardless of their names.
As an example, consider an array of objects named foo . We want to get an array of the values of the id
property in each object in our array.
[
{ "id": 1, "a": "avalue1"},
{ "id": 2, "a": "avalue2"},
{ "id": 3, "a": "avalue3"}
]
To do this, use the syntax foo.*.id . This tells the system to operate on foo as a filtered array and then select
the id property.
This would return:
[ 1, 2, 3 ]
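As a hedged sketch of consuming such a filtered array (the parameter name and contents are illustrative), you can iterate over the selected values with the each keyword:
parameters:
- name: foo
  type: object
  default:
  - id: 1
    a: avalue1
  - id: 2
    a: avalue2
  - id: 3
    a: avalue3

steps:
- ${{ each id in parameters.foo.*.id }}:
  - script: echo ${{ id }}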
Type casting
Values in an expression may be converted from one type to another as the expression gets evaluated. When
an expression is evaluated, the parameters are coalesced to the relevant data type and then turned back into
strings.
For example, in this YAML, the values true and false are converted to 1 and 0 when the expression is
evaluated. The function lt() returns True when the left parameter is less than the right parameter.
variables:
firstEval: $[lt(false, true)] # 0 vs. 1, True
secondEval: $[lt(true, false)] # 1 vs. 0, False
steps:
- script: echo $(firstEval)
- script: echo $(secondEval)
In this example, the values variables.emptyString and the empty string both evaluate as empty strings. The
function coalesce() evaluates the parameters in order, and returns the first value that does not equal null or
empty-string.
variables:
coalesceLiteral: $[coalesce(variables.emptyString, '', 'literal value')]
steps:
- script: echo $(coalesceLiteral) # outputs literal value
FROM / TO: BOOLEAN | NULL | NUMBER | STRING | VERSION
Boolean
To number:
False → 0
True → 1
To string:
False → 'false'
True → 'true'
Null
To Boolean: False
To number: 0
To string: '' (the empty string)
Number
To Boolean: 0 → False , any other number → True
To version: Must be greater than zero and must contain a non-zero decimal. Must be less than
Int32.MaxValue (decimal component also).
To string: Converts the number to a string with no thousands separator and no decimal separator.
String
To Boolean: '' (the empty string) → False , any other string → True
To null: '' (the empty string) → Null , any other string not convertible
To number: '' (the empty string) → 0, otherwise, runs C#'s Int32.TryParse using InvariantCulture and
the following rules: AllowDecimalPoint | AllowLeadingSign | AllowLeadingWhite | AllowThousands |
AllowTrailingWhite. If TryParse fails, then it's not convertible.
To version: runs C#'s Version.TryParse . Must contain Major and Minor component at minimum. If
TryParse fails, then it's not convertible.
Version
To Boolean: True
To string: Major.Minor or Major.Minor.Build or Major.Minor.Build.Revision.
FAQ
I want to do something that is not supported by expressions. What options do I have for extending
Pipelines functionality?
You can customize your Pipeline with a script that includes an expression. For example, this snippet takes the
BUILD_BUILDNUMBER variable and splits it with Bash. This script outputs two new variables, $MAJOR_RUN and
$MINOR_RUN , for the major and minor run numbers. The two variables are then used to create two pipeline
variables, $major and $minor with task.setvariable. These variables are available to downstream steps. To
share variables across pipelines see Variable groups.
steps:
- bash: |
    MAJOR_RUN=$(echo $BUILD_BUILDNUMBER | cut -d '.' -f1)
    echo "This is the major run number: $MAJOR_RUN"
    echo "##vso[task.setvariable variable=major]$MAJOR_RUN"

    MINOR_RUN=$(echo $BUILD_BUILDNUMBER | cut -d '.' -f2)
    echo "This is the minor run number: $MINOR_RUN"
    echo "##vso[task.setvariable variable=minor]$MINOR_RUN"
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Pattern syntax
A pattern is a string or list of newline-delimited strings. File and directory names are compared to patterns to
include (or sometimes exclude) them in a task. You can build up complex behavior by stacking multiple
patterns. See fnmatch for a full syntax guide.
Match characters
Most characters are used as exact matches. What counts as an "exact" match is platform-dependent: the
Windows filesystem is case-insensitive, so the pattern "ABC" would match a file called "abc". On case-sensitive
filesystems, that pattern and name would not match.
The following characters have special behavior.
* matches zero or more characters within a file or directory name. See examples.
? matches any single character within a file or directory name. See examples.
[] matches a set or range of characters within a file or directory name. See examples.
** recursive wildcard. For example, /hello/**/* matches all descendants of /hello .
Extended globbing
?(hello|world) - matches hello or world zero or one times
*(hello|world) - zero or more occurrences
+(hello|world) - one or more occurrences
@(hello|world) - exactly once
!(hello|world) - not hello or world
Note, extended globs cannot span directory separators. For example, +(hello/world|other) is not valid.
Comments
Patterns that begin with # are treated as comments.
Exclude patterns
Leading ! changes the meaning of an include pattern to exclude. You can include a pattern, exclude a subset
of it, and then re-include a subset of that: this is known as an "interleaved" pattern.
Multiple ! flips the meaning. See examples.
You must define an include pattern before an exclude one. See examples.
Escaping
Wrapping special characters in [] can be used to escape literal glob characters in a file name. For example the
literal file name hello[a-z] can be escaped as hello[[]a-z] .
Slash
/ is used as the path separator on Linux and macOS. Most of the time, Windows agents accept / . Occasions
where the Windows separator ( \ ) must be used are documented.
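For instance, a newline-delimited pattern list can be supplied directly to a task input. A minimal sketch using the Copy Files task; the folder names are illustrative:
steps:
- task: CopyFiles@2
  inputs:
    SourceFolder: '$(Build.SourcesDirectory)'
    Contents: |
      **/*.zip
      !**/obj/**
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
Note that the include pattern is listed before the exclude pattern, as required.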
Examples
Basic pattern examples
Asterisk examples
Example 1: Given the pattern *Website.sln and files:
ConsoleHost.sln
ContosoWebsite.sln
FabrikamWebsite.sln
Website.sln
The pattern would match:
ContosoWebsite.sln
FabrikamWebsite.sln
Website.sln
Example 2: Given the pattern *Website/*.proj and files:
ContosoWebsite/index.html
ContosoWebsite/ContosoWebsite.proj
FabrikamWebsite/index.html
FabrikamWebsite/FabrikamWebsite.proj
The pattern would match:
ContosoWebsite/ContosoWebsite.proj
FabrikamWebsite/FabrikamWebsite.proj
Question mark examples
Example 1: Given the pattern log?.log and files:
log1.log
log2.log
log3.log
script.sh
The pattern would match:
log1.log
log2.log
log3.log
Example 2: Given the pattern image.??? and files:
image.tiff
image.png
image.ico
The pattern would match:
image.png
image.ico
Character set examples
Example 1: Given the pattern Sample[AC].dat and files:
SampleA.dat
SampleB.dat
SampleC.dat
SampleD.dat
The pattern would match:
SampleA.dat
SampleC.dat
Example 2: Given the pattern Sample[A-C].dat and files:
SampleA.dat
SampleB.dat
SampleC.dat
SampleD.dat
The pattern would match:
SampleA.dat
SampleB.dat
SampleC.dat
Example 3: Given the pattern Sample[A-CEG].dat and files:
SampleA.dat
SampleB.dat
SampleC.dat
SampleD.dat
SampleE.dat
SampleF.dat
SampleG.dat
SampleH.dat
The pattern would match:
SampleA.dat
SampleB.dat
SampleC.dat
SampleE.dat
SampleG.dat
Recursive wildcard examples
Given the pattern **/*.ext and files:
sample1/A.ext
sample1/B.ext
sample2/C.ext
The pattern would match all three files.
Exclude pattern examples
Given the pattern:
*
!*.xml
and files:
ConsoleHost.exe
ConsoleHost.pdb
ConsoleHost.xml
Fabrikam.dll
Fabrikam.pdb
Fabrikam.xml
The pattern would match:
ConsoleHost.exe
ConsoleHost.pdb
Fabrikam.dll
Fabrikam.pdb
Double exclude
Given the pattern:
*
!*.xml
!!Fabrikam.xml
and files:
ConsoleHost.exe
ConsoleHost.pdb
ConsoleHost.xml
Fabrikam.dll
Fabrikam.pdb
Fabrikam.xml
The pattern would match:
ConsoleHost.exe
ConsoleHost.pdb
Fabrikam.dll
Fabrikam.pdb
Fabrikam.xml
Folder exclude
Given the pattern:
**
!sample/**
and files:
ConsoleHost.exe
ConsoleHost.pdb
ConsoleHost.xml
sample/Fabrikam.dll
sample/Fabrikam.pdb
sample/Fabrikam.xml
The pattern would match:
ConsoleHost.exe
ConsoleHost.pdb
ConsoleHost.xml
File transforms and variable substitution reference
11/2/2020 • 8 minutes to read • Edit Online
Azure Pipelines | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 | TFS 2017
NOTE
In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, runs
are called builds, service connections are called service endpoints, stages are called environments, and jobs are called phases.
Some tasks, such as the Azure App Service Deploy task version 3 and later and the IIS Web App Deploy task, allow
users to configure the package based on the environment specified. These tasks use msdeploy.exe , which
supports the overriding of values in the web.config file with values from the parameters.xml file. However, file
transforms and variable substitution are not confined to web app files . You can use these techniques with any
XML or JSON files.
NOTE
File transforms and variable substitution are also supported by the separate File Transform task for use in Azure Pipelines.
You can use the File Transform task to apply file transformations and variable substitutions on any configuration and
parameters files.
Configuration substitution is specified in the File Transform and Variable Substitution Options section of the
settings for the tasks. The transformation and substitution options are:
XML transformation
XML variable substitution
JSON variable substitution
When the task runs, it first performs XML transformation, XML variable substitution, and JSON variable
substitution on configuration and parameters files. Next, it invokes msdeploy.exe , which uses the
parameters.xml file to substitute values in the web.config file.
XML Transformation
XML transformation supports transforming the configuration files ( *.config files) by following Web.config
Transformation Syntax and is based on the environment to which the web package will be deployed. This option is
useful when you want to add, remove, or modify configurations for different environments. Transformation will be applied to other configuration files as well, including Console or Windows service application configuration files (for example, FabrikamService.exe.config ).
Configuration transform file naming conventions
XML transformation will be run on the *.config file for transformation configuration files named
*.Release.config or *.<stage>.config and will be executed in the following order:
1. *.Release.config (for example, FabrikamService.exe.Release.config )
2. *.<stage>.config (for example, FabrikamService.exe.Production.config )
Transform file
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="MyDB"
         connectionString="Data Source=ReleaseSQLServer;Initial Catalog=MyReleaseDB;Integrated Security=True"
         xdt:Transform="Insert" />
  </connectionStrings>
  <appSettings>
    <add xdt:Transform="Replace" xdt:Locator="Match(key)" key="webpages:Enabled" value="true" />
  </appSettings>
  <system.web>
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>
For more information, see Web.config Transformation Syntax for Web Project Deployment Using Visual
Studio
For example, suppose your web deployment package has this folder structure:
/WebPackage(.zip)
/---- content
/----- website
/---- appsettings.json
/---- web.config
/---- [other folders]
/--- archive.xml
/--- systeminfo.xml
and you want to substitute values in appsettings.json , enter the relative path from the root folder; for example
content/website/appsettings.json . Alternatively, use wildcard patterns to search for specific JSON files. For
example, **/appsettings.json returns the relative path and name of files named appsettings.json .
JSON variable substitution example
As an example, consider the task of overriding values in this JSON file:
{
  "Data": {
    "DefaultConnection": {
      "ConnectionString": "Data Source=(LocalDb)\\MSDB;AttachDbFilename=aspcore-local.mdf;"
    },
    "DebugMode": "enabled",
    "DBAccess": {
      "Administrators": ["Admin-1", "Admin-2"],
      "Users": ["Vendor-1", "vendor-3"]
    },
    "FeatureFlags": {
      "Preview": [
        {
          "newUI": "AllAccounts"
        },
        {
          "NewWelcomeMessage": "Newusers"
        }
      ]
    }
  }
}
The task is to override the values of ConnectionString , DebugMode , the first of the Users values, and
NewWelcomeMessage at the respective places within the JSON file hierarchy.
Classic
YAML
1. Create a release pipeline with a stage named Release .
2. Add an Azure App Ser vice Deploy task and enter a newline-separated list of JSON files to substitute the
variable values in the JSON variable substitution textbox. File names must be relative to the root folder.
You can use wildcards to search for JSON files. For example: **/*.json means substitute values in all the
JSON files within the package.
"first" : {
"second": {
"third" : "value"
}
}
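In YAML, the separate File Transform task mentioned earlier can apply the same substitution. A minimal sketch, assuming version 1 of the task and a package named WebPackage.zip; verify the exact input names against the task reference:
steps:
- task: FileTransform@1
  inputs:
    folderPath: '$(System.DefaultWorkingDirectory)/**/WebPackage.zip'
    fileType: 'json'
    targetFiles: '**/appsettings.json'
Pipeline variables named to match the JSON hierarchy, such as Data.DebugMode , supply the replacement values.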
NOTE
Use UTF-8 formatting for logging commands.
Overview
Logging commands are how tasks and scripts communicate with the agent. They cover actions like creating new
variables, marking a step as failed, and uploading artifacts.
The general format for a logging command is:
##vso[area.action property1=value;property2=value;...]message
There are also a few formatting commands with a slightly different syntax:
##[command]message
#!/bin/bash
echo "##vso[task.setvariable variable=testvar;]testvalue"
File paths should be given as absolute paths: rooted to a drive on Windows, or beginning with / on Linux and
macOS.
Formatting commands
These commands are messages to the log formatter in Azure Pipelines. They mark specific log lines as errors,
warnings, collapsible sections, and so on.
The formatting commands are:
##[group]Beginning of a group
##[warning]Warning message
##[error]Error message
##[debug]Debug text
##[command]Command-line being run
##[endgroup]
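A script step can emit these commands directly. A minimal Bash sketch using the commands listed above:
steps:
- bash: |
    echo "##[group]Beginning of a group"
    echo "##[warning]Warning message"
    echo "##[error]Error message"
    echo "##[debug]Debug text"
    echo "##[command]Command-line being run"
    echo "##[endgroup]"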
Task commands
LogIssue: Log an error or warning
##vso[task.logissue]error/warning message
Usage
Log an error or warning message in the timeline record of the current task.
Properties
type = error or warning (Required)
sourcepath = source file location
linenumber = line number
columnnumber = column number
code = error or warning code
#!/bin/bash
echo "##vso[task.logissue type=error]Something went very wrong."
exit 1
TIP
exit 1 is optional, but is often a command you'll issue soon after an error is logged. If you select Control Options:
Continue on error , then the exit 1 will result in a partially successful build instead of a failed build.
#!/bin/bash
echo "##vso[task.logissue type=warning;sourcepath=consoleapp/main.cs;linenumber=1;columnnumber=1;code=100;]Found something that could be a problem."
SetProgress: Show percentage completed
##vso[task.setprogress]current operation
Usage
Set progress and current operation for the current task.
Properties
value = percentage of completion
Example
Bash
PowerShell
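A minimal Bash sketch; the loop values and message text are illustrative:
#!/bin/bash
echo "Begin a lengthy process..."
for i in {0..100..10}
do
    sleep 1
    echo "##vso[task.setprogress value=$i;]Sample Progress Indicator"
done
echo "Lengthy process is complete."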
To see how it looks, save and queue the build, and then watch the build run. Observe that the progress indicator changes when the task runs this script.
Complete: Finish timeline
##vso[task.complete]current operation
Usage
Finish the timeline record for the current task, and set the task result and current operation. When a result is not provided, the result is set to succeeded.
Properties
result =
Succeeded The task succeeded.
SucceededWithIssues The task ran into problems. The build will be completed as partially succeeded at
best.
Failed The build will be completed as failed. (If the Control Options: Continue on error option is
selected, the build will be completed as partially succeeded at best.)
Example
##vso[task.complete result=Succeeded;]DONE
LogDetail: Create or update a timeline record for a task
##vso[task.logdetail]current operation
Usage
Creates and updates timeline records. This is primarily used internally by Azure Pipelines to report about steps,
jobs, and stages. While customers can add entries to the timeline, they won't typically be shown in the UI.
The first time we see ##vso[task.detail] during a step, we create a "detail timeline" record for the step. We can create and update nested timeline records based on id and parentid .
Task authors must remember which GUID they used for each timeline record. The logging system keeps track of the GUID for each timeline record, so any new GUID results in a new timeline record.
Properties
id = Timeline record GUID (Required)
parentid = Parent timeline record GUID
type = Record type (Required for first time, can't overwrite)
name = Record name (Required for first time, can't overwrite)
order = order of timeline record (Required for first time, can't overwrite)
starttime = Datetime
finishtime = Datetime
progress = percentage of completion
state = Unknown | Initialized | InProgress | Completed
result = Succeeded | SucceededWithIssues | Failed
Examples
Create new root timeline record:
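A sketch of the command; the GUID, name, and type values are placeholders:
##vso[task.logdetail id=new guid;name=project1;type=build;order=1]create new timeline record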
SetVariable: Initialize or modify the value of a variable
##vso[task.setvariable]value
Usage
Sets a variable in the variable service of the task context. The first task can set a variable, and following tasks are able to use the variable. The variable is exposed to the following tasks as an environment variable.
When issecret is set to true , the value of the variable will be saved as secret and masked out from log. Secret
variables are not passed into tasks as environment variables and must instead be passed as inputs.
Properties
variable = variable name (Required)
issecret = boolean (Optional, defaults to false)
isoutput = boolean (Optional, defaults to false)
isreadonly = boolean (Optional, defaults to false)
Examples
Bash
PowerShell
Set the variables:
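A minimal Bash sketch; the variable names and values are illustrative:
#!/bin/bash
echo "##vso[task.setvariable variable=sauce;]crushed tomatoes"
echo "##vso[task.setvariable variable=secretSauce;issecret=true;]crushed tomatoes with garlic"
echo "##vso[task.setvariable variable=outputSauce;isoutput=true;]canned goods"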
SetEndpoint: Modify a service connection field
##vso[task.setendpoint]value
Usage
Set a service connection field with the given value. The updated value is retained in the endpoint for subsequent tasks that execute within the same job.
Properties
id = service connection ID (Required)
field = field type, one of authParameter , dataParameter , or url (Required)
key = key (Required, unless field = url )
Examples
##vso[task.setendpoint id=000-0000-0000;field=authParameter;key=AccessToken]testvalue
##vso[task.setendpoint id=000-0000-0000;field=dataParameter;key=userVariable]testvalue
##vso[task.setendpoint id=000-0000-0000;field=url]https://example.com/service
AddAttachment: Attach a file to the build
##vso[task.addattachment]local file path
Usage
Upload and attach a file to the current timeline record. These files are not available for download with logs. They can only be referred to by extensions using the type or name values.
Properties
type = attachment type (Required)
name = attachment name (Required)
Example
##vso[task.addattachment type=myattachmenttype;name=myattachmentname;]c:\myattachment.txt
UploadSummary: Add some Markdown content to the build summary
##vso[task.uploadsummary]local file path
Usage
Upload and attach summary Markdown to the current timeline record. The summary is added to the build/release summary and is not available for download with logs. The summary should be in UTF-8 or ASCII format.
Examples
##vso[task.uploadsummary]c:\testsummary.md
UploadFile: Upload a file that can be downloaded with task logs
##vso[task.uploadfile]local file path
Usage
Upload a file as additional log information for the current timeline record. The file is available for download along with the task logs.
Example
##vso[task.uploadfile]c:\additionalfile.log
PrependPath: Prepend a path to the PATH environment variable
##vso[task.prependpath]local directory path
Usage
Update the PATH environment variable by prepending to the PATH. The updated environment variable will be
reflected in subsequent tasks.
Example
##vso[task.prependpath]c:\my\directory\path
Artifact commands
Associate: Initialize an artifact
##vso[artifact.associate]artifact location
Usage
Create an artifact link. Artifact location must be a file container path, VC path or UNC share path.
Properties
artifactname = artifact name (Required)
type = container | filepath | versioncontrol | gitref | tfvclabel , artifact type (Required)
Examples
##vso[artifact.associate type=container;artifactname=MyServerDrop]#/1/build
##vso[artifact.associate type=filepath;artifactname=MyFileShareDrop]\\MyShare\MyDropLocation
##vso[artifact.associate type=versioncontrol;artifactname=MyTfvcPath]$/MyTeamProj/MyFolder
##vso[artifact.associate type=gitref;artifactname=MyTag]refs/tags/MyGitTag
##vso[artifact.associate type=tfvclabel;artifactname=MyTag]MyTfvcLabel
Upload: Upload an artifact
##vso[artifact.upload]local file path
Usage
Upload a local file into a file container folder, and optionally publish an artifact as artifactname .
Properties
containerfolder = folder that the file will upload to, folder will be created if needed. (Required)
artifactname = artifact name
Example
##vso[artifact.upload containerfolder=testresult;artifactname=uploadedresult;]c:\testresult.trx
Build commands
UploadLog: Upload a log
##vso[build.uploadlog]local file path
Usage
Upload a log file of interest to the build's container in the logs\tool folder.
Example
##vso[build.uploadlog]c:\msbuild.log
UpdateBuildNumber: Override the automatically generated build number
##vso[build.updatebuildnumber]build number
Usage
You can automatically generate a build number from tokens you specify in the pipeline options. However, if you
want to use your own logic to set the build number, then you can use this logging command.
Example
##vso[build.updatebuildnumber]my-new-build-number
AddBuildTag: Add a tag to the build
##vso[build.addbuildtag]build tag
Usage
Add a tag to the current build.
Example
##vso[build.addbuildtag]Tag_UnitTestPassed
Release commands
UpdateReleaseName: Rename current release
##vso[release.updatereleasename]release name
Usage
Update the release name for the running release.
NOTE
Supported in Azure DevOps and Azure DevOps Server beginning in version 2020.
Example
##vso[release.updatereleasename]my-new-release-name
Artifact policy checks
6/12/2020 • 2 minutes to read • Edit Online
Artifact policies are enforced before deploying to critical environments such as production. These policies are
evaluated against all the deployable artifacts in the given pipeline run and block the deployment if the artifacts
don't comply. Adding a check to evaluate artifacts requires a custom policy to be configured. This guide describes how custom policies can be created.
NOTE
Currently, the supported artifact types are container images and Kubernetes environments.
Prerequisites
Use Rego for defining policies; it is easy to read and write.
Familiarize yourself with the Rego query language. The basics will do.
To support structured document models like JSON, Rego extends Datalog. Rego queries are assertions on data
stored in OPA. These queries can be used to define policies that enumerate instances of data that violate the
expected state of the system.
Check image builder
This policy checks that the images were built by the allowed builder; the builder version recorded in the image's provenance must start with the value of allowedBuilder .
allowedBuilder := "AzureDevOps_pipeline-foo"
checkBuilder[errors] {
trace("Check if images are built by Azure Pipelines")
resourceUri := values[index].build.resourceUri
image := fetchImage(resourceUri)
builder := values[index].build.build.provenance.builderVersion
trace(sprintf("%s: builder", [builder]))
not startswith(builder, allowedBuilder)
errors := sprintf("%s: image not built by Azure Pipeline [%s]", [image,builder])
}
fetchRegistry(uri) = reg {
out := regex.find_n("//.*/", uri, 1)
reg = trim(out[0], "/")
}
fetchImage(uri) = img {
out := regex.find_n("/.*@", uri, 1)
img := trim(out[0], "/@")
}
Check allowed registries
This policy checks if the images are from allowed registries only.
allowlist = {
"gcr.io/myrepo",
"raireg1.azurecr.io"
}
checkregistries[errors] {
trace(sprintf("Allowed registries: %s", [concat(", ", allowlist)]))
resourceUri := values[index].image.resourceUri
registry := fetchRegistry(resourceUri)
image := fetchImage(resourceUri)
not allowlist[registry]
errors := sprintf("%s: source registry not permitted", [image])
}
fetchRegistry(uri) = reg {
out := regex.find_n("//.*/", uri, 1)
reg = trim(out[0], "/")
}
fetchImage(uri) = img {
out := regex.find_n("/.*@", uri, 1)
img := trim(out[0], "/@")
}
Check forbidden ports
This policy checks that the images do not expose forbidden ports.
forbiddenPorts = {
"80",
"22"
}
checkExposedPorts[errors] {
trace(sprintf("Checking for forbidden exposed ports: %s", [concat(", ", forbiddenPorts)]))
layerInfos := values[index].image.image.layerInfo
layerInfos[x].directive == "EXPOSE"
resourceUri := values[index].image.resourceUri
image := fetchImage(resourceUri)
ports := layerInfos[x].arguments
trace(sprintf("exposed ports: %s", [ports]))
forbiddenPorts[ports]
errors := sprintf("%s: image exposes forbidden port %s", [image,ports])
}
fetchRegistry(uri) = reg {
out := regex.find_n("//.*/", uri, 1)
reg = trim(out[0], "/")
}
fetchImage(uri) = img {
out := regex.find_n("/.*@", uri, 1)
img := trim(out[0], "/@")
}
Check deployment environments
This policy checks whether the image has already been deployed to one of the required pre-deployment environments. The value in predeployedEnvironments below is a placeholder; replace it with the addresses of your own pre-deployment environments.
predeployedEnvironments = {
"myAksCluster/predeploy-namespace" # placeholder environment address
}
checkDeployedEnvironments[errors] {
trace(sprintf("Checking if the image has been pre-deployed to one of: [%s]", [concat(", ",
predeployedEnvironments)]))
deployments := values[index].deployment
deployedAddress := deployments[i].deployment.address
trace(sprintf("deployed to : %s",[deployedAddress]))
resourceUri := deployments[i].resourceUri
image := fetchImage(resourceUri)
not predeployedEnvironments[deployedAddress]
trace(sprintf("%s: fails pre-deployed environment condition. found %s", [image,deployedAddress]))
errors := sprintf("image %s fails pre-deployed environment condition. found %s", [image,deployedAddress])
}
fetchRegistry(uri) = reg {
out := regex.find_n("//.*/", uri, 1)
reg = trim(out[0], "/")
}
fetchImage(uri) = img {
out := regex.find_n("/.*@", uri, 1)
img := trim(out[0], "/@")
}
Securing Azure Pipelines
2/26/2020 • 2 minutes to read • Edit Online
Azure Pipelines poses unique security challenges. You can use a pipeline to run scripts or deploy code to production
environments. But you want to ensure your CI/CD pipelines don't become avenues to run malicious code. You also
want to ensure only code you intend to deploy is deployed. Security must be balanced with giving teams the
flexibility and power they need to run their own pipelines.
NOTE
Azure Pipelines is one among a collection of Azure DevOps services, all built on the same secure infrastructure in Azure. To
understand the main concepts around security for all of Azure DevOps services, see Azure DevOps Data Protection Overview
and Azure DevOps Security and Identity.
Traditionally, organizations implemented security through draconian lock-downs. Code, pipelines, and production
environments had severe restrictions on access and use. In small organizations with a small number of users and
projects, this stance was relatively easy to manage. However, that's not the case in larger organizations. Where
many users have contributor access to code, one must "assume breach". Assuming breach means behaving as if an
adversary has contributor access to some (if not all) of the repositories.
The goal in this case is to prevent that adversary from running malicious code in the pipeline. Malicious code may
steal secrets or corrupt production environments. Another goal is to prevent lateral exposure to other projects,
pipelines, and repositories from the compromised pipeline.
This series of topics outlines recommendations to help you put together a secure YAML-based CI/CD pipeline. It
also covers the places where you can make trade-offs between security and flexibility. The series also assumes
familiarity with Azure Pipelines, the core Azure DevOps security constructs, and Git.
Topics covered:
Incremental approach to improving security
Repository protection
Pipeline resources
Project structure
Security through templates
Variables and parameters
Shared infrastructure
Other security considerations
Plan how to secure your YAML pipelines
2/26/2020 • 2 minutes to read • Edit Online
We recommend that you use an incremental approach to secure your pipelines. Ideally, you would implement all of
the guidance that we offer. But don't be daunted by the number of recommendations. And don't hold off making
some improvements just because you can't make all the changes right now.
Next steps
After you plan your security approach, consider how your repositories provide protection.
Repository protection
5/14/2020 • 2 minutes to read • Edit Online
Source code, the pipeline's YAML file, and necessary scripts & tools are all stored in a version control repository.
Permissions and branch policies must be employed to ensure changes to the code and pipeline are safe. Also, you
should review default access control for repositories.
Because of Git's design, protection at a branch level will only carry you so far. Users with push access to a repo can
usually create new branches. If your project is open source on GitHub, anyone with a GitHub account can fork your repository and propose contributions back. Since pipelines are associated with a repository and not with specific branches, you must assume the code and YAML files are untrusted.
Forks
If you build public repositories from GitHub, you must consider your stance on fork builds. Forks are especially
dangerous since they come from outside your organization. To protect your products from contributed code,
consider the following recommendations.
NOTE
The following recommendations apply primarily to building public repos from GitHub.
User branches
Users in your organization with the right permissions can create new branches containing new or updated code.
That code can run through the same pipeline as your protected branches. Further, if the YAML file in the new
branch is changed, then the updated YAML will be used to run the pipeline. While this design allows for great
flexibility and self-service, not all changes are safe (whether made maliciously or not).
If your pipeline consumes source code or is defined in Azure Repos, you must fully understand the Azure Repos
permissions model. In particular, a user with Create Branch permission at the repository level can introduce code
to the repo even if that user lacks Contribute permission.
Next steps
Next, learn about the additional protection offered by checks on protected resources.
Pipeline resources
11/2/2020 • 3 minutes to read • Edit Online
Azure Pipelines offers security mechanisms beyond just protecting the YAML file and source code. When pipelines
run, access to resources goes through a system called checks. Checks can suspend or even fail a pipeline run in
order to keep resources safe. A pipeline can access two types of resources, protected and open.
Protected resources
Your pipelines often have access to secrets. For instance, to sign your build, you need a signing certificate. To
deploy to a production environment, you need a credential to that environment. In Azure Pipelines, all of the
following are considered protected resources:
agent pools
variable groups
secure files
service connections
environments
"Protected" means:
They can be made accessible to specific users and specific pipelines within the project. They cannot be accessed
by users and pipelines outside of a project.
You can run additional manual or automated checks every time a pipeline uses one of these resources.
Open resources
All the other resources in a project are considered open resources. Open resources include:
artifacts
pipelines
test plans
work items
You'll learn more about which pipelines can access what resources in the section on projects.
User permissions
The first line of defense for protected resources is user permissions. In general, ensure that you only give
permissions to users who require them. All protected resources have a similar security model. A member of the User role for a resource can:
Remove approvers and checks configured on that resource
Grant access to other users or pipelines to use that resource
Pipeline permissions
When you use YAML pipelines, user permissions are not enough to secure your protected resources. You can
easily copy the name of a protected resource (for example, a service connection for your production environment)
and include that in a different pipeline. Pipeline permissions protect against such copying. For each of the
protected resources, ensure that you have disabled the option to grant access to "all pipelines". Instead, explicitly grant access to specific pipelines that you trust.
Checks
In YAML, a combination of user and pipeline permissions is not enough to fully secure your protected resources.
Pipeline permissions to resources are granted to the whole pipeline. Nothing prevents an adversary from creating
another branch in your repository, injecting malicious code, and using the same pipeline to access that resource.
Even without malicious intent, most pipelines need a second set of eyes to look over changes (especially to the pipeline itself) before deploying to production. Checks allow you to pause the pipeline run until certain conditions are met:
Manual approval check . Every run that uses a project protected resource is blocked for your manual
approval before proceeding. This gives you the opportunity to review the code and ensure that it is coming
from the right branch.
Protected branch check . If you have manual code review processes in place for some of your branches, you
can extend this protection to pipelines. Configure a protected branch check on each of your resources. This will
automatically stop your pipeline from running on top of any user branches.
Next steps
Next, consider how you group resources into a project structure.
Recommendations to securely structure projects in
your pipeline
2/26/2020 • 2 minutes to read • Edit Online
Beyond the scale of individual resources, you should also consider groups of resources. In Azure DevOps,
resources are grouped by team projects. It's important to understand what resources your pipeline can access
based on project settings and containment.
Every job in your pipeline receives an access token. This token has permissions to read open resources. In some
cases, pipelines might also update those resources. In other words, your user account might not have access to a
certain resource, but scripts and tasks that run in your pipeline might have access to that resource. The security
model in Azure DevOps also allows access to these resources from other projects in the organization. If you
choose to shut off pipeline access to some of these resources, then your decision applies to all pipelines in a
project. A specific pipeline can't be granted access to an open resource.
Separate projects
Given the nature of open resources, you should consider managing each product and team in a separate project.
This practice ensures that a pipeline from one product can't access open resources from another product. In this
way, you prevent lateral exposure. When multiple teams or products share a project, you can't granularly isolate
their resources from one another.
If your Azure DevOps organization was created before August 2019, then runs might be able to access open
resources in all of your organization's projects. Your organization administrator must review a key security setting
in Azure Pipelines that enables project isolation for pipelines. You can find this setting at Azure DevOps >
Organization settings > Pipelines > Settings . Or go directly to this Azure DevOps location:
https://dev.azure.com/ORG-NAME/_settings/pipelinessettings.
Next steps
After you've set up the right project structure, enhance runtime security by using templates.
Security through templates
11/2/2020 • 5 minutes to read • Edit Online
Checks on protected resources are the basic building block of security for Azure Pipelines. Checks work no matter
the structure - the stages and jobs - of your pipeline. If several pipelines in your team or organization have the
same structure, you can further simplify security using templates.
Azure Pipelines offers two kinds of templates: includes and extends . Included templates behave like #include in
C++: it's as if you paste the template's code right into the outer file, which references it. To continue the C++
metaphor, extends templates are more like inheritance: the template provides the outer structure of the pipeline
and a set of places where the template consumer can make targeted alterations.
# template.yml
parameters:
- name: usersteps
  type: stepList
  default: []

steps:
- ${{ each step in parameters.usersteps }}:
  - ${{ step }}
# azure-pipelines.yml
resources:
  repositories:
  - repository: templates
    type: git
    name: MyProject/MyTemplates
    ref: refs/tags/v1

extends:
  template: template.yml@templates
  parameters:
    usersteps:
    - script: echo This is my first step
    - script: echo This is my second step
When you set up extends templates, consider anchoring them to a particular Git branch or tag. That way, if
breaking changes need to be made, existing pipelines won't be affected. The examples above use this feature.
resources:
  containers:
  - container: builder
    image: mysecurebuildcontainer:latest

steps:
- script: echo This step runs on the agent host, and it could use docker commands to tear down or limit the container's network
- script: echo This step runs inside the builder container
  target: builder
# this task will fail because its `target` property instructs the agent not to allow publishing artifacts
- task: PublishBuildArtifacts@1
  inputs:
    artifactName: myartifacts
  target:
    commands: restricted
jobs:
- job: buildNormal
  steps:
  - script: echo Building the normal, unsensitive part
- ${{ if eq(variables['Build.SourceBranch'], 'refs/heads/master') }}:
  - job: buildMasterOnly
    steps:
    - script: echo Building the restricted part that only builds for master branch
WARNING
In the example below, only the literal step type "script" is prevented. For full lockdown of ad-hoc scripts, you would also need
to block "bash", "pwsh", "powershell", and the tasks which back these steps.
# template.yml
parameters:
- name: usersteps
  type: stepList
  default: []

steps:
- ${{ each step in parameters.usersteps }}:
  - ${{ each pair in step }}:
      ${{ if ne(pair.key, 'script') }}:
        ${{ pair.key }}: ${{ pair.value }}

# azure-pipelines.yml
extends:
  template: template.yml
  parameters:
    usersteps:
    - task: MyTask@1
    - script: echo This step will be stripped out and not run!
    - task: MyOtherTask@2
# template.yml
parameters:
- name: userpool
  type: string
  default: Azure Pipelines
  values:
  - Azure Pipelines
  - private-pool-1
  - private-pool-2

pool: ${{ parameters.userpool }}

# azure-pipelines.yml
extends:
  template: template.yml
  parameters:
    userpool: private-pool-1
Here the template params.yml is required with an approval on the resource. To trigger the pipeline to fail,
comment out the reference to params.yml .
# params.yml
parameters:
- name: yesNo
  type: boolean
  default: false
- name: image
  displayName: Pool Image
  type: string
  default: ubuntu-latest
  values:
  - windows-latest
  - vs2017-win2016
  - ubuntu-latest
  - ubuntu-16.04
  - macOS-latest
  - macOS-10.14

steps:
- script: echo ${{ parameters.yesNo }}
- script: echo ${{ parameters.image }}
# azure-pipeline.yml
resources:
  containers:
  - container: my-container
    endpoint: my-service-connection
    image: mycontainerimages

extends:
  template: params.yml
  parameters:
    yesNo: true
    image: 'windows-latest'
Additional steps
A template can add steps without the pipeline author having to include them. These steps can be used to run
credential scanning or static code checks.
# template to insert a step before and after user steps in every job
parameters:
  jobs: []

jobs:
- ${{ each job in parameters.jobs }}: # Each job
  - ${{ each pair in job }}:          # Insert all properties other than "steps"
      ${{ if ne(pair.key, 'steps') }}:
        ${{ pair.key }}: ${{ pair.value }}
    steps:                            # Wrap the steps
    - task: CredScan@1                # Pre steps
    - ${{ job.steps }}                # Users steps
    - task: PublishMyTelemetry@1      # Post steps
      condition: always()
Next steps
Next, learn about taking inputs safely through variables and parameters.
How to securely use variables and parameters in
your pipeline
2/26/2020 • 2 minutes to read • Edit Online
This article discusses how to securely use variables and parameters to gather input from pipeline users.
Variables
Variables can be a convenient way to collect information from the user up front. You can also use variables to pass
data from step to step within a pipeline.
But use variables with caution. Newly created variables, whether they're defined in YAML or written by a script, are
read-write by default. A downstream step can change the value of a variable in a way that you don't expect.
For instance, imagine your script reads:
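As a representative example, assume a build step that passes the variable straight to a command line; the project file name is illustrative:
msbuild.exe myproj.proj -property:Configuration=$(MyConfig)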
A preceding step could set MyConfig to Debug & deltree /y c: . Although this example would only delete the
contents of your build agent, you can imagine how this setting could easily become far more dangerous.
You can make variables read-only. System variables like Build.SourcesDirectory , task output variables, and queue-
time variables are always read-only. Variables that are created in YAML or created at run time by a script can be
designated as read-only. When a script or task creates a new variable, it can pass the isReadonly=true flag in its
logging command to make the variable read-only.
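For example, a script could emit the following logging command; the variable name and value are illustrative:
echo "##vso[task.setvariable variable=myVar;isreadonly=true]myValue"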
In YAML, you can specify read-only variables by using a specific key:
variables:
- name: myReadOnlyVar
  value: myValue
  readonly: true
Queue-time variables are exposed to the end user who manually runs a pipeline. As originally designed, this
concept was only for the UI. The underlying API would accept user overrides of any variable, even variables that
weren't designated as queue-time variables. This arrangement was confusing and insecure. So we've added a
setting that makes the API accept only variables that can be set at queue time. We recommend that you turn on
this setting.
Parameters
Unlike variables, pipeline parameters can't be changed by a pipeline while it's running. Parameters have data types
such as number and string , and they can be restricted to a subset of values. Restricting the parameters is useful
when a user-configurable part of the pipeline should take a value only from a constrained list. The setup ensures
that the pipeline won't take arbitrary data.
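A minimal sketch of such a restricted runtime parameter; the parameter name and values are illustrative:
parameters:
- name: configuration
  displayName: Build configuration
  type: string
  default: Release
  values:
  - Debug
  - Release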
Next steps
After you secure your inputs, you also need to secure your shared infrastructure.
Recommendations to secure shared infrastructure in
Azure Pipelines
2/26/2020 • 2 minutes to read • Edit Online
Protected resources in Azure Pipelines are an abstraction of real infrastructure. Follow these recommendations to
protect the underlying infrastructure.
There are a handful of other things you should consider when securing pipelines.
Relying on PATH
Relying on the agent's PATH setting is dangerous. It may not point where you think it does, since a previous script
or tool could have altered it. For security-critical scripts and binaries, always use a fully qualified path to the
program.
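For example, invoke security-critical tools by their full path rather than relying on PATH resolution; the path shown is illustrative:
steps:
- script: /usr/bin/curl --version   # fully qualified path; don't rely on PATH resolution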
Logging of secrets
Azure Pipelines attempts to scrub secrets from logs wherever possible. This filtering is on a best-effort basis and
cannot catch every way that secrets can be leaked. Avoid echoing secrets to the console, using them in command
line parameters, or logging them to files.
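For example, map a secret into an environment variable for the one step that needs it rather than echoing it or passing it on the command line; the variable and script names below are assumptions:
steps:
- bash: ./deploy.sh                  # the script reads DEPLOY_TOKEN from the environment
  env:
    DEPLOY_TOKEN: $(deployToken)     # deployToken is an assumed secret pipeline variable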
Container jobs can mount the agent's directories into the container as read-only volumes, which limits the ability of tasks running inside the container to tamper with the agent's tooling:
resources:
  containers:
  - container: example
    image: ubuntu:18.04
    mountReadOnly:
      externals: true
      tasks: true
      tools: true
      work: false # the default; shown here for completeness
Most people should mark the first three read-only and leave work as read-write. If you know you won't write to
the work directory in a given job or step, go ahead and make work read-only as well. If you have tasks in your
pipeline which self-modify, you may need to leave tasks read-write.
Next steps
Return to the overview and make sure you've covered every topic.
Learn how to add continuous security validation to
your CI/CD pipeline
11/2/2020 • 9 minutes to read • Edit Online
Azure DevOps Services | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2015
Are you planning Azure DevOps continuous integration and deployment pipelines? You probably have a few
questions, such as:
How do you ensure your application is safe?
How do you add continuous security validation to your CI/CD pipeline?
DevOps practices are allowing businesses to stay ahead of the competition by delivering new features faster than
ever before. As the frequency of production deployments increases, this business agility cannot come at the
expense of security. With continuous delivery, how do you ensure your applications are secure and stay secure?
How can you find and fix security issues early in the process? This begins with practices commonly referred to as
DevSecOps. DevSecOps incorporates the security team and their capabilities into your DevOps practices making
security a responsibility of everyone on the team. This article will walk you through how to help ensure your
application is secure by adding continuous security validation to your CI/CD pipeline.
Security needs to shift from an afterthought to being evaluated at every step of the process. Securing applications
is a continuous process that encompasses secure infrastructure, designing an architecture with layered security,
continuous security validation, and monitoring for attacks.
Continuous security validation should be added at each step from development through production to help ensure
the application is always secure. The goal of this approach is to switch the conversation with the security team from
approving each release to approving the CI/CD process and having the ability to monitor and audit the process at
any time. When building greenfield applications, the diagram below highlights the key validation points in the
CI/CD pipeline. Depending on your platform and where your application is in its lifecycle, you may need to implement the tools gradually, especially if your product is mature and you haven't previously run any security validation against your site or application.
IDE / Pull Request
Validation in the CI/CD begins before the developer commits his or her code. Static code analysis tools in the IDE
provide the first line of defense to help ensure that security vulnerabilities are not introduced into the CI/CD
process. The process for committing code into a central repository should have controls to help prevent security
vulnerabilities from being introduced. Using Git source control in Azure DevOps with branch policies provides a
gated commit experience that can provide this validation. By enabling branch policies on the shared branch, a pull
request is required to initiate the merge process and ensure that all defined controls are being executed. The pull
request should require a code review, which is the one manual but important check for identifying new issues
being introduced into your code. Along with this manual check, commits should be linked to work items for
auditing why the code change was made and require a continuous integration (CI) build process to succeed before
the push can be completed.
CI (Continuous Integration)
The CI build should be executed as part of the pull request (PR-CI) process discussed above and once the merge is
complete. Typically, the primary difference between the two runs is that the PR-CI process doesn't need to do any of
the packaging/staging that is done in the CI build. These CI builds should run static code analysis tests to ensure
that the code is following all rules for both maintenance and security. Several tools can be used for this.
Visual Studio Code Analysis and the Roslyn Security Analyzers
Checkmarx - A Static Application Security Testing (SAST) tool
BinSkim - A binary static analysis tool that provides security and correctness results for Windows portable
executables
Other 3rd party tools
Many of the tools seamlessly integrate into the Azure Pipelines build process. Visit the VSTS Marketplace for more
information on the integration capabilities of these tools.
In addition to code quality being verified with the CI build, two other tedious or ignored validations are scanning
3rd party packages for vulnerabilities and OSS license usage. Often when we ask about 3rd party package
vulnerabilities and the licenses, the response is fear or uncertainty. Organizations that are trying to manage 3rd party package vulnerabilities and/or OSS licenses explain that their process for doing so is tedious and manual. Fortunately, there are a couple of tools by WhiteSource Software that can make this identification process
almost instantaneous. The tool runs through each build and reports all of the vulnerabilities and the licenses of the
3rd party packages. WhiteSource Bolt is a new option, which includes a 6-month license with your Visual Studio
Subscription. Bolt provides a report of these items but doesn't include the advanced management and alerting
capabilities that the full product offers. With new vulnerabilities being regularly discovered, your build reports
could change even though your code doesn't. Checkmarx includes a similar WhiteSource Bolt integration so there
could be some overlap between the two tools. See, Manage your open source usage and security as reported by
your CI/CD pipeline for more information about WhiteSource and the Azure Pipelines integration.
In addition to validating the application, the infrastructure should also be validated to check for any vulnerabilities.
When using the public cloud such as Azure, deploying the application and shared infrastructure is easy, so it is
important to validate that everything has been done securely. Azure includes many tools to help report and prevent
these vulnerabilities including Security Center and Azure Policies. Also, we have set up a scanner that can ensure
any public endpoints and ports have been added to an allow list or else it will raise an infrastructure issue. This is
run as part of the Network pipeline to provide immediate verification, but it also needs to be executed each night to
ensure that there aren't any resources publicly exposed that should not be.
Once the scans have completed, the Azure Pipelines release is updated with a report that includes the results and
bugs are created in the team's backlog. Resolved bugs will close if the vulnerability has been fixed and move back
into in-progress if the vulnerability still exists.
The benefit of using this is that the vulnerabilities are created as bugs that provide actionable work that can be
tracked and measured. False positives can be suppressed using OWASP ZAP's context file, so only vulnerabilities
that are true vulnerabilities are surfaced.
Even with continuous security validation running against every change to help ensure new vulnerabilities are not
introduced, hackers are continuously changing their approaches, and new vulnerabilities are being discovered.
Good monitoring tools allow you to help detect, prevent, and remediate issues discovered while your application is
running in production. Azure provides a number of tools that provide detection, prevention, and alerting using
rules such as OWASP Top 10 / modSecurity and now even using machine learning to detect anomalies and unusual
behavior to help identify attackers.
Minimize security vulnerabilities by taking a holistic and layered approach to security including secure
infrastructure, application architecture, continuous validation, and monitoring. DevSecOps practices enable your
entire team to incorporate these security capabilities throughout the entire lifecycle of your application. Establishing
continuous security validation into your CI/CD pipeline can allow your application to stay secure while you are
improving the deployment frequency to meet needs of your business to stay ahead of the competition.
Reference information
BinSkim - A binary static analysis tool that provides security and correctness results for Windows portable
executables
Checkmarx - A Static Application Security Testing (SAST) tool
Manage your open source usage and security as reported by your CI/CD pipeline
OWASP
OWASP ZAP VSTS extension
WhiteSource Software
Visual Studio Code Analysis and the Roslyn Security Analyzers
Authors: Mike Douglas | Find the origin of this article and connect with the ALM | DevOps Rangers here
(c) 2017 Microsoft Corporation. All rights reserved. This document is provided "as-is." Information and views
expressed in this document, including URL and other Internet Web site references, may change without notice. You
bear the risk of using it.
This document does not provide you with any legal rights to any intellectual property in any Microsoft product. You
may copy and use this document for your internal, reference purposes.
Build and Deployment Automation Case Study for
World Wide Time Keeping: Higher Quality and Faster
Delivery in an Increasingly Agile World
11/2/2020 • 8 minutes to read • Edit Online
October 2015
In an Agile world, delivering quick and frequent releases for large, complex systems with multiple components
becomes cumbersome and time-consuming if done manually, because each component has a high degree of
complexity and requires a lot of resource intervention and configuration to ensure that it works as expected.
That's why many teams opt for Build and Deployment Automation to ensure faster releases and reduce manual
intervention. However, automating multiple components of a system has its own challenges. Even though releases
can be automated in silos, if we need a one-click deployment for the entire system, we need to have an automation
framework that can automate an entire custom workflow.
Throughout this paper, we give insight on our project - World Wide Time Keeping - and how we implemented build
and deployment automation using Gated Check-ins, Code Analysis, and Fortify Integrations. We discuss build and
deployment automation by using PowerShell scripts and how we can create custom workflows and deploy all at
once using Release Management. We also talk about how these can help you cut down your engineering cycle time
and play an important role in hitting Production Ready at Code Complete (PRCC) goals. This lets you have a
Continuous Integration Continuous Delivery (CICD) Project and helps you go faster, without introducing issues.
This content is useful for SWE teams who are working in an Agile model for large, complex systems and want to cut
down their release cycles and deliver faster. We assume that readers have a fundamental knowledge of Engineering
Cycles and their phases (Develop/Test/Build/Deploy) and a fundamental knowledge of Agile practices and delivery
cycles.
Build automation
Many teams have multiple requirements for build, but the following practices can be applied to most teams. You
may adopt the whole approach or just implement the components that work out best for you.
Daily Builds: Have a build pipeline for scheduled builds. Aim for a daily schedule with builds released to the
internal SWE environment by the end of each day.
One-click builds for non-internal environments: For Integration/UAT environments, you automate the builds.
Instead of scheduling them on a per day basis, you can trigger them by queuing them in VSTF. (The reason for not
scheduling them is that a build is not required on Integration/UAT environments on a daily basis. Rather, they tend
to happen on an as-needed basis. This will depend on your team's needs and you can adopt the rhythm that works
best for your team.)
Gated Check-ins: Set up gated check-ins to ensure that only code that complies and passed unit testing gets
checked in. It ensures that code quality remains high and that there are no broken builds. Integrate Fortify and
Code Analysis to get further insight into code quality.
Code Analysis Integrations: To get insight into whether the code is of good quality or if any changes need to be
made, integrate Code Analysis into the build pipelines and set the threshold to low. The changes can be identified
and fixed early, which is required in the Agile world.
For tify Integrations: Use Fortify for security-based checks of the build pipelines associated with your check-ins
and daily builds. This ensures that any security vulnerabilities are identified as soon as possible and can be fixed
quickly.
Deployment automation
Use deployment scripts
Deployments for internal SWE environment: Set up the internal SWE environments deployments with the
daily automated builds by integrating the build pipelines with the deployment scripts. All the checked-in changes
will then be deployed at the end of each day, without any manual intervention.
This way, the latest build is present in the SWE environment in case you would like to demo the product to
stakeholders.
Deployments for Integration/UAT environments: For Integration/UAT environments, you can integrate the
scripts with the build pipelines without scheduling them and trigger them on an as-needed basis. Because you have
set up one-click builds for them, when the build completes successfully, the scripts get executed at the end and the
product is deployed. Therefore, you do not have to manually deploy the system. Instead it's deployed automatically
by simply queuing a build.
The release pipeline
In theory, a release pipeline is a process that dictates how you deliver software to your end users. In practice, a
release pipeline is an implementation of that pattern. The pipeline begins with code in version control and ends
with code deployed to the production environment. In between, a lot can happen. Code is compiled, environments
are configured, many types of tests run, and finally, the code is considered "done". By done, we mean that the code
is in production. Anything you successfully put through the release pipeline should be something you would give to
your customers. Here is a diagram based on the one you will see on Jez Humble's Continuous Delivery website. It is
an example of what can occur as code moves through a release pipeline.
Use Release Management
If your team is working on Azure-based components - web apps, services, web jobs, and so on - you can use
Release Management for automating deployments.
Release Management consists of various pre-created components which you can configure and use either
independently or in conjunction with other components through workflows.
You might face pain points when you manually deploy an entire system. For a large complex system with multiple
components, like service, web jobs, and dacpac scripts, here are example pain points:
A large amount of time goes into configuration of each component
Deployment needs to be done separately for each, adding to the overall deployment time.
Multiple resources have to be engaged to ensure that the deployments happen as expected.
How Release Management (RM) solves them:
RM allows you to create custom workflows which sequence the deployment to ensure that the components
get deployed as soon as their dependencies have been deployed.
Configurations can be stored in RM to ensure that configuration per deployment is not required.
It automates the entire workflow which ensures manual intervention is not required and resources can be
utilized for functional tasks.
Key takeaways
Set up Automated Builds scheduled for the rhythm that works best for your product and Implement Gated
Check-ins.
Integrate Code Analysis and Fortify into the build setup to improve the code quality and security of the
application
Set up daily automated deployments to the internal SWE environments and set up one click deployments to
environments like UAT and Prod.
Use Release Management to set up custom workflows for your releases and triggering them with a single
click.
To use Release Management, you need to set up the following components:
RM Server : The central repository for configuration and release information.
Build Agent : This is a machine (physical or VM) that you set up at your end on which you will run all your
builds and deployments.
Environments : This signifies the environment which will be used in conjunction with your machine that you
have set up.
Release Paths : You need to create Release Paths for the multiple releases that you want to automate for
multiple environments - internal SWE envs, INT, UAT, and so on.
Build Components : The build component is used to configure the build and change any environment-specific configurations. It picks up the build from the remote machine on which VSTF auto-generates the builds as per the build pipeline and runs the configuration changes that are defined within it.
Release Templates : A release template defines the workflow that you have set up as per your specific deployment needs. It also defines the sequence in which the RM components are to be executed. You need to integrate your build pipeline from Team Foundation Server (TFS) with the release template to enable continuous delivery. You can either pick up the latest build or select a specific build.
Conclusion
In this paper, we discussed the various engineering practices we can use for enabling faster product delivery with
higher quality. We discussed:
Build Automation : Builds can be set up for triggering on a schedule or on an ad-hoc basis just by a single
click. It can vary based on the rhythm that works best for your team. Gated check-ins should be set up on top
of the build pipelines to accept only the check-ins which meet the criteria bar.
Code Analysis and For tify Integration : The build pipelines should be integrated with Code Analysis and
Fortify to trigger on a schedule and also with the Gated Check-ins. Code Analysis will improve the code
quality and Fortify will point out the security-based gaps in the application, if any.
Deployment Automation : You can integrate PowerShell scripts with your build pipelines to achieve
deployment automation. You can also use Release Management to set up custom workflows and integrate it
with your TFS to pick up the latest builds or even select builds.
We also discussed the benefits that we found by taking up these practices:
Minimal wastage of time due to automations of build, deploy phases
Higher code quality due to Gated check-ins (with integrated Test Automation), Code Analysis, and Fortify
Integration
Faster delivery
Will enable you to hit Production Ready at Code Complete (PRCC)
Will enable you to hit Continuous Integration & Continuous Delivery targets (CI/CD)
References
[1] Visual Studio team, Automate deployments with Release Management, MSDN Article
[2] Visual Studio team, Build and Deploy Continuously, MSDN Article
[3] Visual Studio team, Building a Release Pipeline with Team Foundation Server 2012, MSDN Article
(c) 2015 Microsoft Corporation. All rights reserved. This document is provided "as-is." Information and views
expressed in this document, including URL and other Internet Web site references, may change without notice. You
bear the risk of using it.
This document does not provide you with any legal rights to any intellectual property in any Microsoft product. You
may copy and use this document for your internal, reference purposes.
Explore how to progressively expose your Azure
DevOps extension releases in production to validate,
before impacting all users
11/2/2020 • 7 minutes to read • Edit Online
Azure DevOps Services | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2013
In today's fast-paced, feature-driven markets, it's important to continuously deliver value and receive feedback on
features quickly and continuously. Partnering with end users to get early versions of features vetted out is valuable.
Are you planning to build and deploy Azure DevOps extensions to production? You probably have a few questions,
such as:
How do you embrace DevOps to deliver changes and value faster?
How do you mitigate the risk of deploying to production?
How do you automate the build and deployment?
This topic aims to answer these questions and share learnings using rings with Azure DevOps extensions. For an
insight into the Microsoft guidelines, read Configuring your release pipelines for safe deployments.
Considerations
Before you convert your deployment infrastructure to a ringed deployment model, it's important to consider:
Who are your primary types of users? For example, early adopters and users.
What's your application topology?
What's the value of embracing ringed deployment model?
What's the cost to convert your current infrastructure to a ringed deployment model?
User types
In the example shown, users fall into three general buckets in production:
Canaries who voluntarily test bleeding edge features as soon as they are available.
Early adopters who voluntarily preview releases, considered more refined than the canary bits.
Users who consume the products, after passing through canaries and early adopters.
NOTE
It's important to weigh which users in your value chain are best suited for each of these buckets. Communicating the
opportunity to provide feedback, as well as the risk levels at each tier, is critical to setting expectations and ensuring success.
Application topology
Next, you need to map the topology of your application to the ringed deployment model. The goal is to limit the
impact of change on end users and to continuously deliver value. Value includes both the value delivered to the end
user and the value (return on investment) of converting your existing infrastructure.
NOTE
The ringed deployment model is not a silver bullet! Start small, prototype, and continuously compare impact, value, and cost.
At the application level, the composition of Azure DevOps extensions is innocuous, easy to digest, scale, and deploy
independently. Each extension:
Has one or more web and script files
Interfaces with Core client
Interfaces with REST client and REST APIs
Persists state in cache or resilient storage
At the infrastructure level, the extensions are published to the Visual Studio Marketplace. Once installed in an
organization, they are hosted by the Azure DevOps service portal, with state persisted to Azure storage and/or the
extension data storage.
The extension topology is perfectly suited for the ring deployment model and to publish the extension to each
deployment ring:
A private development version for the canary ring
A private preview version for the early adopter ring
A public production version for the Users ring
TIP
By publishing your extension as private, you're effectively limiting and controlling its exposure to the users you explicitly invite.
1. A developer from the Countdown Widget extension project commits a change to the GitHub repository.
2. The commit triggers a continuous integration build.
3. The new build triggers a continuous deployment trigger, which automatically starts the Canaries
environment deployment.
4. The Canaries deployment publishes a private extension to the marketplace and shares it with predefined
organizations. Only the Canaries are impacted by the change.
5. The Canaries deployment triggers the Early Adopter environment deployment. A pre-deployment
approval gate requires any one of the authorized users to approve the release.
6. The Early Adopter deployment publishes a private extension to the marketplace and shares it with
predefined organizations. Both the Canaries and Early Adopter are impacted by the change.
7. The Early Adopter deployment triggers the Users environment deployment. A stricter pre-deployment
approval gate requires all of the authorized users to approve the release.
8. The Users deployment publishes a public extension to the marketplace. At this stage, everyone who has
installed the extension in their organization is affected by the change.
It's key to realize that the impact ("blast radius") increases as your change moves through the rings. Exposing
the change to the Canaries and the Early Adopters gives you two opportunities to validate the change and
hotfix critical bugs before a release to production.
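The article does not include the publish commands themselves; as a rough sketch (not part of the original article), each ring's deployment could call the cross-platform tfx-cli, which the marketplace publishing tasks wrap. The publisher name, organization names, and PAT variable below are placeholders:

# Canaries ring: publish the private development version and share it only with the canary organization
tfx extension publish --manifest-globs vss-extension.json --publisher my-publisher --share-with canary-org --token $MARKETPLACE_PAT

# Early Adopter ring: share the private preview version with the early-adopter organization as well
tfx extension publish --manifest-globs vss-extension.json --publisher my-publisher --share-with early-adopter-org --token $MARKETPLACE_PAT

# Users ring: publish the public production version (the public flag is set in the extension manifest)
tfx extension publish --manifest-globs vss-extension.json --publisher my-publisher --token $MARKETPLACE_PAT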
NOTE
Review CI/CD Pipelines and Approvals for detailed documentation of pipelines and the approval features for releases.
TIP
Start with high-level views of your data, visual dashboards that you can watch from afar, and drill down as needed. Perform
regular housekeeping of your views and remove all noise. A visual dashboard tells a far better story than hundreds of
notification emails, often filtered and forgotten by email rules.
Using the Team Project Health and other out-of-the-box extensions, you can build an overview of your pipelines, lead and
cycle times, and other information. In the sample dashboard, it's evident that there are 34 successful builds, 21
successful releases, 1 failed release, and 2 releases in progress.
What's the value?
Using a ring deployment strategy, you can gather feedback to validate your hypothesis. You can decommission old
releases and distribute new releases without the risk of affecting all users.
Here's a summary of how the ALM | DevOps Ranger engineering process evolved with ring deployment models.
Key takeaways:
Consistent and reliable automation
Reduced response times
Canaries experience the pain, not the users
LaunchDarkly provides an extension for Azure DevOps Services & Team Foundation Server. It integrates with Azure
Pipelines and gives you "run-time" control of features deployed with your ring deployment process.
Conclusion
Now that you've covered the concepts of rings, you should be confident to explore ways to improve your CI/CD
pipelines. While the use of rings adds a level of complexity, having a game plan to address feature management
and rapid customer feedback is invaluable.
Q&A
How do you know that a change can be deployed to the next ring?
Your goal should be to have a consistent checklist for the users approving a release. See aka.ms/vsarDoD for an
example definition of done checklist.
How long do you wait before you push a change to the next ring?
There is no fixed duration or "cool off" period. It depends on how long it takes for you to complete all release
validations successfully.
How do you manage a hotfix?
The ring deployment model allows you to process a hotfix like any other change. The sooner an issue is caught, the
sooner a hotfix can be deployed, with no impact to downstream rings.
How do you deal with variables that span (shared) release environments?
Refer to Default and custom release variables.
How can you manage secrets used by the pipeline?
Refer to Azure Key Vault to safeguard cryptographic keys and other secrets used by your pipelines.
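As a brief, hedged illustration (the vault, resource group, and secret names below are placeholders), secrets can be kept in a Key Vault created with the Azure CLI and then surfaced to the pipeline, for example through a Key Vault-linked variable group or the Azure Key Vault task:

# Create a vault and store a secret that the pipeline will consume
az keyvault create --name contoso-pipeline-kv --resource-group contoso-rg --location westus2
az keyvault secret set --vault-name contoso-pipeline-kv --name DeployPassword --value "<secret-value>"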
Reference information
CI/CD pipeline examples
Configuring your release pipelines for safe deployments
DevOps @ Microsoft
NOTE
Authors: Josh Garverick, Willy Schaub | Find the origin of this article and connect with the ALM | DevOps Rangers here
(c) 2017 Microsoft Corporation. All rights reserved. This document is provided "as-is." Information and views
expressed in this document, including URL and other Internet Web site references, may change without notice. You
bear the risk of using it.
This document does not provide you with any legal rights to any intellectual property in any Microsoft product. You
may copy and use this document for your internal, reference purposes.
Explore how to progressively expose your features in
production for some or all users
11/2/2020 • 6 minutes to read
Azure DevOps Services | Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2013
In today's fast-paced, feature-driven markets, it's important to continuously deliver value and receive feedback on
features quickly and continuously. Partnering with end users to get early versions of features vetted out is valuable.
Are you planning to continuously integrate features into your application while they're under development? You
probably have a few questions, such as:
How can you toggle features to hide, disable, or enable features at run-time?
How can you revert a change deployed to production without rolling back your release?
How can you present users with variants of a feature, to determine which one performs better?
This topic aims to answer these questions and share an implementation of feature flags (FF) and A|B testing used
with Azure DevOps extensions.
Considerations
Before you introduce feature flags to your engineering process, it's important to consider:
Which users are you planning to target? For example, do you want to target specific or all users?
Would you like users to decide which features they want to use?
What's the value of embracing feature flags as part of your engineering process?
What's the cost to implement feature flags in your engineering process?
Before you flip your first feature flag in production, take the time to read:
"A Rough Patch", by Brian Harry
"Feature Flags with Branching", by LaunchDarkly
Feature flags support a customer-first DevOps mindset, to enable (expose) and disable (hide) features in a solution,
even before they are complete and ready for release.
View a feature flag as an ON | OFF switch for a specific feature. As shown, you can deploy a solution to production
that includes both an email and a print feature. If the feature flag is set (ON), you'll email, else you'll print.
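As a rough, hypothetical sketch (not from the article), such a flag can be as simple as a configuration value read at run time; the flag name and the two behaviors below are placeholders:

FEATURE_EMAIL="${FEATURE_EMAIL:-OFF}"     # flag value read from the environment; OFF by default
if [ "$FEATURE_EMAIL" = "ON" ]; then
    echo "Sending the report by email"    # flag ON: the new email feature is exposed
else
    echo "Printing the report"            # flag OFF: the existing print behavior remains
fi

In practice, the flag value would come from a feature-management service or configuration store rather than an environment variable, so it can be changed without redeploying.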
When you combine a feature flag with an experiment, led by a hypothesis, you introduce A|B testing. For example,
you could run an experiment to determine whether the email (A) or the print (B) feature results in higher user
satisfaction.
NOTE
A|B testing is also known as Split Testing. It's based on a hypothesis that's defined as:
For {user} who {action} the {solution} is a {how} that {value} unlike {competition} we {do better}
As shown, the email feature (option A) is more popular with your users and wins.
Common scenarios
You have a CI/CD pipeline for every Azure DevOps extension you're hosting on the marketplace. You are using a
ring deployment model and manual release approval checkpoints. The checkpoints are manual and time-consuming,
but necessary to minimize the chance of breaking the early-adopter and production user environments and forcing
an expensive rollback. You're looking for an engineering process that enables you to:
Continuously deploy to production
Never roll back in production
Fine-tune the user experience in production
You have probably guessed it - feature flags!
Enable or disable a feature for everyone
You would like to include hidden features in your release and enable them for all users in production. For example,
you want to be able to collect verbose logging data for troubleshooting. Using a feature flag, you can enable and
disable verbose logging as needed.
TIP
To minimize the costs associated with the use of feature flags, keep feature flags short lived and prevent multiple feature flags
from interfering with each other by affecting the same functionality.
Conclusion
Now that you've covered the concepts and considerations of feature flags, you should be confident to explore ways
to improve your CI/CD pipelines. While feature flags come at a cost, having a game plan to manage exposed
features at run-time is invaluable.
Q&A
How does the Azure DevOps team use feature flags?
Buck’s feature flags blog post and the presentation/article are great sources to get an understanding of the custom-
built feature flag system used with Team Foundation Server (TFS) and Azure DevOps Services.
How do the ALM | DevOps Rangers use feature flags?
The Rangers use the LaunchDarkly SaaS solution. You can find their learnings in this blog series.
When should you remove feature flags?
As Buck states, “Many feature flags go away and the teams themselves take care of that." The feature teams decide
when to delete the feature flags. Flags can get unwieldy after a while, so there’s a natural motivation to clean
them up.
Is there a dependency on deployment rings?
No, rings and feature flags are symbiotic. Read Feature Flags or Rings for details.
Reference information
CI/CD pipeline examples
DevOps @ Microsoft
How to implement feature flags and A|B testing
Authors: Willy Schaub | Find the origin of this article and connect with the ALM | DevOps Rangers here
(c) 2017 Microsoft Corporation. All rights reserved. This document is provided "as-is." Information and views
expressed in this document, including URL and other Internet Web site references, may change without notice. You
bear the risk of using it.
This document does not provide you with any legal rights to any intellectual property in any Microsoft product. You
may copy and use this document for your internal, reference purposes.
Get started with Azure DevOps CLI
11/2/2020 • 2 minutes to read
NOTE
The Azure DevOps Command Line Interface (CLI) is available for Azure DevOps Server 2020 and Azure DevOps
Services.
To start using the Azure DevOps extension for Azure CLI, perform the following steps:
1. Install Azure CLI: Follow the instructions provided in Install the Azure CLI to set up your Azure CLI
environment. At a minimum, your Azure CLI version must be 2.10.1. You can use az --version to
validate.
2. Add the Azure DevOps extension (see the example command after these steps).
You can use az extension list or az extension show --name azure-devops to confirm the installation.
3. Sign in: Run az login to sign in. Note that az login supports only interactive sign-in or sign-in with a
user name and password. To sign in using a Personal Access Token (PAT), see Sign in via Azure
DevOps Personal Access Token (PAT). When connecting to an on-premises server instance, signing in
with a PAT may be required to run some commands.
4. Configure defaults: We recommend you set the default configuration for your organization and
project. Otherwise, you can set these within the individual commands themselves.
If you're connecting to an Azure DevOps Server, specify the URL for your server instance. For
example:
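The example commands appear to have been dropped during conversion to this format; they are presumably the standard ones, with placeholder server, organization, and project names:

az devops configure --defaults organization=https://MyServer/DefaultCollection project=ContosoWebApp

For an Azure DevOps Services organization, point the same command at dev.azure.com instead:

az devops configure --defaults organization=https://dev.azure.com/contoso project=ContosoWebApp

And for step 2 above, the extension is typically added with:

az extension add --name azure-devops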
Command usage
Adding the Azure DevOps Extension adds devops, pipelines, artifacts, boards, and repos groups. For
usage and help content for any command, enter the -h parameter, for example:
$ az devops -h
Group
az devops : Manage Azure DevOps organization level operations.
Related Groups
az pipelines: Manage Azure Pipelines
az boards: Manage Azure Boards
az repos: Manage Azure Repos
az artifacts: Manage Azure Artifacts.
Subgroups:
admin : Manage administration operations.
extension : Manage extensions.
project : Manage team projects.
security : Manage security related operations.
service-endpoint : Manage service endpoints/service connections.
team : Manage teams.
user : Manage users.
wiki : Manage wikis.
Commands:
configure : Configure the Azure DevOps CLI or view your configuration.
feedback : Displays information on how to provide feedback to the Azure DevOps CLI team.
invoke : This command will invoke request for any DevOps area and resource. Please use
only json output as the response of this command is not fixed. Helpful docs -
https://docs.microsoft.com/rest/api/azure/devops/.
login : Set the credential (PAT) to use for a particular organization.
logout : Clear the credential for all or a particular organization.
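The command that the next sentence refers to appears to have been lost in conversion; it is presumably along these lines:

az pipelines build show --id 1 --open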
This command shows the details of build with id 1 on the command-line and also opens it in the default
browser.
Related articles
Sign in via Azure DevOps Personal Access Token (PAT)
Output formats
Command Reference
Azure DevOps CLI Extension GitHub Repo