Enable and use Runtime v3
Bitbucket Pipelines Runtime v3 provides greater flexibility when using the Docker service. Thanks to recent security improvements, we’ve been able to relax some of the previous Docker restrictions.
What does the v3 runtime do?
Allows for more direct access to the Docker API within Pipelines, including the usage of privileged containers. Go to the Privileged containers section.
Enables customers to access docker buildx, significantly improving efficiency when building multi-architecture images. Go to the Docker Buildx section.
Allows for direct control of the docker CLI, letting customers specify the exact version of the CLI they need for their builds rather than relying on the version provided by Pipelines. Go to the BYO Docker CLI section.
Allows customers to bring their own Docker-in-Docker (DIND) images, granting them more control over the build environment. Go to the BYO DIND section.
Enable Runtime v3
To enable Runtime v3, you need to opt into the feature by specifying the runtime version in your pipeline configuration. Add the following to your bitbucket-pipelines.yml file:
options:
  runtime:
    cloud:
      version: 3
To enable it for a specific step, add the following to your bitbucket-pipelines.yml file:
- step:
    runtime:
      cloud:
        version: 3
Below are the key changes introduced in Runtime v3 with further detailed information and examples of how to use them to your benefit.
Limitations
Docker CLI no longer being mounted
As mentioned earlier, the Docker CLI is no longer automatically mounted in the build container. Therefore, to use Docker in Runtime v3, make sure the CLI is either included in the image or installed during the build step.
Docker cache deprecation
The Bitbucket Pipelines Docker cache feature has been deprecated. If a step tries to use it, the cache will be ignored and a warning message will be displayed.
We recommend using Buildx caching instead (see the Build Caching with Buildx section below).
Privileged containers
It is possible to run more advanced container workflows by leveraging privileged containers with Docker.
You are able to use docker run with any argument, including the --privileged flag. In earlier runtime versions, this was restricted to prevent potential security risks within shared environments. However, recent changes improved container isolation and hardened sandboxing, which now allows safe use of privileged mode in your build steps.
Why This Matters
Using --privileged unlocks several advanced Docker capabilities such as:
Running Docker-in-Docker (DinD) without limitations
Accessing low-level system operations within the container
Using software that requires full capabilities or mounts, like loopback devices or custom kernel modules
privileged-docker-run:
  - step:
      runtime:
        cloud:
          version: 3
      image: "<my-image-with-docker-cli>"
      services:
        - docker
      script:
        - docker run --privileged alpine:3 sh -c "echo 20 > /proc/sys/vm/swappiness && cat /proc/sys/vm/swappiness"
While privileged mode is now supported, it should still be used with caution. Only use it when strictly necessary and ensure that the image you’re running is trusted.
By enabling --privileged, Runtime v3 provides users with more flexibility to match local development environments or run complex tools that were not previously supported in Pipelines.
Docker Buildx
With Runtime v3, Bitbucket Pipelines fully supports Docker Buildx - a powerful extension for building multi-platform images, advanced caching, and fine-tuned Docker builds. Buildx unlocks faster and more flexible builds, which allows you to take full advantage of Docker BuildKit features.
Enabling Buildx in Your Pipeline
To use Buildx, first ensure your pipeline image has the Docker CLI and the Buildx plugin installed. Once enabled, you can run advanced docker buildx build commands directly in your steps.
Installing Docker CLI and Buildx
There are two approaches to having the Docker CLI present in your build container:
One option is to install it as part of the script section.
However, the more efficient approach is to install it as part of your build image.
In the example below, the Dockerfile copies the Docker CLI along with other useful dependencies, such as docker-compose and the docker-buildx plugin.
FROM docker:28.1.1-cli AS docker-cli
FROM ubuntu:24.04
# Copy Docker CLI, Buildx plugin and docker-compose
COPY --from=docker-cli /usr/local/bin/docker /usr/local/bin/docker
COPY --from=docker-cli /usr/local/libexec/docker/cli-plugins/docker-buildx /usr/local/libexec/docker/cli-plugins/docker-buildx
COPY --from=docker-cli /usr/local/bin/docker-compose /usr/local/bin/docker-compose
# etc..
Multi-arch Build
You can now leverage Docker Buildx to build multi-platform Docker images directly within your pipeline. Docker Buildx is an extension of Docker's build capabilities that allows you to build images for multiple architectures (e.g., x86_64, arm64, armv7) from a single pipeline run, making it a powerful tool for cross-platform development.
Docker Buildx also provides enhanced caching features, better performance, and the ability to work with more advanced build options, such as building images with BuildKit.
options:
  runtime:
    cloud:
      version: "3"
pipelines:
  custom:
    multi-arch-build:
      - step:
          image: "<my-image-with-docker-cli-and-buildx>"
          services:
            - docker
          script:
            - docker buildx create --name multiarch-builder --driver docker-container --use
            - docker buildx inspect --bootstrap
            - docker run --rm --privileged tonistiigi/binfmt --install all
            - docker buildx build -t test:local --platform=linux/amd64,linux/arm64 .
Build Caching with Buildx
Buildx also enables advanced caching strategies to speed up your pipelines, reduce redundant computation, and improve cross-step performance. Bitbucket Pipelines supports the following caching types:
Remote S3 Cache
Use S3 buckets to persist cache layers across builds, Runners, or even repositories.
options:
  runtime:
    cloud:
      version: "3"
pipelines:
  custom:
    remote-cache:
      - step:
          image: "<my-image-with-docker-cli-and-buildx>"
          services:
            - docker
          name: "upload cache"
          oidc: true
          script:
            - export AWS_DEFAULT_REGION="us-east-1"
            - export AWS_ROLE_ARN="arn:aws:iam::123456789012:role/pipeline-role"
            - export AWS_WEB_IDENTITY_TOKEN_FILE="$(pwd)/web-identity-token"
            - echo $BITBUCKET_STEP_OIDC_TOKEN > $(pwd)/web-identity-token
            - docker buildx create --name dc --driver docker-container --use
            - docker buildx build -t test:local --builder=dc --load --platform=linux/amd64 --cache-to type=s3,region=us-west-2,bucket=my-s3-bucket,name=${BITBUCKET_BUILD_NUMBER} .
            - docker run test:local
      - step:
          image: "<my-image-with-docker-cli-and-buildx>"
          services:
            - docker
          name: "use cache on build"
          oidc: true
          script:
            - export AWS_DEFAULT_REGION="us-east-1"
            - export AWS_ROLE_ARN="arn:aws:iam::123456789012:role/pipeline-role"
            - export AWS_WEB_IDENTITY_TOKEN_FILE="$(pwd)/web-identity-token"
            - echo $BITBUCKET_STEP_OIDC_TOKEN > $(pwd)/web-identity-token
            - docker buildx create --name dc --driver docker-container --use
            - docker buildx build -t test:local --builder=dc --load --platform=linux/amd64 --cache-from type=s3,region=us-west-2,bucket=my-s3-bucket,name=${BITBUCKET_BUILD_NUMBER} .
            - docker run test:local
Your build environment must have AWS credentials with permission to access the S3 bucket.
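For reference, here is a minimal sketch of an IAM policy for the role assumed via OIDC in the example above. The bucket name my-s3-bucket matches that example, and the action list is illustrative only; trim it to your own least-privilege requirements.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BuildxS3Cache",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-s3-bucket",
        "arn:aws:s3:::my-s3-bucket/*"
      ]
    }
  ]
}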
Local Cache
Local cache stores build layers in a directory that can be shared between pipeline steps (on the same Runner).
options:
  runtime:
    cloud:
      version: "3"
pipelines:
  custom:
    local-cache:
      - step:
          image: "<my-image-with-docker-cli-and-buildx>"
          services:
            - docker
          script:
            - docker buildx create --use
            - docker buildx build --tag my-image:latest --cache-to=type=local,dest=/tmp/.buildx-cache --cache-from=type=local,src=/tmp/.buildx-cache .
Use the artifacts section in your pipeline if you want to persist this cache directory across steps.
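For example, here is a minimal sketch of sharing the local cache between two steps via artifacts. It assumes the cache directory is written inside the clone directory, since artifact paths are relative to the build root; the image placeholder is the same one used above.
options:
  runtime:
    cloud:
      version: "3"
pipelines:
  custom:
    local-cache-across-steps:
      - step:
          name: "warm cache"
          image: "<my-image-with-docker-cli-and-buildx>"
          services:
            - docker
          script:
            - docker buildx create --use
            # Write the cache into a directory that can be declared as an artifact
            - docker buildx build --tag my-image:latest --cache-to=type=local,dest=.buildx-cache .
          artifacts:
            - .buildx-cache/**
      - step:
          name: "reuse cache"
          image: "<my-image-with-docker-cli-and-buildx>"
          services:
            - docker
          script:
            - docker buildx create --use
            # Reuse the cache directory restored from the previous step's artifacts
            - docker buildx build --tag my-image:latest --cache-from=type=local,src=.buildx-cache .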
Avoid rate limits
To avoid hitting rate limits during Buildx builds, a common approach is to log in to your Docker registry.
options:
  runtime:
    cloud:
      version: "3"
pipelines:
  custom:
    docker-hub-login:
      - step:
          script:
            - echo "$DOCKERHUB_PASSWORD" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
            - docker buildx create --name mybuilder --driver docker-container --use
            - docker buildx build -t myimage:latest .
Another way is to configure a custom Docker registry mirror. Since our servers run in AWS us-east-1, we recommend setting up the mirror in the same region or nearby to reduce latency and ensure optimal performance.
options:
  runtime:
    cloud:
      version: "3"
pipelines:
  custom:
    custom-registry-mirror:
      - step:
          script:
            - |
              cat <<EOF > buildkitd.toml
              debug = true
              [registry."docker.io"]
              mirrors = ["<your-registry-mirror>"]
              EOF
            - docker buildx create --config ./buildkitd.toml --name mybuilder --driver docker-container --use
            - docker buildx build -t myimage:latest .
            - docker logs buildx_buildkit_mybuilder0
The docker logs command at the end is included only to verify that the mirror configuration was applied.
Bring your own (BYO) Docker CLI
In previous runtimes, the Docker CLI with a pinned version was automatically mounted in the build container. This is no longer the case. Instead, to use Docker in Runtime v3, ensure the CLI is either included in the image or installed during the build step. This gives more flexibility for those who want to use new Docker features.
Installing Docker CLI
There are two approaches to having the Docker CLI present in your build container:
One option is to install it as part of the script section (a sketch of this approach follows the Dockerfile example below).
However, the more efficient approach is to install it as part of your build image.
In the example below, the Dockerfile copies the Docker CLI along with other useful dependencies, such as docker-compose and the docker-buildx plugin.
FROM docker:28.1.1-cli AS docker-cli
FROM ubuntu:24.04
# Copy Docker CLI, Buildx plugin and docker-compose
COPY --from=docker-cli /usr/local/bin/docker /usr/local/bin/docker
COPY --from=docker-cli /usr/local/libexec/docker/cli-plugins/docker-buildx /usr/local/libexec/docker/cli-plugins/docker-buildx
COPY --from=docker-cli /usr/local/bin/docker-compose /usr/local/bin/docker-compose
# etc..
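If you prefer the script-based approach instead, here is a minimal sketch of a step that installs the CLI at build time. It assumes an Alpine-based build image where the docker-cli and docker-cli-buildx packages are available; adjust the commands to your image's package manager.
pipelines:
  custom:
    script-installed-docker-cli:
      - step:
          runtime:
            cloud:
              version: 3
          image: alpine:3.20
          services:
            - docker
          script:
            # Install the Docker CLI and the Buildx plugin during the step
            - apk add --no-cache docker-cli docker-cli-buildx
            - docker version
            - docker buildx version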
New Default Image
The new atlassian/default-image:5 includes the Docker CLI. However, we strongly recommend using your own custom image, as we cannot guarantee that this image will remain unchanged in the future.
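For illustration, here is a minimal sketch of a step that relies on the CLI bundled with the default image (the pipeline name is hypothetical; verify that the bundled CLI version meets your needs):
pipelines:
  custom:
    default-image-docker:
      - step:
          runtime:
            cloud:
              version: 3
          image: atlassian/default-image:5
          services:
            - docker
          script:
            # Uses the Docker CLI shipped with atlassian/default-image:5
            - docker version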
BYO Docker-in-Docker (DIND)
Runtime v3 supports bringing your own (BYO) Docker-in-Docker (DinD) image. This feature provides more flexibility by allowing you to specify a custom Docker image that fits your specific needs. You can either use the official Docker DinD image or a private image hosted on a container registry, such as AWS ECR, or another registry of your choice.
Using the Official Docker DinD Image
In the following example, we use the official docker:28-dind image for the Docker service. This is the simplest way to set up DinD within your pipeline.
definitions:
  services:
    docker:
      image:
        name: docker:28-dind
      type: docker
Using a Custom Docker Image from AWS ECR
In this example, you can specify a custom Docker image hosted on AWS ECR, along with the appropriate AWS IAM role for OIDC-based authentication.
definitions:
  services:
    docker:
      image:
        name: 0123456789012.dkr.ecr.us-west-2.amazonaws.com/my-image-from-ecr:latest
        aws:
          oidc-role: arn:aws:iam::0123456789012:role/my-pipeline-role
      type: docker
Default DinD Image
In previous runtime environments, for security reasons, Pipelines used our private DinD image, which could become outdated. Now, if the image is not specified, it defaults to docker:dind. However, we recommend always pinning the Docker image when using the Docker service. The BYO Docker-in-Docker (DIND) section above describes how to do this.
Default DinD Args
In previous runtime versions, DinD was executed using the following arguments:
--authorization-plugin=pipelines \
--storage-driver=overlay2 \
--registry-mirror http://${DOCKER_REGISTRY_MIRROR_HOST}:5000 \
--insecure-registry ${DOCKER_REGISTRY_MIRROR_HOST}:5000 \
--userns-remap=default \
--tls=false \
--log-level info
Now with Runtime v3, the following arguments are used. In the example below, the DOCKER_REGISTRY_MIRROR_HOST environment variable points to our internal Docker registry mirror IP.
--storage-driver=overlay2
--registry-mirror http://${DOCKER_REGISTRY_MIRROR_HOST}:5000
--insecure-registry ${DOCKER_REGISTRY_MIRROR_HOST}:5000
--tls=false
--log-level info
At this time, these arguments cannot be overridden.
User Namespace Remapping Disabled
User Namespace Remapping (i.e., --userns-remap) is disabled by default for the Docker service, which improves compatibility with build processes that require consistent user IDs or write access within containers.
Why this matters:
This change simplifies volume permissions and avoids issues when mounting volumes or running useradd in containers.
You no longer need to worry about UID/GID mapping inside DinD builds.
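As an illustration, here is a minimal sketch of a step that writes to a mounted volume and checks the resulting ownership. It assumes the mounted path sits inside the clone directory (the supported location for volume mounts with the Docker service); without user namespace remapping, the file is owned by the UID the container actually runs as rather than a remapped one.
- step:
    runtime:
      cloud:
        version: 3
    image: "<my-image-with-docker-cli>"
    services:
      - docker
    script:
      # The container writes its UID into the mounted clone directory;
      # with --userns-remap disabled, the ownership seen here is consistent.
      - docker run --rm -v "$BITBUCKET_CLONE_DIR":/workspace alpine:3 sh -c "id -u > /workspace/builder-uid"
      - ls -l builder-uid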
Write Access to DinD Root Filesystem
Unlike previous versions, Runtime v3 allows write access to the root filesystem of the DinD container.
What this enables:
You can install packages or modify files directly inside the DinD container.
Makes it easier to customize the DinD image on the fly during builds.