
Remote container builds

When you use the Depot remote container build service, a given Docker image build is routed to a fast builder instance with a persistent layer cache. From there, you can download the image locally or push it to your registry.

Switching to Depot for your container builds is usually a one-line code change once you've created an account:

  1. Install the Depot CLI wherever you run your builds
  2. Run depot init in the root directory of the Docker image you want to build
  3. Swap your docker build or docker buildx build command for depot build (see the sketch below)
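
Here's what those three steps look like end to end. This is a minimal sketch: the curl one-liner is one documented install method (see the installation docs for others), and the image tag is a placeholder:

# 1. Install the Depot CLI
curl -L https://depot.dev/install-cli.sh | sh

# 2. Link this directory to a Depot project (run once per project)
depot init

# 3. Build exactly as you would with docker buildx build
depot build -t my-app:latest .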

That's it! You can now build your Docker images up to 40x faster than building them on your local machine or inside a generic CI provider. Our depot build command accepts all the same arguments as docker buildx build, so you can use it in your existing workflows without any changes.

Best of all, Depot's build infrastructure for container builds requires zero configuration on your part; everything just works, including the build cache!

Take a look at the quickstart to get started.

Key features

Build isolation & acceleration

A remote container build runs on an ephemeral EC2 instance running an optimized version of BuildKit. We launch a builder on-demand in response to your depot build command and terminate it when the build is complete. You only pay for the compute you use, and builders are never shared across Depot customers or projects.

Each image builder has 16 CPUs, 32GB of memory, and a fast NVMe SSD for layer caching. The SSD is 50GB by default but can be expanded up to 500GB.

Native Intel & Arm builds

We support native multi-platform Docker image builds for both Intel & Arm, with Intel images built on fast Xeon Scalable Ice Lake CPUs and Arm images on AWS Graviton3 CPUs. You get multi-platform images with zero emulation and without running additional infrastructure.
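
For example, a single command produces a multi-platform image, with each architecture built natively on matching hardware (the image tag is a placeholder):

depot build --platform linux/amd64,linux/arm64 -t my-app:latest .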

Persistent shared caching

We automatically persist your Docker layer cache to fast NVMe storage and make it instantly available across builds. The layer cache is shared with everyone on your team who has access to the same project, so you benefit from each other's builds.

Drop-in replacement

Using Depot for your Docker image builds is as straightforward as replacing your docker build command with depot build. We support all the same flags and options as docker build. If you're using GitHub Actions, we also have our own version of the build-push-action and bake-action that you can use as a drop-in replacement.
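
In practice, the swap is a one-line change; the tag and flags below are illustrative, and any flags you already pass to docker buildx build carry over unchanged:

# Before
docker buildx build -t my-app:latest --push .

# After
depot build -t my-app:latest --push .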

Integrate with any CI provider

We have extensive integrations with most major CI providers and developer tools to make it easy to use Depot remote container builds in your existing workflows. You can read more about how to leverage our remote container build service in our continuous integration guides for your CI provider of choice.

OIDC support

We support OIDC trust relationships with GitHub, CircleCI, Buildkite, and Mint so that you don't need any static access tokens in your CI provider to leverage Depot. You can learn more about configuring a trust relationship in our authentication docs.

Integrate with your existing dev tools

We can accelerate your image builds for other developer tools like Dev Containers & Docker Compose. You can either use our drop-in replacements for docker build and docker buildx bake, or configure Docker to use Depot as the remote builder.
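
For example, one approach is to point your local Docker at Depot and keep your existing tooling unchanged. A minimal sketch, assuming the depot configure-docker command:

# Configure the local Docker CLI to use Depot as its builder
depot configure-docker

# Existing tools that drive Docker builds now run on Depot builders
docker compose build
docker buildx build -t my-app:latest .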

Build autoscaling

We offer autoscaling for our remote container builds. By default, all builds for a project are routed to a single BuildKit host per architecture. With build autoscaling, you can configure the maximum number of concurrent builds to run on a single host before another host is launched with a copy of your layer cache. This helps you parallelize builds across multiple hosts and reduce build times even further by giving each build dedicated resources.

Ephemeral registry

We offer a built-in ephemeral registry that you can use to save the images from your depot build and depot bake commands to a temporary registry. You can then pull those images back down or push them to your final registry as you see fit.
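
A minimal sketch of that flow, assuming the --save flag and the depot pull and depot push subcommands (the build ID and registry tag are placeholders):

# Save the build result to the ephemeral registry
depot build --save -t my-app:latest .

# Later, pull the saved image into your local Docker daemon...
depot pull <build-id>

# ...or push it on to your final registry
depot push <build-id> -t registry.example.com/my-app:latest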

Learn more about the ephemeral registry

How does it work?

Container builds must be associated with a project in your organization. A project usually represents a single application, repository, or Dockerfile. Once you've created your project, you can leverage Depot builders from your local machine or an existing CI workflow by swapping docker for depot.
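
Running depot init links a directory to a project by writing a depot.json file; you can also select the project per invocation. A sketch (the project ID is a placeholder):

# Link the current directory to a project (writes depot.json)
depot init

# Or pass the project explicitly on each build
depot build --project <project-id> -t my-app:latest .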

Builder instances come with 16 CPUs, 32GB of memory, and an SSD disk for layer caching (the default size is 50GB, but you can expand this up to 500GB). A builder instance runs an optimized version of BuildKit, the advanced build engine that backs Docker.

We offer native Intel and Arm builder instances for all projects, so both architectures build with zero emulation and you don't have to run your own build runners to get native multi-platform images.

Once built, the image can be left in the build cache (the default), downloaded to the local Docker daemon with --load, or pushed to a registry with --push. If --push is specified, the image is pushed to the registry directly from the remote builder over high-speed network links, without using your local network connection. See our private registry guide for more details on pushing to private Docker registries like Amazon ECR or Docker Hub.
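
Concretely, the three output modes look like this (image names are placeholders):

# Leave the result in the remote build cache (the default)
depot build -t my-app:latest .

# Download the built image into the local Docker daemon
depot build -t my-app:latest --load .

# Push directly from the remote builder to your registry
depot build -t registry.example.com/my-app:latest --push .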

You can generally plug Depot into your existing Docker image build workflows with minimal changes, whether you're building locally or in CI.

Architecture

[Diagram: Depot remote container build architecture]

The general architecture for Depot remote container builds consists of our depot CLI, a control plane, an open-source cloud-agent, and builder virtual machines running our open-source machine-agent and BuildKit with associated cache volumes. This design provides faster Docker image builds with as little configuration change as possible.

The flow of a given Docker image build when using Depot looks like this:

  1. The Depot CLI asks the Depot API for a new builder machine connection (with organization ID, project ID, and the required architecture), then polls the API until a machine is ready
  2. The Depot API stores that pending request for a builder
  3. A cloud-agent process periodically reports the current status to the Depot API and asks for any pending infrastructure changes
    • For a pending build, it receives a description of the machine to start and launches it
  4. When the machine launches, a machine-agent process running inside the VM registers itself with the Depot API and receives the instruction to launch BuildKit with specific mTLS certificates provisioned for the build
  5. After the machine-agent reports that BuildKit is running, the Depot API returns a successful response to the Depot CLI, along with new mTLS certificates to secure and authenticate the build connection
  6. The Depot CLI uses the new mTLS certificates to directly connect to the builder instance, using that machine and cache volume for the build

The same architecture is used for self-hosted builders, the only difference being where the cloud-agent and builder virtual machines launch.

Local commands

If you're running build or bake commands locally, you can swap them for the equivalent depot commands:

depot build -t my-image:latest --platform linux/amd64,linux/arm64 .
depot bake -f docker-bake.hcl

CI integrations

We have built several integrations to make it easy to plug Depot into your existing CI workflows. For example, we have drop-in replacements for GitHub Actions like docker/build-push-action and docker/bake-action:

- uses: docker/build-push-action
+ uses: depot/build-push-action
- uses: docker/bake-action
+ uses: depot/bake-action

You can read more about how to leverage our remote container build service in our continuous integration guides for your CI provider of choice.

Common opportunities to use Depot remote container builds

We built Depot based on our experience with Docker as both application and platform engineers, primarily as the tool we wanted to use ourselves — a fast container builder service that supported all Dockerfile features without additional configuration or maintenance.

Depot works best in the following scenarios:

  1. Building the Docker image is slow in CI — common CI providers often lack native support for the Docker build cache. Instead, they require the layer cache to be saved to and loaded from tarballs over slow networks. CI providers also often provide limited compute resources, making overall build times long.

    Depot works within your existing CI workflow by swapping out the call to docker build with depot build, or by configuring docker in your environment to leverage Depot. See our continuous integration guides for more information.

  2. You need to build images for multiple platforms/architectures (Intel and Arm) — today, you're often stuck managing your own build runner or relying on slow emulation to build multi-platform images. CI providers usually run their workflows on Intel machines, so to create a Docker image for Arm, you either have to launch your own Arm BuildKit builder and connect to it from your CI provider, or build your Arm image with slow QEMU emulation.

    Depot can build multi-platform and Arm images natively, with zero emulation and without running additional infrastructure.

  3. Building the Docker image on your local machine is slow or expensive — Docker can hog resources on developer machines, consuming valuable network, CPU, and memory. Depot executes builds on remote compute infrastructure, offloading the CPU, memory, disk, and network resources the build requires to the remote builder. If builds on your local machine are slow due to constrained compute, disk, or network, depot build removes that dependency on your local environment.

    Additionally, since the project build cache is available remotely, multiple people can send builds to the same project and benefit from the same cache. If a coworker has already built the same image, your depot build command reuses the previous result. This is especially useful for very slow builds; for example, when reviewing a coworker's branch, you can pull their Docker image from the cache without an expensive rebuild.