
Welcome to Depot!

Depot is a hosted Docker build service — container builds are sent to a fast builder instance, with a persistent cache. The resulting image can then be downloaded locally or pushed to a registry. Adopting Depot is easy, as the Depot CLI depot build command accepts the same arguments as docker buildx build.

Best of all, Depot's build infrastructure requires zero configuration on your part: everything just works, including the build cache! You can think of Depot as a specialized CI service, focused on Docker containers.

Check out the quickstart to get started.

How does it work?

First, you will create a project, underneath an organization. Projects usually represent a single application, repository, or Dockerfile. Once you've created your project, you can use the depot build command, either from your local machine or from an existing CI workflow, to execute the container build remotely using your project's builder instance.

Builder instances are equipped with 4 CPUs, 8GB of memory, and 50GB of SSD disk. They run the latest version of BuildKit, the advanced build engine that backs Docker. Our CLI connects remotely to that BuildKit instance to execute the build.

We offer both Intel and Arm builder instances for all projects, so both architectures build without slow emulation.

Once built, the image can be left in the build cache (the default), or alternatively can be downloaded to the local Docker daemon with --load, or pushed to a registry with --push. If --push is specified, the image is pushed to the registry directly from the remote builder via high-speed network links and does not use your local network connection. Example:

$ cd path/to/project
$ depot build -t repo/project:tag . --push # build and push to registry
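To iterate locally instead of pushing, the same build can be pulled into your local Docker daemon with --load (the image name here is illustrative):

```shell
# Build on the remote builder, then download the resulting image
# into the local Docker daemon so it can be run with `docker run`.
depot build -t repo/project:tag . --load
docker run --rm repo/project:tag
```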

See the core concepts page for more information.

When to use Depot?

We built Depot based on our experience with Docker as both application and platform engineers, primarily as the tool we wanted to use ourselves — a fast container builder service that supported all Dockerfile features without any configuration or maintenance.

Depot works best in one of the following scenarios:

  1. Building the Docker image is slow in CI — common CI providers often do not have native support for the Docker build cache, instead requiring the cache to be saved to and loaded from tarballs, which can be quite slow. CI providers also often offer limited compute resources, further lengthening overall build time.

    For an example of what "slow" could mean: across many of our projects, we have seen a 2-3x build-time speedup by switching to Depot. Docker builds that use optimized Dockerfiles are often able to achieve even greater speedups; some projects have reduced their build time from 12 minutes to only 1 minute. 🚀

    You do not have to switch to a different CI provider; Depot works within your existing workflow by swapping out the call to docker build with depot build. See our guides for more information.
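As a sketch of what that swap can look like in an existing CI script (the registry URL and the $CI_SHA variable are illustrative, not part of any particular CI provider):

```shell
# Before: building and pushing on the CI runner itself.
docker buildx build -t registry.example.com/app:"$CI_SHA" --push .

# After: same arguments, but the build executes on Depot's
# remote builder with its persistent cache.
depot build -t registry.example.com/app:"$CI_SHA" --push .
```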

  2. You need to build images for multiple platforms (Intel and Arm) — Depot's Intel and Arm builder instances can build both CPU architectures natively, without any slow emulation. This is especially valuable if you need to build Docker images for a platform other than your current host, for instance if you are on an M1 Mac and need to build an Intel image, or if you need to build an Arm image from your CI provider that only offers Intel runners.

    Depot can build multi-architecture images in a single pass, so if you need to build and push a multi-architecture image to your registry to be used by both CPU architectures, Depot can do this.
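Since depot build accepts the same arguments as docker buildx build, a single-pass multi-architecture build and push might look like this (the image name is illustrative):

```shell
# Build both architectures natively, without emulation, and push
# a single multi-architecture manifest to the registry.
depot build \
  --platform linux/amd64,linux/arm64 \
  -t repo/project:tag . --push
```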

  3. Building the Docker image on your local machine is slow or expensive — since Depot executes builds on remote compute infrastructure, it offloads the CPU, memory, disk, and network resources required to that remote builder. If builds on your local machine are slow due to constrained compute, disk, or network, depot build eliminates the need to rely on your local environment. This also applies to CPU architecture: if you need to build a Docker image without CPU emulation, offloading the build to Depot vastly speeds it up.

    Additionally, since the project build cache is available remotely, multiple people can send builds to the same project and benefit from the same cache. If your coworker has already built the same image, your depot build command will re-use the previous result. This is especially useful for very slow builds, or when reviewing a coworker's branch: you can pull their Docker image from the cache without an expensive rebuild.
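For example, after a teammate has already built a branch, running the same build locally hits the shared project cache (the branch and tag names here are illustrative):

```shell
# The layers were already built by a coworker, so this build
# resolves from the project's shared remote cache instead of
# rebuilding, then downloads the image for local review.
git checkout coworker-branch
depot build -t repo/project:review . --load
```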

When not to use Depot?

Depot is not the best option in a few scenarios:

  1. You need to self-host your CI infrastructure — Depot is only offered as a hosted service, where all configuration and maintenance of a fleet of build servers is managed on your behalf. If you have a requirement to self-host your own CI infrastructure, you should not use Depot.

  2. You want to repeatedly build a fast image locally — If you plan to docker run an image built with Depot, the resulting image must be transferred over the network from Depot's remote builder to your local machine. As such, if you have a Dockerfile that builds in just a few seconds, and you plan to docker build && docker run in a fast loop, the time to download the image over the network may be slower than just running docker build.

    If so, Depot may still be useful to you in a few other cases. In CI, where you don't have access to things like persistent base image caches, Depot may provide build speed similar to your local machine. And if after your core development loop you need to build the Docker image for a new platform (i.e. for deployment to an Arm environment like AWS Graviton or Azure Ampere instances), you can use Depot to build the image with similar speed, skipping slow emulation.