Changelog

We've released a new Depot Cache Explorer page that allows you to view cache entries of all kinds in one place. Available in the sidebar, the Cache Explorer replaces the Docker- and GitHub-specific pages and offers capabilities to help you stay in control of your Depot Cache. Highlights include:

  • Filter cache entries by their type (GitHub vs Docker), architecture (x86 vs arm64), or name
  • Bulk delete all entries matching current filter criteria, or specific entries by checkbox selection
  • Expand Docker cache entries to view your layer cache in greater detail
  • View your average storage usage for the past 30 days

Depot Cache Explorer

The build.platforms key has been part of the Compose spec since 2022 but has gone unimplemented in upstream buildx.

The build.platforms key allows you to specify the platforms you want to build for in your compose file for a defined service. This is useful when you want to build a multi-platform image for a service defined in your Compose file:

services:
  backend:
    build:
      context: .
      platforms:
        - 'linux/amd64'
        - 'linux/arm64'

However, running docker buildx bake -f <your-compose-file> fails to build the image for the specified platforms, because docker buildx bake does not support the build.platforms key in the Compose file.

We've upgraded our depot bake command to fix this annoyance and fully support build.platforms in a Compose file.
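
For example, pointing depot bake at the Compose file above now builds both platforms in a single command:

depot bake -f docker-compose.yml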

Project build health in Depot

We've added a nice UX improvement to the Project build page. You can now see the general health and performance of your Docker image builds for that project at a high level. We surface build durations, successes, failures, and the average build time over the last 30 builds.

We also surface information about your cache and average hit rates over the last 30 builds.

The latest release of the depot CLI, v2.75.0, includes a number of enhancements and bug fixes:

  • Add support for shm-size to the depot bake command (see the sketch after this list)
  • Cleanup of temporary certificates left behind after a build
  • Update support for compose to correctly generate a project name if not specified
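
For the shm-size item above, a minimal sketch of one way to set it, via the shm-size attribute on a bake target (the target name and 128m value are placeholders):

target "app" {
  dockerfile = "Dockerfile"
  shm-size   = "128m"
}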

We've upgraded our GitHub Actions Cache UI to offer organization owners better control over their cache entries. In addition to viewing all entries currently in your cache, you can now filter by name, delete all entries matching that filter, or select multiple specific entries for bulk deletion.

GitHub Cache bulk delete

We've made your usage history for the past year available on your Organization Settings page. Click any invoice date to download a detailed report of your organization's usage for the corresponding billing period.

The report contains detailed usage information for your container builds, broken down by project. It also includes detailed usage information for your GitHub Actions, broken down by repository, workflow, and runner. Finally, we also include a summary of your total storage usage for the period.

Detailed Usage Reports

You can now cap the monthly minutes your team can use to run GitHub Actions Workflows! Usage caps are a great way to ensure you stay within budget and help you plan for the future.

GitHub Actions usage caps in Depot

You can configure a usage cap for your team in the Current usage section of your Organization Settings.

With the v2.72.0 release of the depot CLI, the Compose v2 spec is now supported, including depot bake of Compose files with additional_contexts:

services:
  app:
    build:
      context: .
      additional_contexts:
        base: ../base

$ depot bake -f ./docker-compose.yml

Purge GitHub Actions cache inside of Depot

We've made clearing all your GitHub Action cache entries easier with a single button click. On the Cache tab of your GitHub Actions dashboard in Depot, you will now see all your cache entries, total cache size, and the option to purge all of your cache entries at the top via a trash icon button.

We've heard your feedback and have launched a streamlined onboarding experience for new users! Our upgraded account provisioning system now automates some previously manual steps, and we've added a simple landing page to help you orient yourself within the app.

Fewer forms to fill out means you can start building with Depot faster than ever.

Screenshot showing the new Landing Page

Ubuntu 24.04 GitHub Actions runners are now available in beta, using the beta runner image definition from GitHub. These runners use the same instance types as the existing Ubuntu 22.04 runners.

Intel Ubuntu 24.04 runners

Label | CPUs | Memory | Disk size | Minute multiple | Per-minute price
depot-ubuntu-24.04-small | 2 | 2 GB | 100 GB | 0.5x | $0.002
depot-ubuntu-24.04 | 2 | 8 GB | 100 GB | 1x | $0.004
depot-ubuntu-24.04-4 | 4 | 16 GB | 150 GB | 2x | $0.008
depot-ubuntu-24.04-8 | 8 | 32 GB | 300 GB | 4x | $0.016
depot-ubuntu-24.04-16 | 16 | 64 GB | 600 GB | 8x | $0.032
depot-ubuntu-24.04-32 | 32 | 128 GB | 1200 GB | 16x | $0.064
depot-ubuntu-24.04-64 | 64 | 256 GB | 2400 GB | 32x | $0.128

Arm Ubuntu 24.04 runners

Label | CPUs | Memory | Disk size | Minute multiple | Per-minute price
depot-ubuntu-24.04-arm-small | 2 | 2 GB | 100 GB | 0.5x | $0.002
depot-ubuntu-24.04-arm | 2 | 8 GB | 100 GB | 1x | $0.004
depot-ubuntu-24.04-arm-4 | 4 | 16 GB | 150 GB | 2x | $0.008
depot-ubuntu-24.04-arm-8 | 8 | 32 GB | 300 GB | 4x | $0.016
depot-ubuntu-24.04-arm-16 | 16 | 64 GB | 600 GB | 8x | $0.032
depot-ubuntu-24.04-arm-32 | 32 | 128 GB | 1200 GB | 16x | $0.064
depot-ubuntu-24.04-arm-64 | 64 | 256 GB | 2400 GB | 32x | $0.128
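
To run a job on one of these runners, point runs-on at the matching label from the tables above, for example:

jobs:
  build:
    runs-on: depot-ubuntu-24.04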

We've rolled out a lot of features and bug fixes to the depot CLI that are now all rolled into v2.70.0. Here's a quick rundown of the changes:

  • Enhancements to --load for the fast load path to work with specified targets when using the bake command
  • Ability to leverage the Ephemeral Registry when using depot bake
  • Upgraded buildx imagetools to pick up additional fixes
  • Default to doing a pull/load of all targets when using bake with --load
  • Added support for depot pull-token to generate a temporary pull token to pull images from the Depot Ephemeral Registry
  • Bug fix for the BuildKit proxy with buildx 0.13
  • Added native support for GitHub Actions OIDC so that the depot CLI can exchange this token type outside of our depot/* actions
  • Added support for an experimental depot exec command to invoke a command in the remote BuildKit instance
  • Bug fix for resolving the GitHub Cache endpoint when running inside of Depot GitHub Actions Runners
  • Bug fix for OCI mediatypes when pushing images from Depot to Heroku's registry
  • Added additional logging of Git provenance information for better visibility
  • Added support for client-side server name verification for mTLS verification outside of EC2 instances
  • Bug fix for image.name in the metadata file when one is requested
  • Bug fix to correctly report --load progress back to the UI
  • Added support for multiple projects with depot bake with a new Docker Compose extension
  • Bug fix to only close the progress channel after all error retries are exhausted
  • Added support for remote cancellation of builds

We're excited to release a new specialized GitHub Actions runner optimized for I/O-bound workflows. This new runner is designed to handle workflows that are bottlenecked by disk I/O, such as those involving large file transfers, database operations, or other disk-intensive tasks.

The new I/O-optimized runner comes with fast local NVMe SSDs for higher IOPS and disk throughput than the traditional runners. They are configured with a local SSD as the write cache, and the reads are distributed between the EBS root volume and the local SSD.

They are available now in beta via the new -io label suffix for both Intel & Arm:

Intel I/O-optimized runners

Label | CPUs | Memory | Disk size | IOPS (read/write) | Minute multiple | Per-minute price
depot-ubuntu-22.04-4-io | 4 | 16 GB | 237 GB | 67,083 / 33,542 | 2x | $0.008
depot-ubuntu-22.04-8-io | 8 | 32 GB | 474 GB | 134,167 / 67,084 | 4x | $0.016
depot-ubuntu-22.04-16-io | 16 | 64 GB | 950 GB | 268,333 / 134,167 | 8x | $0.032
depot-ubuntu-22.04-32-io | 32 | 128 GB | 1900 GB | 536,666 / 268,334 | 16x | $0.064
depot-ubuntu-22.04-64-io | 64 | 256 GB | 1425 GB | 536,666 / 268,334 | 32x | $0.128

Arm I/O-optimized runners

Label | CPUs | Memory | Disk size | IOPS (read/write) | Minute multiple | Per-minute price
depot-ubuntu-22.04-arm-4-io | 4 | 16 GB | 237 GB | 67,083 / 33,542 | 2x | $0.008
depot-ubuntu-22.04-arm-8-io | 8 | 32 GB | 474 GB | 134,167 / 67,084 | 4x | $0.016
depot-ubuntu-22.04-arm-16-io | 16 | 64 GB | 950 GB | 268,333 / 134,167 | 8x | $0.032
depot-ubuntu-22.04-arm-32-io | 32 | 128 GB | 1900 GB | 536,666 / 268,334 | 16x | $0.064
depot-ubuntu-22.04-arm-64-io | 64 | 256 GB | 1425 GB | 536,666 / 268,334 | 32x | $0.128
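
As with our other runner types, you select an I/O-optimized runner by putting its label in runs-on, for example:

jobs:
  test:
    runs-on: depot-ubuntu-22.04-8-io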

You can now view the contents of your organization's GitHub Actions cache, including the cache entry keys, sizes, and when they were last accessed.

Additionally, organization admins can delete individual cache entries if needed.

Screenshot showing the GitHub Actions cache management

You may have noticed that we've rolled out a new full-width dashboard UI for Depot. This new design is cleaner and gives you more space to view your projects, dive into your builds, and monitor your GitHub Actions in real time.

New Depot dashboard UI

You can start playing with the new Dashboard design by going directly into your Depot organization. If you have any feedback or things you'd like to see, let us know in our Discord Community.

Our latest release of the depot CLI introduces a faster way to build Docker Compose files that wasn't possible before.

With a new x-depot bake extension, you can now specify multiple projects to build in a single depot bake command, allowing each project to build in parallel on its own BuildKit builder with its own isolated cache!

Similar to x-bake, the x-depot key is a Docker Compose extension that allows you to optionally specify the project ID for each service in your docker-compose.yml file.

services:
  srv1:
    build:
      dockerfile: ./Dockerfile.srv1
      x-depot:
        project-id: project-id-1
  srv2:
    build:
      dockerfile: ./Dockerfile.srv2
      x-depot:
        project-id: project-id-2

Just like before, running depot bake -f docker-compose.yaml builds all targets, but now each project-id is built in parallel on its own dedicated builder and cache.

We now have Nydus support available in private beta for your Docker image builds. Nydus is an accelerated container image format from the Dragonfly image-service project that can pull image data on demand, without waiting for the entire image to be pulled before starting the container.

You can run your depot build command and specify Nydus as the output format via the --output flag:

depot build \
  --output "type=image,compression=nydus,oci-mediatypes=true,force-compression=true" \
  .

If you'd like to try out Nydus with your Depot project, reach out in our Discord and let us know.

Now available in the Depot API is the ability to manage project tokens for all your projects. You can create, list, and delete project tokens via the API.

To create a new project token, you can use the following API example from our Node SDK:

// Assumes the Depot Node SDK is installed and imported, e.g. import {depot} from '@depot/sdk-node'
const headers = {Authorization: `Bearer ${process.env.DEPOT_TOKEN}`}
 
const result = await depot.core.v1.ProjectService.createToken(
  {
    projectId: 'project-id',
    description: 'my-token',
  },
  {headers},
)

See our API reference for docs on all of the Depot API endpoints.

You can now launch Depot GitHub Actions Runners with 2 CPUs & 2 GB of memory at half the cost of our default runner, priced at $0.002 per minute. These runners are great for lightweight build workloads where you want to further optimize for cost.

We've also launched a new 64 CPU & 256 GB memory runner for the most demanding workloads. This runner is great for large Rust builds where you want to leverage as many CPUs as possible. It is priced at $0.128/minute.

Both of these new runner types are publicly available for all plans. You can use them today using any of the new labels below. Check out our GitHub Actions Runners documentation for the complete list of available labels.

Label | CPUs | Memory | Disk Size | Architecture | Per Minute Price
depot-ubuntu-22.04-small | 2 | 2 GB | 100 GB | Intel | $0.002
depot-ubuntu-22.04-64 | 64 | 256 GB | 2400 GB | Intel | $0.128
depot-ubuntu-22.04-arm-small | 2 | 2 GB | 100 GB | arm64 | $0.002
depot-ubuntu-22.04-arm-64 | 64 | 256 GB | 2400 GB | arm64 | $0.128

We've published a new integration guide on how to use Depot with Fly.io to speed up your container builds. You can use Depot to build and push your container images to your Fly application registry and then run a single command to deploy them. You can read the complete guide in our Fly.io integration doc.

We've shipped another update to Depot-managed GitHub Actions Runners, this time giving your jobs larger disk sizes that scale with the number of CPUs you request. This change is available for both our Intel and ARM runners in beta.

Below is the full breakdown of disk sizes based on the label you choose:

Label | CPUs | Memory | Disk Size | OS | Architecture
depot-ubuntu-22.04 | 2 | 8 GB | 100 GB | Ubuntu 22.04 | Intel
depot-ubuntu-22.04-4 | 4 | 16 GB | 150 GB | Ubuntu 22.04 | Intel
depot-ubuntu-22.04-8 | 8 | 32 GB | 300 GB | Ubuntu 22.04 | Intel
depot-ubuntu-22.04-16 | 16 | 64 GB | 600 GB | Ubuntu 22.04 | Intel
depot-ubuntu-22.04-32 | 32 | 128 GB | 1200 GB | Ubuntu 22.04 | Intel
depot-ubuntu-22.04-arm | 2 | 8 GB | 100 GB | Ubuntu 22.04 | arm64
depot-ubuntu-22.04-arm-4 | 4 | 16 GB | 150 GB | Ubuntu 22.04 | arm64
depot-ubuntu-22.04-arm-8 | 8 | 32 GB | 300 GB | Ubuntu 22.04 | arm64
depot-ubuntu-22.04-arm-16 | 16 | 64 GB | 600 GB | Ubuntu 22.04 | arm64
depot-ubuntu-22.04-arm-32 | 32 | 128 GB | 1200 GB | Ubuntu 22.04 | arm64

You can read more about configuring Depot GitHub Actions Runners in our GitHub Actions quickstart, and feel free to ask any questions or report any issues in our Community Discord.

We've shipped a new way to get help or ask questions directly from within Depot to make it easier to get help when you need it. You can click Contact us in the top right corner of Depot and submit a bug report, feature request, or general question directly to us.

You can also join our Community Discord to chat with other Depot users and our team, or check out our documentation for more information on how to use Depot.

It's been a few short weeks since we took the covers off our latest product, Depot-managed GitHub Actions Runners, for faster CI jobs in GitHub Actions. We shipped the initial version focused on Intel runners with 30% faster compute, 10x faster caching, and half the price of GitHub-hosted runners.

But today, we're announcing that we now have ARM runners in public beta for everyone to use in their existing GitHub Actions jobs. ARM runners are great for building artifacts and binaries for Arm-based devices like Apple M3 chips, Raspberry Pi, and more.

We've always had the ability to build Docker images natively for ARM, but now you can run your CI jobs on ARM runners as well. Getting started is easy. Just update your runs-on label to specify an ARM runner:

runs-on: depot-ubuntu-latest-arm

You can read more about all supported labels in our GitHub Actions quickstart and feel free to ask any questions or report any issues in our community Discord.

We've rolled out a new best practice integration guide for building Docker images for Rust. The guide is now available in our new Rust section of our docs. In the guide, we walk through how to configure a Dockerfile for a Rust project, including how to leverage cargo-chef for dependency management, use sccache for finer-grained caching, and use BuildKit cache mounts in Depot for even faster builds.
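
As a rough sketch of the shape such a Dockerfile takes with cargo-chef and BuildKit cache mounts (image tags and paths here are assumptions; the full guide also covers sccache):

# syntax=docker/dockerfile:1
FROM rust:1 AS chef
RUN cargo install cargo-chef
WORKDIR /app

FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json

FROM chef AS builder
COPY --from=planner /app/recipe.json recipe.json
# Build dependencies separately so they stay cached between source-only changes
RUN --mount=type=cache,target=/usr/local/cargo/registry \
    cargo chef cook --release --recipe-path recipe.json
COPY . .
RUN --mount=type=cache,target=/usr/local/cargo/registry \
    cargo build --release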

You can read the complete guide in our docs.

In the latest version of the depot CLI, we've added a new command: depot pull-token. This command allows you to generate a short-lived token for pulling container images from the Depot ephemeral registry.

This command comes in handy if you need to pull container images you have built with Depot from systems that use docker pull under the hood. One example of this is specifying a container for a GitHub Actions job.

Here is an example of how you can use this command:

depot pull-token --project your-project-id

This will generate a short-lived pull token for the given project ID. You can optionally specify a build ID to generate a pull token for a specific build:

depot pull-token --project your-project-id buildID

We're excited to announce that we've now made Depot ephemeral registries available to depot bake commands so that you can save built images for multiple targets for later use in your CI workflows, to share with your team, or to push to remote registries.

You can read more about how to leverage the ephemeral registry for all of your bake commands in our announcement blog post and get a full rundown on ephemeral registries in our docs.
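
As a quick sketch, saving every target built from a bake file to the ephemeral registry uses the same --save flag as depot build (the bake file name is a placeholder):

depot bake --save -f docker-bake.hcl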

Our latest release of the depot CLI includes an excellent new enhancement to depot bake --load that was previously not possible.

Before the latest release, depot bake --load would always pull back all targets in a bake file rather than just the target specified in the build. For example, if you had a bake file with 10 targets and you only wanted to build one of them, you would still have to pull back all 10 targets.

Instead, we now only pull back the targets specified in the build. This means that if you have a bake file with 10 targets and you only want to build one of them, you will only pull back the one target.

depot bake --load <target>

It works for groups as well! If you have a group with two targets and you request that group in your bake command, we will only pull back the two targets in that group.

group "test" {
  targets = ["app", "db"]
}
 
target "app" {
  dockerfile = "Dockerfile.app"
}
 
target "db" {
  dockerfile = "Dockerfile.db"
}

So if you run depot bake --load for the test group, we will only pull back the app and db targets.

depot bake --load test

A new release of our depot CLI is now available with a few improvements and bug fixes. The biggest one is the ability to create Depot projects directly from the CLI:

depot projects create "your new project name"

This creates a new project in your Depot organization with the default region of us-east-1 and the default cache storage policy of 50 GB per architecture. If you want to customize the region and cache storage policy, you can use the --region and --cache-storage-policy flags:

depot projects create \
  --region eu-central-1 \
  --cache-storage-policy 100 \
  "your new project name"

In addition, we also shipped a few other improvements and bug fixes in v2.54.1:

  • Better error reporting for the merging manifests steps
  • Allow tag overrides to apply when building Compose files via depot bake

Depot managed GitHub Actions runners are now available in beta 🎉

Our runners are faster, half the cost of GitHub's runners, and fully managed inside AWS. They allow you to get the maximum performance out of your GitHub Actions workflows by being closest to your repositories & infrastructure, while also saving you money.

Runners live next to your existing Depot builders. This means every workflow that leverages Depot runners will have the fastest possible network connection to your BuildKit builder and layer cache.

If you'd like to join the beta and try them out for yourself, please reach out to us via email. You can learn how to configure your GitHub Actions to use Depot runners in our documentation.

Build logs view facets

We've added a new feature to the build details view that allows you to filter your build logs by different facets:

  • Successful steps: Show the steps that were completed successfully.
  • Failed steps: Show the steps that failed.
  • Canceled steps: Show the steps that were canceled.
  • Cached steps: Show the logs for the steps that were a cache hit.
  • Uncached steps: Show the logs for the steps that were a cache miss and had to be rerun.

This should make debugging large Dockerfile or depot bake builds much easier. Please let us know if you have any other feedback for things we could add here!

We've rolled out a new version of the depot CLI, which includes a number of improvements and bug fixes. The biggest one is an updated depot bake command to support the matrix block in a bake file. You can use it to reduce some duplication in your bake files or even dynamically generate targets.

# generates two targets from the `matrix` array
target "app" {
  name       = "app-${tgt}"
  dockerfile = "Dockerfile"
  tags       = ["org/${tgt}"]
  target     = tgt
  matrix     = {
    tgt = ["frontend", "backend"]
  }
}

In addition, we also shipped a few other improvements and bug fixes:

  • Fix for reporting build errors when using docker build and docker buildx build commands
  • Fix for parsing out cwd:// from incoming files
  • Added support for pushing multiple tags with the depot push command (see the example after this list)
  • Add explicit support for linux/arm/v8 via the --platform flag
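
For the multiple-tags item above, a hedged example, assuming depot push accepts repeated -t flags the same way depot build does (registry and tags are placeholders):

depot push -t registry.example.com/app:latest -t registry.example.com/app:v1.2.3 <build-id>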

We shipped some performance improvements and bug fixes to our depot CLI.

  • Improved error reporting when a COPY statement tries to copy a file that can't be found
  • Introduce a depot push command to push images out of ephemeral registries to destination registries
  • Added the ability to disable OpenTelemetry in BuildKit
  • Clarification of --token parameter in various commands
  • Default to CSV output if the environment doesn't have a TTY
  • Improve build context transfer performance via gzip compression (experimental)

We shipped one of our most requested features, the ability to visualize your build context. The new Context tab in your build insights lets you see exactly what files were shipped to Depot for a given build.

Example of build context in Depot

Want to know all of the files in your build context? We've got you covered. You can reset your project cache to wipe out the existing build context. Your next depot build will then transfer your full context, so you can easily debug everything that is in it. Subsequent builds only transfer what changed in your context.

To go with our new ephemeral registry, we introduced a new depot/pull-action GitHub Action that can be used to pull an image into a workflow via a given build ID.

permissions:
  contents: read
  id-token: write
 
steps:
  - uses: depot/setup-action@v1
 
  - uses: depot/build-push-action@v1
    id: image-build
    with:
      project: <your-depot-project-id>
      save: true
 
  - uses: depot/pull-action@v1
    with:
      build-id: ${{ steps.image-build.outputs.build-id }}
      tags: |
        org/report:tag

The depot/build-push-action has the build ID stored in its output. So you can use that output to pull the image into your workflow to run integration tests, deploy to a staging environment, or whatever else you need to do.

We started the month with our much anticipated ephemeral registries. You can now include a --save flag in your build commands to persist your built image to a temporary registry. We introduced our new depot pull command to pull images from this registry and use them in your CI/CD pipelines. We also added a new depot push command to forward images from the registry to your destination registry.

depot pull <build-id>
depot push -t <your-registry> <build-id>

You can read up on the new commands in our CLI docs.

Trust policies allow you to configure a connection between your Depot project and GitHub Actions, CircleCI, or Buildkite. This connection will enable you to perform an OIDC token exchange with your CI provider to dynamically authenticate to your Depot project without storing static access tokens in your CI configuration.

We've added these trust policies to our ProjectService API so that you can list, create, and remove trust policies via the API.

Filter the logs of a given build in Depot

We shipped some updates to the build insights we launched last month!

You can now filter the logs of a given build to search for specific build steps or commands quickly. This is helpful if you have a large Dockerfile with many steps and want to find a particular step in the build quickly.

You will also notice that we now show the size of each layer in the logs view as well. This allows you to quickly see how large each layer is for each step in your Dockerfile.

We've launched a new CI integration guide that dives into how you can use Depot with AWS CodeBuild for faster Docker image builds. As a bonus, we show you how to use AWS CodeBuild's Lambda compute type to build Docker images via Depot so that you get even faster CodeBuild builds without the overhead of EC2 instance provisioning.

You can check out the complete integration guide in our AWS CodeBuild docs.

We mentioned this last month, but we've added some final touches to our beta feature, allowing you to save a build in a temporary registry. You can now run a build and save the resulting Docker image in a temporary registry for later use.

depot build --save .

This will store the image in a temporary registry. You can use the depot pull command to pull it back out by build ID.

depot pull <build-id>

The save and pull workflow is great for folks who need to build an image once and then use it multiple times in different integration tests or environments.

If you have installed the latest version of the depot CLI, you can try it out now.

  • New sbom and sbom-dir flags are now available for depot build and depot bake to generate SBOMs during a build and write them to a local directory
  • Additional metrics and logging of machine boot and image load/push times during a build
  • Improved onboarding to prompt for login and project selection when a user runs depot build without authentication or a project selected
  • Updates to depot bake to automatically name the built images with the names docker-compose expects
  • New beta feature, --save to persist a given Docker image produced by a build in a temporary registry for later use
  • New beta feature, depot pull <build-id> to pull a Docker image out of the temporary registry without having to run another build

AWS Marketplace

Depot is now available on the AWS Marketplace for folks looking to integrate with their existing enterprise contracts at AWS. We offer the ability to purchase Depot in the marketplace for those interested in our Enterprise plan.

We're very excited to release our Dockerfile Explorer that allows you to introspect the low-level build (LLB) steps that a Dockerfile transforms into. It's great for visualizing what each step in your Dockerfile is doing at a file system level and how different aspects of your build impact the LLB operations and, ultimately, the Docker layers produced during a build.

If you're interested in how we built it and how it works, we have a detailed blog post that goes into the details.

All builds are now running on our latest infrastructure provisioner, which is designed to further reduce the time to start a build. We wrote a detailed history of how our backend build architecture has evolved. You can read it here.

In short, we've optimized our provisioning system to leverage a new standby pool architecture, and it has significantly reduced the time it takes for us to start a given build by avoiding EC2 cold boot time.

We launched a new organization usage visualization that allows you to track your monthly Depot usage. Get insights into how many builds you're running, how much build time you've saved, and how much cache storage you use.

We've added a new flag, --sbom, to both the build and bake commands in our CLI. It can generate a Software Bill of Materials (SBOM) on every build. In addition, you can also specify a --sbom-dir parameter to have the generated SBOMs written to a local directory that you can then upload to your own SBOM analysis tools.

depot build --sbom=true --sbom-dir=sboms .
depot bake --sbom=true --sbom-dir=sboms -f docker-bake.hcl

You can read more about downloading SBOMs in Depot in our SBOM announcement post.

For Depot Drop Week #02, we wanted to bring better visibility into the entire Docker build. We launched a new feature called Build Insights that gives you a detailed view of what's happening inside a Docker build. You can see exactly what happened during a build via the raw Docker logs, analyze each step in the build, visualize the parent/child relationships between steps, and get automatic suggestions to improve your Dockerfile.

The latest version of the depot CLI is now available. This release includes a few bug fixes and improvements, including:

  • Improved --platform detection for arm/v7 support
  • Finer-grained logging when --load is used so you can see what layers are being pulled, extracted, etc., during a build
  • Improved performance of --load when using a local Docker daemon

We launched a new section in our documentation, languages & frameworks, that will be a one-stop shop for our recommended best practices when building Docker images for a given language or framework. To kick things off, we documented the best practices for building Docker images for Node.js & pnpm.

We will add more over the coming weeks, but we also welcome anyone from the community to submit their ideas on our docs repo.

We shipped a new Node.js package @depot/cli that you can install into your Node projects to invoke CLI calls directly from your code. No more needing to install the CLI, configure it, etc. You can now install the package and start using it.

pnpm add @depot/cli
import {depot, depotBinaryPath} from '@depot/cli'
 
async function example() {
  console.log(depotBinaryPath())
 
  await depot(['build', '-t', 'org/repo:tag', '.'])
}

We got a new logo and changed a few style things across Depot to better align with what we're building. We hope you like it! We also have a new brand assets section if you want to use our logo anywhere.

The latest version of the depot CLI updates the configure-docker command to now configure Depot as the default buildx driver for all docker buildx build commands. This is in addition to the existing docker build support we released last month.

With this new driver, you can now use docker buildx build to build your Docker images with Depot and take advantage of all the benefits of Depot's caching and insights. So you can now use Depot with other developer tools that call docker buildx build under the hood, like Dev Containers, AWS CDK, and Docker Compose.
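
To illustrate the flow, assuming the depot CLI is already installed and authenticated, the setup is a one-time command followed by your usual buildx invocations (the tag is a placeholder):

depot configure-docker
docker buildx build -t org/app:latest .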

We have integrated the Semgrep Dockerfile ruleset into our existing --lint flag.

depot build --lint --lint-fail-on warn .

The Semgrep integration is in addition to our existing Hadolint integration. When you run depot build --lint, we will run Hadolint and Semgrep and return a combined list of issues. You can also use the --lint-fail-on flag to set the severity level at which you want to fail your build.

We released a new depot configure-docker command that installs Depot as a Docker CLI plugin and makes Depot the default builder for docker build and docker buildx build commands, making it even easier to get faster Docker image builds locally and in CI without changing a single line of code. This unlocks a lot of Depot integrations with other great developer tools like Dev Containers, goreleaser, and AWS CDK. Check out our blog post for more details.

Upgrade to our latest CLI version to access this command: depot/cli.

depot/use-action GitHub Action

To go with our new Docker CLI plugin, we also released a new GitHub Action, depot/use-action, that makes it easy to use Depot as the default builder for your GitHub Actions workflows. You can use this action to get faster Docker image builds in your GitHub Actions workflows by dropping the new action above your Docker build steps. Nothing else needs to be added or changed.
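
As a minimal sketch of what that looks like in a workflow (the @v1 tag and the build step shown here are assumptions):

steps:
  - uses: depot/use-action@v1
  - run: docker build -t org/app:latest .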

We also released Llama 2 on depot.ai thanks to a kind pull request shortly after launch 🙂

Embed Llama 2 into your Docker image via the following COPY command:

COPY --link --from=depot.ai/meta-llama/llama-2-70b-chat-hf:latest / .

To close our Drop Week #01, we announced a new authentication mechanism for open-source maintainers looking to get faster Docker image builds for public fork pull requests in GitHub Actions. The new mechanism allows maintainers to route Docker image builds for public fork pull requests to ephemeral Depot builders, letting those builds run faster without compromising the main layer cache. Read more about our new OIDC issuer that makes it all work.

We released our free open-source Docker registry for Hugging Face's top 100 public AI models. You can use depot.ai to pull top models into your Docker image via a single COPY command in your Dockerfile. Any Docker image build that needs a generative AI model is orders of magnitude faster. Check out our announcement blog post for more technical details.

We rolled out our new cache storage architecture to all Depot-hosted regions. Cache storage v2 moves away from our old EBS volume-based architecture to a new one using a Ceph storage cluster, allowing us to scale storage to meet your project's needs and provide 10x the write throughput and 20x the read throughput for each project's cache.

View what's in your cache

Depot cache view

There is a new Cache view in the Depot UI when you click on any of your projects. This view shows you exactly what is in your cache, how large each entry is, which line in your Dockerfile it's associated with, and which architecture that cache entry is for.

Choose your cache size

As a bonus, all projects can now be configured to have the cache size that makes sense for what you're building. Need to build an image that has Stable Diffusion embedded in it? No problem. Select our largest cache size of 500 GB.

We also rolled out a variety of new features and enhancements to our CLI throughout the month:

  • Various performance improvements to depot build and depot bake manifest generation
  • Fixed race condition in depot bake --push operation
  • Added a new depot logout command to remove your authentication token
  • Added a new environment variable, DEPOT_NO_SUMMARY_LINK, to turn off the build summary links in depot build and depot bake output (see the example after this list)
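
For the DEPOT_NO_SUMMARY_LINK item above, a rough sketch of setting the variable before invoking the CLI (the value 1 and the tag are only illustrative):

DEPOT_NO_SUMMARY_LINK=1 depot build -t org/repo:tag .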

We removed the beta flag and made our public API available to everyone so you can access the fastest place to build Docker images from your own code. If you're looking to build Docker images on behalf of your users, this is the API for you. You can call our build API from your code to acquire a Depot builder and run an entire Docker image build via Depot.

We've already seen a few folks build integrations with Depot and are excited to see more. If you're interested in building Docker images from code, check out our API docs and reach out if you have any questions.

We have built up quite a few integration guides over time. These are helpful for folks who are looking to quickly get faster Docker image builds in their existing CI workflows.

This month we added an additional guide to the list, integrating Depot with Jenkins. We have eight guides to help you get faster Docker builds in CI providers.

If you have a CI provider that you're looking to plug Depot into and we still need a guide for it, hop into our Community Discord and let us know.

updated build visualization

We made quite a few updates to our build visualization UI that we launched at the beginning of the month. You can now see for any given build:

  • What tags were specified (i.e., herault in the screenshot)
  • Whether the image was pushed to a registry or loaded back into the Docker daemon
  • What line in your Dockerfile busted the cache

We plan to add filter functionality to this page so you can quickly find builds by tag, status, or whether or not the image was pushed to a registry.

You can now jump directly into the insights and visualization of a given build executed on Depot by clicking the Build summary link that both depot build and depot bake now output.

Folks leveraging self-hosted Depot builders can now reset the actual builders directly from Depot. Like resetting the cache for a given project, you can now reset the entire BuildKit machine backing your builds. Navigate to your Project Settings and click the Reset Machines button at the bottom.

  • Improved error message when your .dockerignore isn't formatted correctly
  • Added --lint flag to lint your Dockerfile before building
  • Improved build start time by removing unnecessary API calls
  • Removed garbage collection race condition during --load operations
  • Fix depot bake --print operation
  • Report build options and flags to the API for better insights into what each build did
  • Fix duplicate message logging when running depot build in CircleCI
  • Additional resiliency for --load operations when a builder is under heavy load
  • Fix panic when using depot cache reset with no arguments
  • Improved robustness of our depot install script
  • Add build summary link to depot build and depot bake output

We launched on Product Hunt on May 17th with our accelerated local builds with instant cache sharing, and it was a blast! We had a ton of support from the community, and we appreciate everyone dropping in to share their experiences and show their support.

We also hosted a Show HN over on Hacker News to chat in more detail about our accelerated local builds. It was awesome to dive into the technical details of how we accelerate local builds and unlock instant shared caching across teams. As always, we got a lot of great feedback from the community, and we're excited to continue to iterate.

As mentioned, we launched our accelerated local builds with instant cache sharing. It is a huge step forward for developers who want to build their Docker images faster locally and share the layer cache across their team. We've rethought what it means to load an image back after it's built and made Depot turbo builders available for local builds.

You can read the full details of how we did it and all of the bonus features that come with our new --load functionality on our announcement blog post.

We're rapidly iterating on enhancements for accelerating builds both locally and in CI. For example, we now have a significantly faster --load that makes it possible to load your image back into your local Docker daemon in seconds.

We've added a few more enhancements to make things even faster. We made exporting layers for both --load and --push significantly quicker. We effectively made the export run in parallel rather than serially. The net effect is 2x faster builds on average.

We also updated how build and bake search for a depot.json file. Previously, they would only look for a depot.json file in the root directory. Now, they check the specified filepath first and then recursively search upward from there. You can still pass in the project ID for either command via --project instead of using a depot.json file.

To reduce the time taken to create image layers, Depot builders now hash layer contents using SIMD-accelerated SHA-256 computations: AVX-512 instructions on Intel CPUs and SHA2 instructions on Arm CPUs. This change can result in an additional 15% time savings for larger layers, which is especially important when packaging machine learning models in containers.

You can now view a full breakdown of your build-minute usage across all your projects. This is a great way to see which projects are using the most build minutes and get an idea of your estimated monthly bill.

The first week of April was jam-packed as we closed our YC W23 batch with the famous Demo Day. It was an excellent opportunity to share what we've been working on with the world and get feedback from the YC community. We get asked a lot about our YC experience, and we can't say enough good things about it. The community is fantastic, and we're excited to be a part of it! We're planning on writing some more things about our experience in the future to help inspire others to apply. In the meantime, if you're considering applying, feel free to contact us; we'd be happy to share our experience.

If you follow our depot/cli repository, you may have noticed that we've been shipping many new features in our CLI. We've been adding the capabilities to depot that we've wished existed for docker build itself. Here are the highlights of what we've added:

List your projects and builds with depot list

We've added a top-level depot list projects command that allows you to see all your organization's projects. You can then select a given project to see all of the builds for that project.

We've also added a depot list builds command that will list all of the builds for the project defined in your depot.json config file, or you can pass the --project flag to list the builds for a specific project ID. If you want to parse the output of this command, you can use the --output flag to get the result in JSON or CSV format.
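
Putting those together, a typical session might look like the following (the project ID is a placeholder):

depot list projects
depot list builds --project <project-id> --output json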

Push your image to multiple registries at once with --push

We've added the ability to push your image to multiple registries simultaneously. This is a massive speed improvement for organizations that need to push their image to multiple registries.

depot build -t registry1.com/test:latest -t registry2.com/test:latest --push .

Before this release, running a build that tagged and pushed to multiple registries would push to registry1 and then push to registry2 serially. With this new release, we can push to both registries in parallel.

Build your image with --load and --push at the same time

We've also added the ability to simultaneously build your image with --load and --push, which means the image will be both downloaded to the local machine as well as pushed to a remote registry in one step. Previously this required running two separate builds. This is a massive speed improvement for organizations that need to push their image to a registry and load it into their local Docker daemon.
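
A quick example combining both flags in one invocation (the registry and tag are placeholders):

depot build -t registry.example.com/app:latest --push --load .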

Intelligent loading of only changed layers with --load

By default, docker buildx build --load . returns the tarball of the entire image to the client, even when the client may already have some or all of the layers cached locally. This is a massive waste of bandwidth and time compared to only loading the new or changed layers.

With this release, we've made this more intelligent. When you run depot build --load . locally, we send back the diff between what you have locally and what the build has produced. This means that only new or changed layers need to be downloaded to the client.

This is a massive speed improvement for organizations that need to load their image back into their local development environment.

This optimized diff also skips the need to produce a single tarball for the whole image, so even in environments that may not have any local layers downloaded, like CI, we are able to skip the slow tar process and download the layers directly, in parallel.

cache stats

You can now see exactly how much of your cache you're using on a project-by-project basis, with visibility into both Intel & Arm caches. You can also see how much time you save with each build and how much of it was cached. We also have an initial view into the exact steps of your build that got executed, whether they were cached, and how long each step took.

We're really excited about this initial version and are already working on several more insights that we can surface on every build. So if you have things that you would like to see here, please let us know!

New Depot landing page

We made a lot of landing page improvements to Depot, but this one is our favorite. It is a live snapshot of the time users have saved over the past seven days using Depot to build their Docker images. We are really proud of this one, and we hope you like it too.

Depot builder machines now come with 16 CPUs and 32 GB of memory, 4x the size of our previous machines! With our goal of being the fastest place to build Docker images, we are always looking at new ideas that make fast builds on Depot even faster, and turbo builders are one of those ideas. They are available for both Intel and Arm builds without any additional configuration.

Depot builds Mastodon 53x faster

We have always believed in showing rather than telling, which is the philosophy behind the benchmarks on our landing page. We benchmark real-world open-source projects, building them with both depot build and docker build in GitHub Actions, for every upstream commit. And the benchmarks themselves are open source: you can click on any of them and see the side-by-side comparison of every run.

Mastodon is a free, open-source social network server based on ActivityPub, where users can follow friends and discover new ones. They have been building multi-platform images in GitHub Actions, with 3-hour build times.

We set up a benchmark using Depot and the results shattered our existing records. We built Mastodon's multi-platform image 53x faster than building it in GitHub Actions with Docker. We hope to contribute this back upstream to Mastodon in the weeks ahead.

We have been working on a public API for Depot for a while now, and we are excited to announce that it is now in private beta. You can now use the Depot API to build images from your own applications and services. Check out our API documentation for more details. If you're interested in building Docker images quickly from your own applications and services, contact us.

We have a major new release of our depot CLI. This release includes many new features to expose more of Depot to the command line and make builds even faster. Here are some of the highlights:

  • depot bake comes to Depot! Build all of the images that compose your application from a single HCL, JSON, or Compose file. Check out our announcement blog post for more details.
  • depot cache reset allows you to reset the cache for a specific Depot project. This is useful if you want to clear the cache of your project before running a build in CI for cases where you want a totally uncached environment.
  • --build-platform is now available for both our build and bake commands. By default, we run on Intel or Arm builders depending on the container platform, or both in the case of multi-platform builds. This flag allows you to force builds to run on Intel or Arm builders, regardless of the requested container platform (see the sketch after this list).
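
A hedged sketch of that last flag, forcing a build onto Arm builders regardless of the requested container platform (the exact accepted values may differ):

depot build --build-platform linux/arm64 .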

We launched our official Discord community this week! We use Discord internally to communicate amongst ourselves. We figured it would be great to let those excited about Docker containers and/or Depot hop into a dedicated space where you can chat with others who share your interest, like us 😊

Join our community here.

We did a bit of cleanup around our documentation and added more information about what is happening at an architecture level. We already had the latter in our self-hosted documentation, but we have never documented or shared that this is the exact same architecture we use internally as well. So now you can get an idea of what is happening under the hood when you use Depot, when to use it, and when it makes sense to not use it. Check it out in our introduction docs.

When you build your Docker image in CI, you usually want to push it to your registry afterward. However, the further your builder is from your registry, the slower the network latency. With project region selection, you can choose to have your Depot builders launch in the region that is closest to your registry so that you minimize the latency of pushing your image to your registry. Read more about this new feature in our announcement blog post.

We applied to the YC W23 batch at the end of September and got accepted three days before Kyle picked up his life to move to France. We are excited to join the YC family and are looking forward to the next 3 months of the program. Be sure to check out our Launch YC.

It's a foundational goal to make Depot as easy as possible to integrate into your existing tools and processes. On that front, we wanted to save several clicks when trying to plug Depot into your existing CI provider. So, now when you create a project, you can choose your CI provider and get the step-by-step configuration with the workflow config to route your Docker image builds to Depot.

New CI onboarding

We rolled out our new infrastructure provisioning system that allows for faster build starts, faster CLI connections, and builders running on the latest generation of AWS compute. We will be rolling out new platform releases every few months as we upgrade and improve the entire build system behind the scenes.

Managing different CLI and tool versions can be annoying. There are numerous tools out there for improving this, but they always seem to be specific to the actual tool (i.e., tfenv for Terraform). asdf allows you to manage multiple versions of multiple tools with a single CLI. So, we added a plugin for asdf that allows you to install and manage depot CLI versions. Check out the plugin repo for details.

October was a big month for us as we announced that we were leaving our day jobs and going full-time at Depot. It's been a busy year juggling day-to-day work with Depot on our nights and weekends, but it's been entirely worth it, and we are really proud of what we have been able to bootstrap so far. This is just the beginning for Depot, and we have a much bigger vision that we are putting into motion with this change. More news on our new venture, now named Depot Technologies Inc, in the coming weeks as we start to close out the year.

Building a company

Not a feature or bug fix, but it's just as important. We have largely been getting everything set up for Depot to be a full-time company. We are putting all the things in place so that we can continue to build Depot and add the new capabilities we have been dreaming about since January. There has been a lot of work put in to make sure we are optimizing our documentation, setting up custom onboarding for everyone, and speaking with you about what other things Depot can help you solve. We have also been working on fundraising so that we can keep this ship afloat while we shoot for the moon. More on that soon.

Kyle moved to France

What's scarier than starting a new company, building a new product, and leaving your day job? Doing all of that while simultaneously moving your family to another country. We decided to relocate to France from Portland, Oregon, and it's been a wild ride. We are now settled in and have been enjoying getting familiar with our new city and new routines. It's all very exciting, and maybe a bit overwhelming at times. If you ever find yourself in Montpellier, France, and want to discuss slow builds taking years off our lives, please let me know, and we can grab a coffee, croissant, or a bottle of wine.

In our experience, the lower the bar for users to try out a new product or service, the easier it is to get in there and see if it's valuable. So, we added the ability to log in with Google and Microsoft to make it even easier to get started with Depot. We are also excited to announce that we now have SSO capabilities for those looking for that kind of thing. Reach out to us at contact@depot.dev and we can help get you set up.

GitLab CI is a pain to build Docker images with because of the tradeoffs you have to make to get it done. We wrote a blog post that talks about these tradeoffs, and how they can cause build times to explode and open security holes that you would rather keep closed. Depot makes this much simpler because your image builds get routed to our remote builders with a persistent cache. So, you can build your images without Docker-in-Docker (dind) and full root permissions.

You can read the blog post or check out our new GitLab CI integration guide.

A big milestone for our depot CLI, which is a drop-in replacement for docker build, is that it is now at 1.0.0. There are no breaking changes in this release; we jumped to 1.0 so that we can release new versions with proper semver versioning (major.minor.patch).

We have been working hard to make Depot more stable and reliable. We have been running Depot in production for a few months now and have been able to identify and resolve several issues. The one we have been working on the most is the stability of builds across cloud providers. Today, we support image builds for Intel and Arm architectures by routing builds for each given architecture to their respective cloud provider (AWS for Arm and Fly for Intel).

However, this creates a coupling to cloud providers that isn't ideal for operating our remote builders at scale. The solution we have in beta currently is to route builds to different cloud providers based on outages at our existing ones, capacity restrictions, etc. This is a much more robust solution that allows us to always be ready to process a build without interruption.

Self-hosted Depot builders are here for everyone! We worked with our early adopters to design a simple and secure way to leverage the performance of Depot on your own infrastructure. It took a few iterations, but we are excited about what this can unlock for folks and for the opportunity to make this available on other cloud providers.

You can check out our self-hosted getting started guide for details on how to configure a project to use self-hosted builders.

We recently contacted PostHog after benchmarking around a 2x speedup on one of their Actions workflows. They were interested in the switch, and we collaborated to convert their Actions workflows to use the Depot actions for Docker builds.

After the switch, their main Docker build workflow went from around sixteen minutes on average to only three, a 5x speedup! You can read more about the switch on PostHog's blog.

If you have an open-source project that could use faster Docker builds, definitely contact us. We're happy to work with you on free or discounted access to Depot.

Work continues on self-hosted Depot builders. As we revealed last month, we are developing the ability for organizations to connect an AWS account to their Depot organization so that project builds run inside the connected account instead of inside Depot's infrastructure providers. This allows organizations with special requirements to use Depot while keeping their project data entirely inside their own account.

As we are nearing a beta release of self-hosted builders, we have settled on the following architecture:

  • Organizations create a cloud connection in their Depot organization, providing their AWS account ID
  • Organizations launch a set of AWS resources (VPC, launch templates, etc.) inside their account — we will provide an open-source Terraform module to make this easy
  • An open-source cloud-agent process runs inside the organization's AWS account — it is responsible for launching and managing instances needed for project builds, with minimal IAM permissions
  • Inside the launched instances, an open-source machine-agent is responsible for communicating with the Depot API and running any software needed for the build

We've chosen this architecture primarily to minimize blast radius and security footprint. All software running inside organization cloud accounts is open-source and auditable, and we do not share AWS account credentials or cross-account roles with the hosted Depot service.

We expect to have support for self-hosted builders completed for AWS by the end of August, and expect to expand to other cloud providers in the future.

We experienced several disruptions and outages with our infrastructure provider for Intel builds this past month. We are working to extend our automatic failover systems to support cross-provider failover, in addition to their current in-provider failover capabilities. This will mean that if one of our hosting providers is experiencing an outage, your builds will automatically be rerouted to a backup provider.

Project tokens have launched, allowing you to create an API token that can be used to build just a single project. We now support three ways you can authenticate builds: user access tokens, OIDC tokens, and project tokens.

Project tokens provide a better method for authenticating builds from CI providers where OIDC tokens are not supported. They are tied to a specific project in a single organization, unlike user access tokens that are tied to a user and grant access to all projects and organizations that user can access.

In GitHub Actions, we support OIDC tokens and recommend them over project or user tokens. OIDC trust relationships allow GitHub Actions to retrieve a short-lived access token for the build that, similar to project tokens, can only access the projects that have been allowed for that repository.

For all other CI providers, we recommend using project tokens for authentication.
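
As a rough sketch, a project token can be supplied to the CLI in CI either via the DEPOT_TOKEN environment variable or the --token flag (the values shown are placeholders):

DEPOT_TOKEN=<project-token> depot build --project <project-id> .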

We are working on the option to launch Depot builder instances inside your own cloud account. We are starting with initial support for AWS and the new CircleCI builders, but plan to expand to other builder types (e.g., Docker) and other clouds in the future.

Today, we launch and manage all aspects of your builder instances for you. However, some organizations have specialized needs that require them to self-host their CI builders. With our new self-hosted support, those organizations can continue to use Depot as the "management plane" for their CI builders, but the builders will launch inside the customer's cloud account instead.

We're planning to support self-hosted builders on a per-project basis, so organizations can additionally choose for each project where its builds should execute.

More details about self-hosted builders will be available soon.

We wanted to make it simple to try Depot in your existing GitHub Actions workflows. So, we released depot/build-push-action, which implements the same inputs and outputs as docker/build-push-action but makes use of our depot CLI to run your build.

Bonus: We now support OIDC token authentication in GitHub Actions 🎉

Our new GitHub Action also allows you to use GitHub's OIDC token as authentication to depot build. No more static access keys in GitHub Actions!

If you set the permissions block in your action workflow and make use of depot/build-push-action, you can authenticate builds via OIDC and don't need to generate a user access token.

jobs:
  build:
    runs-on: ubuntu-20.04
    permissions:
      contents: read
      id-token: write
    steps:
      - uses: actions/checkout@v3
      # The depot CLI still needs to be available in your workflow
      - uses: depot/setup-action@v1
      - uses: depot/build-push-action@v1
        with:
          context: .
          push: true

If you want to see an example of this new authentication method in action, you can check out our moby/moby benchmark workflow.

From the first line of code we wrote for depot, we wanted it to be very easy to switch to it from docker. In our opinion, it's critical that trying out new tools and technologies has the lowest possible barrier to entry. So, we built our CLI with that in mind: it takes all the same flags as docker build right out of the box.

We released depot 0.1.0 which makes a small change to the built image transfer. With this release, we now leave the image on the remote builder instance. This was previously done by passing the --no-load flag. We decided to switch this behavior so that when you are running builds in your CI environment you are not unnecessarily waiting for the image to be transferred back to you when you may not need it. If you do need the built image for running it locally or running integration tests in CI, you can use the --load flag to tell our remote builder to transfer the built image back. You can read the full release notes here.