You can now create public, shareable links to your builds right from the build page! You can either click the new Share build button in the top right corner, or you can use the new ShareBuild API.
As a nice side effect, users of the Depot API can also view the metadata of specific builds.
We've released a new Depot Cache Explorer page that allows you to view cache entries of all kinds in one place.
Available in the sidebar, the Cache Explorer replaces the Docker- and GitHub-specific pages and offers capabilities to help you stay in control of your Depot Cache.
Highlights include:
Filter cache entries by their type (GitHub vs Docker), architecture (x86 vs arm64), or name
Bulk delete all entries matching current filter criteria, or specific entries by checkbox selection
Expand Docker cache entries to view your layer cache in greater detail
View your average storage usage for the past 30 days
The build.platforms key has been part of the Compose spec since 2022 but has gone unimplemented in upstream buildx.
The build.platforms key allows you to specify the platforms you want to build for in your compose file for a defined service. This is useful when you want to build a multi-platform image for a service defined in your Compose file:
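For example, a Compose file can declare target platforms for a service like this (a minimal sketch):

```yaml
services:
  app:
    build:
      context: .
      platforms:
        - linux/amd64
        - linux/arm64
```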
Running docker buildx bake -f <your-compose-file>, however, fails to build the image for the specified platforms, because docker buildx bake does not support the build.platforms key in the Compose file.
We've upgraded our depot bake command to fix this annoyance and fully support build.platforms in a Compose file.
We've added a nice UX improvement to the Project build page. You can now see the general health and performance of your Docker image builds for that project at a high level. We surface the duration of each build, successes, failures, and the average build time over the last 30 builds.
We also surface information about your cache and average hit rates over the last 30 builds.
We've upgraded our GitHub Actions Cache UI to offer organization owners better control over their cache entries.
In addition to viewing all entries currently in your cache, you can now filter by name, delete all entries matching that filter, as well as select multiple specific entries for bulk deletion.
We've made your usage history for the past year available on your Organization Settings page.
Click any invoice date to download a detailed report of your organization's usage for the corresponding billing period.
The report contains detailed usage information for your container builds, broken down by project. It also includes detailed usage information for your GitHub Actions, broken down by repository, workflow, and runner. Finally, we also include a summary of your total storage usage for the period.
You can now cap the monthly minutes your team can use to run GitHub Actions Workflows! Usage caps are a great way to ensure you stay within budget and help you plan for the future.
You can configure a usage cap for your team in the Current usage section of your Organization Settings.
We've made clearing all your GitHub Action cache entries easier with a single button click. On the Cache tab of your GitHub Actions dashboard in Depot, you will now see all your cache entries, total cache size, and the option to purge all of your cache entries at the top via a trash icon button.
We've heard your feedback and have launched a streamlined onboarding experience for new users!
Our upgraded account provisioning system now automates some previously manual steps, and we've added a simple landing page to help you orient yourself within the app.
Fewer forms to fill out means you can start building with Depot faster than ever.
Ubuntu 24.04 GitHub Actions runners are now available in beta, using the beta runner image definition from GitHub. These runners use the same instance types as the existing Ubuntu 22.04 runners.
We're excited to release a new specialized GitHub Actions runner optimized for I/O-bound workflows. This new runner is designed to handle workflows that are bottlenecked by disk I/O, such as those involving large file transfers, database operations, or other disk-intensive tasks.
The new I/O-optimized runner comes with fast local NVMe SSDs for higher IOPS and disk throughput than the traditional runners. They are configured with a local SSD as the write cache, and the reads are distributed between the EBS root volume and the local SSD.
They are available now in beta via the new -io label for both Intel & Arm:
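As a sketch, a workflow opts into the I/O-optimized runners via its runs-on label. The exact label names below assume the -io suffix attaches to the standard depot-ubuntu-22.04 labels; check the runner docs for the authoritative list:

```yaml
jobs:
  build:
    # label names assumed: depot-ubuntu-22.04-io (Intel), depot-ubuntu-22.04-arm-io (Arm)
    runs-on: depot-ubuntu-22.04-io
    steps:
      - uses: actions/checkout@v4
```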
You may have noticed that we've rolled out a new full-width dashboard UI for Depot. This new design is cleaner and gives you more space to view your projects, dive into your builds, and monitor your GitHub Actions in real time.
You can start playing with the new Dashboard design by going directly into your Depot organization. If you have any feedback or things you'd like to see, let us know in our Discord Community.
Our latest release of the depot CLI introduces a faster way to build Docker Compose files that wasn't possible before.
With a new x-depot bake extension, you can now specify multiple projects to build in a single depot bake command, allowing each project to build in parallel on its own BuildKit builder with its own isolated cache!
Similar to x-bake, the x-depot key is a Docker Compose extension that lets you optionally specify the project ID for each service in your docker-compose.yml file.
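As a sketch of the idea (the exact extension schema may differ; see the bake docs), a Compose file might route each service to its own Depot project like this:

```yaml
services:
  app:
    build: ./app
    x-depot:
      project-id: <project-id-1>
  db:
    build: ./db
    x-depot:
      project-id: <project-id-2>
```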
Just like before, if you run depot bake -f docker-compose.yaml all targets are built, but now each project-id will be built in parallel on its own dedicated builder and cache.
We now have Nydus support available in private beta for your Docker image builds. Nydus is an accelerated container image format from the Dragonfly image-service project; it can pull image data on demand, so a container can start without waiting for the entire image to download.
You can run your depot build command and specify Nydus as the output format via the --output flag:
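A rough sketch of what that looks like (the exact output type string and parameters here are assumptions, not confirmed syntax):

```shell
depot build . --output type=nydus,name=registry.example.com/app:latest,push=true
```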
If you'd like to try out Nydus with your Depot project, reach out in our Discord and let us know.
Now available in the Depot API is the ability to manage project tokens for all your projects. You can create, list, and delete project tokens via the API.
To create a new project token, you can use the following API example from our Node SDK:
See our API reference for docs on all of the Depot API endpoints.
You can now launch Depot GitHub Actions Runners with 2 CPUs & 2 GB of memory at half the cost of our default runner: $0.002 per minute. These runners are great for lightweight build workloads where you want to further optimize for cost.
We've also launched a new 64 CPU & 256 GB memory runner for the most demanding workloads. This runner is great for large Rust builds where you want to leverage as many CPUs as possible. It is priced at $0.128/minute.
Both of these new runner types are publicly available for all plans. You can use them today using any of the new labels below. Check out our GitHub Actions Runners documentation for the complete list of available labels.
We've published a new integration guide on how to use Depot with Fly.io to speed up your container builds. You can use Depot to build and push your container images to your Fly application registry and then run a single command to deploy them. You can read the complete guide in our Fly.io integration doc.
We've shipped another update to Depot-managed GitHub Actions Runners, this time giving your jobs larger disks sized according to the number of CPUs you request. This change is available for both Intel and our ARM runners in beta.
Below is the full breakdown of disk sizes based on the label you choose:
| Label | CPUs | Memory | Disk Size | OS | Architecture |
| --- | --- | --- | --- | --- | --- |
| depot-ubuntu-22.04 | 2 | 8 GB | 100 GB | Ubuntu 22.04 | Intel |
| depot-ubuntu-22.04-4 | 4 | 16 GB | 150 GB | Ubuntu 22.04 | Intel |
| depot-ubuntu-22.04-8 | 8 | 32 GB | 300 GB | Ubuntu 22.04 | Intel |
| depot-ubuntu-22.04-16 | 16 | 64 GB | 600 GB | Ubuntu 22.04 | Intel |
| depot-ubuntu-22.04-32 | 32 | 128 GB | 1200 GB | Ubuntu 22.04 | Intel |
| depot-ubuntu-22.04-arm | 2 | 8 GB | 100 GB | Ubuntu 22.04 | arm64 |
| depot-ubuntu-22.04-arm-4 | 4 | 16 GB | 150 GB | Ubuntu 22.04 | arm64 |
| depot-ubuntu-22.04-arm-8 | 8 | 32 GB | 300 GB | Ubuntu 22.04 | arm64 |
| depot-ubuntu-22.04-arm-16 | 16 | 64 GB | 600 GB | Ubuntu 22.04 | arm64 |
| depot-ubuntu-22.04-arm-32 | 32 | 128 GB | 1200 GB | Ubuntu 22.04 | arm64 |
You can read more about configuring Depot GitHub Actions Runners in our GitHub Actions quickstart, and feel free to ask any questions or report any issues in our Community Discord.
We've added support for AWS PrivateLink to Depot remote container builds. Now, you can connect to your private resources in your AWS account from a dedicated Depot builder connection without having to run the infrastructure yourself or expose your resources to the public internet.
This feature is available to all Depot customers on our Business plan. If you'd like to learn more about configuring AWS PrivateLink for your builds or have any questions, please shoot us an email.
We've shipped a new way to get help or ask questions directly from within Depot to make it easier to get help when you need it. You can click Contact us in the top right corner of Depot and submit a bug report, feature request, or general question directly to us.
You can also join our Community Discord to chat with other Depot users and our team, or check out our documentation for more information on how to use Depot.
It's been a few short weeks since we took the covers off our latest product, Depot-managed GitHub Actions Runners, for faster CI jobs in GitHub Actions. We shipped the initial version focused on Intel runners with 30% faster compute, 10x faster caching, and half the price of GitHub-hosted runners.
But today, we're announcing that we now have ARM runners in public beta for everyone to use in their existing GitHub Actions jobs. ARM runners are great for building artifacts and binaries for Arm-based devices like Apple M3 chips, Raspberry Pi, and more.
We've always had the ability to build Docker images natively for ARM, but now you can run your CI jobs on ARM runners as well. Getting started is easy. Just update your runs-on label to specify an ARM runner:
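For example:

```yaml
jobs:
  build:
    runs-on: depot-ubuntu-22.04-arm
    steps:
      - uses: actions/checkout@v4
```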
We've rolled out a new best practice integration guide for building Docker images for Rust. The guide is now available in our new Rust section of our docs. In the guide, we walk through how to configure a Dockerfile for a Rust project, including how to leverage cargo-chef for dependency management, use sccache for finer-grained caching, and use BuildKit cache mounts in Depot for even faster builds.
We've launched a new version of our Daggerverse module for interacting with Depot from your own Dagger functions. The new version includes an upgrade to bake to support the latest features, like our new ability to save images to our ephemeral registry.
You can grab the latest version and see an example of the new bake functionality in our Daggerverse module repository.
In the latest version of the depot CLI, we've added a new command: depot pull-token. This command allows you to generate a short-lived token for pulling container images from the Depot ephemeral registry.
This command comes in handy if you need to pull container images you have built with Depot from systems that use docker pull under the hood. One example is specifying a container for a GitHub Actions job.
Here is an example of how you can use this command:
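A minimal invocation looks like this (sketch):

```shell
depot pull-token --project <your-project-id>
```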
This will generate a short-lived pull token for the given project ID. You can optionally specify a build ID to generate a pull token for a specific build:
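For example (passing the build ID as an argument is an assumption; check depot pull-token --help for the exact form):

```shell
depot pull-token <build-id> --project <your-project-id>
```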
We've rolled out a new OIDC trust relationship with Mint. This allows you to use Mint as your CI provider and authenticate to your Depot project for fast container image builds via an OIDC token exchange. It's a great way to authenticate to Depot without needing to manage static credentials in CI.
We're excited to announce that we've now made Depot ephemeral registries available to depot bake commands so that you can save built images for multiple targets for later use in your CI workflows, to share with your team, or to push to remote registries.
You can read more about how to leverage the ephemeral registry for all of your bake commands in our announcement blog post and get a full rundown on ephemeral registries in our docs.
Our latest release of the depot CLI includes an excellent new enhancement to depot bake --load that was previously not possible.
Before the latest release, depot bake --load would always pull back all targets in a bake file rather than just the target specified in the build. For example, if you had a bake file with 10 targets and you only wanted to build one of them, you would still have to pull back all 10 targets.
Instead, we now only pull back the targets specified in the build. This means that if you have a bake file with 10 targets and you only want to build one of them, you will only pull back the one target.
It works for groups as well! If you have a group with two targets and you request that group in your bake command, we will only pull back the two targets in that group.
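For example, given a bake file along these lines (Dockerfile names illustrative):

```hcl
group "test" {
  targets = ["app", "db"]
}

target "app" {
  dockerfile = "Dockerfile.app"
}

target "db" {
  dockerfile = "Dockerfile.db"
}
```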
So if you run depot bake --load for the test group, we will only pull back the app and db targets.
A new release of our depot CLI is now available with a few improvements and bug fixes. The biggest one is the ability to create Depot projects directly from the CLI:
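A sketch of the new command (project name illustrative):

```shell
depot projects create my-new-project
```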
This creates a new project in your Depot organization with the default region of us-east-1 and the default cache storage policy of 50 GB per architecture. If you want to customize the region and cache storage policy, you can use the --region and --cache-storage-policy flags:
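For example (region and size values illustrative; the cache storage policy is assumed to be given in GB per architecture):

```shell
depot projects create my-new-project --region eu-central-1 --cache-storage-policy 100
```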
In addition, we also shipped a few other improvements and bug fixes in v2.54.1:
Better error reporting for merging manifests... steps
Allow tag overrides to apply when building Compose files via depot bake
Depot-managed GitHub Actions runners are now available in beta 🎉
Our runners are faster, half the cost of GitHub's runners, and fully managed inside AWS. They allow you to get the maximum performance out of your GitHub Actions workflows by being closest to your repositories & infrastructure, while also saving you money.
Runners live next to your existing Depot builders. This means every workflow that leverages Depot runners will have the fastest possible network connection to your BuildKit builder and layer cache.
If you'd like to join the beta and try them out for yourself, please reach out to us via email. You can learn how to configure your GitHub Actions to use Depot runners in our documentation.
We've added a new feature to the build details view that allows you to filter your build logs by different facets:
Successful steps: Show the steps that were completed successfully.
Failed steps: Show the steps that failed.
Canceled steps: Show the steps that were canceled.
Cached steps: Show the logs for the steps that were a cache hit.
Uncached steps: Show the logs for the steps that were a cache miss and had to be rerun.
This should make debugging large Dockerfile or depot bake builds much easier. Please let us know if you have any other feedback for things we could add here!
We've rolled out a new version of the depot CLI, which includes a number of improvements and bug fixes. The biggest one is an updated depot bake command to support the matrix block in a bake file. You can use it to reduce some duplication in your bake files or even dynamically generate targets.
In addition, we also shipped a few other improvements and bug fixes:
Fix for reporting build errors when using docker build and docker buildx build commands
Fix for parsing out cwd:// from incoming files
Added support for pushing multiple tags with depot push command
Add explicit support for linux/arm/v8 via the --platform flag
We shipped one of our most requested features, the ability to visualize your build context. The new Context tab in your build insights lets you see exactly what files were shipped to Depot for a given build.
Want to know all of the files in your build context? We've got you covered. You can reset your project cache to wipe out the existing build context. Your next depot build will then be your full context so you can easily debug everything that is in it. Subsequent builds only transfer what changed in your context.
To go with our new ephemeral registry, we introduced a new depot/pull-action GitHub Action that can be used to pull an image into a workflow via a given build ID.
The depot/build-push-action has the build ID stored in its output. So you can use that output to pull the image into your workflow to run integration tests, deploy to a staging environment, or whatever else you need to do.
We started the month with our much anticipated ephemeral registries. You can now include a --save flag in your build commands to persist your built image to a temporary registry. We introduced our new depot pull command to pull images from this registry and use them in your CI/CD pipelines. We also added a new depot push command to forward images from the registry to your destination registry.
You can read up on the new commands in our CLI docs.
Trust policies allow you to configure a connection between your Depot project and GitHub Actions, CircleCI, or Buildkite. This connection will enable you to perform an OIDC token exchange with your CI provider to dynamically authenticate to your Depot project without storing static access tokens in your CI configuration.
We've added these trust policies to our ProjectService API so that you can list, create, and remove trust policies via the API.
We shipped some updates to the build insights we launched last month!
You can now filter the logs of a given build to quickly search for specific build steps or commands. This is helpful when you have a large Dockerfile with many steps and want to find a particular one.
You will also notice that we now show the size of each layer in the logs view as well. This allows you to quickly see how large each layer is for each step in your Dockerfile.
We've launched a new CI integration guide that dives into how you can use Depot with AWS CodeBuild for faster Docker image builds. As a bonus, we show you how to use AWS CodeBuild's Lambda compute type to build Docker images via Depot so that you get even faster CodeBuild builds without the overhead of EC2 instance provisioning.
You can check out the complete integration guide in our AWS CodeBuild docs.
We mentioned this last month, but we've added some final touches to our beta feature, allowing you to save a build in a temporary registry. You can now run a build and save the resulting Docker image in a temporary registry for later use.
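For example, adding the --save flag to a normal build:

```shell
depot build --save .
```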
This will store the image in a temporary registry. You can use the depot pull command to pull it back out by build ID.
The save and pull workflow is great for folks who need to build an image once and then use it multiple times in different integration tests or environments.
If you have installed the latest version of the depot CLI, you can try it out now.
Depot is now available on the AWS Marketplace for folks looking to integrate with their existing enterprise contracts at AWS. We offer the ability to purchase Depot in the marketplace for those interested in our Enterprise plan.
We're very excited to release our Dockerfile Explorer that allows you to introspect the low-level build (LLB) steps that a Dockerfile transforms into. It's great for visualizing what each step in your Dockerfile is doing at a file system level and how different aspects of your build impact the LLB operations and, ultimately, the Docker layers produced during a build.
If you're interested in how we built it and how it works, we have a detailed blog post that goes into the details.
All builds are now running on our latest infrastructure provisioner, which is designed to further reduce the time to start a build. We wrote a detailed history of how our backend build architecture has evolved. You can read it here.
In short, we've optimized our provisioning system to leverage a new standby pool architecture, and it has significantly reduced the time it takes for us to start a given build by avoiding EC2 cold boot time.
We launched a new organization usage visualization that allows you to track your monthly Depot usage. Get insights into how many builds you're running, how much build time you've saved, and how much cache storage you use.
We've added a new flag, --sbom, to both the build and bake commands in our CLI. It can generate a Software Bill of Materials (SBOM) on every build. In addition, you can also specify a --sbom-dir parameter to have the generated SBOMs written to a local directory that you can then upload to your own SBOM analysis tools.
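For example, generating SBOMs and writing them to a local directory (directory name illustrative):

```shell
depot build --sbom --sbom-dir ./sboms .
```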
For Depot Drop Week #02, we wanted to bring better visibility into the entire Docker build. We launched a new feature called Build Insights that gives you a detailed view of what's happening inside a Docker build. You can see exactly what happened during a build via the raw Docker logs, analyze each step in the build, visualize the parent/child relationships between steps, and get automatic suggestions to improve your Dockerfile.
We launched a new section in our documentation, languages & frameworks, that will be a one-stop shop for our recommended best practices when building Docker images for a given language or framework. To kick things off, we documented the best practices for building Docker images for Node.js & pnpm.
We will add more over the coming weeks, but we also welcome anyone from the community to submit their ideas on our docs repo.
We shipped a new Node.js package @depot/cli that you can install into your Node projects to invoke CLI calls directly from your code. No more needing to install the CLI, configure it, etc. You can now install the package and start using it.
We released a new integration with CircleCI OIDC this past month, bringing the total number of CI providers that support trust relationships and the OIDC token exchange up to three:
We got a new logo and changed a few style things across Depot to better align with what we're building. We hope you like it! We also have a new brand assets section if you want to use our logo anywhere.
We now have usage caps! You can now cap the monthly build minutes your team can use. Usage caps are a great way to ensure you stay within budget and help you plan for the future.
The latest version of the depot CLI updates the configure-docker command to now configure Depot as the default buildx driver for all docker buildx build commands. This is in addition to the existing docker build support we released last month.
With this new driver, you can now use docker buildx build to build your Docker images with Depot and take advantage of all the benefits of Depot's caching and insights. So you can now use Depot with other developer tools that call docker buildx build under the hood, like Dev Containers, AWS CDK, and Docker Compose.
To go with our latest updates to our configure-docker command, we've rolled out two new integration guides to help you get started with Depot in your existing workflows and devtools.
We have joined the Cloud Native Compute Foundation (CNCF) as a silver member to help bring cloud native technologies to everyone. We're excited to be featured on the famous CNCF landscape and to be part of the CNCF community at large. You can read more about our membership in our full announcement post.
The Semgrep integration is in addition to our existing Hadolint integration. When you run depot build --lint, we will run Hadolint and Semgrep and return a combined list of issues. You can also use the --lint-fail-on flag to set the severity level at which you want to fail your build.
We released a new depot configure-docker command that installs Depot as a Docker CLI plugin and makes Depot the default builder for docker build and docker buildx build commands, making it even easier to get faster Docker image builds locally and in CI without changing a single line of code. This unlocks a lot of Depot integrations with other great developer tools like Dev Containers, goreleaser, and AWS CDK. Check out our blog post for more details.
Upgrade to our latest CLI version to access this command: depot/cli.
depot/use-action GitHub Action
To go with our new Docker CLI plugin, we also released a new GitHub Action, depot/use-action, that makes it easy to use Depot as the default builder for your GitHub Actions workflows. You can use this action to get faster Docker image builds in your GitHub Actions workflows by dropping the new action above your Docker build steps. Nothing else needs to be added or changed.
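Dropping it into a workflow looks roughly like this (action version tag assumed; authentication via a project token or OIDC trust policy is omitted for brevity):

```yaml
steps:
  - uses: actions/checkout@v4
  - uses: depot/use-action@v1
  # subsequent docker build steps now run on Depot builders
  - run: docker build -t app:latest .
```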
To close our Drop Week #01, we announced a new authentication mechanism for open-source maintainers looking to get faster Docker image builds for public fork pull requests in GitHub Actions. The new mechanism allows maintainers to route Docker image builds for public fork pull requests to ephemeral Depot builders, so those builds run faster without compromising the main layer cache. Read more about our new OIDC issuer that makes it all work.
We released our free open-source Docker registry for Hugging Face's top 100 public AI models. You can use depot.ai to pull top models into your Docker image via a single COPY command in your Dockerfile. Any Docker image build that needs a generative AI model is orders of magnitude faster. Check out our announcement blog post for more technical details.
We rolled out our new cache storage architecture to all Depot-hosted regions. Cache storage v2 moves away from our old EBS volume-based architecture to a new one using a Ceph storage cluster, allowing us to scale storage to meet your project's needs and provide 10x the write throughput and 20x the read throughput for each project's cache.
View what's in your cache
There is a new Cache view in the Depot UI when you click on any of your projects. This view shows you exactly what is in your cache, how large each entry is, which line in your Dockerfile it's associated with, and which architecture that cache entry is for.
Choose your cache size
As a bonus, all projects can now be configured to have the cache size that makes sense for what you're building. Need to build an image that has Stable Diffusion embedded in it? No problem. Select our largest cache size of 500 GB.
We removed the beta flag and made our public API available to everyone so you can access the fastest place to build Docker images from your own code. If you're looking to build Docker images on behalf of your users, this is the API for you. You can call our build API from your code to acquire a Depot builder and run an entire Docker image build via Depot.
We've already seen a few folks build integrations with Depot and are excited to see more. If you're interested in building Docker images from code, check out our API docs and reach out if you have any questions.
We have built up quite a few integration guides over time. These are helpful for folks who are looking to quickly get faster Docker image builds in their existing CI workflows.
This month we added an additional guide to the list, integrating Depot with Jenkins. We have eight guides to help you get faster Docker builds in CI providers.
What tags were specified (e.g., herault in the screenshot)
Was the image pushed to a registry or loaded back into the Docker daemon
What line in your Dockerfile busted the cache
We plan to add filter functionality to this page so you can quickly find builds by tag, status, or whether or not the image was pushed to a registry.
Build summary links
You can now jump directly into the insights and visualization of a given build executed on Depot by clicking the Build summary link that both depot build and depot bake now output.
Folks leveraging self-hosted Depot builders can now reset the actual builders directly from Depot. Like resetting the cache for a given project, you can now reset the entire BuildKit machine backing your builds. Navigate to your Project Settings and click the Reset Machines button at the bottom.
With our latest CLI version, you can now specify a --lint flag and run a linter on your Dockerfile. We also added the ability to set the lint level of error that can fail your build. Check out more details on our blog post, lint a Dockerfile on every build.
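For example (severity value illustrative):

```shell
depot build --lint --lint-fail-on error .
```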
We launched on Product Hunt on May 17th with our accelerated local builds with instant cache sharing, and it was a blast! We had a ton of support from the community, and we appreciate everyone dropping in to share their experiences and show their support.
We also hosted a Show HN over on Hacker News to chat in more detail about our accelerated local builds. It was awesome to dive into the technical details of how we accelerate local builds and unlock instant shared caching across teams. As always, we got a lot of great feedback from the community, and we're excited to continue to iterate.
As mentioned, we launched our accelerated local builds with instant cache sharing. It is a huge step forward for developers who want to build their Docker images faster locally and share the layer cache across their team. We've rethought what it means to load an image back after it's built and made Depot turbo builders available for local builds.
You can read the full details of how we did it and all of the bonus features that come with our new --load functionality on our announcement blog post.
All of our Depot documentation is now open-source! We wanted to make it easier for folks looking to contribute new ideas or help us fix issues. You can click the link at the bottom of any page to edit the page on GitHub.
We're rapidly iterating on enhancements for accelerating builds both locally and in CI. For example, we now have a significantly faster --load that makes it possible to load your image back into your local Docker daemon in seconds.
We've added a few more enhancements to make things even faster. We made exporting layers for both --load and --push significantly quicker. We effectively made the export run in parallel rather than serially. The net effect is 2x faster builds on average.
We also updated build and bake in how they search for a depot.json file. Previously, it would search for a depot.json file in the root directory. Now, it will check the filepath specified for a given file first and then recursively search up from there. You can still pass in the project ID for either command via --project instead of using a depot.json file.
To reduce the time taken to create image layers, Depot builders now hash layer contents using SIMD-accelerated SHA256 computations: AVX-512 on Intel CPUs and SHA2 instructions on Arm CPUs. This change can save an additional 15% of build time for larger layers, which is especially important when packaging machine learning models in containers.
We have been working on depot/kysely-planetscale for a few months now, and we cut over to Kysely from Prisma with our switch to PlanetScale. We couldn't be happier with the extra performance this has given us. You can read the full story on why we chose Kysely and PlanetScale on our blog.
Both depot/build-push-action and depot/bake-action got updates that allow you to enable/disable SBOMs, provenance, and attestations. In addition, you can specify a build-platform for both actions if you want to force a build to run on Intel or Arm builder, regardless of the requested container platform.
You can now view a full breakdown of your build-minute usage across all your projects. This is a great way to see which projects are using the most build minutes and get an idea of your estimated monthly bill.
The first week of April was jam-packed as we closed our YC W23 batch with the famous Demo Day. It was an excellent opportunity to share what we've been working on with the world and get feedback from the YC community. We get asked a lot about our YC experience, and we can't say enough good things about it. The community is fantastic, and we're excited to be a part of it! We're planning on writing some more things about our experience in the future to help inspire others to apply. In the meantime, if you're considering applying, feel free to contact us; we'd be happy to share our experience.
We're huge fans of Buildkite, and we're excited to add support for Buildkite's OIDC integration! This means you can now use Depot with Buildkite without having to create any Depot API tokens for authentication. You can find the instructions for setting up a trust relationship to leverage Depot inside your Buildkite pipelines in our integration guide.
If you follow our depot/cli repository, you may have noticed that we've been shipping many new features in our CLI. We've been adding the capabilities to depot that we've wished existed for docker build itself. Here are the highlights of what we've added:
List your projects and builds with depot list
We've added a top-level depot list projects command that allows you to see all your organization's projects. You can then select a given project to see all of the builds for that project.
We've also added a depot list builds command that will list all of the builds for the project defined in your depot.json config file, or you can pass the --project flag to list the builds for a specific project ID. If you want to parse the output of this command, you can use the --output flag to get the result in JSON or CSV format.
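A quick sketch of both commands (the project ID is a placeholder; swap in your own):

```shell
# Browse all of your organization's projects, then drill into one
depot list projects

# List builds for a specific project, machine-readable for scripting
depot list builds --project your-project-id --output json
```

The --output flag also accepts CSV if JSON isn't convenient for your tooling.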
Push your image to multiple registries at once with --push
We've added the ability to push your image to multiple registries simultaneously. This is a massive speed improvement for organizations that need to push their image to multiple registries.
Before this release, running a build that tagged and pushed to multiple registries would push to registry1 and then push to registry2 serially. With this new release, we can push to both registries in parallel.
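As a sketch, a single build can now tag and push to several registries in one pass (registry hostnames here are placeholders):

```shell
# Both tags are pushed in parallel at the end of the build
depot build . --push \
  -t registry-1.example.com/team/app:v1.2.3 \
  -t registry-2.example.com/team/app:v1.2.3
```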
Build your image with --load and --push at the same time
We've also added the ability to build your image with --load and --push simultaneously, which means the image is both downloaded to the local machine and pushed to a remote registry in one step. Previously, this required running two separate builds. This is a massive speed improvement for organizations that need to push their image to a registry and load it into their local Docker daemon.
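For example (registry and image name are placeholders):

```shell
# One build: the image is pushed to the registry and loaded into the
# local Docker daemon at the same time
depot build . --push --load -t registry.example.com/team/app:ci
```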
Intelligent loading of only changed layers with --load
By default, docker buildx build --load . returns the tarball of the entire image to the client, even when the client may already have some or all of the layers cached locally. This is a massive waste of bandwidth and time compared to loading only the new or changed layers.
With this release, we've made this more intelligent. When you run depot build --load . locally, we send back the diff between what you have locally and what the build has produced. This means that only new or changed layers need to be downloaded to the client.
This is a massive speed improvement for organizations that need to load their image back into their local development environment.
This optimized diff also skips the need to produce a single tarball for the whole image, so even in environments that may not have any local layers downloaded, like CI, we are able to skip the slow tar process and download the layers directly, in parallel.
You can now see exactly how much of your cache you're using on a project-by-project basis, with visibility into both Intel & Arm caches. You can also see how much time you save with each build and how much of it was cached. We also have an initial view into the exact steps of your build that got executed, whether they were cached, and how long each step took.
We're really excited about this initial version and are already working on several more insights that we can surface on every build. So if you have things that you would like to see here, please let us know!
We made a lot of landing page improvements to Depot, but this one is our favorite. It is a live snapshot of the time users have saved over the past seven days using Depot to build their Docker images. We are really proud of this one, and we hope you like it too.
Depot builder machines now come with 16 CPUs and 32 GB of memory, 4x the size of our previous machines! With our goal of being the fastest place to build Docker images, we are always looking at new ideas that make fast builds on Depot even faster, and turbo builders are one of those ideas. They are available for both Intel and Arm builds without any additional configuration.
We have always believed in showing rather than telling, which is the philosophy behind the benchmarks on our landing page. We benchmark real-world open-source projects, building them with both depot build and docker build in GitHub Actions, for every upstream commit. And the benchmarks themselves are open source: you can click on any of them and see the side-by-side comparison of every run.
Mastodon is a free, open-source social network server based on ActivityPub, where users can follow friends and discover new ones. They have been building multi-platform images in GitHub Actions, with 3-hour build times.
We set up a benchmark using Depot and the results shattered our existing records. We built Mastodon's multi-platform image 53x faster than building it in GitHub Actions with Docker. We hope to contribute this back upstream to Mastodon in the weeks ahead.
We have been working on a public API for Depot for a while now, and we are excited to announce that it is now in private beta. You can now use the Depot API to build images from your own applications and services. Check out our API documentation for more details. If you're interested in building Docker images quickly from your own applications and services, contact us.
To go with our new depot bake command, we released a depot/bake-action GitHub Action: a drop-in replacement for docker/bake-action that routes your builds to Depot builders. Check out our GitHub Actions integration guide for an example of how to use it.
We have a major new release of our depot CLI. This release includes many new features to expose more of Depot to the command line and make builds even faster. Here are some of the highlights:
depot bake comes to Depot! Build all of the images that compose your application from a single HCL, JSON, or Compose file. Check out our announcement blog post for more details.
depot cache reset allows you to reset the cache for a specific Depot project. This is useful if you want to clear the cache of your project before running a build in CI for cases where you want a totally uncached environment.
--build-platform is now available for both our build and bake commands. By default, we run on Intel or Arm builders depending on the container platform or both in the case of multi-platform builds. This flag allows you to force builds to run on Intel or Arm builders, regardless of the requested container platform.
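A few usage sketches of the new commands and flag (the project ID and flag values shown are illustrative):

```shell
# Build every target defined in an HCL, JSON, or Compose file
depot bake -f docker-bake.hcl

# Reset a project's cache ahead of a fully uncached CI run
depot cache reset --project your-project-id

# Force the build onto Arm builders regardless of the container platform
depot build . --build-platform linux/arm64
```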
We launched our official Discord community this week! We use Discord internally to communicate amongst ourselves, and we figured it would be great to give those who are excited about Docker containers and/or Depot a dedicated space where you can chat with others who share your interests, like us 😊
We did a bit of cleanup around our documentation and added more information about what is happening at an architecture level. We already had the latter in our self-hosted documentation, but we have never documented or shared that this is the exact same architecture we use internally as well. So now you can get an idea of what is happening under the hood when you use Depot, when to use it, and when it makes sense to not use it. Check it out in our introduction docs.
When you build your Docker image in CI, you usually want to push it to your registry afterward. However, the further your builder is from your registry, the slower the network latency. With project region selection, you can choose to have your Depot builders launch in the region that is closest to your registry so that you minimize the latency of pushing your image to your registry. Read more about this new feature in our announcement blog post.
We applied to the YC W23 batch at the end of September and got accepted three days before Kyle picked up his life to move to France. We are excited to join the YC family and are looking forward to the next 3 months of the program. Be sure to check out our Launch YC.
It's a foundational goal to make Depot as easy as possible to integrate into your existing tools and processes. On that front, we wanted to save several clicks when trying to plug Depot into your existing CI provider. So, now when you create a project, you can choose your CI provider and get the step-by-step configuration with the workflow config to route your Docker image builds to Depot.
We rolled out our new infrastructure provisioning system that allows for faster build starts, faster CLI connections, and builders running on the latest generation of AWS compute. We will be rolling out new platform releases every few months as we upgrade and improve the entire build system behind the scenes.
On the topic of CI providers, we added a new integration guide on how you can get faster image builds with Bitbucket Pipelines using Depot. Take a look at the guide in our docs.
Managing different CLI and tool versions can be annoying. There are numerous tools out there for improving this, but they always seem to be specific to one tool (i.e., tfenv for Terraform). asdf allows you to manage multiple versions of multiple tools with a single CLI. So, we added an asdf plugin that allows you to install and manage depot CLI versions. Check out the plugin repo for details.
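Assuming the plugin is registered under the short name depot, the usual asdf workflow applies:

```shell
# Install the plugin, then install and select a depot CLI version
asdf plugin add depot
asdf install depot latest
asdf global depot latest
```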
October was a big month for us as we announced that we were leaving our day jobs and going full-time at Depot. It's been a busy year juggling day-to-day work with Depot on our nights and weekends, but it's been entirely worth it, and we are really proud of what we have been able to bootstrap so far. This is just the beginning for Depot, and we have a much bigger vision that we are putting into motion with this change. More news on our new venture, now named Depot Technologies Inc, in the coming weeks as we start to close out the year.
Building a company
Not a feature or bug fix, but it's just as important. We have largely been getting everything set up for Depot to be a full-time company. We are putting all the pieces in place so that we can continue to build Depot and add the new capabilities we have been dreaming about since January. A lot of work has gone into optimizing our documentation, setting up custom onboarding for everyone, and speaking with you about what other problems Depot can help you solve. We have also been working on fundraising so that we can keep this ship afloat while we shoot for the moon. More on that soon.
Kyle moved to France
What's scarier than starting a new company, building a new product, and leaving your day job? Doing all of that while simultaneously moving your family to another country. We decided to relocate to France from Portland, Oregon, and it's been a wild ride. We are now settled in and have been enjoying getting familiar with our new city and new routines. It's all very exciting, and maybe a bit overwhelming at times. If you ever find yourself in Montpellier, France, and want to discuss slow builds taking years off our lives, please let me know, and we can grab a coffee, croissant, or a bottle of wine.
In our experience, the lower the bar for users to try out a new product or service, the easier it is to get in there and see if it's valuable. So, we added the ability to log in with Google and Microsoft to make it even easier to get started with Depot. We are also excited to announce that we now have SSO capabilities for those looking for that kind of thing. Reach out to us at contact@depot.dev and we can help get you set up.
GitLab CI is a pain to build Docker images with because of the tradeoffs you have to make to get it done. We wrote a blog post that talks about these tradeoffs, and how they can cause build times to explode and open security holes that you would rather keep closed. Depot makes this much simpler because your image builds get routed to our remote builders with a persistent cache. So, you can build your images without Docker-in-Docker (dind) and full root permissions.
A big milestone for our depot CLI, which is a drop-in replacement for docker build: it is now at 1.0.0. There are no breaking changes in this release; we jumped to 1.0 so that we can release new versions with proper semver versioning (major.minor.patch).
We have been working hard to make Depot more stable and reliable. We have been running Depot in production for a few months now and have been able to identify and resolve several issues. The one we have been working on the most is the stability of builds across cloud providers. Today, we support image builds for Intel and Arm architectures by routing builds for each given architecture to their respective cloud provider (AWS for Arm and Fly for Intel).
However, this creates a coupling to cloud providers that isn't ideal for operating our remote builders at scale. The solution we have in beta currently is to route builds to different cloud providers based on outages at our existing ones, capacity restrictions, etc. This is a much more robust solution that allows us to always be ready to process a build without interruption.
Our documentation now has search, so you can find exactly what you need. We've also added a lot more examples of how to integrate Depot into existing GitHub Actions workflows.
Self-hosted Depot builders are here for everyone! We worked with our early adopters to design a simple and secure way to leverage the performance of Depot on your own infrastructure. It took a few iterations, but we are excited about what this can unlock for folks and for the opportunity to make this available on other cloud providers.
We recently contacted PostHog after benchmarking around a 2x speedup on one of their Actions workflows. They were interested in the switch, and we collaborated to convert their Actions workflows to use the Depot actions for Docker builds.
After the switch, their main Docker build workflow went from around sixteen minutes on average to only three, a 5x speedup! You can read more about the switch on PostHog's blog.
If you have an open-source project that could use faster Docker builds, definitely contact us. We're happy to work with you on free or discounted access to Depot.
Work continues on self-hosted Depot builders. As we revealed last month, we are developing the ability for organizations to connect an AWS account to their Depot organization, so that project builds run inside the connected account instead of inside Depot's infrastructure providers. This allows organizations with special requirements to use Depot while keeping their project data entirely inside their own account.
As we are nearing a beta release of self-hosted builders, we have settled on the following architecture:
Organizations create a cloud connection in their Depot organization, providing their AWS account ID
Organizations launch a set of AWS resources (VPC, launch templates, etc.) inside their account — we will provide an open-source Terraform module to make this easy
An open-source cloud-agent process runs inside the organization's AWS account — it is responsible for launching and managing instances needed for project builds, with minimal IAM permissions
Inside the launched instances, an open-source machine-agent is responsible for communicating with the Depot API and running any software needed for the build
We've chosen this architecture primarily to minimize blast radius and security footprint. All software running inside organization cloud accounts is open-source and auditable, and we do not share AWS account credentials or cross-account roles with the hosted Depot service.
We expect to have support for self-hosted builders completed for AWS by the end of August, and expect to expand to other cloud providers in the future.
We experienced several disruptions and outages with our infrastructure provider for Intel builds this past month. We are working to extend our automatic failover systems to support cross-provider failover, in addition to their current in-provider failover capabilities. This will mean that if one of our hosting providers is experiencing an outage, your builds will automatically be rerouted to a backup provider.
Project tokens have launched, allowing you to create an API token that can be used to build just a single project. We now support three ways you can authenticate builds: user access tokens, OIDC tokens, and project tokens.
Project tokens provide a better method for authenticating builds from CI providers where OIDC tokens are not supported. They are tied to a specific project in a single organization, unlike user access tokens that are tied to a user and grant access to all projects and organizations that user can access.
In GitHub Actions, we support OIDC tokens and recommend them over project or user tokens. OIDC trust relationships allow GitHub Actions to retrieve a short-lived access token for the build that, similar to project tokens, can only access the projects that have been allowed for that repository.
For all other CI providers, we recommend using project tokens for authentication.
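As a sketch, a project token can be supplied to the CLI via an environment variable or a flag (the token value is a placeholder; we assume the DEPOT_TOKEN variable and --token flag here):

```shell
# Authenticate a CI build with a project token
DEPOT_TOKEN=your-project-token depot build .

# Or pass the token explicitly
depot build --token your-project-token .
```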
We are working on the option to launch Depot builder instances inside your own cloud account. We are starting with initial support for AWS and the new CircleCI builders, but plan to expand to other builder types (e.g., Docker) and other clouds in the future.
Today, we launch and manage all aspects of your builder instances for you. However, some organizations have specialized needs that require them to self-host their CI builders. With our new self-hosted support, those organizations can continue to use Depot as the "management plane" for their CI builders, but the builders will launch inside the customer's cloud account instead.
We're planning to support self-hosted builders on a per-project basis, so organizations can additionally choose for each project where its builds should execute.
More details about self-hosted builders will be available soon.
When we first launched the beta, our documentation was limited to our Quickstart Guide. But we have added a lot more over the past few weeks. Notably, we have added integration guides for CI providers like GitHub Actions, CircleCI, Travis CI, and Google Cloud Build so you can quickly try out Depot inside your existing CI provider. If you prefer to try out Depot locally, we also put together a Local Development guide to get you started.
We wanted to make it simple to try Depot in your existing GitHub Actions workflows. So, we released depot/build-push-action, which implements the same inputs and outputs as docker/build-push-action but makes use of our depot CLI to run your build.
Bonus: We now support OIDC token authentication in GitHub Actions 🎉
Our new GitHub Action also allows you to use GitHub's OIDC token as authentication to depot build. No more static access keys in GitHub Actions!
If you set the permissions block in your workflow and make use of depot/build-push-action, you can authenticate builds via OIDC and don't need to generate a user access token.
If you want to see an example of this new authentication method in action, you can check out our moby/moby benchmark workflow.
From the first line of code we wrote for depot, we wanted it to be very easy to switch to from docker. It is critical, in our opinion, that trying out new tools and technologies has the lowest possible barrier to entry. So, we built our CLI with that in mind: it takes all the same flags as docker build right out of the box.
We released depot 0.1.0, which makes a small change to how built images are transferred. With this release, we now leave the image on the remote builder instance by default; previously, this behavior required passing the --no-load flag. We made this switch so that when you run builds in your CI environment, you are not unnecessarily waiting for the image to transfer back when you may not need it. If you do need the built image, for running it locally or running integration tests in CI, you can use the --load flag to tell our remote builder to transfer it back. You can read the full release notes here.
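A sketch of the new default versus opting back into a local image (the image tag is a placeholder):

```shell
# New default: the built image stays on the remote builder
depot build -t app:ci .

# Opt in to transferring the image back, e.g. to run tests locally
depot build --load -t app:ci .
docker run --rm app:ci
```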