You know the story. You start an innocent project in Rust and start using GitHub Actions as your CI provider. The project accumulates complexity and evolves into a workspace with several crates. Its compilation time grows in kind, and you're finding it harder and harder to maintain flow.
In response, you've added caching to your workflow using GitHub’s example config:
- uses: actions/cache@v4
  with:
    path: |
      ~/.cargo/bin/
      ~/.cargo/registry/index/
      ~/.cargo/registry/cache/
      ~/.cargo/git/db/
      target/
    key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
It’s been doing some heavy lifting, but recently the results have been underwhelming. Oftentimes small code changes cause wholesale recompilation and, to make matters worse, the ballooning cache payload itself takes significant time to save and restore.
You grind your teeth and wonder if it’s all downhill from here; after all, you chose Rust intentionally, knowing you were trading fast execution for slow compilation. Are you doomed to push small changes and wait an eternity while the world is rebuilt, ad infinitum?
No, friend. Just Use Sccache.
GitHub as a suboptimal default
On your laptop, you may be used to incremental builds out of the box, with cargo saving granular building blocks to disk for later reuse. In a CI environment like GitHub Actions, however, runners are ephemeral by nature and don’t include the benefit of a persistent disk. Artifacts from one job will not carry over to the next, and it’s up to you to reconstitute important context before you begin your build.
It’s natural to reach for actions/cache here: list the directories that are important to keep, and GitHub will manage their propagation across builds. But although it’s straightforward to set up, it doesn’t take long for a Rust project to feel the limitations of this approach. The target/ directory will hoard artifacts from prior builds and grow uncontrollably without your intervention. And even if you cull stale artifacts, the whole collection is handled as a coarse unit: builds will regularly download a cache entry full of artifacts they don’t actually need.
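To be fair, actions/cache does offer a partial mitigation here: a restore-keys list lets a build fall back to the most recent entry matching a key prefix instead of starting cold when the exact key misses. A minimal sketch, extending GitHub’s example from above:

- uses: actions/cache@v4
  with:
    path: |
      ~/.cargo/bin/
      ~/.cargo/registry/index/
      ~/.cargo/registry/cache/
      ~/.cargo/git/db/
      target/
    key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
    restore-keys: |
      ${{ runner.os }}-cargo-

Even then, whatever entry gets restored still travels as one opaque archive, so the granularity problem remains.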
On top of that, GitHub network transfer is notoriously slow, and each repo is limited to a total cache size of 10GB, which fills quickly when you’re saving whole copies of the target/ directory at a time. In short, this cache is operational but far from optimal.
Enter sccache
cargo struggles to capitalize on its aggressive caching strategy without a persistent disk in CI and is particularly hamstrung in GitHub Actions. In contrast, sccache was designed with ephemeral environments in mind and sidesteps GitHub's limitations with an alternative approach.

It wraps the Rust compiler (rustc) and functions like a shim, intercepting every compilation request inbound from cargo. sccache derives a cache key from the request and its environment, then checks whether that key is already present in its cache. A hit means the compilation task was previously completed, so sccache simply returns the cached result. On a miss, sccache forwards the call to rustc and caches the result for later. sccache stores this cache on disk by default but, crucially, it also has native support for remote content-addressable storage (CAS).
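As a point of reference, the location and size of that local disk cache are controlled by environment variables. A quick sketch with illustrative values (the path and the 10G cap below are assumptions for the example, not settings you must use):

# Put the local cache in an explicit directory (illustrative path)
export SCCACHE_DIR="$HOME/.cache/sccache"

# Cap how large the local cache may grow before older entries are evicted
export SCCACHE_CACHE_SIZE="10G"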
sccache is straightforward to test locally. After installing the sccache binary on your system, you can run your first sccache-enhanced build like so:
RUSTC_WRAPPER=sccache cargo build --release
Your first build may take some time as sccache warms the cache, but rerun the build and your second should be much faster:
sccache --stop-server # to isolate stats for the next build
cargo clean
RUSTC_WRAPPER=sccache cargo build --release
sccache -s asks the server for build statistics, which in this case should reveal all cache hits:
$ sccache -s
Compile requests                      45
Compile requests executed             33
Cache hits                            33
Cache hits (Rust)                     33
Cache misses                           0
Cache hits rate                   100.00 %
Cache hits rate (Rust)            100.00 %
…
Non-cacheable calls                   11
Non-compilation calls                  1
…
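If you’d rather not prefix every command with RUSTC_WRAPPER, you can also point cargo at the wrapper through its configuration file. A minimal sketch that appends the setting to your user-level config (adjust accordingly if you already maintain a [build] section there):

# Route every rustc invocation through sccache by default
cat >> ~/.cargo/config.toml <<'EOF'
[build]
rustc-wrapper = "sccache"
EOF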
sccache is similarly straightforward to adopt in CI with the help of the official sccache GitHub Action. Add the following to your workflow to install and activate sccache. Note the line enabling the GitHub Actions cache as the default remote backend for now.
  runs-on: ubuntu-latest
  steps:
+   - name: Run sccache-cache
+     uses: mozilla-actions/sccache-action@v0.0.7
    - name: Compile project
+     env:
+       SCCACHE_GHA_ENABLED: "true"
+       RUSTC_WRAPPER: "sccache"
      run: cargo build --release
With just a bit of effort, you now have a functional, cross-build compilation cache in your CI pipeline. As expected, your first build will populate the cache, and successive builds should be much faster as those cached results are reused.
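To verify that the cache is actually being hit in CI, it’s worth printing the same statistics at the end of the job. A small optional step, assuming the setup from the snippet above:

- name: Show sccache stats
  run: sccache --show-stats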
Taking stock
This represents an improvement over the status quo. Whereas cargo alone must wait for the whole cache blob to arrive upfront (and then later depart), sccache allows the build to begin immediately and concurrently fetches only what’s necessary for the current build. However, the GitHub backend still comes with many of the aforementioned caveats – slow network transfer, a 10GB maximum, and so on – and its direct (ab)use as a CAS unfortunately limits us in a new and important way.

For each invocation of rustc, sccache asks the cache backend whether the corresponding artifact exists. If your project is large and has a lot of dependencies, this can end up being too chatty for GitHub’s liking. To its credit, sccache gracefully treats a 429 Too Many Requests response as a cache miss rather than failing your build midway. But it is a false miss all the same, and the extra compilation it triggers during periods of high activity can add up to worse overall build performance.
Enter Depot Cache
All of these issues are alleviated by choosing a backend designed specifically for this workload. sccache is built on top of the awesome OpenDAL storage adapter and thus supports a number of backends with alternative protocols. We’ve added WebDAV support to Depot Cache, and so it plugs right into sccache (in addition to other popular WebDAV-based build clients like gradle and bazel).
You can use Depot Cache by configuring an endpoint and a token:
  runs-on: ubuntu-latest
  steps:
    - name: Run sccache-cache
      uses: mozilla-actions/sccache-action@v0.0.7
    - name: Compile project
      env:
-       SCCACHE_GHA_ENABLED: "true"
+       SCCACHE_WEBDAV_ENDPOINT: 'https://cache.depot.dev'
+       SCCACHE_WEBDAV_TOKEN: DEPOT_ORG_TOKEN
        RUSTC_WRAPPER: "sccache"
      run: cargo build --release
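In a real workflow, you’ll probably want to read the token from a repository secret rather than writing it inline. For example, assuming you’ve stored it under a secret named DEPOT_ORG_TOKEN:

env:
  SCCACHE_WEBDAV_ENDPOINT: 'https://cache.depot.dev'
  SCCACHE_WEBDAV_TOKEN: ${{ secrets.DEPOT_ORG_TOKEN }}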
And with that, sccache is free to manage its artifacts without restriction. Artifacts from one branch are available to any other as soon as they’re built, and the cache can grow to any size you need. This alone is a great milestone, but if you want ultimate performance, there’s one final piece of low-hanging fruit to harvest.
Extra Credit: Depot Runners
We've written at length before about GitHub's CI runners. In sum, they're expensive, underpowered, and often unreliable. At Depot, we're working hard to improve this experience for everyone, and that includes smoothing out rough UX edges wherever possible. In addition to being 30% faster and 50% cheaper, our Runners have fast network access to Depot Cache, so sccache can run at optimum speed.
On a Depot Runner, sccache will automatically authenticate to Depot Cache, so, all told, your workflow file can simplify to the following:
- runs-on: ubuntu-latest
+ runs-on: depot-ubuntu-latest
  steps:
    - name: Run sccache-cache
      uses: mozilla-actions/sccache-action@v0.0.7
    - name: Compile project
      env:
-       SCCACHE_WEBDAV_ENDPOINT: 'https://cache.depot.dev'
-       SCCACHE_WEBDAV_TOKEN: DEPOT_ORG_TOKEN
        RUSTC_WRAPPER: "sccache"
      run: cargo build --release
With a more powerful runner and a more capable cache backend, your builds should finish significantly faster. The time you recover can be better spent iterating on your code and making your users happier.
Compile time will certainly continue to demand your attention as your project gathers steam, but with your CI pipeline no longer the main bottleneck, you can direct your energy toward optimizations that benefit local development and CI alike.
