For Jenkins, you can use project or user access tokens for authenticating your build with Depot. We recommend using project tokens as they are scoped to a specific project and owned by the organization.
Note: The CLI looks for the `DEPOT_TOKEN` environment variable by default. For both token options, you should configure this variable for your build environment via global credentials.
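If a build fails to authenticate, a quick diagnostic is to verify that the variable was actually injected into the build environment. A minimal sketch of such a guard step (the function name and error message are illustrative, not part of the depot CLI):

```shell
# Hypothetical guard: fail fast if DEPOT_TOKEN was not injected.
check_depot_token() {
    if [ -z "${DEPOT_TOKEN:-}" ]; then
        echo "DEPOT_TOKEN is not set; check your Jenkins global credentials" >&2
        return 1
    fi
    echo "DEPOT_TOKEN is configured"
}
```

Running this as an early `sh` step surfaces a missing credential immediately, rather than partway through the build.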
You can inject project access tokens into the Pipeline environment for `depot` CLI authentication. These tokens are tied to a specific project in your organization, not to a user.
Alternatively, you can generate a user access token to inject into the Pipeline environment for `depot` CLI authentication. This token is tied to a specific user, not a project, so it can be used to build any project in any organization the user can access.
To build a Docker image from Jenkins, you must set the `DEPOT_TOKEN` environment variable in your global credentials. You can do this through the UI for your Pipeline via Manage Jenkins > Manage Credentials. You must also install the `depot` CLI before running `depot build`.
```groovy
pipeline {
    agent any
    environment {
        DEPOT_TOKEN = credentials('depot-token')
    }
    stages {
        stage('Build') {
            steps {
                sh 'curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh'
                sh 'depot build .'
            }
        }
    }
}
```
This example shows how you can use the `--platform` flag to build a multi-platform image for Intel and Arm architectures natively, without emulation.
```groovy
pipeline {
    agent any
    environment {
        DEPOT_TOKEN = credentials('depot-token')
    }
    stages {
        stage('Build') {
            steps {
                sh 'curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh'
                sh 'depot build --platform linux/amd64,linux/arm64 .'
            }
        }
    }
}
```
This example installs the `depot` CLI directly in the pipeline. Then, `docker login` is invoked with the `DOCKERHUB_USERNAME` and `DOCKERHUB_TOKEN` environment variables to set the authentication context, so the build can push to the registry.
```groovy
pipeline {
    agent any
    environment {
        DEPOT_TOKEN = credentials('depot-token')
        DOCKERHUB_USERNAME = credentials('dockerhub-username')
        DOCKERHUB_TOKEN = credentials('dockerhub-token')
    }
    stages {
        stage('Build') {
            steps {
                sh 'curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh'
                sh 'docker login --username $DOCKERHUB_USERNAME --password $DOCKERHUB_TOKEN'
                sh 'depot build -t <your-registry>:<your-tag> --push .'
            }
        }
    }
}
```
This example installs the `depot` and `aws` CLIs directly in the pipeline. Then, `aws ecr get-login-password` is piped into `docker login` to set the authentication context, so the build can push to the registry.
```groovy
pipeline {
    agent any
    environment {
        DEPOT_TOKEN = credentials('depot-token')
        AWS_ACCESS_KEY_ID = credentials('aws-access-key-id')
        AWS_SECRET_ACCESS_KEY = credentials('aws-secret-access-key')
    }
    stages {
        stage('Build') {
            steps {
                sh 'curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh'
                sh 'curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"'
                sh 'unzip awscliv2.zip'
                // Run the bundled installer so the aws binary is on PATH
                sh './aws/install'
                sh 'aws ecr get-login-password --region <your-ecr-region> | docker login --username AWS --password-stdin <your-ecr-registry>'
                sh 'depot build -t <your-ecr-registry>:<your-tag> --push .'
            }
        }
    }
}
```
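The `<your-ecr-registry>` placeholder follows a fixed hostname pattern derived from your AWS account ID and region. A sketch of how it is assembled (the account ID and region below are hypothetical values):

```shell
# Private ECR registry hostnames follow the pattern:
#   <account-id>.dkr.ecr.<region>.amazonaws.com
AWS_ACCOUNT_ID=123456789012   # hypothetical account ID
AWS_REGION=us-east-1          # hypothetical region
ECR_REGISTRY="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com"
echo "$ECR_REGISTRY"
```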
You can download the built container image into the workflow using the `--load` flag.
```groovy
pipeline {
    agent any
    environment {
        DEPOT_TOKEN = credentials('depot-token')
    }
    stages {
        stage('Build') {
            steps {
                sh 'curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh'
                sh 'depot build --load .'
            }
        }
    }
}
```
You can simultaneously push the built image to a registry and load it back into the CI job by using the `--load` and `--push` flags together.
```groovy
pipeline {
    agent any
    environment {
        DEPOT_TOKEN = credentials('depot-token')
    }
    stages {
        stage('Build') {
            steps {
                sh 'curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh'
                sh 'depot build -t <your-registry> --load --push .'
            }
        }
    }
}
```
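If some branches should push and others only load, the flag combination can be driven by pipeline variables rather than duplicating stages. A minimal shell sketch (the `PUSH_IMAGE` and `LOAD_IMAGE` toggle names are hypothetical, not depot CLI conventions):

```shell
# Hypothetical helper: assemble the depot build command from two
# boolean toggles, so one stage can cover push-only, load-only, or both.
build_depot_cmd() {
    args=""
    if [ "${PUSH_IMAGE:-false}" = "true" ]; then args="$args --push"; fi
    if [ "${LOAD_IMAGE:-false}" = "true" ]; then args="$args --load"; fi
    echo "depot build$args ."
}
```

In a Jenkinsfile, the toggles could be set per branch in the `environment` block and the helper invoked from a single `sh` step.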