Docker Registry Setup
DockerHub’s impending download rate limits present an interesting challenge for many. From hobbyists to open-core ecosystems, projects are trying to find ways to insulate their users.
For my projects, I chose to deploy a simple registry mirror.
One nice thing about this project is that the system is largely stateless (and cheap to run). Both the docker-registry and docker-auth projects are horizontally scalable. The only stateful system you really need to manage is a cache (which isn’t mission critical).
While Harbor was appealing, it had a lot more overhead than what I needed.
In this post, I’ll walk you through my deployment. To follow along, you’ll need:
- An Amazon S3-like system (minio)
- A Kubernetes cluster
- A domain or subdomain to work on.
  - If running locally, take a look at how to set up a local domain with kind.
Unlike many of my posts, I tried to externalize resources this time around instead of inlining them. All the configuration used in this project can be found here. Below, you’ll find an overview of the ecosystem.
Before getting into things, let’s prepare our configuration. These values will be used throughout the process and should be unique to your deployment.
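As a rough sketch, that configuration could be captured as shell variables. The variable names below are my own placeholders, not values from the post’s gist:

```shell
# Placeholder configuration; substitute values for your own deployment.
export REGISTRY_DOMAIN="registry.example.com"    # domain the registry will be served from
export REDIS_PASSWORD="$(openssl rand -hex 16)"  # password for the redis release
export S3_ACCESS_KEY="minio-access-key"          # credentials for the S3/minio backend
export S3_SECRET_KEY="minio-secret-key"
export DOCKERHUB_ALLOWED="your-dockerhub-user"   # DockerHub user/group anonymous pulls are limited to
```

Keeping these in one place makes the later helm invocations easier to repeat.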
In addition to that configuration, we will need to tap a couple of Helm repositories. The stable repository provides a well-supported docker-registry chart, and the bitnami repository provides a couple of redis charts sufficient for use with the registry.
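Tapping those repositories looks like the following (repository URLs current as of writing):

```shell
# Add the chart repositories used throughout this post.
helm repo add stable https://charts.helm.sh/stable
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
```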
Deploy a redis cluster
First things first, docker-registry supports caching blobs read from the S3 backend.
It supports two types of cache drivers: an in-memory driver and a redis driver. For our deployment, we’ll be using redis.
Bitnami provides both a redis and a redis-cluster chart. The redis chart offers a primary-replica scheme, while redis-cluster provides multiple primaries, improving the availability of writes. For our sake, we’re just going to use the redis chart. All we need to do is provide a password.
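A minimal sketch of that install, assuming a release named registry-redis (my own name, not from the gist). Note that the password key has moved between chart versions; recent bitnami/redis charts use auth.password, while older ones used password:

```shell
# Deploy a primary-replica redis from the bitnami chart.
# "registry-redis" is a hypothetical release name.
helm install registry-redis bitnami/redis \
  --set auth.password="$REDIS_PASSWORD"
```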
Deploy the registry
Once we have Redis and S3 ready, we should be able to deploy the registry. For this, you will need my 01-docker-registry-values.yaml file from the gist. This file configures the registry to:
- Store data in an S3 bucket
- Cache blobs in redis
- Run in a readonly mode (toggled by a setting in the values file)
- Proxy requests to DockerHub (toggled by a setting in the values file)
All we need to do is plug in our values from earlier.
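A hedged sketch of that install, assuming a release named docker-registry (the secrets.s3.* keys are values exposed by the stable/docker-registry chart; the credential variables are my own placeholders):

```shell
# Deploy the registry using the externalized values file, injecting
# the S3/minio credentials at install time.
helm install docker-registry stable/docker-registry \
  -f 01-docker-registry-values.yaml \
  --set secrets.s3.accessKey="$S3_ACCESS_KEY" \
  --set secrets.s3.secretKey="$S3_SECRET_KEY"
```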
Just like that, you should have a registry mirror up and running. You can test this by port-forwarding to the pod and attempting to pull any image from DockerHub. While being able to pull data from DockerHub is great, this could be used maliciously. For example, someone could start pulling every image, for every tag on DockerHub. This can result in bloating your image registry with images you didn’t intend to host.
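The port-forward test mentioned above can be sketched as follows (the deployment name assumes a release named docker-registry; the registry chart listens on port 5000 by default):

```shell
# Forward the registry's port locally, then pull an upstream image
# through the mirror. Official DockerHub images live under library/.
kubectl port-forward deploy/docker-registry 5000:5000 &
docker pull localhost:5000/library/alpine:latest
```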
docker_auth was built to fill in the authentication (authn) and authorization (authz) gap for Docker.
It was one of the first systems to support Docker’s token authentication flow.
The flow uses signed JWT tokens to verify identity claims by clients.
In order for it to work, we need to generate a private key and certificate.
Together, these are used to sign JWT tokens.
Using the commands below, we can generate the key and certificate.
Be sure to change the -subj line to something appropriate for your project.
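A sketch of that generation step, with a placeholder CN:

```shell
# Generate a private key and self-signed certificate used to sign JWTs.
# The CN below is a placeholder; change -subj for your project.
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout server.key -out server.crt \
  -subj "/CN=registry.example.com"
```

The resulting pair can then be stored in a Kubernetes secret, e.g. kubectl create secret generic auth-credentials --from-file=server.key --from-file=server.crt.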
Once the auth-credentials secret has been created, we can deploy the docker-auth system.
For this example, we’re going to restrict anonymous pulls to a DockerHub user or group of our choosing (DOCKERHUB_ALLOWED).
This means that users who are unauthenticated will only be allowed to pull images from these specified groups.
For this, you will need 02-docker-auth-values.yaml from the gist.
Once running, you should be able to issue /auth requests against the pod.
The endpoint should return a pair of tokens that represent an unauthenticated user.
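A sketch of that check, assuming a port-forward to the docker-auth pod (docker_auth listens on port 5001 by default; the deployment name and service value are placeholders):

```shell
# With a port-forward already running, e.g.:
#   kubectl port-forward deploy/docker-auth 5001:5001
# an unauthenticated token request should still succeed, returning a
# JSON body containing "token" and "access_token" fields.
curl "http://localhost:5001/auth?service=registry.example.com&scope=registry:catalog:*"
```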
Deploy an ingress
Now that both docker-registry and docker-auth are running, let’s connect them.
In Kubernetes, an Ingress is a great way to provide application-layer (L7) routing to applications. It supports both host- and path-based routing.
In this post, we’ll configure the paths of an ingress to route to their appropriate backends:
- /auth and /github_auth will route to the docker-auth project.
- /v2 will route to the docker-registry project.
To do this, we will need 03-ingress.yaml from the gist.
Be sure to change the host to your project domain.
Once applied, you should be able to start working with the ingress definition:
- /auth should resolve from docker-auth.
- /v2/_catalog should resolve from docker-registry.
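Those checks can be run as quick smoke tests against the ingress (the domain below is a placeholder):

```shell
# /auth should be answered by docker-auth, /v2/_catalog by the registry.
curl -i "https://registry.example.com/auth?service=registry.example.com"
curl -i "https://registry.example.com/v2/_catalog"
```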
Remember, some annotations on the ingress are specific to my tech stack.
You may need to find the equivalent for yours.
Connecting docker-registry to docker-auth
Now that the client can speak to docker-auth and docker-registry, we can connect the two.
To do so, the docker-registry needs access to the certificate used by docker-auth.
This is used to validate the JWT token.
We can update docker-registry-values.yaml to volume mount the certificate in. Then, we just need to point the auth block of the configuration at the proper endpoints.
Below is a diff of the changes needed to add auth.
Alternatively, you can grab an updated copy from 04-docker-registry-values-yaml in the gist.
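The actual diff lives in the gist, but as a rough sketch, the auth block of the registry configuration ends up looking something like the following. The realm, service, and mount path are placeholders, and the issuer must match whatever docker-auth is configured to use:

```yaml
# Excerpt of the registry configuration (token auth section).
configData:
  auth:
    token:
      realm: https://registry.example.com/auth   # docker-auth's /auth endpoint
      service: registry.example.com              # must match the service in token requests
      issuer: "Acme auth server"                 # must match docker-auth's issuer
      rootcertbundle: /auth-certs/server.crt     # the mounted docker-auth certificate
```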
Remember to set your own values before upgrading the release.
Once your release has been upgraded, the registry should deny any requests for images outside of your configured group.
For example, I wound up with ocr.sh as my projects’ domain, and I set DOCKERHUB_ALLOWED to only allow unauthenticated users to pull my own images. This allows my project consumers to continue to use my product without being rate limited by DockerHub.
Under the hood, all requests to DockerHub are authenticated.
In addition to that, I have tight control over what repositories I allow to be pulled through my proxy.
Thanks for reading! I hope you learned something by stopping by and reading this post.