Comment on Docker Hub limiting unauthenticated users to 10 pulls per hour
PassingThrough@lemm.ee 10 months ago
Huh. I was just considering establishing a caching registry for other reasons. Ferb, I know what we’re going to do today!
Uli@sopuli.xyz 10 months ago
Same here. I’ve been building a bootstrap script, and each time I test it, it tears down the whole cluster and starts from scratch, pulling all of the images again. Every time I hit the Docker pull limit after 10-12 hours of work, I treat it as my “that’s enough work for today” signal. I need to set up a caching system ASAP, or my working days on this project are about to get a lot shorter.
Daughter3546@lemmy.world 10 months ago
Do you have a good resource for how one can go about this?
PassingThrough@lemm.ee 10 months ago
I’ve only done my “is it even possible” research so far, but these look promising:
medium.com/…/docker-registry-caching-a2dfefecfff5
github.com/obeone/multi-registry-cache
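From my skimming so far, both boil down to running the stock registry:2 image in proxy (pull-through) mode. Here’s the rough shape of what I’m planning, completely untested, and the hostname is made up:

```sh
# Run the official registry image as a pull-through cache for Docker Hub.
# REGISTRY_PROXY_REMOTEURL is what flips it into proxy mode; add
# REGISTRY_PROXY_USERNAME / REGISTRY_PROXY_PASSWORD to authenticate upstream.
docker run -d --name registry-mirror -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  -v /var/lib/registry-mirror:/var/lib/registry \
  registry:2

# Point each Docker daemon at the mirror. Plain HTTP here for brevity,
# hence the insecure-registries entry; use TLS for anything real.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "registry-mirrors": ["http://cache.example.internal:5000"],
  "insecure-registries": ["cache.example.internal:5000"]
}
EOF
sudo systemctl restart docker
```

One caveat I’ve run into already: registry-mirrors only applies to Docker Hub pulls, so images from ghcr.io and friends still go direct. That seems to be what the multi-registry-cache repo above is for.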
carzian@lemmy.ml 10 months ago
www.squid-cache.org should work too, I think.
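One wrinkle: the daemon talks to registries over HTTPS, so a stock Squid only sees CONNECT tunnels and passes the bytes through without caching them; you’d need ssl-bump (TLS interception) for it to actually cache layers. Wiring the daemon through the proxy would look roughly like this (host and port are made up):

```sh
# Hypothetical: route the Docker daemon's outbound traffic through a Squid
# instance via a systemd drop-in, then reload and restart to apply it.
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.internal:3128"
Environment="HTTPS_PROXY=http://proxy.example.internal:3128"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```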
Daughter3546@lemmy.world 10 months ago
Much appreciated <3
jaxxed@lemmy.ml 10 months ago
You can host your own with Harbor and set up per-repo replication (pulling upstream tags). If you need a commercial product with support, you can use MSR v4.
Harbor installs on any K8s cluster using Helm, with just a couple of dependencies (cert-manager, a Postgres operator, a Redis operator). The replication is easy to add on top.
I have some no-warranty terraform I could share if there is some interest.
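Until then, the Helm side is roughly this (values trimmed to the bare minimum; the hostname is a placeholder and it assumes an ingress controller already exists):

```sh
# Install Harbor from its official chart. expose/externalURL values are
# placeholders; real deployments will want TLS and persistence tuned too.
helm repo add harbor https://helm.goharbor.io
helm repo update
helm install harbor harbor/harbor \
  --namespace harbor --create-namespace \
  --set expose.ingress.hosts.core=harbor.example.internal \
  --set externalURL=https://harbor.example.internal
```

From there you can create a “proxy cache” project pointed at Docker Hub in the UI and pull through it as harbor.example.internal/&lt;project&gt;/library/whatever, which is the pull-through behavior described below.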
femtech@midwest.social 10 months ago
That’s what we do internally for our OpenShift deployment. If an image isn’t already in Harbor, it reaches out upstream, pulls it, and caches it there for everyone else to use.
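For anyone wanting to copy that setup: the cluster-side redirect is typically an ImageContentSourcePolicy, something like the sketch below. The Harbor hostname and project name are made up:

```sh
# Hypothetical names: Harbor at harbor.example.internal with a proxy-cache
# project called "dockerhub". Sends docker.io pulls through Harbor instead.
cat <<'EOF' | oc apply -f -
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: dockerhub-mirror
spec:
  repositoryDigestMirrors:
    - source: docker.io
      mirrors:
        - harbor.example.internal/dockerhub
EOF
```

Note that ICSP only kicks in for digest-based pulls; newer clusters use ImageDigestMirrorSet plus ImageTagMirrorSet to cover tag pulls as well.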