Comment on Docker Hub limiting unauthenticated users to 10 pulls per hour
PassingThrough@lemm.ee 1 day ago
Huh. I was just considering establishing a caching registry for other reasons. Ferb, I know what we’re going to do today!
Uli@sopuli.xyz 1 day ago
Same here. I’ve been building a bootstrap script, and each time I test it, it tears down the whole cluster and starts from scratch, pulling all of the images again. Every time I hit the Docker pull limit after 10-12 hours of work, I treat that as my “that’s enough work for today” signal. I need to set up a caching system ASAP, or my work days on this project are about to get a lot shorter.
Daughter3546@lemmy.world 1 day ago
Do you have a good resource for how one can go about this?
jaxxed@lemmy.ml 12 hours ago
You can host your own with Harbor and set up replication per repo (pulling upstream tags). If you need a commercial product with support, you can use MSR v4.
Harbor installs on any K8s cluster via Helm, with just a couple of dependencies (cert-manager, a Postgres operator, a Redis operator). The replication setup is easy to add on top.
I have some no-warranty terraform I could share if there’s any interest.
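In the meantime, the plain Helm route looks roughly like this. A from-memory sketch, not tested as written; harbor.example.com is a placeholder, and for a quick test you can skip the operators and let the chart run its own bundled Postgres/Redis:

    # Add the upstream chart repos for cert-manager and Harbor
    helm repo add jetstack https://charts.jetstack.io
    helm repo add harbor https://helm.goharbor.io
    helm repo update

    # cert-manager first, so Harbor can get TLS certificates
    helm install cert-manager jetstack/cert-manager \
      --namespace cert-manager --create-namespace \
      --set installCRDs=true

    # Harbor itself; harbor.example.com is a placeholder hostname
    helm install harbor harbor/harbor \
      --namespace harbor --create-namespace \
      --set expose.ingress.hosts.core=harbor.example.com \
      --set externalURL=https://harbor.example.com

From there, proxy-cache projects and replication rules get configured in the Harbor UI or through its API.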
femtech@midwest.social 1 hour ago
That’s what we do internally for our OpenShift deployment. If an image isn’t already in Harbor, it reaches out upstream, pulls it, and caches it there for everyone else to use.
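The redirect half on OpenShift 4 is a mirror-set resource, roughly like this. A sketch only: the Harbor hostname and proxy project name are placeholders, tag-based pulls need the ImageTagMirrorSet equivalent, and older clusters use ImageContentSourcePolicy instead:

    # Tell OpenShift nodes to try the Harbor proxy cache before Docker Hub.
    # harbor.internal.example.com and dockerhub-proxy are placeholder names.
    oc apply -f - <<EOF
    apiVersion: config.openshift.io/v1
    kind: ImageDigestMirrorSet
    metadata:
      name: dockerhub-via-harbor
    spec:
      imageDigestMirrors:
      - source: docker.io
        mirrors:
        - harbor.internal.example.com/dockerhub-proxy
    EOF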
PassingThrough@lemm.ee 1 day ago
I’ve only done my “is it even possible” research so far, but these look promising:
medium.com/…/docker-registry-caching-a2dfefecfff5
github.com/obeone/multi-registry-cache
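The simplest version of the first link is just running Docker’s stock registry image in pull-through (proxy) mode. A minimal sketch, assuming a local mirror on port 5000:

    # Run the stock registry image as a pull-through cache for Docker Hub
    docker run -d --name hub-mirror -p 5000:5000 \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
      registry:2

    # Then point the daemon at it in /etc/docker/daemon.json:
    #   { "registry-mirrors": ["http://localhost:5000"] }
    # and restart dockerd. Pulls hit the mirror first and only go
    # upstream (counting against the limit) on a cache miss.

The catch is that one registry instance can only proxy a single upstream, which is presumably what the multi-registry-cache project above works around.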
carzian@lemmy.ml 1 day ago
www.squid-cache.org should work too, I think.
Daughter3546@lemmy.world 1 day ago
Much appreciated <3