Hi all, I’ve been noticing a pattern in self-hosting communities, and I’m curious if others see it too.
Whenever someone asks for a more beginner-friendly solution, something with a UI, automated setup, or fewer manual configs, there’s often a response like:
“If you can’t configure Docker, reverse proxies, and YAML files, you shouldn’t be self-hosting.”
Sometimes it feels like a portion of the community views complexity as a badge of honour. Don’t get me wrong, I love the technical side of self-hosting. I enjoy tinkering, breaking things, fixing them, learning along the way. That’s how most of us got into it.
But here’s the question: Is gatekeeping slowing down the adoption of self-hosting?
If we want more people to own their data, escape Big Tech, and embrace open-source alternatives, shouldn’t we welcome solutions that lower the entry barrier?
There’s room for everyone:
- people who want full control and custom setups,
- people who want something semi-manual but guided,
- and people who just want it to work with minimal friction.
It’s like Linux: not every user compiles from source, but they’re still Linux users.
Where do you stand? Should self-hosting stay DIY-only, or is there value in easier, more accessible ways to self-host?
My project focuses on building a tool that makes self-hosting more accessible without sacrificing data ownership, so I genuinely want your honest take before releasing it more widely.
moonpiedumplings@programming.dev 10 hours ago
Yeah. I’m seeing a lot of it in this thread, tbh. People are styling themselves as IT admins or cybersec people rather than just hobbyists. Of course, maybe they do do it professionally as well, but I’m seeing an assumption from some people in this thread that it’s dangerous to self-host even if you don’t expose anything.
Tailscale in to your machine, and then be done with it, so you only have access to your services over that private network.
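To make that concrete (a sketch; the service name, image, and ports are made up): publish the container’s port on the loopback address so Docker doesn’t open it to the internet, then reach it over Tailscale (e.g. with tailscale serve, which proxies tailnet traffic to a local port):

```yaml
services:
  some-app:
    image: example/some-app:latest   # hypothetical image
    ports:
      # Bind to 127.0.0.1 instead of the default 0.0.0.0, so Docker's
      # iptables rules don't expose the port to the whole internet.
      # Swap in your machine's Tailscale IP to reach it over the tailnet
      # directly instead.
      - "127.0.0.1:8080:80"
```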
Now, about actually keeping the services secure, beyond just having them on a private subnet and not really worrying about them. To be explicit, this is referring to fully or partially exposed setups (like VPN access for a significant number of people).
There are two big problems IMO: Default credentials, and a lack of automatic updates.
Default credentials are pretty easy to handle. Docker Compose YAML files put the credentials right there; just read them and change them. It should be noted that you should still be doing this even if you are using a GUI-based deployment.
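For example (a generic sketch, not any particular project’s compose file; the POSTGRES_* variables are the ones the official image actually uses):

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: changeme   # <-- the default everyone forgets to change
```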
This is where Docker has really held the community back, in my opinion: it lacks automatic updates. There are services like Watchtower to automatically update containers, but things like databases or config file schemas don’t get migrated to the next version, which means the next version can break things, and there is no guarantee of stability between two versions.
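For reference, a minimal Watchtower deployment looks something like this (a sketch; the interval is just an example). Note that it only swaps images; it still won’t run an app’s database or schema migrations for you:

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # needed so it can restart other containers
    command: --cleanup --interval 86400             # check for new images daily, prune old ones
```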
This means that most users, after they use the docker-compose method recommended by the software, are required to manually log in every so often and run docker compose pull and up to update. Sometimes they forget. Combine this with Shodan/ZoomEye (search engines for internet-connected devices) and you will find plenty of people who forgot, because Docker punches stuff through firewalls as well.

GUIs don’t really make it easy to keep this promise either. Docker GUIs are nice, but now you have users who don’t realize that Docker apps don’t update themselves, and that they probably should be doing that. Same issue with Yunohost (which doesn’t use Docker, which I just learned today. Interesting).
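Concretely, the chore people forget is this (the path is a placeholder):

```bash
cd /srv/myapp            # wherever your compose file lives
docker compose pull      # fetch newer images
docker compose up -d     # recreate containers on the new images
docker image prune -f    # optional: clean up the old images
```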
I really like Kubernetes because it lets me do automatic upgrades of services (within limits). But this comes at an extreme complexity cost: I have to deploy another piece of software on top of Kubernetes to automatically upgrade the applications, and then another to automatically do some of the database migrations. And no GUI would really free me from this complexity, because you end up needing such a deep understanding of the system that a pretty interface doesn’t really save you.
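As a sketch of what “automatic upgrades within limits” can look like (the app and repository names are just illustrative, not necessarily my stack): a Flux CD HelmRelease can pin a semver range, so minor and patch releases roll out on their own while major versions wait for a human.

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: nextcloud                  # example app
spec:
  interval: 1h                     # how often to check for chart updates
  chart:
    spec:
      chart: nextcloud
      version: "5.x"               # auto-upgrade within 5.x; a 6.0 bump needs a human
      sourceRef:
        kind: HelmRepository
        name: nextcloud-repo       # assumed to be defined elsewhere
```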
Another commenter said:
And I agree with them, but I think things kinda stalled with Docker, as its limitations have created barriers to making things easier. The tools that try to make things “easier” on top of Docker haven’t really done their job, because they haven’t offered auto updates, or reverse proxies, or abstracted away the knowledge required to write YAML files.
Share your project; then you’ll hear my thoughts on it. Although, without even looking at it, my opinion is that if you have based it on Docker and decided to simply run docker-compose on YAML files under the hood, you’ve kinda already fucked up, because you haven’t actually abstracted away the knowledge needed to use Docker, you’ve just hidden it from the user. But I don’t know what you’re doing.
Your service should have:
- automatic updates,
- a reverse proxy handled for you,
- and no requirement to read or write YAML to use it.
Further afterthoughts:
Simple in implementation is not the same thing as simple in usage. Simple in implementation also means easy to troubleshoot, since there are fewer moving parts when something goes wrong.
I think operating tech isn’t really that hard, but there is a “fear” of technology, where whenever people see a command line, or even just some prompt they haven’t seen before, they panic and throw a fit.