moonpiedumplings
@moonpiedumplings@programming.dev
- Comment on Is there a good email server that can be run through Docker? 1 day ago:
What about domain reputation?
- Comment on Is there a good email server that can be run through Docker? 1 day ago:
Have you considered that the reason why your mail server is trusted is because it’s been around for 20 years?
Have you tried to set up mail from scratch on a new domain/ip?
- Comment on My self hosted badges of honor 1 day ago:
These ones: www.etsy.com/shop/SoHexy?
I think I’m in love. They have such great variety, and the art style is so neat. And I love stickers because they are such great conversation starters.
- Comment on Does university email give you any free server? 5 days ago:
In the old days, university IT put essentially no access controls on their networks, so students’ dorm computers were completely exposed to the internet
Dorm ethernet works this way for me right now. It’s how I host some stuff.
- Comment on What's the laziest way to create a website that looks really nice and is maintainable? 6 days ago:
Because the extensions replaced wordpress’ site builder/editor. If I were to get rid of the extensions, I would basically have to recreate the site anyway, so I might as well switch away from wordpress.
- Submitted 6 days ago to selfhosted@lemmy.world | 29 comments
- Comment on Selfhosted Jira alternative 1 week ago:
Also check out: github.com/makeplane/plane
- Comment on How to (safely) create a Lemmy community server? 2 weeks ago:
Do you have a source or benchmarks for the last bullet point?
I am skeptical that optimizations like that wouldn’t already be implemented by postgres.
- Comment on 2 weeks ago:
have you looked at solutions which emulate github actions locally?
github.com/nektos/act this is one of them but I think I’ve seen one more.
Github actions also has self hosted runners: docs.github.com/en/actions/…/self-hosted-runners
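For anyone curious, this is roughly how act gets used (a sketch; the “build” job name is just an example, and flags can differ between versions):
```
# list the workflows/jobs act can find under .github/workflows/
act -l

# run everything triggered by a push event (act's default event)
act push

# run a single job, e.g. a hypothetical "build" job
act -j build
```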
- Comment on 2 weeks ago:
What would you use if you had a choice?
- Comment on What principles you wish to see social networks (or the fediverse) adopt in their design? 3 weeks ago:
design around ease of self-hosting. A non technical user must be able to self host easily and at a very low cost.
This may be a controversial opinion, but I actually like that hosting a lemmy instance is somewhat difficult. I like that it requires a time investment and that spammers can’t simply spin up instances across different domain names. I like that problematic instances get defederated, and that spammers or other problematic individuals can’t simply move domain names, due to the way activitypub is tied to them.
In theory, you could set up something like digitalocean’s droplets, where a user does one click to deploy an app like nextcloud or whatever. But I’m not really eager to see something like that.
Transferable user identity (between instances)
I dislike this for a similar reason, tbh. If someone gets banned, they should have to start over. Not get to instantly recreate and refederate all their content from a different instance.
Of course, ban evasion is always a thing. But what I like is that spammers or problematic individuals who had their content nuked are forced to start from scratch and spend time recreating it before they get banned again.
As for what I would really like to see, I would love features that make lemmy work as a more powerful help forum. Like, on discourse, if you make a post it automatically searches for similar posts and shows them to you in order to avoid duplicates. Lemmy does something similar, but it appears to only match on the title. It would also be cool to automatically show relevant wiki pages or FAQ content, since one of the problems on reddit was that people wouldn’t read the wiki or FAQ of help forums.
I would also like the ability to mark a comment on a post as an “answer”, or something similar. I think stackoverflow’s model definitely had lots of issues with mods incorrectly marking things as duplicates, but I think it was a noble goal to try to ensure that questions were only asked once and accumulated into a repository of knowledge. For all the complaints about it, stackoverflow is undeniably one of the biggest and most useful repositories of knowledge.
- Comment on Where can I learn about networking? 1 month ago:
- Comment on How do you manage your home server configuration? 1 month ago:
I have a similar setup, and even though I am hosting git (forgejo), I use ssh as a git server for the source of truth that k8s reads.
This prevents an ouroboros dependency where flux is using the git repo from forgejo which is deployed by flux…
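Roughly, the shape of it (a sketch with made-up host and paths; flux’s git source then just points at the same ssh URL instead of at forgejo):
```
# a plain bare repo over ssh acts as the source of truth
ssh me@server 'git init --bare /srv/git/cluster.git'

# push the cluster config to it from a workstation
git remote add cluster ssh://me@server/srv/git/cluster.git
git push cluster main

# flux is then bootstrapped/configured against ssh://me@server/srv/git/cluster.git,
# not against the forgejo instance that flux itself deploys
```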
- Comment on Portainer on Debian or Proxmox? 1 month ago:
Proxmox is based on debian and uses debian under the hood…
- Comment on What are some lower size games that work well on linux handhelds? 1 month ago:
I remember I fit Binding of Isaac and an archlinux install into 10 gigs of storage using btrfs transparent compression.
The computer was a craptop with only 32 gigs of flash storage overall.
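For reference, transparent compression is basically just a mount option (a sketch, assuming /dev/sda2 is the btrfs volume):
```
# mount with zstd transparent compression; newly written files get compressed
mount -o compress=zstd:3 /dev/sda2 /mnt

# or make it permanent in /etc/fstab:
# UUID=<fs-uuid>  /  btrfs  compress=zstd:3,noatime  0 0

# (re)compress files that already exist
btrfs filesystem defragment -r -czstd /mnt
```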
- Comment on Hmm, any XCP-NG fans for self-hosting? 1 month ago:
Care to elaborate? Proxmox’s paid tier is long-term support for their older releases, plus paid support. The main code is entirely free.
- Comment on Fun/interesting things to self host? 1 month ago:
I don’t see any mention of games so far.
A minecraft server is always a good time with friends, and there are hundreds of other game servers you can self host.
- Comment on Docker security 1 month ago:
I don’t know what the commenter you replied to is talking about, but systemd has its own firewalling and sandboxing capabilities. They probably mean that they don’t use docker for deployment of services at all.
Here is a blogpost about systemd’s firewall capabilities: ctrl.blog/…/systemd-application-firewall.html
Here is a blogpost about systemd’s sandboxing: www.redhat.com/en/blog/mastering-systemd
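As a rough illustration of both, for a hypothetical myapp.service (directive names come from systemd.exec(5) and systemd.resource-control(5); adjust the subnet to your network):
```
# add a hardening drop-in to an existing unit
mkdir -p /etc/systemd/system/myapp.service.d
cat > /etc/systemd/system/myapp.service.d/hardening.conf <<'EOF'
[Service]
# per-unit firewall: drop everything except localhost and the LAN
IPAddressDeny=any
IPAddressAllow=localhost 192.168.1.0/24
# sandboxing
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
NoNewPrivileges=yes
EOF
systemctl daemon-reload
systemctl restart myapp.service
```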
- Comment on Docker setup for debian 13 trixie Ansible Playbook 2 months ago:
I don’t really understand why this is a concern with docker. Are there any particular features you want from version 29 that version 26 doesn’t offer?
The entire point of docker is that it doesn’t really matter what version of docker you have, the containers can still run.
Debian’s version of docker receives security updates in a timely manner, which should be enough.
- Comment on Docker setup for debian 13 trixie Ansible Playbook 2 months ago:
You are adding a new repo, but you should know that the debian repos already contain docker (via docker.io) and docker-compose.
- Comment on Interoperability between self-hosted services 2 months ago:
I use authentik, which enables single sign-on (the same account) between services.
Authentik is a bit complex and irritating at times, so I would recommend voidauth or kanidm as alternatives for most self hosters.
- Comment on Headscale vs Netbird vs Pangolin - How do you like selfhosting them? 2 months ago:
No, they added a beta vpn feature.
- Comment on I made a project that can install/configure/orchestrate 115+ applications on your homelab using Ansible! 2 months ago:
Does it require docker to be installed, the user to be in the docker group, and the docker daemon to be running?
Just an FYI, having the ability to create containers and do other docker operations is equivalent to root: docs.docker.com/engine/security/#docker-daemon-at…
It’s not really accurate to say that your playbooks don’t require root to run when they basically do.
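To illustrate why: anyone who can talk to the docker daemon can do something like this (the classic example, not specific to your project):
```
# mount the host's root filesystem into a container and chroot into it:
# full read/write root access to the host, no sudo involved
docker run --rm -it -v /:/host alpine chroot /host /bin/sh
```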
- Comment on Is self-hosting becoming too gatekept by power users? 2 months ago:
Yeah. I’m seeing a lot of it in this thread, tbh. People are styling themselves as IT admins or cybersec people rather than just hobbyists. Of course, maybe they do do it professionally as well, but I’m seeing an assumption from some people in this thread that it’s dangerous to self host even if you don’t expose anything.
Tailscale into your machine, then be done with it, and otherwise only have access to your services over the VPN.
Now, about actually keeping the services secure, beyond just having them on a private subnet and not really worrying about them. To be explicit, this is referring to fully/partially exposed setups (like VPN access for a significant number of people).
There are two big problems IMO: Default credentials, and a lack of automatic updates.
Default credentials are pretty easy to handle. Docker compose YAML files put the credentials right there. Just read them and change them. It should be noted that you should still be doing this even if you are using GUI-based deployment.
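Even a couple of lines to generate the secrets instead of keeping the samples goes a long way (a sketch, assuming the compose file reads its credentials from a .env file via ${DB_PASSWORD} and friends):
```
# generate random credentials once, instead of shipping static defaults
cat > .env <<EOF
DB_PASSWORD=$(openssl rand -base64 32)
ADMIN_PASSWORD=$(openssl rand -base64 32)
EOF
chmod 600 .env
```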
This is where docker has really held the community back, in my opinion. It lacks automatic updates. There do exist services like watchtower to automatically update containers, but things like databases or config file schemas don’t get migrated to the next version, which means the next version can break things, and there is no guarantee of stability between two versions.
This means that most users, after they use the docker-compose method recommended by the software, are required to manually log in every so often and run docker compose pull and up to update. Sometimes they forget. Combine this with Shodan/ZoomEye (search engines for internet-connected devices) and you will find plenty of people who forgot, because docker punches stuff through firewalls as well.
GUIs don’t really make it easy to keep this promise either. Docker GUIs are nice, but now you have users who don’t realize that Docker apps don’t update themselves, and that they probably should be doing that. Same issue with Yunohost (which doesn’t use docker, which I just learned today. Interesting).
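The ritual itself is simple, it just only works if you remember to do it (example path):
```
cd /opt/myservice        # wherever the compose file lives
docker compose pull      # fetch newer images
docker compose up -d     # recreate containers on the new images
docker image prune -f    # optionally clean up old images
```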
I really like Kubernetes because it lets me do automatic upgrades of services (within limits). But this comes at an extreme complexity cost. I have to deploy another piece of software on top of Kubernetes to automatically upgrade the applications, and then another to automatically do some of the database migrations. And no GUI would really free me from this complexity, because you end up having to have such an understanding of the system that a pretty interface doesn’t really save you.
Another commenter said:
20 years ago we were doing what we could manually, and learning the hard way. The tools have improved and by now do most of the heavy lifting for us. And better tools will come along to make things even easier/better. That’s just the way it works.
And I agree with them, but I think things kinda stalled with Docker, as its limitations have created barriers to making things easier. The tools that try to make things “easier” on top of docker basically haven’t done their job, because they haven’t offered auto updates, or reverse proxies, or abstracted away the knowledge required to write YAML files.
Share your project. Then you’ll hear my thoughts on it. Although, without even looking at it, my opinion is that if you have based it on docker, and you have decided to simply run docker-compose on YAML files under the hood, you’ve kinda already fucked up, because you haven’t actually abstracted away the knowledge needed to use Docker, you’ve just hidden it from the user. But I don’t know what you’re doing.
Your service should have:
- A lack of static default credentials. The best way is to autogenerate them.
Further afterthoughts:
Simple in implementation is not the same thing as simple in usage. Simple in implementation also means easy to troubleshoot, as there will be fewer moving parts when something goes wrong.
I think operating tech isn’t really that hard, but I think there is a “fear” of technology, where whenever anyone sees a command line, or even just some prompt they haven’t seen before, they panic and throw a fit.
- Comment on Is self-hosting becoming too gatekept by power users? 2 months ago:
Not at all. In fact I remember the day my server was hacked because I’d left a service running that had a vulnerability in it.
Was this server on an internal network?
- Comment on I keep waffling on Proxmox. Sell me. For or against. 2 months ago:
I like Incus a lot, but it’s not as easy to create complex virtual networks as it is with proxmox, which is frustrating in educational/learning environments.
- Comment on I keep waffling on Proxmox. Sell me. For or against. 2 months ago:
This is untrue; proxmox is not a wrapper around libvirt. It has its own API and its own methods of running VMs.
- Comment on Route outgoing traffic of a docker bridge network through VPN 3 months ago:
Yes, this is where docker’s limitations begin to show and people begin looking at tools like Kubernetes for things like advanced, granular control over the flow of network traffic.
Because such a thing is basically impossible in Docker, AFAIK. You get these responses (and, in general, responses like the ones you are seeing) when the thing a user is attempting to do is anywhere from significantly non-trivial to basically impossible.
An easy way around this, if you still want to use Docker, is addressing the below bit, directly:
no isolation anymore, i.e qbit could access (or at least ping) to linkwarden’s database since they are all in the same VPN network.
As long as you have changed the default passwords for the databases and services, and kept the services up to date, it should not be a concern that the services have network-level access to each other: without the ability to authenticate to or exploit each other, there is nothing they can do.
If you insist on trying to get some level of network isolation between services while continuing to use Docker, your only real option is iptables* rules. This is where things get very painful, because iptables rules have no persistence by default and are kind of a mess to deal with. Also, docker implements its own iptables setup instead of using the standard chains, which results in weird behavior like Docker containers bypassing the firewall when they expose ports.
You will need a fairly good understanding of iptables to do this. In addition, I will warn you in advance that you cannot create iptables rules based on IP addresses, as the IP addresses of docker containers are ephemeral and change; you must create rules based on the hostnames of containers, which adds further complexity compared to just blocking by IP.
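To give a flavour of it: Docker at least provides the DOCKER-USER chain as the supported place to hook in custom rules. A rough sketch (the subnets are made up; here I match on docker network subnets, which you can pin in a compose file, rather than on individual container addresses):
```
# block the network the VPN containers sit on (172.20.0.0/16, made up)
# from reaching the network linkwarden's database lives on (172.21.0.0/16, made up)
iptables -I DOCKER-USER -s 172.20.0.0/16 -d 172.21.0.0/16 -j DROP

# these rules are not persistent; they have to be re-applied on boot,
# e.g. via iptables-save/iptables-restore or the iptables-persistent package
iptables-save > /etc/iptables/rules.v4
```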
A good place to start is here. You probably don’t know what a lot of the terminology here is. You will have to spend a lot of time learning all of it, and more. Perhaps you have better things to do with your time?
*Um, 🤓 ackshually it’s nftables, but the iptables-nft command offers a transparent compatibility layer enabling easier migrations from the older and no longer used iptables
- Comment on localhosting: selfhosting to the min 5 months ago:
There are a few apps that I think fit this use case really well.
Languagetool is a spelling and grammar checker with a server-client model. Libreoffice now has built-in languagetool integration, where it can access a server of your choosing. I make it access the server I run locally, since archlinux packages languagetool.
Another is stirling-pdf. This is a really good pdf manipulation program that people like; it comes as a server with a web interface.
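For the languagetool half, the rough shape of it looks like this (port and jar path are examples; archlinux ships a wrapper script, elsewhere you can run the server jar from the upstream release directly):
```
# run the LanguageTool HTTP server locally
java -cp languagetool-server.jar org.languagetool.server.HTTPServer --port 8081 --allow-origin '*'

# then point LibreOffice's LanguageTool integration at http://localhost:8081
```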
- Comment on Exposing docker socket to a container 5 months ago:
I think I have also seen socket access in Nginx Proxy Manager in some example now. I don’t really know the advantages other than that you are able to use the container names for your proxy hosts instead of IP and port
I don’t think you need socket access for this? This is what I did: stackoverflow.com/…/how-to-reach-docker-container…
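The gist of that approach (a sketch; the container names are examples): put the proxy and the app on a shared user-defined network, and docker’s built-in DNS lets you use container names as hostnames, no socket access needed.
```
# create a shared user-defined network
docker network create proxy

# attach both the proxy and the app to it
docker network connect proxy nginx-proxy-manager
docker network connect proxy myapp

# inside Nginx Proxy Manager, the proxy host can now be http://myapp:<port>
```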