marauding_gibberish142
@marauding_gibberish142@lemmy.dbzer0.com
- Comment on Synology could bring “certified drive” requirements to more NAS devices 1 week ago:
Just lol at Synology trying to pull an Nvidia
- Comment on Synology could bring “certified drive” requirements to more NAS devices 1 week ago:
There’s plenty of N100/N350 motherboards with 6 SATA ports on AliExpress, grab them while you can
- Comment on Synology could bring “certified drive” requirements to more NAS devices 1 week ago:
Synology is like Ubiquiti in the self-hosted community: sure it’s self-hosted, but it’s definitely not yours. At the end of the day you have to live with their decisions.
Terramaster lets you run your own OS on their machines. That’s basically what a homelabber wants: a good chassis and components. I couldn’t see a reason to buy a Synology after Terramaster and Ugreen started ramping up their product lines, which let you run whatever OS you wanted. Synology at this point is for people who either don’t know what they’re doing or want to remain hands-off with storage management (which is valid; you don’t want to do more work when you get home from work). Unfortunately, such customers are now left in the lurch: it’s either TrueNAS or trusting some other company to keep your data safe.
- Comment on What CI/CD tools are you guys using? I have Forgejo but I need a simple way of running automation. 1 week ago:
Thanks
- Comment on Am I the only one interested in Fedora container? 1 week ago:
Alpine isn’t exactly fortified either. It needs some work too. Ideally you’d use a deblobbed kernel with KSPP, use MAC (mandatory access control), harden permissions, and install hardened_malloc. I don’t recall if there are CIS benchmarks or STIGs for Alpine, but those are very important too. These are my basic steps for hardening anything. But Alpine has the advantage of being lean from the start. Ideally you’d also compile your packages with hardened flags like on Gentoo, but for a regular container and VM host that might be too much (or not - depends on your appetite for this stuff).
- Comment on What CI/CD tools are you guys using? I have Forgejo but I need a simple way of running automation. 1 week ago:
I’m looking at buildbot
- Comment on Am I the only one interested in Fedora container? 1 week ago:
I don’t get it. Where is the idea that “Fedora focuses on security” coming from? Fedora requires about as much work as other distros to harden it.
I personally use Alpine because I trust BusyBox to have less attack surface than the normal Linux utils.
- Comment on How to self-host a distributed git server cluster? 1 week ago:
Oh I get it. Auto-pull the repos to the master nodes’ local storage so that if something bad happens, I can use the automatically pulled (and hopefully current) code to fix what broke.
Good idea
- Comment on How to self-host a distributed git server cluster? 1 week ago:
Well, it’s a tougher question to answer when it’s an active-active config rather than a master-slave config, because the former would need the minimum latency possible as requests are bounced all over the place. For the latter, I’ll probably set it up to pull every 5 minutes, so 5 minutes of latency (assuming someone doesn’t try to push right when the master node is going down).
I don’t think the likes of GitHub work on a master-slave configuration. They’re probably on the active-active side of things for performance. I’m surprised I couldn’t find anything on this from Codeberg though; you’d think they’d have already solved this problem and published something. Maybe I missed it.
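For what it’s worth, the 5-minute pull could be a single cron entry on each standby node - something like this, where the repo path and the remote name are hypothetical:

```
# Refresh the local bare mirror from the master every 5 minutes.
# The '+refs/*:refs/*' refspec force-updates all branches and tags.
*/5 * * * * git -C /srv/git/myrepo.git fetch --quiet --prune origin '+refs/*:refs/*'
```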
I didn’t find anything in the official git book either, which one do you recommend?
- Comment on How to self-host a distributed git server cluster? 1 week ago:
Thanks for the comment. There’s no special use-case: it’ll just be me and a couple of friends using it anyway. But I would like to make it highly available. It doesn’t need to be 5 - 2 or 3 would be fine too but I don’t think the number would change the concept.
Ideally I’d want all servers to be updated in real-time, but it’s not necessary. I simply want to run it like so because I want to experience what the big cloud providers run for their distributed git services.
Well the other choice was Reddit so I decided to post here (Reddit flags my IP and doesn’t let me create an account easily). I might ask on a couple of other forums too.
Thanks
- Comment on How to self-host a distributed git server cluster? 1 week ago:
This is a fantastic comment. Thank you so much for taking the time.
I wasn’t planning to run a GUI for my git servers unless really required, so I’ll probably use SSH. Thanks, yes that makes the part of the reverse proxy a lot easier.
I think your idea of having a designated “master” (server 1) and rolling updates out to the rest of the servers is brilliant. The replication procedure becomes a lot easier this way, and it also removes the need for the reverse proxy! I can just use Keepalived, set up weights to make one of them the master, with the others as slaves for failover. It also won’t do round-robin, so no special handling for sticky sessions. This is great news from the networking perspective of this project.
Hmm, you said to enable pushing repos to the remote git servers instead of having them pull? I was going to create a WireGuard tunnel and have it accessible from my network for some stuff, but I guess that makes sense.
Thanks again for the wonderful comment.
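For reference, the Keepalived piece could be as small as this on the master node (the interface name, router ID, and VIP below are made-up values; the standbys would use `state BACKUP` and lower priorities):

```
vrrp_instance GIT_VIP {
    state MASTER              # BACKUP on the other nodes
    interface eth0
    virtual_router_id 51
    priority 150              # standbys get lower values, e.g. 100, 90
    advert_int 1
    virtual_ipaddress {
        192.0.2.10/24         # the VIP that git clients point their remotes at
    }
}
```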
- Comment on How to self-host a distributed git server cluster? 1 week ago:
Sorry, I don’t understand. What happens when my k8s cluster goes down taking my git server with it?
- Comment on How to self-host a distributed git server cluster? 1 week ago:
I think I messed up my explanation again.
The load balancer in front of my git servers doesn’t really matter. I can use whatever, really. What matters is: how do I make sure that when the client writes to a repo on one of the 5 servers, the changes are synced in real-time to the other 4 as well? Running rsync every 0.5 seconds doesn’t seem like a viable solution.
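One common alternative to rsync polling is push-based replication: a post-receive hook on the node that accepted the write immediately mirrors all refs to the other nodes. Here’s a minimal sketch using throwaway local repos as stand-ins; on real servers the hook would push to the other four nodes over SSH (all names and paths below are made up):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/primary.git"
git init -q --bare "$tmp/replica.git"

# The hook fires after every accepted push and mirrors all refs onward.
# --mirror forces the replica to exactly match the primary, deletions included.
cat > "$tmp/primary.git/hooks/post-receive" <<EOF
#!/bin/sh
git push --quiet --mirror "$tmp/replica.git"
EOF
chmod +x "$tmp/primary.git/hooks/post-receive"

# A client pushes to the primary; the hook replicates in the same instant.
git init -q "$tmp/work"
cd "$tmp/work"
git -c user.email=me@example.com -c user.name=me commit -q --allow-empty -m "first"
git push -q "$tmp/primary.git" HEAD:refs/heads/main
```

The catch, as with any synchronous fan-out, is what to do when a replica is down mid-push - which is part of why the big providers built custom systems for this.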
- Comment on How to self-host a distributed git server cluster? 1 week ago:
You mean have two git servers, one “PROD” and one for infrastructure, and mirror repos in both? I suppose I could do that, but if I were to go that route I could simply create 5 remotes for every repo and push to each individually.
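The “5 remotes, push to each individually” idea can actually be collapsed into a single remote with multiple push URLs, so one `git push` fans out to every server. A sketch with throwaway local repos standing in for the real SSH URLs:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/server1.git"
git init -q --bare "$tmp/server2.git"

git init -q "$tmp/work"
cd "$tmp/work"
git -c user.email=me@example.com -c user.name=me commit -q --allow-empty -m "first"

# One logical remote; each extra push URL fans the push out to another server.
git remote add all "$tmp/server1.git"
git remote set-url --add --push all "$tmp/server1.git"
git remote set-url --add --push all "$tmp/server2.git"

git push -q all HEAD:refs/heads/main   # one command updates every server
```

Note the pushes are client-driven and sequential, so this covers “I push, everyone gets it” but not server-side writes like web-UI merges.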
For the k8s suggestion - what happens when my k8s cluster goes down, taking my git server along with it?
- Comment on How to self-host a distributed git server cluster? 1 week ago:
GitHub didn’t publish the source code for their project, previously known as DGit (Distributed Git) and now known as Spokes. The only mention of it is in a blog post on their website, but I don’t have the link handy right now.
- Comment on How to self-host a distributed git server cluster? 1 week ago:
Thank you. I did think of this but I’m afraid this might lead me into a chicken and egg situation, since I plan to store my Kubernetes manifests in my git repo. But if the Kubernetes instances go down for whatever reason, I won’t be able to access my git server anymore.
I edited the post which will hopefully clarify what I’m thinking about
- Comment on How to self-host a distributed git server cluster? 1 week ago:
Apologies for not explaining better. I want to run a load balancer in front of multiple instances of a git server. When my client performs an action like a pull or a push, it will go to one of the 5 instances, and the changes will then be synced to the rest.
- Comment on How to self-host a distributed git server cluster? 1 week ago:
Apologies for not explaining it properly. Essentially, I want to have multiple git servers (let’s take 5 for now), have them automatically sync with each other, and run a load balancer in front. So when a client performs an action on a repository, it goes to one of the 5 instances and the changes are written to the rest.
- Submitted 1 week ago to selfhosted@lemmy.world | 27 comments
- Comment on Advice wanted: Making reliable private cloud backups with Kopia. 1 week ago:
B2
- Comment on Postiz v1.39.2 - Open-source social media scheduling tool, Introducing MCP. 1 week ago:
Upvoted. Awesome project
- Comment on Would you use a self-hosted, AI-powered search engine for your favorite sites? 2 weeks ago:
Sorry, I was wrong. I think I probably saw it in a blog post where they mentioned creating an AI search engine using SearXNG and Ollama. I don’t see any mention of native Ollama integration in the SearXNG docs
- Comment on How to use GPUs over multiple computers for local AI? 2 weeks ago:
Thanks man, I’ll take a look
- Comment on How to use GPUs over multiple computers for local AI? 2 weeks ago:
I see. Thanks
- Comment on How to use GPUs over multiple computers for local AI? 2 weeks ago:
I agree with your assessment. I was indeed going to run k8s, just hadn’t figured out what you told me. Thanks for that.
And yes, I realised that 10GbE is just not enough for this stuff. But another commenter told me to look for used Threadripper and EPYC boards (which are extremely expensive for me), which gave me the idea to look for older Intel CPU + motherboard combos. Maybe I’ll have some luck there. I was going to use Talos in a VM with all the GPUs passed through to it.
- Comment on How to use GPUs over multiple computers for local AI? 2 weeks ago:
Specifically because PCIe slots go for a premium on motherboards and CPUs. If I didn’t have to worry about PCIe I wouldn’t care about a networked AI cluster. But yes, I accept what you say.
- Comment on How to use GPUs over multiple computers for local AI? 2 weeks ago:
Heavily quantized?
- Comment on How to use GPUs over multiple computers for local AI? 2 weeks ago:
I think yes
- Comment on Authentik: Allowing users to create invites? 2 weeks ago:
I have no idea of how to do this but following
- Comment on Would you use a self-hosted, AI-powered search engine for your favorite sites? 2 weeks ago:
I think SearXNG already has AI integration. Not sure how it works though. I don’t think I would personally use AI for anything other than summarising what I search, but it is a useful feature to have.