
What's your self-hosting success of the week?

110 likes

Submitted 2 weeks ago by shark@lemmy.org to selfhosted@lemmy.world


Comments

  • baller_w@lemmy.zip 2 weeks ago

    I migrated Openclaw from docker running on my Raspberry Pi to an old NUC I had lying around. Backed it with mainly models off of OpenRouter or my local Ollama instance; for very difficult tasks it uses Anthropic. Added it to my GitHub repo and implemented Plane for task management. Added a subagent for coding and have it work on touch-up or research tasks I don’t have personal time to do. Made an SDLC document that it follows so I can review all of its work. Added a cron so it checks for work every hour. It ran out of tasks in five days. Work quality: C+, but it’s a hell of a lot better than having nothing.
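
    The hourly check is just a cron entry; a minimal sketch, with the script path and log file entirely made up:

        # poll for new agent tasks every hour (hypothetical paths)
        0 * * * * /opt/agent/check-for-work.sh >> /var/log/agent-work.log 2>&1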

    It helped research and implement SilverBullet for personal notes management in one shot.

    I also migrated all of my services’ DNS resolution to Cloudflare so I get automatic TLS handoff, and set up nginx with deny rules so any app I don’t want exposed doesn’t get proxied.
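
    In nginx that deny-by-default pattern looks roughly like this; a minimal sketch, with hostnames and ports made up:

        # explicitly proxied app
        server {
            listen 80;
            server_name app.example.com;           # made-up hostname
            location / {
                proxy_pass http://127.0.0.1:8080;  # made-up upstream
            }
        }

        # catch-all: any host without its own server block is refused
        server {
            listen 80 default_server;
            server_name _;
            return 444;  # close the connection without responding
        }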

    This weekend I’m resurrecting my HomeAssistant build.

  • kokomo@lemmy.kokomo.cloud 2 weeks ago

    Managed to finally get around to self-hosting ntfy and added it to Uptime Kuma for notifications. Experimenting with Checkcle, and stood up an Invidious instance for funsies (prob will see how much I use it, but might as well). Less this week: recently got Pangolin up and running, and I’m loving it; it’s so seamless and straightforward, along with Caddy on my other VPS machines.

  • fleem@piefed.zeromedia.vip 2 weeks ago

    proxmox backups fixed!

    copyparty is really REALLY cool.

    self hosted gitea was much easier than expected.

    jellyfin updated to latest.

    fixed habitica issues (gotta have my goddamn checkmarks!)

    self hosted ntfy ssh login scripts EVERYWHERE
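
    one common way to wire that up (server URL and topic made up here) is a profile snippet that fires on SSH logins:

        # /etc/profile.d/ssh-notify.sh -- hypothetical path and topic
        if [ -n "$SSH_CONNECTION" ]; then
          curl -s -d "SSH login: $USER from ${SSH_CONNECTION%% *} on $(hostname)" \
            https://ntfy.example.com/logins > /dev/null &
        fi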

    i said fuck NUT and passed battery backup straight to truenas VM, the graphs are beautiful.

    ive decided that an rclone docker container set up to serve webdav will be a tool i keep on all lxcs, for moving shit around easier. turn it on, move the stuff, turn it back off. (i can SCP with the best of them but this is so much easier)
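
    for reference, the throwaway server is basically a one-liner (paths, port, and credentials made up):

        # serve a directory over webdav; ctrl-c when done moving stuff
        rclone serve webdav /srv/data --addr :8042 --user me --pass hunter2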

    i want a self hosted CA 😭😭😭

    • shark@lemmy.org 2 weeks ago

      > copyparty is really REALLY cool. (i use the phi95 theme)

      Wow. That’s amazing!

      > i want a self hosted CA

      It’s totally worth it. I was putting it off for a very long time, but it was actually kind of easy.
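
      For a sense of how easy it can be, here’s one minimal way to bootstrap a root CA with plain openssl (illustrative only; the comment doesn’t say which tool was used):

          # create a 10-year self-signed root CA key + certificate
          openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:P-256 \
            -days 3650 -nodes -keyout ca.key -out ca.crt -subj "/CN=Home Lab CA"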

      • fleem@piefed.zeromedia.vip 2 weeks ago

        got a link? I’ve been failing to get vaulTLS to even start

  • silenium_dev@feddit.org 2 weeks ago

    I already had Keycloak set up, but a few services don’t support OIDC or SAML (Jellyfin, Reposilite), so I’ve deployed lldap and connected those services and Keycloak to it. Now I really have a single user across all services.
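
    A minimal sketch of the lldap side, assuming Docker Compose (the ports are lldap’s defaults; the base DN is made up):

        services:
          lldap:
            image: lldap/lldap:latest
            ports:
              - "3890:3890"    # LDAP, what the services and Keycloak bind to
              - "17170:17170"  # web UI
            environment:
              LLDAP_LDAP_BASE_DN: dc=example,dc=com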

    • WhyJiffie@sh.itjust.works 2 weeks ago

      how did you migrate your existing accounts to this system? or did you just make a new account?

      • silenium_dev@feddit.org 2 weeks ago

        I recreated the Keycloak account from LDAP, and then manually patched the databases of all OIDC-based services to the new account UUID, so the existing accounts are linked to the new Keycloak account.

        I have two Keycloak accounts, one in the master realm for administrative purposes and one in the apps realm for all my services, so I didn’t break access to Keycloak.
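
        Purely to illustrate the database-patching step (table and column names differ per service and are made up here):

            -- point the app's federated identity at the new Keycloak subject
            UPDATE oidc_identities
               SET subject = '<new-keycloak-account-uuid>'
             WHERE username = 'myuser';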

  • CodeGameEat@lemmy.world 2 weeks ago

    A hopefully “success in progress”: I am building a new TrueNAS server for storage. I have a k8s cluster and am currently using Rancher for storage, but I decided that at my scale central storage made more sense and would be easier to manage. I am also using the opportunity to upgrade from 2 TB of usable storage to 44 TB. Fingers crossed everything will work 🤞

  • TheRagingGeek@lemmy.world 2 weeks ago

    This week I saw my 3-machine cluster flailing trying to stay online. Digging around, I identified it as a communication issue with my NAS: it was running NFSv3, so I swapped that to NFSv4.1 and did some tuning, and now my services have never been faster!
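
    For reference, the mount-side change is roughly one fstab line (server IP, export path, and tuning values made up):

        # NFSv4.1 with multiple TCP connections and no atime updates
        192.168.1.10:/export/apps  /mnt/nas  nfs4  vers=4.1,nconnect=4,hard,noatime  0 0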

  • Alfredolin@sopuli.xyz 2 weeks ago

    I finally set up a VPN instead of exposing unnecessary ports to the wild!

  • atzanteol@sh.itjust.works 2 weeks ago

    This week - Apache Airflow set up to automate running backups (replacing cron).
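
    The cron-to-Airflow move looks roughly like this; a minimal DAG sketch assuming Airflow 2.x, with the schedule and script path made up:

        # nightly_backups.py -- same cron expression, but with retries and a UI
        from datetime import datetime

        from airflow import DAG
        from airflow.operators.bash import BashOperator

        with DAG(
            dag_id="nightly_backups",
            schedule="0 3 * * *",                    # hypothetical schedule
            start_date=datetime(2024, 1, 1),
            catchup=False,
        ) as dag:
            BashOperator(
                task_id="run_backup",
                bash_command="/opt/backups/run.sh",  # hypothetical script
                retries=2,
            )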

  • GnuLinuxDude@lemmy.ml 2 weeks ago

    I’ve been self-hosting for years, but with a recent move comes a recent opportunity to do my network a bit differently. I’m now running a capable OpenWRT router, and support for AdGuard Home is practically built into OpenWRT. I just needed to configure it right and set it up, but the documentation was comprehensive enough.

    For years I had kept a Debian VM running Pi-hole. I kept it ultra lean with a cloud kernel, 3 GB of disk, and 160 MB of RAM, just so it could control its own network stack. And I’d set devices to manually use its IP address to be covered. AGH seems to be pretty much the same thing as Pi-hole, but with my new setup the entire network is covered automatically without having to configure any device. And yes, I know I could’ve done the same before by forwarding DNS lookups to the Pi-hole, but I was always afraid it would cause a problem and I’d need an easy way to back out of the adblocking. Subjectively, over about 6 years, only a couple of worthless websites ever blocked me.

    I haven’t yet gotten to the point where I’m also trying to intercept hardcoded DNS lookups, but soon… It’s not urgent for me because I don’t have sinister devices that do that.

  • Kushan@lemmy.world 2 weeks ago

    It was a couple of weeks ago for me, but I managed to get the docker compose files for all my infrastructure cleaned up, and all container versions are now pinned.

    I have Renovate set up to open PRs when a new version is available, so I can handle updates by just accepting the PR, and it’s automatically deployed to my server.

    Nice and easy to keep apps up to date without them randomly breaking because I didn’t know about a breaking change when blindly pulling from latest.
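
    The Renovate side of that can be as small as this renovate.json (a sketch; rules vary per repo):

        {
          "$schema": "https://docs.renovatebot.com/renovate-schema.json",
          "extends": ["config:recommended"],
          "packageRules": [
            {
              "matchManagers": ["docker-compose"],
              "groupName": "compose images"
            }
          ]
        }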

  • synapse1278@lemmy.world 2 weeks ago

    Reconnected my light switches to Home Assistant. I just had to press the pairing button on the device again for some reason. But it’s inside the switch box in the wall, which is not very practical. I wish they’d thought of another way to put the device in pairing mode, like toggling the switch on-off 10 times, something like that.

  • BasicallyHedgehog@feddit.uk 2 weeks ago

    I’ve been running all my apps on my NAS as docker containers, but some get ‘stuck’ occasionally, requiring a reboot of the whole machine. Using the NAS was mostly out of convenience.

    I also had an old laptop running k3s, hosting a few stateless services.

    This week I picked up three Wyse 5070 devices and started setting up a more permanent Kubernetes cluster. I decided to use Talos Linux, which is a steep learning curve, but should hopefully reduce the amount of ongoing work for upgrades. I’ll be deploying everything with FluxCD this time around too.

    I’ve stumbled a bit with the synology-csi-driver. It didn’t work with Talos out of the box, but it turns out the latest commits have a fix. The only thing remaining before I can start porting the apps over is figuring out how to spin up a new CA and generate client certificates for mTLS. I currently do that in Vault, but it seems like something cert-manager could handle going forward.
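
    cert-manager’s documented bootstrap pattern for that is a self-signed root backing a CA issuer; a sketch with made-up names (client certs for mTLS then become Certificate resources issued by internal-ca):

        apiVersion: cert-manager.io/v1
        kind: ClusterIssuer
        metadata:
          name: selfsigned
        spec:
          selfSigned: {}
        ---
        apiVersion: cert-manager.io/v1
        kind: Certificate
        metadata:
          name: internal-root
          namespace: cert-manager
        spec:
          isCA: true
          commonName: internal-root
          secretName: internal-ca-root
          privateKey:
            algorithm: ECDSA
          issuerRef:
            name: selfsigned
            kind: ClusterIssuer
        ---
        apiVersion: cert-manager.io/v1
        kind: ClusterIssuer
        metadata:
          name: internal-ca
        spec:
          ca:
            secretName: internal-ca-root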

    • funkajunk@lemmy.world 2 weeks ago

      I also just set up a cluster using Talos!

      I’ve never used kubernetes before, but decided it was time to learn so I picked up 4x HP EliteDesk Mini systems and dove in.

  • Zwuzelmaus@feddit.org 2 weeks ago

    I have tried out Openclaw in a container, and it wasn’t hard at all.

    All the warnings of danger are right, though. But if anything goes wild, I still know how to delete a container :-)

  • tophneal@sh.itjust.works 2 weeks ago

    Our table’s DM might finally make the switch from Roll20 to Foundry for a campaign!

  • harsh3466@lemmy.ml 2 weeks ago

    I got a test box set up with NixOS and a config that runs all of my services. I wanted to test the declarative-rebuild promise of it, so I:

    1. Filled the services with some of my backed-up data (a copy of the data, not the actual backup)
    2. Ran it for a few days using some of the services
    3. Backed up the data of the nixos test server, as well as the nixos config
    4. Reinstalled nixos on the test box, brought in the config, and rebuilt it.

    And it worked!!! All services came back with the data, and all configuration was correct.
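
    The config being rebuilt is just ordinary NixOS service declarations; a minimal sketch (example services, not the actual set used here):

        { config, pkgs, ... }:
        {
          # the whole service set is declared here, so a fresh install plus
          # `nixos-rebuild switch` reproduces the same system; only the data
          # itself has to come back from backups
          services.jellyfin.enable = true;
          services.postgresql.enable = true;
        }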

    I’m going to keep testing, and depending on how that goes I may switch my prod server and NAS to NixOS.

    • smiletolerantly@awful.systems 2 weeks ago

      Very cool!

      Re: the backup/restore of state in NixOS: I found myself writing the same things over and over again for each VM/service, so I finally wrote this wrapper module (in action e.g. here for Jellyfin), which configures both the backup services and timers, as well as adding a simple rsync-restore-jellyfin command to the system packages. In case you find this useful and don’t already have your own abstractions or a sufficiently different use case 😄

      • idealpink@feddit.nu 2 weeks ago

        This is great! Thanks

  • 5ymm3trY@discuss.tchncs.de 2 weeks ago

    Started my self-hosting journey a couple of years ago with a Raspberry Pi, OpenMediaVault, and a couple of Docker containers. This week I finally managed to move my AdGuard Home container and my DNS setup over to my NAS, which was the final thing keeping the Pi running. I also synced all the data to the NAS.

    The next step I am trying to figure out is a decent backup setup. I’ve read about Borg, Restic, and Kopia, but haven’t decided on one of them yet. What are you guys using?

    • Cyber@feddit.uk 2 weeks ago

      Use the one that makes the most sense to you for restores.

      Back up a folder, then restore it somewhere else… if any of the applications causes you problems in your setup, move on.
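
      With restic, for example, that round-trip test is three commands (the repo path is made up):

          restic -r /mnt/nas/restic-repo init
          restic -r /mnt/nas/restic-repo backup ~/testdir
          restic -r /mnt/nas/restic-repo restore latest --target /tmp/restore-test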

      • 5ymm3trY@discuss.tchncs.de 2 weeks ago

        Good point. I was going to set up 1-2 of them and find out what suits my needs.

    • Saltarello@lemmy.world 2 weeks ago

      I settled on Kopia myself, but I always seem to see the others mentioned.

  • tofu@lemmy.nocturnal.garden 2 weeks ago

    Still waiting for my success. Pi-hole randomly doesn’t answer DNS requests in time, causing a lot of trouble between my services. It’s been happening since I switched to dnsmasq on OPNsense (which Pi-hole uses as upstream for my local domain), but it affects external domains too. I can’t nail it down and am this close to reconsidering my whole network setup. It used to work fine for over a year, though…

    OPNsense’s dnsmasq is the DHCP server for my servers and also resolves them as local hosts (e.g. server1.local.domain), and Pi-hole conditionally forwards there. Since the issue also occurs when resolving external domains, it shouldn’t be related, but the timing is suspicious. I also switched the general upstream DNS.

    Pi-hole does have some logs indicating too many concurrent requests, but those don’t always correlate with the timeouts.
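
    For reference, the conditional forward is a single dnsmasq line on the Pi-hole side, and the concurrent-query ceiling that log usually refers to is tunable (domain, IP, and value made up):

        # /etc/dnsmasq.d/99-custom.conf on the Pi-hole
        server=/local.domain/192.168.1.1   # forward the local zone to OPNsense
        dns-forward-max=300                # dnsmasq's default is 150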

    I know it’s DNS, I just don’t know where yet.

    • brygphilomena@lemmy.dbzer0.com 2 weeks ago

      Is dnsmasq rate-limiting the Pi’s IP? Or is OPNsense intercepting port 53 outbound and sending it to dnsmasq anyway, so that all Pi DNS queries are being resolved by dnsmasq?

      • tofu@lemmy.nocturnal.garden 2 weeks ago

        OPNsense is only between the servers and the Pi; the Pi is in the same subnet as our consumer devices and the OPNsense box (directly connected to the router). The issues occur both on the consumer devices and on the servers, so OPNsense should not be the direct issue.

  • Natal@lemmy.world 2 weeks ago

    Hum. I’ve been smooth sailing for a while now. I’ve tried installing OwnTracks again and made some progress by figuring out that Cloudflare Tunnels are a problem (at least the way I configured them). New to MQTT. So the app still doesn’t work properly, but now I have an idea why and I’m not just banging my head against the wall anymore.

  • shrek_is_love@lemmy.ml 2 weeks ago

    I got Terminus for the TRMNL set up using Podman on my server running NixOS.

    Although I’m actually planning on replacing Terminus with my own simple server app, so that it can be even more declarative (no Postgres database of devices/users/screens) and easier for me to customize. The API I’ll have to implement is extremely straightforward, so I don’t anticipate it taking too long.
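
    As a sketch of how small such a server can be (the endpoint path and response fields below are assumptions, not the documented TRMNL API):

        # toy display server: the device polls it and is told what to render
        import json
        from http.server import BaseHTTPRequestHandler, HTTPServer

        SCREEN_URL = "http://server.local/screens/current.png"  # made up

        class Handler(BaseHTTPRequestHandler):
            def do_GET(self):
                if self.path == "/api/display":           # assumed endpoint
                    body = json.dumps({
                        "image_url": SCREEN_URL,          # assumed field names
                        "refresh_rate": 900,              # seconds until next poll
                    }).encode()
                    self.send_response(200)
                    self.send_header("Content-Type", "application/json")
                    self.end_headers()
                    self.wfile.write(body)
                else:
                    self.send_response(404)
                    self.end_headers()

        HTTPServer(("", 8080), Handler).serve_forever()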

  • ragingHungryPanda@piefed.keyboardvagabond.com 2 weeks ago

    I got Gitea running on my VPS cluster that I use to host Keyboard Vagabond services. I moved my repository from my home PC onto it and set up an action runner to automate building and deploying PieFed: it runs my build script, pushes to the (internal) Harbor registry, then deletes and recreates a job to run DB migrations and restarts the web and worker pods.
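
    A rough shape for that pipeline as a Gitea Actions workflow (registry, job, and deployment names are made up):

        # .gitea/workflows/deploy.yml
        on: push
        jobs:
          deploy:
            runs-on: ubuntu-latest
            steps:
              - uses: actions/checkout@v4
              - run: ./build.sh
              - run: docker push harbor.internal/piefed/web:latest
              - run: kubectl delete job db-migrate --ignore-not-found
              - run: kubectl apply -f k8s/db-migrate-job.yaml
              - run: kubectl rollout restart deployment/piefed-web deployment/piefed-worker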

    I’m going to migrate the other build services to it as well, and after that I should be able to finally get all of my services behind Cloudflare Tunnels and Tailscale and remove the last bits of ingress-nginx. The registry was the only thing still on ingress-nginx, because I needed to push larger image files than Cloudflare permits. Since all of that is internal now, I finally get to seal those bits off.

    The build is also faster since I don’t have to rely on wifi.
