glizzyguzzler
@glizzyguzzler@lemmy.blahaj.zone
- Comment on Fresh Proxmox install w/ full disk encryption—so install Debian first, then Proxmox on top? 1 week ago:
So, extra background: I was put off by proxmox's weird steps to get ISOs onto the system via USB, so I was like "I am not touching the backup stuff" and just rolled my own (I treat the VMs/containers on my proxmox server like individual servers, back them up accordingly, and do not back up the underlying proxmox instance itself).
I see proxmox has a similar pruning setting to Restic, and it exports the files like incus does. So I'd say yes, proxmox is a one-stop shop for backups, while with incus you have to put its container export options and restic together yourself and stick that in a cron job.
Still hard to say what I'd definitively tell a newbie to go with. I found (and still find) the proxmox UI daunting and difficult, while the incus UI makes much more sense to me and is easier (it has an ISO-pulling system built in, for instance). But as you've pointed out, proxmox gives you an easy way to have robust backups, which takes much more effort on the incus side.
As backups are paramount, proxmox for a total newbie. If someone is familiar with scripting, then incus - because it needs scripted backups to be as robust as proxmox's backups. @barnaclebill@lemmy.dbzer0.com this conclusion should help you choose proxmox (most likely)!
- Comment on Fresh Proxmox install w/ full disk encryption—so install Debian first, then Proxmox on top? 1 week ago:
linuxcontainers.org/incus/…/instances_backup/#ins…
A bit down from the snapshots section is the export section. What I do is export to a place, then back that up with Restic. I do not compress on export and instead do it myself with the --rsyncable flag added to zstd (the flag applies to gzip too). With the rsyncable flag, incremental backups still work on the compressed file, so it's space efficient despite being compressed. I don't worry about collating individual export files; instead I rely on Restic's built-in versioning to get a specific version of the VM/container if I need it.
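Roughly, the cron-able script for that flow looks like this (the instance name, repo, and paths are placeholders, not my real ones, and double-check the export flag spelling against `incus export --help` on your version):

```shell
# Sketch of the export -> zstd --rsyncable -> restic flow described above.
# Written out as a script you'd point cron at; all names/paths are examples.
cat > /tmp/incus-backup.sh <<'EOF'
#!/bin/sh
set -eu
EXPORT_DIR=/srv/exports
mkdir -p "$EXPORT_DIR"
# Export uncompressed, then compress ourselves with --rsyncable so
# restic's deduplication still works well across runs
incus export mycontainer "$EXPORT_DIR/mycontainer.tar" --compression none
zstd --rsyncable --force --rm "$EXPORT_DIR/mycontainer.tar"
# Restic versions the compressed exports; no need to collate them manually
restic -r /path/to/repo --password-file /path/to/passfile backup "$EXPORT_DIR"
EOF
chmod +x /tmp/incus-backup.sh
```

Then a crontab line like `0 3 * * * /usr/local/bin/incus-backup.sh` runs it nightly.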
Also, for a few of my containers I linked the real file system (big ole data drive) into the container, and I just snapshot the big ole data drive/send said snapshot using the BTRFS/ZFS methods, cause that seemed easier - those containers are easy enough to stand up on a whim and then just need said data hooked up.
I also restic the sent snapshot, since snapshots are static once taken and restic can read from them at its leisure. Restic is the final backup orchestrator for all of my data. One restic call == one "restic snapshot", so I call it monolithically, with one call covering several data sources.
Hope that helps!
- Comment on Fresh Proxmox install w/ full disk encryption—so install Debian first, then Proxmox on top? 1 week ago:
linuxcontainers.org/incus/…/instances_backup/#ins…
This describes the gist - it's all about snapshots! Incus loves BTRFS/ZFS.
There’s no true need for stop everything as far as I can tell.
Stop everything is applicable for databases for any backup system (snapshot avoids backing up a database mid write (guaranteed failure) but the snapshot could be during a live database multi-step operation and while intact is left in a cursed state). For databases I make sure to stop and backup (SQLite losers) or backup live (Gods’ chosen Postgres) specially so no very niche database failures occur even though it was done with instant/write-safe snapshots!!
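The stop-and-backup vs. backup-live split looks roughly like this (service name, database names, and paths are made-up examples, not my real setup):

```shell
# Sketch of database-safe backups: stop SQLite apps first, dump Postgres live.
cat > /tmp/db-backup.sh <<'EOF'
#!/bin/sh
set -eu
# SQLite: stop the app so nothing writes mid-backup, then use .backup
# (which copies the db safely), then start the app again
systemctl stop myapp
sqlite3 /srv/myapp/data.db ".backup /srv/exports/data.db.bak"
systemctl start myapp

# Postgres: pg_dump takes a consistent snapshot while the DB stays live,
# so no downtime needed
pg_dump -U postgres -d mydb -F c -f /srv/exports/mydb.dump
EOF
chmod +x /tmp/db-backup.sh
```

Point your normal snapshot/restic machinery at the dump directory afterwards.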
- Comment on Fresh Proxmox install w/ full disk encryption—so install Debian first, then Proxmox on top? 1 week ago:
There is a larger community. I have proxmox and incus on two devices, and for the basics (LXC containers/VMs) incus is way more straightforward. Ditching proxmox next reinstall on the other device (that proxmox install is the OS version). If you're doing regular stuff it's easy enough!
But again, proxmox community is larger. I started with it for that reason too.
- Comment on Fresh Proxmox install w/ full disk encryption—so install Debian first, then Proxmox on top? 2 weeks ago:
Since you’re not using proxmox as an OS install, why not check out Incus? It accomplishes the same goals as proxmox but is easier to use (for me at least). Make sure you install incus’ web ui, makes it ez pz. Incus does the VMs and containers just like proxmox but isn’t focused on clustering 1st but rather machine 1st. It does do clustering, but the default UI is set for your machine to start so it makes more sense to me. The forums are very useful and questions get answered quickly, and there’s an Ubuntu-only fork called LXD which expands the available pool of answers. (For now, almost all commands are the same between Incus and LXD). I run the incus stable release from the Zabbly package repo, I think the long term release doesn’t have the web ui yet (I could be wrong). Never have had a problem. When Debian 13 hits I’ll switch to whatever is included there and should be set.
linuxcontainers.org/incus/docs/main/installing/#i…
I use incus for VMs and LXC containers. I also have Docker on the Debian system. Many types of containers for every purpose!
I installed incus on a Debian system that I encrypted with LUKS. It unlocks after reboots with a USB drive - basically I use it like a yubikey, but you could leave it plugged in so the system always reboots no problem. There's a network unlock option too, but I didn't try to figure that out. Without the USB drive or network unlock, you'll have to enter the encryption key on every reboot.
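For the curious, the USB-unlock wiring looks roughly like this - a sketch assuming systemd's crypttab syntax, with placeholder UUIDs/labels (my real setup may differ; check crypttab(5) on your distro):

```shell
# Example crypttab entry for unlocking LUKS from a keyfile on a USB stick.
# The third field is systemd's device:path syntax; all values are placeholders.
cat > /tmp/crypttab.example <<'EOF'
# /etc/crypttab
# name       device          keyfile (device:path)                options
root_crypt   UUID=xxxx-xxxx  /dev/disk/by-label/KEYUSB:/keyfile   luks,keyfile-timeout=10s
EOF
# With keyfile-timeout set, boot falls back to a passphrase prompt
# when the USB stick isn't plugged in.
```

You'd first add the keyfile to the LUKS header with `cryptsetup luksAddKey` and copy it onto the stick.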
- Comment on How good are amphetamines for brain fog? 2 weeks ago:
Not a doctor, but based on research I’ve seent brain fog (in likely many cases) seems to be due to inflammation. autoimmuneinstitute.org/…/brain-fog-likely-caused…
Have your friend try inflammation-reducing drugs like metformin. Metformin specifically, maybe there’s others, I’m sadly not a doctor. Metformin is a magic drug that’s not just for diabetius.
It won’t be immediate, but maybe it could help your friend recover. Idk if cranking yourself will break through when it’s a blocking mechanism causing the problem.
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 3 weeks ago:
Indeed I did not, we’re at a stalemate because you and I do not believe what the other is saying! So we can’t move anywhere since it’s two walls. Buuuut Tim Apple got my back for once, just saw this now!: lemmy.blahaj.zone/post/27197259
I’ll leave it at that, as thanks to that white paper I win! Yay internet points!
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 3 weeks ago:
It’s wild, we’re just completely talking past each other at this point! I don’t think I’ve ever gotten to a point where I’m like “it’s blue” and someone’s like “it’s gold” so clearly. And like I know enough to know what I’m talking about and that I’m not wrong (unis are not getting tons of grants to see “if AI can think”, no one but fart sniffing AI bros would fund that (see OP’s requested source is from an AI company about their own model), research funding goes towards making useful things not if ChatGPT is really going through it like the rest of us), but you are very confident in yourself as well. Your mention of information theory leads me to believe you’ve got a degree in the computer science field. The basis of machine learning is not in computer science but in stats (math). So I won’t change my understanding based on your claims since I don’t think you deeply know the basis just the application. The focus on using the “right words” as a gotchya bolsters that vibe. I know you won’t change your thoughts based on my input, so we’re at the age-old internet stalemate! Anyway, just wanted you to know why I decided not to entertain what you’ve been saying - I’m sure I’m in the same boat from your perspective ;)
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 3 weeks ago:
You can, but the stuff that’s really useful (very competent code completion) needs gigantic context lengths that even rich peeps with $2k GPUs can’t do. And that’s ignoring the training power and hardware costs to get the models.
Techbros chasing VC funding are pushing LLMs to the physical limit of what humanity can provide power- and hardware-wise. Way less hype and letting them come to market organically in 5 to 10 years would give the LLMs a lot more power efficiency at the current context and depth limits. But that ain't this timeline; we've just got VC money looking to buy nuclear plants and fascists trying to subdue the US for the techbro oligarchs, womp womp
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 3 weeks ago:
No, they’re right. The “research” is biased by the company that sells the product and wants to hype it. Many layers don’t make think or reason, but they’re glad to put them in quotes that they hope peeps will forget were there.
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 3 weeks ago:
So close, LLMs work via matrix multiplication, which is well understood by many meat bags and matrix math can’t think. If a meat bag can’t do matrix math, that’s ok, because the meat bag doesn’t work via matrix multiplication. lol imagine forgetting how to do matrix multiplication and disappearing into a singularity or something
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 3 weeks ago:
They do not, and I, a simple skin-bag of chemicals (mostly water tho) do say
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 3 weeks ago:
I was channeling the Interstellar docking computer (“improper contact” in such a sassy voice) ;)
There is a distinction between data and an action you perform on data (matrix maths, codec algorithm, etc.). It’s literally completely different.
An audio codec (not a pipeline) is just actually doing math - just like the workings of an LLM. There’s plenty of work to be done after the audio codec decodes the m4a to get to tunes in your ears. Same for an LLM, sandwiching those matrix multiplications that make the magic happen are layers that crunch the prompts and assemble the tokens you see it spit out.
LLMs can’t think, that’s just the fact of how they work. The problem is that AI companies are happy to describe them in terms that make you think they can think to sell their product! I literally cannot be wrong that LLMs cannot think or reason, there’s no room for debate, it’s settled long ago. AI companies will string the LLMs together and let them chew for a while to try make themselves catch when they’re dropping bullshit. It’s still not thinking and reasoning though. They can be useful tools, but LLMs are just tools not sentient or verging on sentient
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 3 weeks ago:
Improper comparison; an audio file isn't the basic action, it's the data - the audio codec is the basic action performed on the data.
“An LLM model isn’t really an LLM because it’s just a series of numbers”
But the action of turning the series of numbers into something of value (audio codec for an audio file, matrix math for an LLM) are actions that can be analyzed
And clearly matrix multiplication cannot reason any better than an audio codec algorithm. It’s matrix math, it’s cool we love matrix math. Really big matrix math is really cool and makes real sounding stuff. But it’s just matrix math, that’s how we know it can’t think
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 4 weeks ago:
It’s literally tokens. Doesn’t matter if it completes the next word or next phrase, still completing the next most likely token 😎😎 can’t think can’t reason can witch’s brew facsimile of something done before
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 4 weeks ago:
You can prove it's not by doing some matrix multiplication and seeing it's matrix multiplication. Much easier way to go about it
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 4 weeks ago:
Too deep on the AI propaganda there, it’s completing the next word. You can give the LLM base umpteen layers to make complicated connections, still ain’t thinking.
The LLM corpos trying to get nuclear plants to power their gigantic data centers while AAA devs aren’t trying to buy nuclear plants says that’s a straw man and you simultaneously also are wrong.
Using a pre-trained and memory-crushed LLM that can run on a small device won’t take up too much power. But that’s not what you’re thinking of. You’re thinking of the LLM only accessible via ChatGPT’s api that has a yuge context length and massive matrices that needs hilariously large amounts of RAM and compute power to execute. And it’s still a facsimile of thought.
It’s okay they suck and have very niche actual use cases - maybe it’ll get us to something better. But they ain’t gold, they ain’t smart, and they ain’t worth destroying the planet.
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 4 weeks ago:
Can’t help but here’s a rant on people asking LLMs to “explain their reasoning” which is impossible because they can never reason (not meant to be attacking OP, just attacking the “LLMs think and reason” people and companies that spout it):
LLMs are just matrix math to complete the most likely next word. They don’t know anything and can’t reason.
Anything you read or hear about LLMs or “AI” getting “asked questions” or “explain its reasoning” or talking about how they’re “thinking” is just AI propaganda to make you think they’re doing something LLMs literally can’t do but people sure wish they could.
In this case it sounds like people who don’t understand how LLMs work eating that propaganda up and approaching LLMs like there’s something to talk to or discern from.
If you waste egregiously high amounts of gigawatts to put everything that’s ever been typed into matrices you can operate on, you get a facsimile of the human knowledge that went into typing all of that stuff.
It’d be impressive if the environmental toll making the matrices and using them wasn’t critically bad.
TLDR; LLMs can never think or reason, anyone talking about them thinking or reasoning is bullshitting, they utilize almost everything that’s ever been typed to give (occasionally) reasonably useful outputs that are the most basic bitch shit because that’s the most likely next word at the cost of environmental disaster
- Comment on 3-2-1 Backups: How do you do the 1 offsite backup? 1 month ago:
I got my parents to get a NAS box and stuck it in their basement. They need to back up their stuff anyway. I put in two 18 TB drives (mirrored) from Server Part Deals (peeps have said that site has jacked their prices, look for alts). They only need like 4 TB at most. I made a backup samba share for myself. It's the cheapest Synology box possible; their software makes a samba share with a quota easy.
I then set up a wireguard connection on an RPi, taped that to the NAS, and wireguard to the local network with a batch script. Mount the samba share and then use restic to back up my data. It works great. Restic is encrypted, I don’t have to pay for storage monthly, their electricity is cheap af, they have backups, I keep tabs on it, everyone wins.
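The tunnel-mount-backup routine boils down to something like this (the wireguard config name, share IP, and paths are placeholders, not my actual values):

```shell
# Sketch of the offsite run: bring up wireguard, mount the NAS share,
# restic backup, tear down. All names and addresses are examples.
cat > /tmp/offsite-backup.sh <<'EOF'
#!/bin/sh
set -eu
# Bring up the tunnel to the remote network
wg-quick up parents
# Mount the NAS samba share (needs cifs-utils installed)
mount -t cifs //192.168.50.10/backup /mnt/nas -o credentials=/root/.smbcred
# Restic encrypts everything before it leaves the machine
restic -r /mnt/nas/restic-repo --password-file /root/.restic-pass backup /srv/data
umount /mnt/nas
wg-quick down parents
EOF
chmod +x /tmp/offsite-backup.sh
```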
Next step is to go the opposite way for them, but no rush on that goal, I don’t think their basement would get totaled in a fire and I don’t think their house (other than the basement) would get totaled in a flood.
If you don’t have a friend or relative to do a box-at-their-house (peeps might be enticed with reciprocal backups), restic still fits the bill. Destination is encrypted, has simple commands to check data for validity.
Rclone crypt is not good enough. Too many issues (path length limits, password "obscured" but otherwise there, file structure preserved even if names are encrypted). On a VPS I use rclone as a pass-through for restic to back up a small amount of data to a goog drive. Works great. Just don't use rclone crypt for major stuff.
Lastly I do use rclone crypt to upload a copy of the restic binary to the destination, as the crypt means the binary can’t be fucked with and the binary there means that is all you need to recover the data.
- Comment on GitHub - SinTan1729/chhoto-url: A simple, blazingly fast, selfhosted URL shortener with no unnecessary features; written in Rust. 2 months ago:
Odd, I’ll try to deploy this when I can and see!
I’ve never had a problem with a volume being on the host system, except with user permissions messed up. But if you haven’t given it a user parameter it’s running as root and shouldn’t have a problem. So I’ll see sometime and get back to you!
- Comment on GitHub - SinTan1729/chhoto-url: A simple, blazingly fast, selfhosted URL shortener with no unnecessary features; written in Rust. 2 months ago:
That’s pretty damn clever
- Comment on GitHub - SinTan1729/chhoto-url: A simple, blazingly fast, selfhosted URL shortener with no unnecessary features; written in Rust. 2 months ago:
I try to slap read_only on anything I'd face the Internet with, to further restrict exploit possibilities - would be abs great if you could make it work! I just follow all the reqs on the security cheat sheet, with read_only being one of them: …owasp.org/…/Docker_Security_Cheat_Sheet.html
With how simple it is, I guessed that running as a user and adding cap_drop: all wouldn't be a problem.
For read_only many containers just need tmpfs: /tmp in addition to the volume for the db. I think many containers just try to contain temporary file writing to one directory to make applying read_only easier.
So again, I’d abs use it with read_only when you get the time to tune it!!
- Comment on GitHub - SinTan1729/chhoto-url: A simple, blazingly fast, selfhosted URL shortener with no unnecessary features; written in Rust. 2 months ago:
Looks awesome and very efficient, does it also run with
read_only: true
(with a db volume provided, of course!)? Many containers just need a /tmp, but not always.
- Comment on Making sure restic backups are right 2 months ago:
I trust the check:
restic -r '/path/to/repo' --cache-dir '/path/to/cache' check --read-data-subset=2000M --password-file '/path/to/passfile' --verbose
The --read-data-subset option also does the structural integrity check while verifying an amount of the actual data. If I had more bandwidth, I'd check more.
When I set up a new repo, I restore some stuff to make sure it's there with:
restic -r '/path/to/repo' --cache-dir '/path/to/cache' --password-file '/path/to/passfile' restore latest --target /tmp/restored --include '/some/folder/with/stuff'
You could automate that and make sure some essential-but-not-often-changing files match regularly by restoring them and comparing them. I would do that if I wasn't lazy, I guess, just to make sure I'm not missing some key-but-slowly-changing files. Slowly/not-often changing because a diff would fail if the file changes hourly and you back up daily, etc.
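That restore-and-compare idea could be scripted as something like this (paths are the same sort of placeholders as in the commands above):

```shell
# Sketch of automated restore verification: restore a slow-changing folder
# from the latest snapshot and diff it against the live copy.
cat > /tmp/verify-restore.sh <<'EOF'
#!/bin/sh
set -eu
restic -r /path/to/repo --password-file /path/to/passfile \
  restore latest --target /tmp/restored --include /some/folder/with/stuff
# restore recreates the full path under the target directory
if diff -r /some/folder/with/stuff /tmp/restored/some/folder/with/stuff; then
  echo "restore matches live data"
else
  echo "MISMATCH - investigate the backup!"
fi
rm -rf /tmp/restored
EOF
chmod +x /tmp/verify-restore.sh
```

Stick that in cron and have it mail you on MISMATCH and you've got cheap ongoing restore testing.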
Or you could do as others have suggested: mount it locally and just traverse it to make sure some key stuff works and is there:
sudo mkdir -p '/mnt/restic'; sudo restic -r '/path/to/repo' --cache-dir '/path/to/cache' --password-file '/path/to/passfile' mount '/mnt/restic'
- Comment on [deleted] 2 months ago:
I have my router (opnsense) redirect all DNS requests to pihole/adguardhome. AdGuard Home is easier for this since you can have it redirect a wildcard *.local.domain, while pihole wants every single one set up individually (uptime.local.domain, dockage.local.domain). With that combo - the router not letting DNS out to upstream servers, and my local DNS servers set up to redirect *.local.domain to the correct location(s) - my DNS requests inside my local network never get out to where an upstream DNS can tell you to kick rocks.
I combined the above with a (hella cheap for 10yr) paid domain, wildcard certified the domain without exposure to the wan (no ip recorded, but accepted by devices), and have all *.local.domain requests redirect to a single server caddy instance that does the final redirecting to specific services.
I’m not fully sure what you’ve got cooking but I hope typing out what works for me can help you figure it out on your end! Basically the router doesn’t let anything DNS get by to be fucked with by the ISP.
- Comment on Proxmox vs. Debian: Running media server on older hardware 2 months ago:
I’m surprised no one’s mentioned Incus, it’s a hypervisor like Proxmox but it’s designed to install onto Debian no prob. Does VMs and containers just like Proxmox, and snapshots too. The web UI is essential, you add a repo for it.
Proxmox isn’t reliable if you’re not paying them, the free people are the test people - and a bit back there was a bad update they pushed that broke shit. If I’d have updated before they pulled it, I’d have been hosed.
Basically you want a device where you don't have to worry about updates, because updates are good for security. And Proxmox ain't that.
On top of their custom kernel and stuff, it’s just less eyes than, say, the kernel Debian ships. Proxmox isn’t worth the lock-in and brittleness for just making VMs.
So to summarize, Debian and Incus installed. BTRFS if you’re happy with 1 drive or 2 RAID 1 drives. BTRFS gets scrubbing and bitrot detection (protection with RAID 1). ZFS for more drives. Toss on Cockpit too.
If you want less hands-on, go with OpenMediaVault. No room for Proxmox in my view, esp. with no clustering.
Also the iGPU on the 6600K likely is good enough for whatever transcoding you’d do (esp. if it’s rare and 1080p, it’ll do 4k no prob and multiple streams at once). The Nvidia card is just wasting power.
- Comment on How to get a unique MAC/DHCP IP for a Docker/Podman container without MACVLAN? 3 months ago:
I see, do you know of a way in Docker (or Podman) to bind to a specific network interface on the host? (So that a container could use a macvlan adapter on the host)
Or are you more advocating for putting the Docker/Podman containers inside of a VM/LXC that has the macvlan adapter (or fancy incus bridge adapter) attached?
- Comment on How to get a unique MAC/DHCP IP for a Docker/Podman container without MACVLAN? 3 months ago:
Confused at this sentiment, Docker includes a MACVLAN driver so clearly it’s intended to be used. Do you eschew any networking in Docker beyond the default bridge for some reason?
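The macvlan driver in question looks like this in use (subnet, gateway, parent interface, and IP are all example values):

```shell
# Sketch of Docker's macvlan driver: the container gets its own IP/MAC
# directly on the LAN. All addresses and interface names are examples.
cat > /tmp/docker-macvlan.sh <<'EOF'
#!/bin/sh
set -eu
# Create a macvlan network tied to a physical host interface
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 macnet
# A container on that network appears as its own device on the LAN
docker run -d --network macnet --ip 192.168.1.205 --name web nginx
EOF
chmod +x /tmp/docker-macvlan.sh
```

Note the usual macvlan caveat applies: the host itself can't reach the container over that interface.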
- Comment on How to get a unique MAC/DHCP IP for a Docker/Podman container without MACVLAN? 3 months ago:
With the default Docker bridge networking the container won’t have a unique IP/MAC address on the local network, as far as I am aware. Communication with external clients will have to contact the host server’s IP at the port the container is tied to in order to interact. If there’s a way to specify a specific parent interface, let me know!
- Comment on How to get a unique MAC/DHCP IP for a Docker/Podman container without MACVLAN? 3 months ago:
This was very insightful and I'd like to say I grokked 90% of it meaningfully!
For an Incus container with its unique MAC interface, yes if I run a Docker container in that Incus container and leave the Docker container in its default bridge mode then I get the desired feature set (with the power of onions).
And thanks for explaining CNI, I’ve seen it referenced but didn’t fully get how it’s involved. I see that podman uses it to make a MACVLAN interface that can do DHCP (until 5.0, but the replacement seems to be feature-compatible for MACVLAN), so podman will sidestep the pain point of having to assign a no-go-zone on the DHCP server for a Docker swath of IPv4s, as you mentioned. Close enough for containers that the host doesn’t need to talk to.
So in summary:
- I've got Docker doing the extent it can manage with MACVLAN, and there's no extra magicks to be done on it.
- Podman will still use MACVLAN (still no host-to-container comms) but it's able to use DHCP to get an address for the MACVLAN container.
- If the host must talk to the container with MACVLAN, I can either use the MACVLAN bypass you linked above or put the Docker/Podman container inside an Incus container with its bridge mode.
- Kubernutes continues to sound very powerful and flexible but is definitely beyond my reach yet. (Womp womp)
Thanks again for taking the time to type and explain all of that!