thundermoose
@thundermoose@lemmy.world
- Comment on ZFS new disk 5 days ago:
Oh, forgot to mention: striping in ZFS will use the capacity of the smallest drive. It sounds like you have a 1TB drive and a 4TB drive, so striping would give you access to 2TB at most.
- Comment on ZFS new disk 5 days ago:
Losing one drive in a striped pool with no redundancy means the entire pool is shot. Restoration from your HDDs may take a very long time, on top of data loss between the time of failure and your last snapshot. Striping without redundancy is fast, but dangerous.
This may work at first, and maybe you really do have a use case where this kind of failure is tolerable. However, in my experience, data is precious more often than it isn’t. Over time, you’re more likely to find use cases where the loss of the pool will be frustrating at best, and devastating at worst.
If you’re not using any redundancy, I would create separate pools so each drive can fail independently. You’ll have all 5TB of storage, but not contiguously. That at least constrains the failure modes you’re likely to run into.
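As a sketch, the separate-pools setup is just two commands (pool names and device paths here are placeholders; use your own drives, ideally via the stable /dev/disk/by-id paths):

```shell
# Two independent single-vdev pools: each drive fails on its own,
# so losing one drive only takes out that pool's data.
zpool create tank1 /dev/disk/by-id/ata-EXAMPLE-1TB
zpool create tank2 /dev/disk/by-id/ata-EXAMPLE-4TB
```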
If you are striping with redundancy (e.g., RAID-Z1), which I would highly recommend, you can lose a drive and not lose any data. That would take at least 3 equally-sized drives though, and you’d only be able to use the capacity of 2 of them.
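For reference, a 3-drive RAID-Z1 pool is a one-liner (device names are placeholders):

```shell
# RAID-Z1 over three equally-sized drives: survives one drive failure,
# and usable capacity is roughly the size of two of the drives.
zpool create tank raidz1 \
  /dev/disk/by-id/ata-EXAMPLE-A \
  /dev/disk/by-id/ata-EXAMPLE-B \
  /dev/disk/by-id/ata-EXAMPLE-C
```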
- Comment on Self-hosted storage for Gmail 4 months ago:
It’s really more of a proxy setup that I’m looking for. With Thunderbird, you can get what I’m describing for a single client. But if I want to have access to those emails from several clients, there needs to be a shared server for them to access.
docker-mbsync might be a component I could use, but it doesn’t sound like there’s a ready-made solution for this today.
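As a rough sketch of the mbsync half (account name, address, and password command are placeholders; syntax is for isync 1.4+, which uses Far/Near), this pulls Gmail into a local Maildir that a shared IMAP server such as Dovecot could then serve to multiple clients:

```
# ~/.mbsyncrc -- minimal sync of Gmail into a local Maildir
IMAPAccount gmail
Host imap.gmail.com
User you@gmail.com
PassCmd "pass show gmail-app-password"   # Gmail app password via pass(1)
SSLType IMAPS

IMAPStore gmail-remote
Account gmail

MaildirStore gmail-local
Path ~/Maildir/gmail/
Inbox ~/Maildir/gmail/INBOX
SubFolders Verbatim

Channel gmail
Far :gmail-remote:
Near :gmail-local:
Patterns *
Create Near
SyncState *
```

Running `mbsync gmail` on a timer keeps the Maildir current; the missing piece is still wiring that Maildir up behind a shared IMAP frontend.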
- Submitted 4 months ago to selfhosted@lemmy.world | 17 comments
- Comment on Linux Mint 22 released: An attractive option for migrating away from Windows | Windows 11 system requirements block millions of PCs from upgrading, while Linux Mint continues to work on older hardware 5 months ago:
Steam + Proton works for most games, but there are still rough edges you need to be prepared to deal with. In my experience, it’s typically older titles and games that use anti-cheat that have the most trouble. Most of the time it just works; I even ran the Battle.net installer as an external Steam game with Proton enabled and was able to play Blizzard titles right away.
The biggest gap IMO is VR. If you have a VR headset that you use on your desktop and it’s important to you, stay on Windows. There is no realistic solution for VR integration in Linux yet. There are ways you can kinda get something to work with ALVR, but it’s incredibly janky and no dev will support it. There are rumors that Steam Link is being ported to Linux, but nothing official yet.
On balance, I’m incredibly happy with Mint since I switched last year. However, I do a decent amount of personal software development, and I’ve used Linux for 2 decades as a professional developer. I wouldn’t say the average Windows gamer would be happy dealing with the rough spots quite yet, but it’s like 95% of the way there these days. Linux has really grown up a lot in the last few years.
- Comment on Experimental Video Game Made Purely With AI Failed Because Tech Was 'Unable to Replace Talent' 9 months ago:
Maybe this comment will age poorly, but I think AGI is a long way off. LLMs are a dead-end, IMO. They are easy to improve with the tech we have today and they can be very useful, so there’s a ton of hype around them. They’re also easy to build tools around, so everyone in tech is trying to get their piece of AI now.
However, LLMs are chat interfaces for searching a large dataset, and that’s about it. Even the image generators are doing this; the dataset just happens to be visual. All of the results you get from a prompt are just queries into that data, even when you get a result that makes it seem intelligent. The model is finding a best-fit response based on billions of parameters, like a hyperdimensional regression analysis. In other words, it’s pattern-matching.
A lot of people will say that’s intelligence, but it’s different: the LLM isn’t capable of understanding anything new; it can only generate a response from something in its training set. More parameters, better training, and larger context windows just refine the search results; they don’t make the LLM smarter.
AGI needs something new; we aren’t going to get there with any of the approaches used today. RemindMe! 5 years to see if this aged like wine or milk.
- Comment on [deleted] 1 year ago:
I can hear René Auberjonois in that line