enumerator4829
@enumerator4829@sh.itjust.works
- Comment on You should know how to coil cables 5 days ago:
For the record - analog multis can burn in hell. Nowadays, not running all of the show over Cat6 should be criminal.
- Comment on You should know how to coil cables 6 days ago:
For anyone working on or around stages:
Most sane production companies standardise on over-under. Even if you find some other method superior (nothing is), you’ll get thrown out headfirst if you don’t follow the standard. Having a tech fuck around with a non-compliant cable during a changeover is far too risky.
Should be noted that there are special cases. For example, thicccc cables (i.e. a 24ch analog multi) that have their own dedicated cases often go down in a figure-8 instead - easier to pull out, and you can use a smaller case. Thank god for digital audio.
(Also, when using over-under correctly, you can throw the cable and it will land straight without any internal stresses winding it up like a spring)
- Comment on Google's shocking developer decree struggles to justify the urgent threat to F-Droid 1 week ago:
I can agree on Apple not really having a properly supported hardware repair ecosystem, and actively working against third party repair.
But the software? When Samsung and friends had 2-4 years of security updates, Apple had almost twice that. The iPhone XS still has support, 6 years after end-of-sale, 7 years from release. Normal people can’t be expected to flash their phones with LineageOS. The situation is slightly better nowadays, but Samsung still seems to be deprecating 3 year old devices: endoflife.date/samsung-mobile
- Comment on Tailscale difficulties 1 week ago:
Here I am, running separate tailscale instances and a separate reverse proxy for like 15 different services, and that’s just one VM… All in all, probably 20-25 tailscale instances in a single physical machine.
Don’t think about Tailscale like a normal VPN. Just put it everywhere. Put it directly on your endpoints, don’t route. Then lock down all your services to the tailnet and shut down any open ports to the internet.
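The lockdown half of that can be as small as a host firewall that only accepts tailnet traffic. A sketch of the shape (assuming Tailscale's default `tailscale0` interface name - adapt to whatever firewall you actually run):

```
# /etc/nftables.conf fragment: drop everything that doesn't arrive
# over the tailnet or loopback.
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    iifname "lo" accept
    ct state established,related accept
    iifname "tailscale0" accept
  }
}
```

With that in place, the service itself can stay boring - nothing reaches it unless it's already authenticated onto the tailnet.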
- Comment on Those who are hosting on bare metal: What is stopping you from using Containers or VM's? What are you self hosting? 1 week ago:
My NAS will stay on bare metal forever. Any complication there is something I really don’t want. Passthrough of drives/PCIe devices works fine for most things, but I won’t use it for ZFS.
As for services, I really hate using Docker images with a burning passion. I’m not trusting anyone else to make sure the container images are secure - I want the security updates directly from my distribution’s repositories, and I want them fully automated, and I want that inside any containers. Having NixOS build and launch containers with systemd-nspawn solves some of it. The actual docker daemon isn’t getting anywhere near my systems, but I do have one or two OCI images running. Will probably migrate to small VMs per-service once I get new hardware up and running.
Additionally, I never found a source of container images I feel like I can trust long term. When I grab a package from Debian or RHEL, I know that package will keep working without any major changes to functionality or config until I upgrade to the next major. A container? How long will it get updates? How frequently? Will the config format or environment variables or mount points change? Will a threat actor assume control of the image? (Oh look, all the distros actually enforce GPG signatures in their repos!)
So, what keeps me on bare metal? Keeping my ZFS pools safe. And then just keeping away from the OCI ecosystem in general, the grass is far greener inside the normal package repositories.
- Comment on Report: Microsoft's latest Windows 11 24H2 update breaks SSDs/HDDs, may corrupt your data 1 month ago:
$ su -
# rm -rf --no-preserve-root /
Should do the trick. (Obviously don’t try it unless you know what you are doing and know what may happen when it hits your EFI variables.)
- Comment on HELP HIM. 1 month ago:
Computational biochemistry is slowly getting there. Alphafold was a big breakthrough, and there is plenty of ongoing research simulating more and more.
We can probably never get rid of animal testing entirely for clinical research, we’ll always need to validate simulations in animals before moving on to humans.
I do however agree that animal testing outside of clinical research approved by a competent independent ethics committee can fuck right off. (Looking at you, cosmetics industry)
- Comment on Popup Ads in Your Pickup Truck? RAM Trucks Now Feature Scammy Ads on the Center Display 1 month ago:
I don’t think there is much overlap between the sets of people
- buying these cars
- having the competence to hack them
- having the willingness and finances to potentially brick the car
- Comment on Spotify fans threaten to return to piracy as music streamer introduces new face-scanning age checks in the UK 2 months ago:
I wonder if ancient crunchy low bitrate mp3s will be an aesthetic, the way that dusty vinyl or worn out tapes are?
- Comment on Duckstation(one of the most popular PS1 Emulators) dev plans on eventually dropping Linux support due to Linux users, especially Arch Linux users. 2 months ago:
Most arch users are casuals that finally figured out how to read a manual. Then you have the 1% of arch users who are writing the manual…
It’s the Gentoo and BSD users we should fear and respect, walking quietly with a big stick of competence.
- Comment on China advances toward tech independence with new homegrown 6nm gaming and AI GPUs — Lisuan 7G106 runs Chinese AAA titles at 4K over 70 FPS and matches RTX 4060 in synthetic benchmarks 2 months ago:
Yeah, that’s the thing.
The gaming market only barely exists at this point.
- Comment on China advances toward tech independence with new homegrown 6nm gaming and AI GPUs — Lisuan 7G106 runs Chinese AAA titles at 4K over 70 FPS and matches RTX 4060 in synthetic benchmarks 2 months ago:
~~Pheasants~~ gamers buy ~~cheap inference cards~~ gaming cards. The absolute majority of Nvidia’s sales globally are top-of-the-line AI SKUs. Gaming cards are just a way of letting data scientists and developers have cheap CUDA hardware at home (while allowing some Cyberpunk), so they keep buying NVL clusters at work.
Nvidia’s networking division is probably a greater revenue stream than gaming GPUs.
- Comment on Thoughts?? 2 months ago:
I have fucked around enough with R’s package management. Makes Python look like a god damn dream. Putting containers around it is just polishing a turd. Still have nightmares from building containers with R in automated pipelines, ending up at like 8 GB per container.
Also, good luck getting reproducible container builds.
Regarding locales - yes, I mentioned that. That’s a shitty design decision if I ever saw one. But within a locale, most Excel documents from last century and onwards should work reasonably well. (Well, normal Excel files. Macros and VB really shouldn’t work…). And it works on normal office machines, and you can email the files, and you can give it to your boss. And your boss can actually do something with it.
I also think Excel should be replaced by something. But not R.
- Comment on Thoughts?? 2 months ago:
R, the language where dependency resolution is built upon thoughts and prayers.
Say what you want about Excel, but compatibility is kinda decent (ignoring locales and DNA sequences). Meanwhile, good luck replicating your R installation on another machine.
- Comment on Very large amounts of gaming gpus vs AI gpus 2 months ago:
> the H200 has a very impressive bandwidth of 4.89 TB/s, but for the same price you can get 37 TB/s spread across 58 RX 9070s, but if this actually works in practice I don’t know.
Your math checks out, but only for some workloads. Other workloads scale out like shit, and then you want all your bandwidth concentrated. At some point you’ll also want to consider power draw:
- One H200 is like 1500W when including support infrastructure like networking, motherboard, CPUs, storage, etc.
- 58 consumer cards will be like 8 servers loaded with GPUs, at like 5kW each, so say 40kW in total.
Now include power and cooling over a few years and do the same calculations.
As for apples and oranges, this is why you can’t look at the marketing numbers, you need to benchmark your workload yourself.
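To make the power point concrete, here’s the same comparison as watts per TB/s of aggregate memory bandwidth, using the rough numbers above (bandwidth scaled x100 so the shell’s integer arithmetic works - these are back-of-the-envelope figures, not benchmarks):

```shell
# Rough watts per TB/s of memory bandwidth, numbers from the comment above.
h200_watts=1500;   h200_bw=489    # one H200 system incl. support infra, 4.89 TB/s (x100)
fleet_watts=40000; fleet_bw=3700  # 58 RX 9070s across ~8 servers, 37 TB/s (x100)

echo "H200:  $(( h200_watts * 100 / h200_bw )) W per TB/s"
echo "Fleet: $(( fleet_watts * 100 / fleet_bw )) W per TB/s"
```

The consumer fleet delivers more total bandwidth but pays roughly 3-4x the power per unit of bandwidth, and that gap compounds once you add cooling over a few years.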
- Comment on Very large amounts of gaming gpus vs AI gpus 2 months ago:
Well, a few issues:
- For hosting or training large models you want high bandwidth between GPUs. PCIe is too slow; NVLink has literally an order of magnitude more bandwidth. See what Nvidia is doing with NVLink and AMD is doing with InfinityFabric. Only available if you pay the premium, and if you need the bandwidth, you are most likely happy to pay.
- Same thing as above, but with memory bandwidth. The HBM chips in an H200 will run circles around the GDDR garbage they hand out to the poor people with filthy consumer cards. By the way, your inference and training is most likely bottlenecked by memory bandwidth, not available compute.
- Commercially supported cooling of gaming GPUs in rack servers? Lol. Good luck getting any reputable hardware vendor to sell you that, and definitely not at the power densities you want in a data center.
- TFLOP16 isn’t enough. Look at 4 and 8 bit tensor numbers, that’s where the expensive silicon is used.
- Nvidia’s licensing agreements basically prohibit gaming cards in servers. No one will sell it to you at any scale.
For fun, home use, research or small time hacking? Sure, buy all the gaming cards you can. If you actually need support and have a commercial use case? Pony up. Either way, benchmark your workload, don’t look at marketing numbers.
Is it a scam? Of course, but you can’t avoid it.
- Comment on Huawei shows off data center supercomputer that is better “on all metrics” 5 months ago:
Please note that the nominal FLOP/s from both Nvidia and Huawei are kinda bullshit. The precision you run at greatly affects that number. Nvidia’s marketing nowadays refers to fp4 tensor operations. Traditionally, FLOP/s are measured with fp64 matrix-matrix multiplication. That’s a lot more bits per FLOP.
Also, that GPU-GPU bandwidth is kinda shit compared to Nvidia’s marketing numbers if I’m parsing correctly (NVLink is 18x 10GB/s links per GPU, big ’B’ in GB). I might be reading the numbers incorrectly, but anyway. How and if they manage multi-GPU cache coherency will be interesting to see. Nvidia and AMD both do (to varying degrees) have cache coherency in those settings. Developer experience matters…
Now, the real interesting thing is power draw, density and price. Power draw and price obviously influence TCO. On 7nm, I guess the power bill won’t be very fun to read, but that’s just a guess. The density influences network options - are DAC-cables viable at all, or is it (more expensive) optical all the way?
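The “more bits per FLOP” point is easy to put a number on: a nominal FLOP/s figure quoted at fp4 simply isn’t comparable to one quoted at fp64, because each operand is a fraction of the size:

```shell
# Why nominal FLOP/s aren't comparable across precisions: the same memory
# traffic and datapath width buys far fewer fp64 operations than fp4 ones.
echo "fp64 vs fp4: $(( 64 / 4 ))x more bits per operand"
echo "fp64 vs fp8: $(( 64 / 8 ))x more bits per operand"
```

So a marketing slide quoting fp4 tensor TFLOP/s can show a headline number an order of magnitude above what the same silicon does on a traditional fp64 HPL run.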
- Comment on What could possibly go wrong? DOGE to rapidly rebuild Social Security codebase. 6 months ago:
Document databases are the future /s
- Comment on FBI warnings are true—fake file converters do push malware 6 months ago:
What? Just base64 encrypt it before you store it in the git hub
- Comment on 'Writing is on the wall for spinning rust': IBM joins Pure Storage in claiming disk drives will go the way of the dodo in enterprises 6 months ago:
The flaw with hard drives comes with large pools. The recovery speed is simply too slow when a drive fails, unless you build huge pools. So you need additional drives for more parity.
I don’t know who cares about shelf life. Drives spin all their lives, which is 5-10 years. Use M-Disk or something if you want shelf life.
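A hypothetical worked example of the rebuild problem (the 20 TB capacity and 200 MB/s sustained rate are assumptions for illustration, not measurements):

```shell
# Best-case resilver time for one failed 20 TB drive at ~200 MB/s sustained,
# assuming zero competing I/O - real rebuilds under load take much longer.
size_mb=$(( 20 * 1000 * 1000 ))  # 20 TB expressed in MB
speed_mb_s=200
echo "best case: $(( size_mb / speed_mb_s / 3600 )) hours"
```

That’s over a day of degraded redundancy per failure, best case, which is exactly why large pools end up needing extra parity drives.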
- Comment on Alibaba doubles down on RISC-V architecture with a new secretive 'server-grade' chip that will put AMD and Intel on alert 6 months ago:
I agree with you, mostly. Margins in the datacenter are thin for some players. Not Nvidia - they are at like 60% pure profit per chip, including software and R&D. That will have an effect on how we design stuff in the next few years.
I think we’ll need both ”GPU” and traditional CPUs for the foreseeable future. GPU-style for bandwidth or compute constrained workloads and CPU-style for latency sensitive workloads or pointer chasing. Now, I do think we’ll slap them both on top of the same memory, APU-style à la MI300A.
That is, as long as x86 has the single-threaded advantage, RISC-V won’t take over that market, and as long as GPUs have higher bandwidth, RISC-V won’t take over that market either.
Finally, I doubt we’ll see a performant RISC-V chip from China in the next decade - they simply lack the EUV fabs. From outside of China, maybe, but the demand isn’t nearly as large.
- Comment on 'Writing is on the wall for spinning rust': IBM joins Pure Storage in claiming disk drives will go the way of the dodo in enterprises 6 months ago:
Not economical. Storage is already done on far larger fab nodes than CPUs and other components. This is a case where higher density actually can be cheaper. ”Mature” nodes are most likely cheaper than the ”ancient” process nodes simply due to age and efficiency. (See also the disaster in the auto industry during covid. Car makers stopped ordering parts made on ancient process nodes, so the nodes were shut down permanently due to cost. After covid, fun times for automakers that had to modernise.)
Go compare prices, new NVMe M.2 will most likely be cheaper than SATA 2.5” per TB. The extra plastic shell, extra shipping volume and SATA-controller is that difference. 3.5” would make it even worse. In the datacenter, we are moving towards ”rulers” with 61TB available now, probably 120TB soon. Now, these are expensive, but the cost per TB is actually not that horrible when compared to consumer drives.
- Comment on 'Writing is on the wall for spinning rust': IBM joins Pure Storage in claiming disk drives will go the way of the dodo in enterprises 6 months ago:
Tape will survive, SSDs will survive. Spinning rust will die
- Comment on 'Writing is on the wall for spinning rust': IBM joins Pure Storage in claiming disk drives will go the way of the dodo in enterprises 6 months ago:
Nope. Larger chips, lower yields in the fab, more expensive. This is why we have chiplets in our CPUs nowadays. Production cost of chips is superlinear to size.
- Comment on 'Writing is on the wall for spinning rust': IBM joins Pure Storage in claiming disk drives will go the way of the dodo in enterprises 6 months ago:
It’s not the packaging that costs money or limits us, it’s the chips themselves. If we crammed a 3.5” form factor full of flash storage, it would be far outside the budgets of mortals.
- Comment on 'Writing is on the wall for spinning rust': IBM joins Pure Storage in claiming disk drives will go the way of the dodo in enterprises 6 months ago:
Why? We can cram 61TB into a slightly overgrown 2.5” and like half a PB per rack unit.
- Comment on How do you keep track of vulnerabilities? 7 months ago:
Unless you have actual tooling (i.e. RedHat errata + some service on top of that), just don’t even try.
Stop downloading random shit from dockerhub and github. Pick a distro that has whatever you need packaged, install from the repositories and turn on automatic updates. If you need stuff outside of repos, use first party packages and turn on auto updates. If there aren’t any decent packages, just don’t do it. There is a reason people pay RedHat a shitton of money, and that’s because they deal with much of this bullshit for you.
At home, I simply won’t install anything unless I can enable automatic updates. NixOS solves much of it. Two times a year I need to bump the distro version, bump the Nextcloud release, and deal with deprecations, and that’s it.
I also highly recommend turning on automatic periodic reboots, so you actually get new kernels running…
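On NixOS the whole auto-update-plus-reboot setup is a few lines - `system.autoUpgrade` is the real option set, the timing values below are just examples:

```nix
# configuration.nix fragment: pull and apply updates nightly, and let the
# machine reboot itself when the running kernel changes.
system.autoUpgrade = {
  enable = true;
  allowReboot = true;   # reboot automatically when needed (e.g. new kernel)
  dates = "04:00";      # systemd calendar expression
};
```

On Debian-family systems, `unattended-upgrades` plus `needrestart` gets you roughly the same behaviour.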
- Comment on Immich: opinion revised 7 months ago:
You mean ”hardcore WAF challenge”?
- Comment on Immich: opinion revised 7 months ago:
If you’ve taken care to properly isolate that service, sure. You know, on a dedicated VM in a DMZ, without access to the rest of your network. Personally, I’d avoid using containers as the only barrier, but your risk acceptance is yours to manage.
- Comment on Immich: opinion revised 7 months ago:
Well, I’d just go for a reverse proxy, I guess. If you are lazy, just expose it as an IP without any DNS. For working DNS, you can just add a public A-record pointing at the local IP of the Pi. For certs, you can’t rely on the default HTTP-01 challenge that Let’s Encrypt uses; you’ll need to do it via DNS-01 or wildcards or something.
But the thing is, as your traffic is on a VPN, you can fuck up DNS and TLS and Auth all you want without getting pwnd.
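A minimal sketch of that setup with Caddy - hostname, Cloudflare DNS plugin, and token variable are all assumptions; the port is Immich’s default:

```
# Caddyfile: reverse proxy reachable only over the VPN, certs via DNS-01
pi.example.com {
    tls {
        # DNS-01 works even though the host is never publicly reachable,
        # since validation talks to your DNS provider, not the server.
        dns cloudflare {env.CF_API_TOKEN}
    }
    reverse_proxy 127.0.0.1:2283
}
```

The `dns cloudflare` directive needs a Caddy build that includes the Cloudflare DNS plugin; stock builds don’t ship it.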