brucethemoose
@brucethemoose@lemmy.world
- Comment on Intel to Announce Plans This Week to Cut More Than 20% of Staff 21 hours ago:
Good!
- Comment on Intel to Announce Plans This Week to Cut More Than 20% of Staff 1 day ago:
I’m hoping Arc survives all this?
I know they want to focus, but no one’s going to want their future SoCs if the GPU part sucks or is nonexistent. Battlemage is good!
- Comment on Angry, disappointed users react to Bluesky's upcoming blue check mark verification system 3 days ago:
It was selectively given to institutions and “major” celebrities before that.
Selling them dilutes any meaning of “verified” because any joe can just pay for extra engagement. It’s a perverse incentive: the people most interested in grabbing attention buy it and get amplified.
It really has little to do with Musk.
- Comment on Angry, disappointed users react to Bluesky's upcoming blue check mark verification system 3 days ago:
the whole concept is stupid.
+1
Being that algorithmic just makes any Twitter-like design too easy to abuse.
- Comment on Angry, disappointed users react to Bluesky's upcoming blue check mark verification system 3 days ago:
Not sure where you’re going with that, but it’s a perverse incentive, just like the engagement algorithm.
Elon is a problem because he can literally force himself into everyone’s feeds, but also because he always posts polarizing/enraging things these days.
- Comment on This is real 5 days ago:
Yeah, and the engagement metrics are massive. The problem is this is reaching people.
God, anyone, please send some meteors to Twitter data centers. And Facebook while you’re at it…
- Comment on Baldur's Gate 3 - The Final Patch: New Subclasses, Photo Mode, and Cross-Play 1 week ago:
They seem to love writing cities and fantasy-tech too, going by some of the stuff in BG3.
Looks like Shadowrun’s licensing is a complicated mess though, with Microsoft at least involved, so I guess it’s unlikely :(
- Comment on Baldur's Gate 3 - The Final Patch: New Subclasses, Photo Mode, and Cross-Play 1 week ago:
Oh man, imagine if they did a Shadowrun game. Take their fantasy credentials/writing and mix it with cyberpunk…
- Comment on Baldur's Gate 3 - The Final Patch: New Subclasses, Photo Mode, and Cross-Play 1 week ago:
Awesome!
I wonder if things will organize around an “unofficial” modding API like Harmony for Rimworld, Forge for Minecraft, SMAPI for Stardew Valley, and so on? I guess it depends on whether some hero dev team does it and there’s enough “demand” to build a bunch of stuff on it.
Skyrim and some other games stayed more fragmented; others, like CP2077, just never hit critical mass, I guess.
- Comment on Baldur's Gate 3 - The Final Patch: New Subclasses, Photo Mode, and Cross-Play 1 week ago:
How is the modding scene these days? Seems like there’s a lot in the patch addressing that, but are things still more aesthetic?
- Comment on High school student uses AI to reveal 1.5 million previously unknown objects in space. 1 week ago:
I mean, “modest” may be too strong a word, but a 2080 Ti-ish workstation is not particularly exorbitant in the research space.
Also, that’s not always true. Some “AI” models, especially old-school ones, run fine on old CPUs. There are also efforts (like BitNet) to run larger ones fast and cheaply.
- Comment on High school student uses AI to reveal 1.5 million previously unknown objects in space. 1 week ago:
I have no idea if it has any impact on the actual results though.
Is it a PyTorch experiment? Other than maybe different default data types on CPU, the results should be the same.
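To illustrate the dtype point: here’s a minimal stdlib sketch (no PyTorch needed, purely illustrative numbers) of how a float32 default can drift where float64 barely moves — which is about the only way a CPU-vs-GPU run with different default dtypes would diverge:

```python
import struct

def to_f32(x: float) -> float:
    # Round a Python float (float64) to the nearest float32 value.
    return struct.unpack('f', struct.pack('f', x))[0]

n = 1_000_000
acc64 = 0.0  # float64 accumulator (Python's default)
acc32 = 0.0  # simulated float32 accumulator

for _ in range(n):
    acc64 += 0.1
    acc32 = to_f32(acc32 + to_f32(0.1))

print(acc64)  # very close to 100000.0
print(acc32)  # drifts noticeably, on the order of 1% off
```

Same code, same math, different rounding per step — so identical results across devices only hold if the dtypes match.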
- Comment on High school student uses AI to reveal 1.5 million previously unknown objects in space. 1 week ago:
That’s even overkill. A 3090 is pretty standard in the sanely priced ML research space. It’s the same architecture as the A100, so very widely supported.
The 5090 is actually a mixed bag because it’s too new, and architecture support for it is hit and miss. And also because it’s ridiculously priced for a 32 GB card.
And most CPUs with tons of RAM are fine, depending on the workload; the constraint is usually “does my dataset fit in RAM” more than core speed (since just waiting 2x or 4x longer is not that big a deal).
- Comment on High school student uses AI to reveal 1.5 million previously unknown objects in space. 1 week ago:
The model was run (and I think trained?) on very modest hardware:
The computer used for this paper contains an NVIDIA Quadro RTX 6000 with 22 GB of VRAM, 200 GB of RAM, and a 32-core Xeon CPU, courtesy of Caltech.
That’s a double-VRAM Nvidia RTX 2080 Ti plus a Skylake Intel CPU, a relative potato these days. With room for a batch size of 4096, no less! Though they did run into some preprocessing bottleneck in CPU/RAM.
The primary concern is the clustering step. Given the sheer magnitude of data present in the catalog, without question the task will need to be spatially divided in some way, and parallelized over potentially several machines
- Comment on AI slop farms are churning out fake heartwarming videos about Trump figures. 1 week ago:
I posit the central flaw is the engagement system, which was AI driven long before LLM/diffusion was public. The slop is made because it works in that screwed up system.
I dunno about substitutions or details, but the bottom line is simple: if engagement-driven social media doesn’t die in a fire, the human race will.
- Comment on A 'US-Made iPhone' Is Pure Fantasy 1 week ago:
Excess carrier inventory they can write off as a loss since the Plus didn’t sell as well, or so I was told.
- Comment on A 'US-Made iPhone' Is Pure Fantasy 1 week ago:
TBH most of the cost is from the individual components. The core chip fab, the memory fab, the OLED screen fab, the battery, power regulation, the cameras: all massive, heavily automated operations. Not to speak of the software stack, or the chip R&D and tape-out costs.
The child labor is awful, but IDK why people think it’s the most expensive part of a $1k+ iPhone.
- Comment on A 'US-Made iPhone' Is Pure Fantasy 1 week ago:
They’re more often subsidized by carriers here (in the US), too. I didn’t really want an iPhone, but $400 new for a Plus, with a plan discount, just makes an Android set not worth it.
- Comment on Adobe Gets Bullied Off Bluesky 1 week ago:
OP’s being abrasive, but I sympathize with the sentiment. Bluesky is algorithmic just like Twitter.
…Lemmy feels like a political purity test to me, TBH. Like, I love Lemmy and the Fediverse, but at the same time, mega-upvoted posts/comments like “X person should kill themself,” the expulsion of nuance on specific issues, and how that leaks into every community are making me step back more and more.
- Comment on I had no idea y cunt was this powerful 1 week ago:
Yeah, oops.
Point still stands though.
- Comment on New 'DRAM+' memory designed to provide DRAM performance with SSD-like storage capabilities, uses FeRAM tech 2 weeks ago:
Yes because ultimately, it just wasn’t good enough.
That’s what I was trying to argue below. Unified memory is great if it’s dense and fast enough, but that’s a massive if.
- Comment on I had no idea y cunt was this powerful 2 weeks ago:
Uh, none of them. The troll they are feeding is Elon Musk, the fallacy is that Twitter is an open forum where your engagement “makes a difference.” It’s not. It’s an algorithmic feed.
- Comment on I had no idea y cunt was this powerful 2 weeks ago:
Notice the engagement.
240K views between the top two.
0.6K for the shot back.
Come on… Rule #1. Don’t feed the trolls. Get off Twitter.
- Comment on The one good thing about all this 2 weeks ago:
I’m sure they rationally assume Trump is totally unfamiliar with that policy.
- Comment on New 'DRAM+' memory designed to provide DRAM performance with SSD-like storage capabilities, uses FeRAM tech 2 weeks ago:
It’s not theoretical, it’s just math: removing 1/3 of the bus paths, and also removing the need to constantly keep RAM powered.
And here’s the kicker.
You’re supposing it’s (given the no-refresh bonus) 1/3 as fast as DRAM, has similar latency, and is cheap enough per gigabyte to replace flash. That’s a tall order; it would be incredible if it hit all three, and I find that highly improbable.
Optane, for reference, was a lot slower than DRAM and a lot more expensive/less dense than flash, even with all the work Intel put into it and the buses built into then-top-end CPUs for direct access. And they thought that was pretty good.
- Comment on New 'DRAM+' memory designed to provide DRAM performance with SSD-like storage capabilities, uses FeRAM tech 2 weeks ago:
You are talking theoretical.
A big reason supercomputers moved to networks of “commodity” hardware is that it’s cost-effective.
How would one build a giant unified pool of this memory? CXL, but how does it look physically? Maybe you get a lot of bandwidth in parallel, but how would it be even close to the latency of “local” DRAM buses on each node? Is that setup truly more power efficient than banks of DRAM backed by infrequently touched flash? If your particular workload needs fast random access to memory, even at scale the only advantage seems to be some fault tolerance at a huge speed cost, and if you just need bulk high-latency bandwidth, flash has got you covered for cheaper.
…I really like the idea of non volatile unified memory, but ultimately architectural decisions come down to economics.
- Comment on New 'DRAM+' memory designed to provide DRAM performance with SSD-like storage capabilities, uses FeRAM tech 2 weeks ago:
How is that any better than DRAM though? It would have to be much cheaper/GB, yet reasonably faster than the top-end SLC/MLC flash Samsung sells.
Another thing I don’t get… in all the training runs I see, dataset bandwidth needs are pretty small. Like, streaming images (much less like 128K tokens of text) is a minuscule drop in the bucket compared to how long a step takes, especially with hardware decoders for decompression.
Weights are an entirely different beast, and stuff like Cerebras clusters do stream them, but they need the speed of DRAM.
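Back-of-envelope on the dataset-bandwidth point. Every number here is an illustrative assumption (compressed image size, batch size, step time), not a measurement from any specific setup:

```python
# Rough arithmetic: how much storage bandwidth does streaming a training
# batch actually need? All values below are assumed, illustrative numbers.
avg_jpeg_bytes = 100 * 1024   # assume ~100 KB per compressed image
batch_size = 4096             # assume a large batch
step_seconds = 1.0            # assume ~1 s per training step

bytes_per_step = avg_jpeg_bytes * batch_size
gb_per_second = bytes_per_step / step_seconds / 1e9
print(f"{gb_per_second:.2f} GB/s")  # well under NVMe speeds, tiny next to DRAM
```

Even with these generous assumptions, you land well under a single NVMe drive’s sequential bandwidth, which is why dataset streaming rarely needs exotic memory.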
- Comment on New 'DRAM+' memory designed to provide DRAM performance with SSD-like storage capabilities, uses FeRAM tech 2 weeks ago:
Yeah, it’s a solution in search of a problem.
Is it cheaper than DRAM? Great! But until then, even if it’s lower power thanks to not needing refresh, flash is just so cheap that it scales up much better.
And I dunno what they mean by AI workloads. How would non-volatility help at all, unless it’s starting to approach SRAM performance?
- Comment on Are PC handhelds like Steam Deck really competitors for Switch 2? 2 weeks ago:
Some people spend a lot of time and money on mobile games.
Occam’s Razor. I think it’s just the “default device” and placed in front of their eyes, so it’s what most people choose?
- Comment on U.S. stock futures fall and Asian markets open sharply lower as Trump tariffs shock continues 2 weeks ago:
WallStreetBets is hilarious now: not just the wild options trades, but the apocalyptic sentiment.