brucethemoose
@brucethemoose@lemmy.world
- Comment on Mods react as Reddit kicks some of them out again: “This will break the site” 1 day ago:
“Worse” only being “less engagement in the next quarter.”
AI mods are probably pretty good in that respect. Random bans don’t really matter, and letting more controversial or ragebait disinformation through is a plus. In the short term.
- Comment on Borderlands 4 boss tells players "please get a refund from Steam if you aren't happy" as Randy Pitchford continues his very public crashout over the FPS's performance woes 1 day ago:
Gearbox has developed on Unreal Engine since 2005. They have ~1,300 employees.
I’m sorry, I know game dev is hard. But if indie devs can get it to work, Gearbox should too.
- Comment on Borderlands 4 boss tells players "please get a refund from Steam if you aren't happy" as Randy Pitchford continues his very public crashout over the FPS's performance woes 1 day ago:
Honestly Cyberpunk’s raytracing runs like poo compared to Lumen (or KCD2’s Crytek engine), relative to how good it looks. I don’t like any of the RT effects except RT Reflections.
PTGI looks incredible, but it’s basically only usable with mods.
- Comment on Borderlands 4 boss tells players "please get a refund from Steam if you aren't happy" as Randy Pitchford continues his very public crashout over the FPS's performance woes 1 day ago:
Trying to run Borderlands at 4K sounds about as stupid to me as
On the contrary, it should be perfectly runnable at 4K, because it’s a 2025 PC game and the cel-shaded graphics should be easy to render.
‘Unreal Engine’ is no excuse either. Look at something like Satisfactory, rendering TONS of stuff on a shoestring budget with Lumen, running like butter on 2020 GPUs, and tell me that’s a sluggish engine.
- Comment on Stop Talking to Technology Executives Like They Have Anything to Say 1 day ago:
This is so on point and perfect.
- Comment on Are Cars Just Becoming Giant Smartphones on Wheels? 1 day ago:
Backed, not owned though, and not alone:
slaterides.com/slate-auto-investors/
I view it as a net positive if Amazon wants them for EV delivery. A substantial guaranteed customer is huge.
- Comment on I Got This Right, Right? 1 day ago:
His trial’s probably a long way away, isn’t it?
- Comment on I Got This Right, Right? 1 day ago:
My sentiment is the same. To be blunt, Lemmy is a terrible place to ask.
- Comment on I Got This Right, Right? 1 day ago:
Is it this one? www.youtube.com/watch?v=rncUo1Pnqio
- Comment on I Got This Right, Right? 1 day ago:
It’s a loyalty test, yes.
- Comment on Whether you use AI, think it's a "fun stupid thing for memes", or even ignore it, you should know it's already polluting worse than global air travel. 2 days ago:
If you want to look at it another way, if you assume every single square inch of silicon from TSMC is Nvidia server accelerators/AMD EPYCs, every single one running AI 24/7/365…
It’s not that much power, or water.
That’s unrealistic, of course, but it’s literally the max physical cap humanity can produce.
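To make that ceiling concrete, here’s a back-of-envelope sketch. Every constant in it (wafer volume, dies per wafer, TDP) is an illustrative assumption I picked for the arithmetic, not a sourced figure:

```python
# Back-of-envelope ceiling on AI silicon power draw.
# All numbers below are illustrative assumptions, not measured figures:
# leading-edge wafer output, die yield, and TDP all vary widely.

WAFERS_PER_YEAR = 1_500_000   # assumed leading-edge 300 mm wafer starts/year
GOOD_DIES_PER_WAFER = 55      # assumed yield for a ~800 mm^2 accelerator die
TDP_WATTS = 700               # assumed per-accelerator power at full load

dies = WAFERS_PER_YEAR * GOOD_DIES_PER_WAFER
total_gw = dies * TDP_WATTS / 1e9  # gigawatts if every die ran 24/7

print(f"Hypothetical dies/year: {dies:,}")
print(f"Ceiling if all run flat out: {total_gw:.1f} GW")
# For scale, average global electricity demand is on the order of 3,000 GW.
```

Swap in your own assumptions; the point is that the answer is bounded by fab output, not by how many people prompt a chatbot.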
- Comment on Whether you use AI, think it's a "fun stupid thing for memes", or even ignore it, you should know it's already polluting worse than global air travel. 2 days ago:
I’m not sure what you’re referencing. Imagegen models are not much different, especially now that they’re going transformers/MoE. Video gen models are chunky, but more rarely used, and they usually have much smaller parameter counts.
Basically anything else in machine learning is an order of magnitude less energy, at least.
- Comment on Are Cars Just Becoming Giant Smartphones on Wheels? 2 days ago:
Yeah, that is so perfect.
Imagine a sedan or hatchback. It would be light as a feather and still feel spacious being so ‘clean’ inside.
- Comment on 'Borderlands 4 is a premium game made for premium gamers' is Randy Pitchford's tone deaf retort to the performance backlash: 'If you're trying to drive a monster truck with a leaf blower's motor, you're going to be disappointed' 2 days ago:
On the contrary, custom engines have been bombing.
Look at Starfield or Cyberpunk 2077 or basically any custom engine AAA.
…Then look at KCD2. It looks freaking fantastic, looks like raytracing with no raytracing, runs like butter, and it’s Crytek.
Look at something like Satisfactory, rendering tons of stuff on a shoestring budget and still looking fantastic thanks to Unreal Lumen.
There’s a reason the next Cyberpunk is going to be Unreal, and it’s because building a custom engine just for your game is too big an undertaking. Best to put that same budget into optimizing a ‘communal’ engine.
Borderlands 4 is slow because of botched optimization, not because it’s Unreal.
- Comment on 'Borderlands 4 is a premium game made for premium gamers' is Randy Pitchford's tone deaf retort to the performance backlash: 'If you're trying to drive a monster truck with a leaf blower's motor, you're going to be disappointed' 2 days ago:
It’s what he does best.
- Comment on Are Cars Just Becoming Giant Smartphones on Wheels? 2 days ago:
The sad thing is ‘smartphone on wheels’ is a slur.
Smartphones don’t have to be soulless and uniform and enshittified, but here we are.
- Comment on 6 days ago:
I had to look it up. The full context is:
So the new communications strategy for Democrats, now that their polling advantage is collapsing in every single state… collapsing in Ohio. It’s collapsing even in Arizona. It is now a race where Blake Masters is in striking distance. Kari Lake is doing very, very well. The new communications strategy is not to do what Bill Clinton used to do, where he would say, “I feel your pain.” Instead, it is to say, “You’re actually not in pain.” So let’s just, little, very short clip. Bill Clinton in the 1990s. It was all about empathy and sympathy. I can’t stand the word empathy, actually. I think empathy is a made-up, new age term that — it does a lot of damage. But, it is very effective when it comes to politics. Sympathy, I prefer more than empathy. That’s a separate topic for a different time.
Later on Twitter:
The same people who lecture you about ‘empathy’ have none for the soldiers discharged for the jab, the children mutilated by Big Medicine, or the lives devastated by fentanyl pouring over the border. Spare me your fake outrage, your fake science, and your fake moral superiority.
www.snopes.com/…/charlie-kirk-empathy-quote/
It’s not as bad as the out-of-context quote, but it’s still pretty bad.
- Comment on Sexualized video games are not causing harm to male or female players, according to new research 6 days ago:
And any divergence from that is “ruining games” or “being woke,” to the point that we don’t even GET those games outside of the rare case of a game nobody cared about becoming popular.
I would argue the origin is sales. E.G. the publisher wants the sex appeal to sell, so that’s what they put in the game. Early ‘bro’ devs may be a part of this, but the directive from up top is the crux of it.
And that got so normalized, it became what gamers expect. And now they whine like toddlers when anyone tries to change it, but that just happens to be an existing problem conservative movements jumped on after the fact.
TL;DR the root cause is billionaires.
Like always.
- Comment on Trump's video on the shooting of Kirk appears to be AI 6 days ago:
The video encoding crowd was screaming about that 10 years ago, heh.
…Shrug. I guess people are still getting ‘traditional’ cable broadcasts. Turn on YouTube TV and you can see they still use those ancient broadcast codecs (even though it’s all streamed), probably because they just have to comply with the system.
- Comment on Trump's video on the shooting of Kirk appears to be AI 6 days ago:
Agreed.
Though I wouldn’t disparage investigation. I’d perhaps rephrase that: “I don’t have much experience with this topic,” as a sentiment, seems to have disappeared. The Trump admin is kind of a perfect embodiment of that.
- Comment on Trump's video on the shooting of Kirk appears to be AI 1 week ago:
The real boring dystopia is the obsession people seem to have over something being ‘AI’ or not.
This is a perfect example. We’re looking at deinterlacing and blocking artifacts that have plagued reuploaded broadcasts for decades, and that pixel peepers have complained about that entire time, yet these ancient filters are being called out as ‘AI’. And it has little to do with Trump and the government: this would happen to anything high-profile enough.
- Comment on Trump's video on the shooting of Kirk appears to be AI 1 week ago:
Oh my… y’all need to chill.
Download yt-dlp and run this command to get CNN’s ‘direct’ 245MB stream instead of AP’s (or YouTube’s) tiny re-encodes:
yt-dlp -f direct https://www.cnn.com/2025/09/10/us/video/trump-oval-office-statement-white-house-charlie-kirk-death-digvid
It’s interlaced.
en.wikipedia.org/wiki/Interlaced_video
The re-encodes are clearly de-interlaced (poorly) which is why motion looks so weird. It’s also choppy and blocky, hallmarks of poor encoding, even in a very high bitrate stream.
Y’all (and apparently the rest of the internet) are obsessing over literally decades-old video encoding/broadcasting issues, and the oldschool video filters used to fix them, that have nothing to do with AI.
- Comment on Frustratingly bad at self hosting. Can someone help me access LLMs on my rig from my phone 1 week ago:
Yeah. But it also messes stuff up from the llama.cpp baseline, hides or doesn’t support some features/optimizations, and definitely doesn’t support the more efficient iq_k quants of ik_llama.cpp and its specialized MoE offloading.
And that’s not even getting into the various controversies around ollama (like broken GGUFs or indications they’re going closed source in some form).
…It just depends on how much performance you want to squeeze out, and how much time you want to spend on the endeavor. Small LLMs are kinda marginal though, so it’s pretty important IMO.
- Comment on Frustratingly bad at self hosting. Can someone help me access LLMs on my rig from my phone 1 week ago:
In case I miss your reply, assuming a 3080 + 64 GB of RAM, you want the IQ4_KSS (or IQ3_KS, for more RAM for tabs and stuff) version of this:
huggingface.co/ubergarm/GLM-4.5-Air-GGUF
Part of it will run on your GPU, part will live in system RAM, but ik_llama.cpp splits the quantized weights in a particularly efficient way for these kinds of ‘MoE’ models.
If you ‘only’ have 32GB RAM or less, that’s trickier, and the next question is what kind of speeds you want. But it’s probably best to wait a few days and see how Qwen3 80B looks when it comes out. Or just go with the IQ4_K version of this: huggingface.co/…/Qwen3-30B-A3B-Thinking-2507-GGUF
And you don’t really need the hyper optimization of ik_llama.cpp for Qwen3 30B.
Alternatively, you could try to squeeze Gemma 27B into that 11GB VRAM, but it would be tight.
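If you want to sanity-check whether a given quant fits your hardware, the arithmetic is just parameters × bits per weight. The numbers below (~106B total params for GLM-4.5 Air, ~4 bits per weight for a quant in this class, 11GB VRAM + 64GB RAM) are ballpark assumptions, not exact GGUF sizes:

```python
# Rough memory budget for a quantized MoE model.
# Parameter count and bits-per-weight are ballpark assumptions;
# real GGUF files mix quant types per tensor, so actual size differs a bit.

def quant_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate in-memory size of quantized weights, in GB."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# GLM-4.5 Air: ~106B total parameters (only ~12B active per token).
weights_gb = quant_size_gb(106, 4.0)   # assumed ~4 bpw quant
vram_gb, ram_gb = 11, 64               # assumed GPU VRAM + system RAM

print(f"~{weights_gb:.0f} GB of quantized weights")
print("Fits in hybrid GPU+RAM?", weights_gb < vram_gb + ram_gb)
```

That’s before context/KV cache, so leave yourself several GB of headroom on top of the weights.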
- Comment on Frustratingly bad at self hosting. Can someone help me access LLMs on my rig from my phone 1 week ago:
How much system RAM, and what kind? DDR5?
- Comment on Amid Palworld Lawsuit, Nintendo Patents a System for Summoning a Character 1 week ago:
What’s interesting is they filed this in the US.
Is there a reason they don’t file patents in Japan instead?
- Comment on Amid Palworld Lawsuit, Nintendo Patents a System for Summoning a Character 1 week ago:
Original source:
gamesfray.com/last-week-nintendo-and-the-pokemon-…
I hate to be nitpicky, but try to link the original, and post where you found it (GameRant) in the description instead.
- Comment on Frustratingly bad at self hosting. Can someone help me access LLMs on my rig from my phone 1 week ago:
At risk of getting more technical, ik_llama.cpp has a fantastic built-in webui:
github.com/ikawrakow/ik_llama.cpp/
Getting more technical, it’s also way better than ollama. You can run models way smarter than ollama can on the same hardware.
For reference, I’m running GLM-4 (667 GB of raw weights) on a single RTX 3090/Ryzen gaming rig, at reading speed, with pretty low quantization distortion.
And if you want a ‘look this up on the internet for me’ assistant (which you need for them to be truly useful), you need another docker project as well.
…That’s just how LLM self hosting is now. It’s simply too hardware intense to be easy. You can indeed host a small LLM without much understanding, but it’s going to be pretty dumb.
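As for reaching it from your phone: llama.cpp-style servers (ik_llama.cpp included, to my knowledge) expose an OpenAI-compatible HTTP API, so a minimal client is just a POST over your LAN. The IP, port, and prompt below are placeholders for whatever your rig uses:

```python
import json
from urllib import request

# Placeholder address: your rig's LAN IP and whatever port you launched the server on.
API_URL = "http://192.168.1.50:8080/v1/chat/completions"

payload = {
    "model": "local",  # llama.cpp-style servers typically ignore this field
    "messages": [{"role": "user", "content": "Summarize my day in one line."}],
    "max_tokens": 128,
}
req = request.Request(
    API_URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the server is up and reachable from your phone/LAN:
# with request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
print("request ready for", req.full_url)
```

Any OpenAI-compatible phone app works the same way: point it at your rig’s IP and port instead of writing code.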
- Comment on Larry Ellison overtakes Elon Musk as world’s richest person 1 week ago:
It’s not. Ellison is shady AF.
- Comment on Nvidia unveils new GPU designed for long-context inference 1 week ago:
Jamba (hybrid transformers/state space) is a killer model folks are sleeping on. It’s actually coherent at long context, fast, has good world knowledge, even/grounded, and is good at RAG. It’s like a straight-up better Cohere model IMO, and a no-brainer to try for many long context calls.
TBH I didn’t try Falcon H1 much when it seemed to break at long context for me. I think most folks (at least publicly) are sleeping on hybrid SSMs because support in llama.cpp is not great. For instance, context caching does not work.
…Not sure about many others, toy models aside. There really aren’t too many to try.