brucethemoose
@brucethemoose@lemmy.world
- Comment on The US government could get even more Intel stock if the company ends up losing control of its chip manufacturing business 12 hours ago:
Anandtech had a great saying:
There are no bad products, just bad prices.
Performance wise, Intel CPUs are just fine at the right price, no matter what manufacturing drama is going on. Don’t get me wrong, all my recent CPU purchases have been AMD, but not because of brand loyalty or anything; it’s because they were on sale and great for the price.
- Comment on I'm on the spectrum. How do I live the rest of my life? 20 hours ago:
I am coming to grips with this myself. Not that I’m exactly in your shoes or anything, but I’ve kind of ignored ADD and realized that I’m probably on the spectrum as well.
…It’s already wrecked my life.
I don’t have great advice. But the two things I might suggest, that I’m trying to work on myself, are:
- Be mindful. Be conscious of your own needs and tendencies, and catch/steer yourself before they become a problem. Simply knowing you are on the neurodivergence spectrum is huge.
- Be non-combative. It’s easy to get frustrated with how shit and incompatible things are, and so on, but I’m finding a more ‘relaxed’ attitude is helping me. Don’t let people get under your skin: who cares what they think, beyond the bare minimum they require from you? Focus your attention on people (and things) you like instead.
- Comment on OpenAI Says It's Scanning Users' ChatGPT Conversations and Reporting Content to the Police 21 hours ago:
Also, I’m going to plug the AI Horde, which is basically the Fediverse for AI self hosting: aihorde.net
Ping me, and I can host a medium-sized model to try for free on my humble 3090 (via those linked web UIs), if you want. The options are limitless, from something STEM-focused like Nemotron 49B, to dungeonmaster finetunes, to horny-as-heck roleplaying models, lol.
- Comment on OpenAI Says It's Scanning Users' ChatGPT Conversations and Reporting Content to the Police 23 hours ago:
I am on mobile and can be more detailed later, but the gist is to sign up (with a credit card) for some API service. There are many. Some neat ones include:
- Openrouter (a gateway to many, many models)
- Cerebras API (which is faster than anything and has a generous free tier)
- Google Gemini, which is free to try out with no credit card at all.
Most (in exchange for charging pennies per request) do not log your prompts. If you are really, really concerned, you can even rent your own GPU instance on demand.
Anyway, they will give you a key, which is basically a password.
Paste that key into the LLM frontend of your choice, like Open Web UI, LM Studio, or various web apps.
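To make the “key is basically a password” bit concrete, here’s a minimal Python sketch of the request format that OpenAI-compatible providers (OpenRouter, Cerebras, etc.) accept. The endpoint URL, model name, and key below are placeholder examples, not recommendations; check your provider’s docs for the real values.

```python
import json

# Illustrative endpoint and key -- substitute whatever your provider gives you.
API_URL = "https://openrouter.ai/api/v1/chat/completions"  # example path
API_KEY = "sk-..."  # your secret key: treat it like a password

def build_chat_request(prompt, model="example/model-name"):
    """Assemble headers and JSON body for an OpenAI-style chat completion."""
    headers = {
        "Authorization": f"Bearer {API_KEY}",  # the key rides in this header
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, body

headers, body = build_chat_request("Hello!")
print(json.dumps(body))
```

Send that with any HTTP client (requests, curl, whatever) and the reply comes back as JSON; frontends like Open Web UI just do this for you once you paste the key in.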
- Comment on OpenAI Says It's Scanning Users' ChatGPT Conversations and Reporting Content to the Police 23 hours ago:
The only way to use ChatGPT, if you must, is over the API.
Or, preferably, use literally any other LLM API that isn’t such a censored privacy nightmare.
The problem here is that most people perceive ChatGPT as the only chatbot in existence… It’s not, not even close.
- Comment on Framework unveils a second-generation Framework Laptop 16 with a swappable Nvidia RTX 5070 GPU, an industry first, shipping in November 2025 1 day ago:
It’s just soldered LPDDR5X. Framework could’ve fixed it to a motherboard just like the desktop.
I think the problem is cooling and power. The laptop’s internal PSU and heatsink would have to be overhauled for Strix Halo, which would break backwards compatibility if it was even possible to cram in. Same with bigger AMD GPUs and such; people seem to underestimate the engineering and budget constraints they’re operating under.
That being said, way more laptop makers and motherboard makers could have picked up Strix Halo. I’d kill for a desktop motherboard with a PCIe x8 GPU slot.
- Comment on WATER! 1 day ago:
if ChatGPT sucks
Most people don’t know anything beyond ChatGPT and Copilot.
If we are talking programmers, maybe include Claude, Gemini, Deepseek, and Perplexity search, though this is not always true.
…Point being, OpenAI does have a short term ‘default’ and known brand advantage, unfortunately.
That being said, there’s absolutely manipulation of LLMs, though not what OP is thinking, per se. I see more of:
- Benchmaxxing with a huge sycophancy bias (which works particularly well in LM Arena).
- Benchmaxxing with massive thinking blocks, which is what OP is getting at. I’ve found Qwen is particularly prone to this, and it does drive up costs.
- Token laziness from some of OpenAI’s older models, as if they were trained to give short responses to save GPU time.
- “Deep frying” models for narrow tasks (coding, GPQA-style trivia, math, things like that) at the cost of making them worse outside of that, especially at long context.
- …Straight-up cheating by training on benchmark test sets.
- Safety training taken to a ridiculous extent, as with Microsoft Phi, OpenAI, Claude, and such, for political reasons and to avoid bad PR.
In addition, ‘free’ chat UIs are geared for gathering data they can use to train on.
You’re right that there isn’t much like ad injection or deliberate token padding yet, but still.
- Comment on Nvidia Sales Jump 56%, a Sign the A.I. Boom Isn’t Slowing Down 2 days ago:
On the training side, it’s mostly:
- Paying devs to prepare the training runs with data, software architecture, frameworks, things like that.
- Paying other devs to get the training to scale across 800+ nodes.
- Building the data centers, where the construction and GPU hardware costs kind of dwarf power usage in the short term.
On the inference side:
- Sometimes optimized deployment frameworks like Deepseek uses, though many seem to use something off the shelf like sglang.
- Renting or deploying GPU servers individually. They don’t need to be networked at scale like for training; the biggest setup I’ve heard of (Deepseek’s optimized framework) is something like 18 servers. And again, the sticker price of the GPUs is the big cost here.
- Developing tool-use frameworks.
On both sides, the big players burn tons of money on Tech Bro “superstar” developers who, frankly, seem to tweet more than they develop interesting things.
- Comment on Nvidia Sales Jump 56%, a Sign the A.I. Boom Isn’t Slowing Down 2 days ago:
Nods vigorously.
The future of LLMs is basically unprofitable for the actual AI companies. We are in a hell of a bubble, which I can’t wait to pop so I can pick up a liquidation GPU (or at least rent one for cheap).
That doesn’t mean power usage is an issue. In fact, it seems like the sheer inefficiency of OpenAI/Grok and such are nails in their coffins.
- Comment on Web design magazine from 2000. New tech like… Flash 4! JavaScript! WAP! 2 days ago:
I read this as “100% Weeb Design” in the thumbnail, and immediately clicked, heh.
- Comment on Intel details everything that could go wrong with US taking a 10% stake 2 days ago:
Shrug. The DoD is notorious for trying to keep competition between its suppliers alive. But I don’t know enough about the airplane business to say whether they’re in a death spiral or not.
The fab business is a bit unique because of the sheer scaling of planning and capital involved.
I dunno why you brought up China/foreign interests though. Intel’s military fab designs would likely never get sold overseas, and neither would the military arm of Boeing. This is just about keeping one of three leading edge fabs on the planet alive, and of course the gov is a bit worried about the other two in Taiwan and South Korea.
- Comment on Nvidia Sales Jump 56%, a Sign the A.I. Boom Isn’t Slowing Down 2 days ago:
The power usage is massively overstated, and a meme perpetuated by Altman so he’ll get more money for ‘scaling’.
GPT-5 is already proof scaling with no innovation doesn’t work. And tech in the pipe like bitnet is coming to disrupt that even more; the future is small, specialized, augmented models, mostly running locally on your phone/PC because it’s so cheap and low power.
- Comment on Intel details everything that could go wrong with US taking a 10% stake 2 days ago:
Ars is making a mountain out of a molehill.
James McRitchie
Kristin Hull
These are literal activist investors known for taking such stances. It would be weird if they didn’t.
a company that’s not in crisis
Intel is literally circling the drain. It doesn’t look like it on paper, but the Fab/chip design business is so long term that if they don’t get on track, they’re basically toast. And they’re also important to the military.
Intel stock is up, short-term and YTD. CNBC was oohing and aahing over it today. Of course there are blatant issues, like:
However, the US can vote “as it wishes,” Intel reported, and experts suggested to Reuters that regulations may be needed to “limit government opportunities for abuses such as insider trading.”
And we all know they’re going to insider trade the heck out of it. But the sentiment is not a bad idea. Government ties/history are why TSMC and Samsung Foundry are where they are today, and their dead competitors are not.
- Comment on Framework unveils a second-generation Framework Laptop 16 with a swappable Nvidia RTX 5070 GPU, an industry first, shipping in November 2025 2 days ago:
The 7900 specifically.
They have to stay within the TDP. Their only option is something newer and ~100W (like the 5070).
Also (while no 395 is disappointing), it is a totally different socket/platform with a much higher TDP, so it may not even work in the Framework 16 as it’s currently engineered. For instance, the cooling or PSU just may not be able to cope.
- Comment on Framework unveils a second-generation Framework Laptop 16 with a swappable Nvidia RTX 5070 GPU, an industry first, shipping in November 2025 2 days ago:
Maybe they will be the ones to break the curse then, and I can have a laptop that I can actually treat like a desktop.
Nah, unfortunately they are just as beholden to the GPU makers as any of us. More than larger laptop OEMs for sure.
A future Intel Arc module may be the only hope, but that’s quite a hope.
- Comment on Framework unveils a second-generation Framework Laptop 16 with a swappable Nvidia RTX 5070 GPU, an industry first, shipping in November 2025 2 days ago:
Problem is almost no laptop has Strix Halo. Not even the Frameworks.
And rumors are its successor may be much better, so the investment could be, err, questionable.
- Comment on Bethesda planning a Starfield space gameplay revamp to make it more rewarding 2 days ago:
Oh you must mod the stink out of FO4.
Is there even much of a Starfield modding scene?
- Comment on Uhm 2 days ago:
Yeah, I mean, people should mostly be using a RAG (retrieval-augmented generation) system, not pure LLM slop like this, for reference. It just hasn’t really been built at scale, because Google functioned well enough as that, and AI Bros think everything should be done within LLM weights instead of proper databases.
I mean… WTF. What if human minds were not allowed to use references?
WolframAlpha was trying to build this, but stalled.
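For the curious, the RAG idea is simple enough to sketch in a few lines of Python: look the fact up in a document store first, then hand it to the model as a reference, like letting a human open the book. The documents and the word-overlap “retrieval” here are toy stand-ins I made up; a real system would use embeddings and a vector database.

```python
# Toy RAG sketch: retrieve a reference document, then stuff it into the
# prompt so the model answers from the reference instead of from memory.
DOCS = [
    "The Eiffel Tower is 330 metres tall.",
    "Python was created by Guido van Rossum.",
    "Honey never spoils if it is stored sealed.",
]

def words(text):
    """Lowercased word set, ignoring trailing punctuation."""
    return {w.strip(".,?!").lower() for w in text.split()}

def retrieve(query, docs, k=1):
    """Return the k docs sharing the most words with the query (toy scoring)."""
    q = words(query)
    return sorted(docs, key=lambda d: len(q & words(d)), reverse=True)[:k]

def build_prompt(query):
    """Hand the model a reference, like letting a human consult a book."""
    context = "\n".join(retrieve(query, DOCS))
    return f"Answer using only this reference:\n{context}\n\nQuestion: {query}"

print(build_prompt("How tall is the Eiffel Tower?"))
```

The LLM then only has to read, not remember, which is exactly the “references” point above.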
- Comment on Let Google know what you think about their proposed restrictions on sideloading Android apps. - Android developer verification requirements [Feedback Form] 3 days ago:
Apple is a bit more receptive to bad PR, but Google has a history of kinda ignoring developer feedback, like with the JPEG XL thing as a narrow example.
- Comment on Let Google know what you think about their proposed restrictions on sideloading Android apps. - Android developer verification requirements [Feedback Form] 3 days ago:
I dunno, I’m sure there’s a part of them that doesn’t want to scare off all the free labor they get from the community developers.
Google’s thinking has gone short term “next quarter must go up.” They would absolutely trash their dev community for a quick buck, 100%.
- Comment on Let Google know what you think about their proposed restrictions on sideloading Android apps. - Android developer verification requirements [Feedback Form] 3 days ago:
They will for the Chinese market, whatever that’s worth.
- Comment on Travelling through space using the Sun as a Fuel 3 days ago:
Yes! There’s actually a fictional universe way ahead of you. See, for example, the Sundrivers, which use their star systems for thrust: www.orionsarm.com/eg-article/478adb4aeb392
Or Dyson Beams, used to propel large spacecraft and for communication: www.orionsarm.com/eg-article/48fe49fe47202
Planets arranged as phased array communicators/propulsion are quite common.
But there are more exotic solutions here too, like re-arranging stars into more compact configurations, or “metric” engineering involving warping spacetime in theoretically plausible, non-relativity-violating ways. This even (theoretically) allows for the creation of wormhole networks and highly relativistic spacecraft, though with immense difficulty and complications, and no true FTL.
As an addendum, you actually don’t need to go millions of years into the future! A few millennia is plausibly enough for feats of engineering that are way beyond our comprehension: www.orionsarm.com/xcms.php?r=oa-timeline
- Comment on A Teen Was Suicidal. ChatGPT Was the Friend He Confided In. 3 days ago:
I think the metaphor is trying to engineer the blades to be “finger safe”, when the better approach would be to guard against fingers getting inside an active blender.
- Comment on DM me on Spotify: Spotify launches a messaging feature. 3 days ago:
There’s something to be said for curated “auto” playlists, both for background and discovering stuff.
That being said, Pandora is waaaay better at this. So are free broadcasts/channels like Radio Paradise.
- Comment on Our Channel Could Be Deleted - Gamers Nexus 5 days ago:
GN is literally a sizable business, it absolutely is.
- Comment on South Korea makes AI investment a top policy priority to support flagging growth 1 week ago:
LG’s recent Exaone release is a pretty great local model for code and stuff, actually.
…Except they slapped an insane license on it. Basically you sign away your life even looking at it: huggingface.co/LGAI-EXAONE/…/LICENSE
Which goes against precedent, seeing how many 32B-class (aka 16GB–24GB GPU) models are Apache-licensed.
- Comment on Do LLM modelers maintain a list of manual corrections fed by humans? 1 week ago:
Yes. Absolutely.
The meme in the research community is that current LLMs are literally trained on benchmarks and common stuff people test in LM-Arena, like the how many r’s in strawberry question.
I’m not talking speculatively: Meta literally got caught red-handed doing this. They ran a separate finetune just to look good on LM Arena. And some benchmarks like MMLU have errors in them that many LLMs nonetheless answer ‘correctly’.
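One way researchers catch that kind of cheating is a contamination check: if long word sequences from a benchmark question show up verbatim in the training corpus, the model may just be reciting a memorized answer. Here’s a toy sketch of the n-gram overlap idea; the corpus and question below are made-up stand-ins, and real checks run over terabytes of data with tokenizer-level n-grams.

```python
def ngrams(text, n):
    """All n-word sequences in the text, lowercased."""
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def is_contaminated(train_text, benchmark_item, n=8):
    """True if any n-word span of the benchmark item appears verbatim in training data."""
    return bool(ngrams(benchmark_item, n) & ngrams(train_text, n))

# Made-up corpus snippet that happens to contain the test question verbatim.
corpus = "q: how many r letters are in the word strawberry a: three"
question = "how many r letters are in the word strawberry"
print(is_contaminated(corpus, question))  # True: the question leaked into training data
```

A model trained on that corpus would ace the strawberry question without being able to count anything.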
- Comment on Lowering power consumption on Opteron 1 week ago:
I dunno about Linux, but on Windows I used to use something called K10stat to manually undervolt cores when the BIOS offered no access to such settings. The difference was night and day: they idled ridiculously fast, and AMD left a ton of voltage headroom back then.
I bet there’s some Linux software to do it. Look up if anyone used voltage control software for desktop Phenom IIs and such.
- Comment on Steam: Updates to User Review Scores Based on Language 1 week ago:
Full disclosure, I will sometimes cry about cryptocurrency stuff. But I mined a bitcoin many moons ago, too!
- Comment on Steam: Updates to User Review Scores Based on Language 1 week ago:
Yeah.
That’s the vibe I get from Lemmygrad too, like they assume the rest of the world is constantly pondering how much they hate China, as a dominating thought.