mindbleach
@mindbleach@sh.itjust.works
- Comment on Is Germany on the Brink of Banning Ad Blockers? User Freedom, Privacy, and Security Is At Risk. 7 hours ago:
Never mind that tearing a page out of your own copy of a book is not a copyright issue… at all.
- Comment on China is about to launch SSDs so small you insert them like a SIM card 12 hours ago:
Defragging wasn’t handled in hardware. The OS is free to frag it up.
- Comment on China is about to launch SSDs so small you insert them like a SIM card 2 days ago:
It’s a little weird that wear leveling isn’t handled at the software level, given that you can surely pick free sectors randomly. Random access is nearly free. So is idle CPU time.
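A minimal sketch of what I mean, in Python: a toy block pool where the allocator picks free blocks uniformly at random, so wear spreads out with no firmware involved. Every name here is made up, not any real filesystem’s API.

```python
# Toy software wear leveling: pick free blocks at random instead of
# reusing the same low-numbered blocks over and over.
import random

BLOCKS = 1024                     # blocks in this pretend flash device
erase_counts = [0] * BLOCKS       # how often each block has been erased
free_blocks = set(range(BLOCKS))  # everything starts free

def allocate() -> int:
    """Grab a free block uniformly at random; random access is nearly free."""
    block = random.choice(tuple(free_blocks))
    free_blocks.remove(block)
    return block

def release(block: int) -> None:
    """Erase the block and return it to the free pool."""
    erase_counts[block] += 1
    free_blocks.add(block)

# Hammer it: 100,000 write/erase cycles.
for _ in range(100_000):
    release(allocate())

# Random placement keeps wear close to uniform (~98 erases per block here).
print("min erases:", min(erase_counts), "max erases:", max(erase_counts))
```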
- Comment on China is about to launch SSDs so small you insert them like a SIM card 3 days ago:
Is there a difference, besides SSDs tending to be plugged in all the time? Maybe better firmware?
- Comment on China is about to launch SSDs so small you insert them like a SIM card 3 days ago:
So… an SD card?
- Comment on AI Eroded Doctors' Ability to Spot Cancer Within Months in Study 4 days ago:
Are you sure? Check.
Where you jumped in is me, pointing out, repeatedly, that LLMs and IT have nothing to do with the actual article. Y’know, the doctors I keep mentioning? They’re not decorative.
- Comment on AI Eroded Doctors' Ability to Spot Cancer Within Months in Study 4 days ago:
You literally did.
“Concerning that the same is happening in medical even for the experts.”
- Comment on AI Eroded Doctors' Ability to Spot Cancer Within Months in Study 4 days ago:
No. You’re making a faulty comparison. The thing in this article is exclusively for experts. Using it made them better doctors, but when they stopped using it, they were out of practice at the old way. Like any skill you stop exercising. Especially at an expert level. Your junior programmers incompetently trusting LLMs is not the same problem in any direction.
This is genuinely important, because people are developing prejudice against an entire branch of computer science. This stupid headline pretends AI made cancer detection worse. Cancer’s kind of a big deal! Disguising the fact that detection rates improved with this tool, by fixating on how they got worse without it, may cost lives.
A lot of people in this thread are theatrically insisting on the importance of deep understanding of complex subjects, and then giving a kneejerk “fuckin’ AI, am I right?”
- Comment on Bonk. 4 days ago:
Bop it!
- Comment on 4 days ago:
Some guy blogged that the smart ones move to advertising.
- Comment on 4 days ago:
Neural networks becoming practical is world-changing. This lets us do crazy shit we have no idea how to program sensibly. Dead-reckoning with an accelerometer could be accurate to the inch. Chroma-key should rival professional rotoscoping. Any question with a bunch of data and a simple answer can be trained at some expense and then run on an absolute potato.
So it’s downright bizarre that every single company is fixated on guessing the next word with transformers. Alternatives like text diffusion and Mamba pop up and then disappear, without so much as a ‘so that didn’t work’ blog post.
- Comment on AI Eroded Doctors' Ability to Spot Cancer Within Months in Study 4 days ago:
We’re not talking about LLMs.
These doctors didn’t ask ChatGPT “does this look like cancer.” We’re talking about domain-specific medical tools.
- Comment on AI Eroded Doctors' Ability to Spot Cancer Within Months in Study 4 days ago:
Should urologists still train to detect diabetes by taste? We wouldn’t want the complexity of modern medicine to stunt their growth. These quacks can’t sniff piss with nearly the accuracy of Victorian doctors.
When a tool gets good enough, not using it is irresponsible. Sawing lumber by hand is a waste of time. Farmers today can’t use scythes worth a damn. Programming in assembly is frivolous.
At what point do we stop practicing without the tool? How big can the difference be, and still be totally optional? It’s not like these doctors lost or lacked the fundamentals. They’re just rusty at doing things the old way. If the new way is simply better, good, that’s progress.
- Comment on AI Eroded Doctors' Ability to Spot Cancer Within Months in Study 4 days ago:
“Concerning that the same is happening in medical even for the experts.”
It isn’t.
Glad we cleared that up?
- Comment on AI Eroded Doctors' Ability to Spot Cancer Within Months in Study 4 days ago:
Tone policing, followed by essentialist insults. Zero self-awareness.
- Comment on AI Eroded Doctors' Ability to Spot Cancer Within Months in Study 4 days ago:
Okay cool, that’s not what’s happening here.
These aren’t “vibe doctors.” They’re trained oncologists and radiologists. They have the skill to do this without the new tool, but if they don’t practice it, that skill gets worse. Surprise.
For comparison: can you code without a compiler? Are you practiced? It used to be fundamental. There must be e-mails lamenting that students rely on this newfangled high-level language called C. Those kids’ programs were surely slower… and ten times easier to write and debug. At some point, relying on a technology becomes much smarter than demonstrating you don’t need it.
If doctors using this tool detect cancer more reliably, they’re better doctors. You would not pick someone old-fashioned to feel around and reckon about your lump, even if they were the best in the world at discerning tumors by feel. You’d get an MRI. And you’d want it looked at by whatever process has the best detection rates. Human eyeballs might be in second place.
- Comment on AI Eroded Doctors' Ability to Spot Cancer Within Months in Study 4 days ago:
No shit, it’s my analogy. And I made clear - the underlying skill still exists.
These doctors can still spot cancer. They’re just rusty at eyeballing it, after several months using a tool that’s better than their eyeballs.
X-rays probably made doctors worse at detecting tumors by feeling around for lumps. Do you want them to fixate on that skill in particular? Or would you prefer medical care that uses modern technology?
- Comment on AI Eroded Doctors' Ability to Spot Cancer Within Months in Study 4 days ago:
This is not that kind of AI. It’s not an LLM trained on WebMD. You cannot reason about this domain-specific medical tool, based on your experience with ChatGPT.
- Comment on AI Eroded Doctors' Ability to Spot Cancer Within Months in Study 5 days ago:
“I can do math by hand.”
“But what if you can’t?”
Incorrect.
- Comment on AI Eroded Doctors' Ability to Spot Cancer Within Months in Study 5 days ago:
It sounds like this is about when they stopped using AI.
If they do better with it than without it, why optimize how good they are without it? Like, I know how to do math, by hand. But I also own a calculator. If the speed and accuracy of my multiplication is life-and-death for worried families, maybe I should use the calculator.
- Comment on AI is not bad for the environment in comparison with many other regular activities. 5 days ago:
And the power their employees need to drive to work?
And the power for all the computers they used growing up?
And the power to make those computers?
And the power Intel used inventing the microprocessor?!
And the power the entire telephone grid used while AT&T developed the transistor?!?!
- Comment on AI is not bad for the environment in comparison with many other regular activities. 5 days ago:
It’s highlighting hypocrisy. It’s asking: do you take this problem seriously, or are you just complaining?
Having LLMs shoved into everything is a serious problem. But it’s a problem the way that forced updates and invasion of privacy were already a problem. Fixating on energy use is pretense. It’s working backwards to point at the negative externalities of something you’ve already drawn conclusions about, as if those factors were relevant to your conclusion. Using that as rhetoric is the nature of bad faith.
- Comment on AI is not bad for the environment in comparison with many other regular activities. 5 days ago:
The shipping industry emits a billion tons of CO~2~e per year. Training a model emits… maybe a thousand? An impact that could be offset by reducing Chinese imports by 0.0001%. Or arbitrarily limited by strong-arming the very few companies involved. DeepSeek knocked off a few orders of magnitude and R1 seems to work, as well as any of these things work.
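Sanity-checking my own ballpark here (the thousand-ton training figure is a guess, not a measurement):

```python
# Rough check: one training run vs. global shipping, both ballpark figures.
shipping_co2e_tons = 1_000_000_000  # ~1 Gt CO2e per year, global shipping
training_co2e_tons = 1_000          # guessed emissions for one training run

print(f"{training_co2e_tons / shipping_co2e_tons:.4%}")  # 0.0001%
```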
But some people don’t really give a shit about the electricity involved - it’s just a negative for them to latch onto.
Now, it is a problem locally, where datacenters turn power straight into heat, as fast as they can manage. Anything like a tiny sliver of the global shipping industry becomes noticeable when it’s concentrated in one building within range of a commute.
- Comment on LLMs’ “simulated reasoning” abilities are a “brittle mirage,” researchers find 5 days ago:
Please don’t mistake vindication for a lack of ambiguity. When this took off, we had no goddamn idea what the limit was. The fact it works half this well is still absurd.
Simple examples like addition were routinely wrong, but they were wrong in a way that indicated - the model might actually infer the rules of addition. That’s a compact way to predict a lot of arbitrary symbols. Seeing that abstraction emerge would be huge, even if it was limited to cases with a zillion examples. And it was basically impossible to reason about whether that was pessimistic or optimistic.
A consensus for “that doesn’t happen” required all of this scholarship. If we had not reached this point, the question would still be open. Remove all the hype from grifters insisting AGI is gonna happen now, oops I mean now, oops nnnow, and you’re still left with a series of advances previously thought impossible. Backpropagation doesn’t work… okay now it does. Training only plateaus… okay it gets better. Diffusion’s cute, avocado chairs and all, but… okay that’s photoreal video. It really took people asking weird questions on high-end models to distinguish actual reasoning capability from extremely similar sentence construction.
And if we’re there, can we please have models ask a question besides ‘what’s the next word?’
- Comment on ChatGPT Is Still a Bullshit Machine 6 days ago:
Ass-pull nonsense metric.
Someone already told you ‘you have to know how to use the tool’ and it didn’t fucking help. Excuse me for trying to politely guide you toward what should be obvious.
- Comment on ChatGPT Is Still a Bullshit Machine 6 days ago:
Charles Babbage was once asked, ‘But if someone puts in the numbers wrong, how will your calculator get the right answer?’
Using a chatbot to code is useful if you don’t know how to code. You still need to know how to chatbot. You can’t grunt at the machine and expect it to read your mind.
Have you never edited a Google search, because the first try didn’t work?
- Comment on ChatGPT Is Still a Bullshit Machine 6 days ago:
This kind of assertion wildly overestimates how well we understand intelligence.
Higher levels of bullshitting require more abstraction and self-reference. Meaning must be inferred from observation, to make certain decisions, even when picking words from a list.
Current models are abstract enough to see a chessboard in an Atari screenshot, figure out which pieces each jumble of pixels represents, and provide a valid move. Scoffing because it’s not actually good at chess is a bizarre line to draw, to say there’s zero understanding involved.
Current models might be abstract enough to teach them a new game by explaining the rules.
Current models are not abstract enough to explain why they’re bad at a game and expect them to improve.
- Comment on GPT-5: Overdue, overhyped and underwhelming. And that’s not the worst of it. 1 week ago:
Seriously. Neural networks can approximate literally any function (that’s the universal approximation theorem, stated below), and the lumbering giants have all decided ‘what’s the next word?’ is the only function worth pursuing.
It’d take a sliver of their current budget to try starting over like it’s 2020. Compare with benchmarks that now look quaint. Enjoy some wisdom where previously they could only guess. Buuut nope: all LLM, all the time, and big big big.
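For reference, the ‘any function’ bit is the universal approximation theorem, roughly: with enough hidden units and a non-polynomial activation σ, a single hidden layer gets within any ε of a continuous function f on a compact domain K.

$$\sup_{x \in K}\left|\,f(x) - \sum_{i=1}^{N} c_i\,\sigma(w_i^{\top} x + b_i)\,\right| < \varepsilon \quad \text{for some } N,\ c_i,\ w_i,\ b_i$$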
- Comment on GPT-5: Overdue, overhyped and underwhelming. And that’s not the worst of it. 1 week ago:
It’s a chatbot that can see and draw, but rich idiots keep pushing it as an oracle. As if “a chatbot that can see and draw” isn’t impressive enough.
Three years ago, ‘label a tandem bicycle’ would’ve produced a tricycle covered in squiggles. Four years ago it was impossible. I don’t mean ‘really really hard.’ I mean we had no fucking idea how to make that program. People have been trying since code came on punchcards.
LLMs can almost-sorta-kinda do it, despite being completely the wrong approach. It’s shocking that ‘guess the next word’ works this well. I’m confused by the lack of experimentation in, just… asking a different question. Diffusion’s doing miracles with ‘estimate the noise.’ Video generators can do photorealism faster and cheaper than an actual camera.
The problem is, rich idiots claim this makes it an actual camera. In that context, it’s fair to point out when a video shows the Eiffel Tower in Berlin. It’s deeply impressive that computers can do that, now. But it might ruin people’s vacation plans.
- Comment on LEAKED: A New List Reveals Top Websites Meta Is Scraping of Copyrighted Content to Train Its AI 1 week ago:
Outright piracy? It’s not allowed, but it’s supposed to be a civil matter.
Videos posted without permission? I don’t think the audience is liable for that.
Scraping despite robots.txt? If that’s illegal for its own sake, then it’s overreaching on ‘unauthorized access.’
Training on any of this? … nah, it’s probably fine.
A pile of linear algebra that knows what pornography looks like does not serve the same function as any particular example. No more than one video infringes on another for the general idea of cameras pointed at naked people. Producing the same kind of thing is not infringement. (Though if it involves Shrek, the trademark people will have angry and confusing questions.)
Reproducing any particular input is a failure of training. Even the Bible should be paraphrased past about Genesis 1:9. The whole idea is getting the vibe of everything we’ve ever published. Cliff notes, passable imitation of the writing style, couple passages everyone’s quoted verbatim.
An encyclopedia article about a book doesn’t become illegal if we learn the author shoplifted it.