theunknownmuncher
@theunknownmuncher@lemmy.world
- Comment on Bitch shape attack 2 days ago:
We’ve also probably got viruses as a permanent part of our genome from some ancestor species.
We definitely have viruses as a permanent part of our genome. A type of herpes virus is present in the DNA of all living things descended from bony fishes.
- Comment on Why your old mobile phone may be polluting Thailand 1 week ago:
Nah, my old mobile phone is in a drawer in my basement.
- Comment on If you were (falsely) accused of murder, but you have records of your phone at home with youtube videos being played, can you submit those records as a sort of Alibi to exonerate you? 1 week ago:
it seems too flimsy
Okay, then the cops will have no problem proving you were elsewhere at the time, if it’s a lie. Until they’ve proved it and convinced a jury of that, you’re 100% innocent.
- Comment on If you were (falsely) accused of murder, but you have records of your phone at home with youtube videos being played, can you submit those records as a sort of Alibi to exonerate you? 1 week ago:
The commenter is still completely wrong, then. In that case there is no due process and you’re just guilty because people with guns say so.
- Comment on If you were (falsely) accused of murder, but you have records of your phone at home with youtube videos being played, can you submit those records as a sort of Alibi to exonerate you? 1 week ago:
Wrong, that’s the opposite of how reasonable doubt works. It is the prosecutor’s job to prove beyond doubt that the defendant is guilty of the charges. The defendant does not need to prove they are innocent.
- Comment on Encrypting without full disk encryption question 2 weeks ago:
If it can power up and decrypt the docker volumes on its own without prompting you for a password in your basement, it will also power up and decrypt the docker volumes on its own without prompting the robbers for a password in their basement
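To make the point concrete, here’s a minimal sketch of the unattended-unlock setup being described (device names and paths are made up for illustration): the keyfile lives on the same box, so possession of the box is possession of the key.

```shell
# hypothetical unattended unlock: the LUKS keyfile sits on the host itself
cryptsetup luksAddKey /dev/sdb1 /root/docker-vol.key    # enroll a local keyfile
cryptsetup open --key-file /root/docker-vol.key /dev/sdb1 dockervol
mount /dev/mapper/dockervol /var/lib/docker/volumes
# anyone who walks off with the whole machine boots it and gets the same result
```

Encryption like this only protects against someone pulling the bare drive, not against theft of the entire box. A passphrase prompt at boot (or a key fetched from a remote server you can revoke) is what actually changes the robbers-in-their-basement scenario.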
- Comment on Google is intentionally throttling YouTube videos, slowing down users with ad blockers 2 weeks ago:
I’ll wait.
- Comment on Trump's claim about "control of the skies over Iran" raises questions about U.S. involvement in conflict 2 weeks ago:
So… did Congress authorize this or is the US Constitution just no longer relevant at all…?
- Comment on The Plane That Crashed Yesterday Was the Same One a Dead Boeing Whistleblower Warned About 2 weeks ago:
I never would have interpreted the headline to mean “the same exact plane”?
- Comment on Wikipedia Pauses AI-Generated Summaries After Editor Backlash 3 weeks ago:
the Top section of each wikipedia article is already a summary of the article
- Comment on We Should Immediately Nationalize SpaceX and Starlink 3 weeks ago:
Yes, and the post title is just the title of the article 🤦
- Comment on We Should Immediately Nationalize SpaceX and Starlink 3 weeks ago:
You never clicked on the link, did you?
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 3 weeks ago:
loses the argument “we’re at the age-old internet stalemate!” LMAO
- Comment on We Should Immediately Nationalize SpaceX and Starlink 3 weeks ago:
American exceptionalism definitely sucks, but this is not an example of American exceptionalism. The source is an article from an American magazine, published for an American audience.
- Comment on We Should Immediately Nationalize SpaceX and Starlink 4 weeks ago:
Yeah I mean the taxpayers have literally already paid for all of both SpaceX and Starlink. The public paid for it, the public should own it.
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 4 weeks ago:
There is a distinction between data and an action you perform on data (matrix maths, codec algorithm, etc.). It’s literally completely different.
Incorrect. You might want to take an information theory class before speaking on subjects like this.
LLMs are just tools not sentient or verging on sentient
Correct. No one claimed they are “sentient” (you actually mean “sapient”, not “sentient”, but that’s fine as most people mix those up). And no, LLMs are not sapient either, and sapience has nothing to do with reasoning or logic; you’re just moving the goalposts.
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 4 weeks ago:
LOL you didn’t really make the point you thought you did. It isn’t an “improper comparison” (it’s called a false equivalency FYI), because there isn’t a real distinction between information and this thing you just made up called “basic action on data”, but anyway have it your way:
Your comment is still exactly like saying an audio pipeline isn’t really playing music because it’s actually just doing basic math.
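To spell the analogy out with a toy example (filename and sample rate are arbitrary choices, nothing more): a few lines of plain arithmetic really do produce playable audio, which is exactly why “it’s just doing basic math” doesn’t rule anything out.

```python
import math
import struct
import wave

# "just basic math": one second of a 440 Hz sine tone, sample by sample
rate = 8000
samples = [int(32767 * math.sin(2 * math.pi * 440 * t / rate)) for t in range(rate)]

# pack the numbers into a WAV file -- the result is music, not "mere arithmetic"
with wave.open("tone.wav", "wb") as f:
    f.setnchannels(1)              # mono
    f.setsampwidth(2)              # 16-bit signed samples
    f.setframerate(rate)
    f.writeframes(struct.pack("<" + "h" * len(samples), *samples))
```

The description of the mechanism (arithmetic on numbers) and the description of the behavior (playing a tone) are both accurate at the same time; one doesn’t debunk the other.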
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 4 weeks ago:
To write the second line, the model had to satisfy two constraints at the same time: the need to rhyme (with “grab it”), and the need to make sense (why did he grab the carrot?). Our guess was that Claude was writing word-by-word without much forethought until the end of the line, where it would make sure to pick a word that rhymes. We therefore expected to see a circuit with parallel paths, one for ensuring the final word made sense, and one for ensuring it rhymes.
Instead, we found that Claude plans ahead. Before starting the second line, it began “thinking” of potential on-topic words that would rhyme with “grab it”. Then, with these plans in mind, it writes a line to end with the planned word.
🙃 actually read the research?
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 4 weeks ago:
Yes, neural networks can be implemented with matrix operations. What does that have to do with proving or disproving the ability to reason? You didn’t post a relevant or complete thought
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 4 weeks ago:
You’re confusing the confirmation that the LLM cannot explain it’s under-the-hood reasoning as text output, with a confirmation of not being able to reason at all. Anthropic is not claiming that it cannot reason. They actually find that it performs complex logic and behavior like planning ahead.
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 4 weeks ago:
I don’t want to brigade, so I’ll put my thoughts here. The linked comment is making the same mistake about self preservation that people make when they ask an LLM to “show its work” or explain its reasoning. The text response of an LLM cannot be taken at its word or used to confirm that kind of theory. It requires tracing the logic under the hood.
Just like how it’s not actually an AI assistant, but trained and prompted to output text that is expected to be what an AI assistant would respond with, if it is expected that it would pursue self preservation, then it will output text that matches that. Its output is always “fake”
That doesn’t mean there isn’t a real potential element of self preservation, though, but you’d need to dig and trace through the network to show it, not use the text output.
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 4 weeks ago:
No, you’re misunderstanding the findings. It does show that LLMs do not explain their reasoning when asked, which makes sense and is expected. They do not have access to their inner workings and generate a response that “sounds” right, but tracing their internal logic shows they operate differently than what they claim when asked. You can’t ask an LLM to explain its own reasoning. But the article shows how they’ve made progress with tracing under the hood, and the surprising results they found about how it is able to do things like plan ahead, which defeats the misconception that it is just “autocomplete”
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 4 weeks ago:
anthropic.com/…/tracing-thoughts-language-model for one, the exact article OP was asking for
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 4 weeks ago:
I’ve been very unimpressed by gemma3. 1b, 4b, or 12b? 27b is probably your best chance at coherent results. Try qwen3:32b.
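For reference, swapping models in ollama is a one-liner (assuming these model tags are still available on the ollama registry, and that you have the RAM/VRAM for them; a 32b model typically wants 20+ GB even quantized):

```shell
# fetch a larger model and try the same prompt against it
ollama pull qwen3:32b
ollama run qwen3:32b "Summarize the difference between a model and a runtime."
```

Coherence generally scales with parameter count, so comparing a 1b and a 32b answer to the same prompt makes the gap obvious fast.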
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 4 weeks ago:
it’s completing the next word.
Facts disagree, but you’ve decided to live in a reality that matches your biases despite real evidence, so whatever 👍
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 4 weeks ago:
ollama is not an LLM, but a program used to run them. What model are you running?
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 4 weeks ago:
It’s true that LLMs aren’t “aware” of what internal steps they are taking, so asking an LLM how they reasoned out an answer will just output text that statistically sounds right based on its training set, but to say something like “they can never reason” is provably false.
It’s obvious that you have a bias and desperately want reality to confirm it, but there’s been significant research and progress in tracing internals of LLMs, that show logic, planning, and reasoning. Neural networks are very powerful; after all, you are one too. Can you reason?
- Comment on I'm looking for an article showing that LLMs don't know how they work internally 4 weeks ago:
anthropic.com/…/tracing-thoughts-language-model it’s this one
- Comment on [deleted] 5 weeks ago:
Never playing a game that needs to be marketed with fake sob story screenshots. I assume it’s garbage anyway
- Comment on I knew it 5 weeks ago:
That’s still easily a mansion yeah lol. McMansion would still be “single family” sized in my bullshit-personal-definition of the word. Several families of multiple generations can live in a mansion. I bet there’s 12 bedrooms in that house