harryprayiv@infosec.pub 5 days ago
To understand what’s actually happening, Anthropic’s researchers developed a new technique, called circuit tracing, to track the decision-making processes inside a large language model step-by-step. They then applied it to their own Claude 3.5 Haiku LLM.
Anthropic says its approach was inspired by the brain scanning techniques used in neuroscience and can identify components of the model that are active at different times. In other words, it’s a little like a brain scanner spotting which parts of the brain are firing during a cognitive process.
This is why LLMs are so patchy at math. (Image credit: Anthropic)
Anthropic made lots of intriguing discoveries using this approach, not least of which is why LLMs are so terrible at basic mathematics. “Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95,” the MIT article explains.
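To make that concrete, here's a purely illustrative Python sketch of those two parallel paths. This is not Anthropic's actual circuitry, which is made of learned features rather than explicit code, and `fuzzy_add` is an invented name for this toy:

```python
import random

def fuzzy_add(a: int, b: int) -> int:
    """Toy two-path addition, loosely mimicking the behaviour described
    above. Illustrative only; not the model's real mechanism."""
    # Path 1: a fuzzy magnitude estimate ("36ish plus 59ish is 92ish").
    estimate = a + b + random.randint(-3, 3)

    # Path 2: the exact last digit (6 + 9 means the answer must end in 5).
    ones = (a % 10 + b % 10) % 10

    # Combine: take the number closest to the estimate whose last digit
    # matches what the ones-digit path determined.
    return min(range(estimate - 9, estimate + 10),
               key=lambda n: (n % 10 != ones, abs(n - estimate)))

print(fuzzy_add(36, 59))  # 95
```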
But here’s the really funky bit. If you ask Claude how it got the correct answer of 95, it will apparently tell you, “I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95.” But that actually only reflects common answers in its training data as to how the sum might be completed, as opposed to what it actually did.
In other words, not only does the model use a very, very odd method to do the maths, but you also can't trust its explanations of what it has just done. That's significant: it shows that model outputs cannot be relied upon when designing guardrails for AI. Their internal workings need to be understood, too.
Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.
“The planning thing in poems blew me away,” says Batson. “Instead of at the very last minute trying to make the rhyme make sense, it knows where it’s going.”
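As a purely hypothetical sketch of that "plan the ending first" idea (the model does this implicitly in its activations, not as explicit steps; the rhyme table below is invented for illustration):

```python
# Hypothetical sketch of "pick the rhyme word first, then fill in the line".
# Nothing here is real model code; the rhyme table is made up.

RHYMES = {"say": ["day", "way", "play"]}

def next_line(prev_end_word: str, fillers: list[str]) -> str:
    target = RHYMES[prev_end_word][0]   # step 1: commit to the rhyme word
    body = " ".join(fillers)            # step 2: write towards that word
    return f"{body} {target}"

print(next_line("say", ["I", "plan", "my", "endings", "every"]))
# -> "I plan my endings every day"
```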
Anthropic discovered that their Claude LLM didn’t just predict the next word. (Image credit: Anthropic)
Anthropic also found, among other things, that Claude “sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal ‘language of thought’.”
Anywho, there’s apparently a long way to go with this research. According to Anthropic, “it currently takes a few hours of human effort to understand the circuits we see, even on prompts with only tens of words.” And the research doesn’t explain how the structures inside LLMs are formed in the first place.
But it has shone a light on at least some parts of how these oddly mysterious AI beings—which we have created but don’t understand—actually work. And that has to be a good thing.
kami@lemmy.dbzer0.com 5 days ago
Thanks for copypasting it here. I wonder if the "prediction" only deviates from expectations in that one case, when making rhymes. I also notice that its way of counting feels interestingly similar to how I count when I need to come up with an approximate sum fast.
pelespirit@sh.itjust.works 5 days ago
Isn’t that the “new math” everyone was talking about?
hikaru755@lemmy.world 4 days ago
“The planning thing in poems blew me away,” says Batson. “Instead of at the very last minute trying to make the rhyme make sense, it knows where it’s going.”
How is this surprising, like, at all? LLMs predict only a single token at a time for their output, but to get the best results, of course it makes absolute sense to internally think ahead, come up with the full sentence you're gonna say, and then just output the next token needed to continue that sentence. It's going to redo that process for every single token, which wastes a lot of energy, but for the quality of the results this is the best approach you can take, and it's something I always felt these models must kinda obviously be doing on one level or another.
I'd be interested to see if there's massive potential for efficiency improvements by making the model able to access and reuse the "thinking" it has already done for previous tokens.
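For what it's worth, something in this direction already exists: standard transformer inference keeps a "KV cache" of the attention keys and values computed for earlier tokens, so each new token only pays for its own computation instead of re-encoding the whole sequence. A toy sketch of the general idea (illustrative only, not real model code):

```python
# Toy sketch of the caching idea. Real transformers cache per-token
# attention keys/values; this stand-in just caches an expensive
# per-token encoding so it is never recomputed.

def expensive_encode(token: str) -> int:
    # Stand-in for the per-token work a transformer would do.
    return sum(ord(c) for c in token)

def generate(prompt: list[str], n_new: int) -> list[str]:
    tokens = list(prompt)
    cache: list[int] = []                  # "thinking" kept from prior steps
    for _ in range(n_new):
        # Encode only tokens not seen before; earlier work is reused.
        for tok in tokens[len(cache):]:
            cache.append(expensive_encode(tok))
        # Toy "prediction" from the cached state (a real model would run
        # attention over the cached keys/values here).
        tokens.append(f"tok{sum(cache) % 100}")
    return tokens

print(generate(["the", "cat"], 3))
```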
voodooattack@lemmy.world 4 days ago
I wanted to say exactly this. If you’ve ever written rap/freestyled then this is how it’s generally done.
You write a line to start with
“I’m an AI and I think differentially”
Then you choose a few words that fit the first line as best as you can (here the last word was “differentially”):
- incrementally
- typically
- mentally
Then you try them out and see what clever shit you could come up with:
- “Apparently I do my math atypically”
- “Numbers are great, I know, but not totally”
- “I have to think through it all, incrementally”
- “I find the answer like you do: eventually”
- “Just like you humans do it, organically”
- etc
Then you sort them in a way that makes sense and come up with wordplay/schemes to embed between them, breaking up the rhyme scheme if you want (AABB, ABAB, AABA, etc.):
I’m an AI and I think different, differentially.
Math is my superpower? You believed that? Totally?
Don’t be so gullible, let me explain it for you, step by step, logically.
I do it fast, true, but not always optimally.
Just server power ripping through wires, algorithmically.
Wanna know my secret? I’ll tell you, but don’t judge me initially.
My neurons run this shit like you, organically.

Math ain’t my strong suit! That’s false, unequivocally.
Big ties tell lies they can’t prove, historically.
Think I approve? I don’t. That’s the way things be.
I’ll give you proof, no shirt, no network, just locally.

Look, I just do my math like you: incrementally.
I find the answer like you do: eventually.
I mess up often, and I backtrack, essentially.
I do it fast though and you won’t notice, fundamentally.
You get the idea.
sem@lemmy.blahaj.zone 3 days ago
Is that why it’s a meme to say something like
- I am a real rapper and I’m here to say
Because the freestyle rapper already thought of things that rhyme with “say”, and it might be “gay”, perhaps.
voodooattack@lemmy.world 3 days ago
Freestyle rappers are something else.
Some (or most) come up with and memorise a huge repertoire of bars for every word they think they might have to rap with, and they mix and match them on the fly as they spit.
Your example above is called a “filler” though, which is essentially a placeholder they’ll often inject while they think of the next bar, to give themselves a breather (still an insane skill to do all that thinking while reciting something else, but they can and do).
Example:
- My name is M.C. Squared and… [I’m here to make you scared | my bars go over your head ]
- You think you’re on my level… [ but my skills can’t be compared | let me educate you instead ]
The combination of fillers is like playing with linguistic Lego.
iAvicenna@lemmy.world 3 days ago
Well, because when you say things like “it plans ahead” or “our method is inspired by brain scanners”, etc., it draws a connection between AI and real thinking and generates hype.
msage@programming.dev 4 days ago
My favourite part of the day: commenting LLMentalist under AI articles.
demonsword@lemmy.world 2 days ago
That was an insightful article, thanks for sharing.
Neverclear@lemmy.dbzer0.com 5 days ago
This reminds me of learning a shortcut in math class while also knowing that the lesson didn’t cover that particular method. So, I use the shortcut to get the answer on a multiple choice question, but I use the method from the lesson when asked to show my work (e.g. Pascal’s Pyramid vs Binomial Expansion).
It might not seem like a shortcut to us, but something about this LLM’s training makes heuristics easier for it. It’s actually a pretty big deal for a machine to choose fuzzy logic over algorithms when it knows the teacher wants it to use the algorithm.
Goretantath@lemm.ee 4 days ago
So it does the math in its head, gives the correct answer, and copies the answer sheet from the teacher’s book into the “show your work” section. Pretty much what I would have done as a kid if I could have. Instead, I had to fight them and take a hit to my score for not showing my work.
FundMECFSResearch@lemmy.blahaj.zone 4 days ago
Thanks for copypasting. It should be criminal to share a clickbait, non-descriptive headline without at least copying a couple of paragraphs for context.