Submitted 3 weeks ago by cm0002@lemmy.world to technology@lemmy.world
'is weirder than you thought'
I am about as likely to click a link with that line as I am one with 'this one weird trick' or 'side hustle'.
I would really like it if headlines treated us like adults and got rid of clickbaity lines.
But then you wouldn't need to click on their ad-infested shite website, where 1-2 paragraphs' worth of actual information is stretched into a giant essay so that they can show you more ads the longer you scroll.
I will never understand how people survive without ad blockers. I tried going without one recently and it was a horrific experience.
They do it because it works on the whole. If straight titles were as effective they’d be used instead.
It really is quite unfortunate. I wish titles did what titles are supposed to do instead of being bait. But you're right, even when consciously trying to avoid clicking, sometimes curiosity gets the best of me. I am improving, though.
Well, I’m doing my part against them by refusing to click on any bait headlines, but I fear it’s a lost cause anyway.
The one weird trick that makes clickbait work
That’s mildly depressing.
"Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95," the MIT article explains.
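Purely as a toy, the "rough magnitude plus exact last digit" combination the quote describes could be sketched like this in Python. This is only an illustration of the idea, not Claude's internals; the "rough" path here is faked for simplicity by blurring the true sum, whereas the article describes a genuinely imprecise estimate.

```python
# Toy illustration only: mirrors the decomposition described in the quote,
# not Claude's actual mechanism.
def fuzzy_add(a: int, b: int) -> int:
    rough = (a + b) // 10 * 10            # magnitude path: "somewhere in the 90s"
    last_digit = (a % 10 + b % 10) % 10   # last-digit path: 6 + 9 ends in 5
    return rough + last_digit             # combine the two paths: 90 + 5 = 95

print(fuzzy_add(36, 59))  # 95
```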
That is precisely how I do math. I feel a little targeted that they called this odd.
I use a calculator. An AI should too, and not need to do weird shit to do math.
Function calling is a thing chatbots can do now
A regular AI should use a calculator subroutine, not try to discover basic math every time it’s asked something.
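For anyone who hasn't seen it: "function calling" just means the model emits a structured request and the host application runs real code for it. Here is a minimal sketch of that pattern; ask_model is a hypothetical stand-in, not any particular vendor's API.

```python
import json

# Hypothetical stand-in for a chat model that has been instructed it may request
# tools by answering with JSON like {"tool": "calculator", "args": {...}}.
def ask_model(prompt: str) -> str:
    return json.dumps({"tool": "calculator", "args": {"op": "add", "a": 36, "b": 59}})

def calculator(op: str, a: float, b: float) -> float:
    # The actual arithmetic is done by ordinary, deterministic code.
    return {"add": a + b, "sub": a - b, "mul": a * b, "div": a / b}[op]

reply = json.loads(ask_model("What is 36 + 59?"))
if reply.get("tool") == "calculator":
    print(calculator(**reply["args"]))  # 95, computed by the subroutine, not the model
```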
Fascist. If someone does maths differently than your preference, it’s not “weird shit”. I’m facile with mental math despite what’s perhaps a non-standard approach, and it’s quite functional to be able to perform simple to moderate levels of mathematics mentally without relying on a calculator.
Yes, you shove it off onto something else to do for you instead of doing it yourself, and the AI doesn't.
I think it's odd in the sense that it's supposed to be software, so it should already know what 36 plus 59 is in a picosecond, instead of doing mental arithmetic like we do.
At least that’s my takeaway
This is what the ARC-AGI test by Chollet has also shown regarding current AI / LLMs. They have a tendency to approach problems with this trial and error method and can be extremely inefficient (in their current form) with anything involving abstract / deductive reasoning.
Most LLMs do terribly at the test; the most recent breakthrough came with reasoning models. But even the reasoning models struggle.
ARC-AGI is simple, but it demands a keen sense of perception and, in some sense, judgment. It consists of a series of incomplete grids that the test-taker must color in based on the rules they deduce from a few examples; one might, for instance, see a sequence of images and observe that a blue tile is always surrounded by orange tiles, then complete the next picture accordingly. It’s not so different from paint by numbers.
The test has long seemed intractable to major AI companies. GPT-4, which OpenAI boasted in 2023 had “advanced reasoning capabilities,” didn’t do much better than the zero percent earned by its predecessor. A year later, GPT-4o, which the start-up marketed as displaying “text, reasoning, and coding intelligence,” achieved only 5 percent. Gemini 1.5 and Claude 3.7, flagship models from Google and Anthropic, achieved 5 and 14 percent, respectively.
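To make the "blue tile surrounded by orange tiles" example a bit more concrete, here's a toy grid completion in Python. It is deliberately much simpler than a real ARC-AGI task, where the rule itself has to be inferred from a few example grids rather than being given.

```python
# Toy rule in the spirit of the example above: every blue tile ("B") must be
# surrounded by orange tiles ("O"). Real ARC-AGI tasks only show you example
# input/output grids and you must deduce the rule yourself.
grid = [
    [".", ".", ".", "."],
    [".", "B", ".", "."],
    [".", ".", ".", "B"],
]

def complete(grid):
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != "B":
                continue
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (dr, dc) != (0, 0) and 0 <= rr < rows and 0 <= cc < cols and out[rr][cc] == ".":
                        out[rr][cc] = "O"
    return out

for row in complete(grid):
    print(" ".join(row))
```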
But you're doing two calculations now, an approximate one and another on the last digits. Since you're going to do the approximate calculation anyway, you might as well just do the accurate calculation and be done in one step.
This solution, while it works, has the feel of evolution. No intelligent design, which I suppose makes sense considering the AI did essentially evolve.
Appreciate the advice on how my brain should work.
No intelligent design, which I suppose makes sense considering the AI did essentially evolve.
And that made a lot of people angry
Rather than read PCGamer talk about Anthropic’s article you can just read it directly here. It’s a good read.
I think this comm is more suited for news articles talking about it, though I did post that link to !ai_@lemmy.world, which I think is a better comm for those who want to go more in-depth on it.
The research paper looks well written, but I couldn't find any information on whether it is going to be published in a reputable journal and peer reviewed. I have little faith in private businesses who profit from AI providing an unbiased view of how AI works. The first question I'd like answered is: did Anthropic's marketing department review the paper, and did they offer any corrections or feedback? We've all heard the stories about the tobacco industry paying for papers to be written about the benefits of smoking and refuting health concerns.
A lot of AI research isn't published in journals but is either posted to a corporate website or put up on the arXiv. There are some AI journals, but the AI community doesn't particularly value them (and threw a bit of a fit when they came out). This article is mostly marketing and, in my opinion, doesn't show anything that should surprise anyone familiar with how neural networks work generically.
But here’s the really funky bit. If you ask Claude how it got the correct answer of 95, it will apparently tell you, “I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95.” But that actually only reflects common answers in its training data as to how the sum might be completed, as opposed to what it actually did.
This is not surprising. LLMs are not designed to have any introspection capabilities.
Introspection could probably be tacked onto existing architectures in a few different ways, but as far as I know nobody’s done it yet. It will be interesting to see how that might change LLM behavior. I suspect it is requisite but not sufficient for self-awareness.
I’m surprised that they are surprised by this as well. What did they expect, and why? How much of this is written to imply LLMs - their business - are more advanced/capable than they actually are?
Then take that concept further, and let it keep introspecting and inspecting how it comes to the conclusions it does and eventually…
you can’t trust its explanations as to what it has just done.
I might have had a lucky guess, but this was basically my assumption. You can’t ask LLMs how they work and get an answer coming from an internal understanding of themselves, because they have no ‘internal’ experience.
Unless you make a scanner like the one in the study, non-verbal processing is as much of a black box to their ‘output voice’ as it is to us.
Anyone who has used them for even a limited amount of time will tell you that the thing can give you a correct, detailed explanation of how to do something and then provide a broken result. And vice versa. Digging into it by asking more questions has zero chance of being useful.
This is one of the most interesting things about LLMs that I have ever read.
That bit about how it turns out they aren’t actually just predicting the next word is crazy and kinda blows the whole “It’s just a fancy text auto-complete” argument out of the water IMO
It really doesn’t. You’re just describing the “fancy” part of “fancy autocomplete.” No one was ever really suggesting that they only predict the next word. If that was the case they would just be autocomplete, nothing fancy about it.
What’s being conveyed by “fancy autocomplete” is that these models ultimately operate by combining the most statistically likely elements of their dataset, with some application of random noise. More noise creates more “creative” (meaning more random, less probable) outputs. They do not actually “think” as we understand thought. This can clearly be seen in the examples given in the article, especially to do with math. The model is throwing together elements that are statistically proximate to the prompt. It’s not actually applying a structured, logical method the way humans can be taught to.
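To give a concrete picture of what that "random noise" knob amounts to, here is a sketch of temperature sampling over made-up next-token scores (toy numbers, stdlib only; not any specific model's values):

```python
import math, random

# Toy next-token scores (logits) a model might assign after "The cat sat on the".
logits = {"mat": 4.0, "sofa": 2.5, "roof": 1.5, "moon": 0.2}

def sample(logits, temperature=1.0):
    """Higher temperature flattens the distribution -> more 'creative' (random) picks."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    total = sum(math.exp(s) for s in scaled.values())
    probs = {tok: math.exp(s) / total for tok, s in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

print(sample(logits, temperature=0.2))  # almost always "mat"
print(sample(logits, temperature=2.0))  # much more likely to pick something unusual
```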
Predicting the next word vs predicting a word in the middle and then predicting backwards are not hugely different things. It’s still predicting parts of the passage based solely on other parts of the passage.
Compared to a human who forms an abstract thought and then translates that thought into words. Which words I use has little to do with which other words I’ve used except to make sure I’m following the rules of grammar.
It doesn't. Who the hell cares if someone allowed it to break "predict the whole text" into "predict it part by part", and then "with rhyme, we start at the end"? Sounds like a naive (not as in "simplistic", but as in "most straightforward") way to code this, so given the task of writing an automatic poetry producer, I would start with something similar. The whole thing still stands as fancy auto-complete.
I read an article that it can “think” in small chunks. They don’t know how much though. This was also months ago, it’s probably expanded by now.
I mean it implies that they CAN start with the conclusion or the “thought” and then generate the text to verbalize that.
It's shocking to what lengths humans will go to explain how their wetware neural network is fundamentally different and how it's impossible for LLMs to think or reason in any way. Honestly, LLMs teach us more about human intelligence (or the lack thereof) than machine intelligence. Like Obi-Wan said, "The ability to speak does not make one intelligent." Haha.
It’s amazing that humans have coded a tool for which they have to afterwards write more tools for analyzing how it works.
That has always been the case. Even basic programs need debugging sometimes, so we developed debuggers.
No it hasn't. When you program, you break the problem down into many smaller subprograms and then codify them. There are errors that need debugging, but never "how does this part of the program I wrote work?".
There are some cases, like detergents: apparently until recently we didn't know exactly how they work. But human-engineered tools are not comparable to this.
Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.
If the LLM already knows the full sentence it's going to output from the first word it "guesses", I wonder if you could short-circuit it and just have it give the full sentence instead of doing a cycle for each word of the sentence. That could maybe cut down on LLM energy costs.
Interestingly, too, this is a technique used when you're improvising songs; it's called Target Rhyming.
The most effective way is to do A/B^1/C/B^2 rhymes. You pick the B^2 rhyme, let's say "ibuprofen", and you get all of A and B^1 to think of a rhyme:
Oh it's Christmas time
And I was up on my roof when
I heard a jolly old voice
Ask me for ibuprofen
And the audience thinks you’re fucking incredible for complex rhymes.
I don't think it knows the full sentence, it just doesn't search for the words in the order they will be in the sentence. It finds the end-words first to make the poem rhyme, then looks for the rest of the words. I do it this way as well, just like many other people trying to create any kind of rhyming text.
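A toy sketch of that "choose the end word first, then fill in the line" approach (tiny made-up rhyme table; purely illustrative, nothing to do with how Claude represents any of this internally):

```python
import random

# Made-up data purely to illustrate planning a line backwards from its rhyme.
rhymes = {"light": ["night", "bright", "flight"], "rain": ["pain", "again", "plain"]}
openers = ["and then I dreamed about the", "while we waited for the", "so I wandered through the"]

def next_line(previous_end_word: str) -> str:
    end_word = random.choice(rhymes[previous_end_word])  # step 1: pick the rhyme first
    return f"{random.choice(openers)} {end_word}"        # step 2: then fill in the rest of the line

print("I walked alone beneath the light")
print(next_line("light"))
```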
Anthropic made lots of intriguing discoveries using this approach, not least of which is why LLMs are so terrible at basic mathematics. “Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95,” the MIT article explains.
But here’s the really funky bit. If you ask Claude how it got the correct answer of 95, it will apparently tell you, “I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95.” But that actually only reflects common answers in its training data as to how the sum might be completed, as opposed to what it actually did.
Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.
The other day I asked an LLM to create a partial number chart to help my son learn what numbers are next to each other. If I gave it very detailed instructions, it failed miserably every time. And sometimes even when I told it to correct specific things about its answer, it still basically ignored me. The only way I could get it to do what I wanted consistently was to break the task down into small steps and tell it to show me its progress.
I'd be very interested to learn its "thought process" in each of those scenarios.
It’s like that “Joey Repeat After Me” meme from friends haha
Wow, interesting. :)
Not unexpectedly, the LLM failed to explain its own thought process correctly.
tbf, how do you know what to say and when? or what 2+2 is?
you learnt it? well so did AI
i’m not an AI nut or anything, but we can barely comprehend our own internal processes, it’d be concerning if a thing humanity created was better at it than us lol
You’re comparing two different things.
Of course I can reflect on how I came up with a math result.
“Wait, how did you come up with 4 when I asked you 2+2?”
You can confidently say: “well, my teacher said it once and I’m just parroting it.” Or “I pictured two fingers in my mind, then pictured two more fingers and then I counted them.” Or “I actually thought that I’d say some random number, came up with 4 because it’s my favorite digit, said it and it was pure coincidence that it was correct!”
Whereas it doesn't seem like Claude can do this.
Of course, you could ask me “what’s the physical/chemical process your neurons follow for you to form those four fingers you picture in your mind?” And I would tell you I don’t know. But again, that’s a different thing.
How can i take an article that uses the word “anywho” seriously?
Don’t tell me that my thoughts aren’t weird enough.
…Duh. 🤓
This is great stuff. If we can properly understand these “flows” of intelligence, we might be able to write optimized shortcuts for them, vastly improving performance.
Better yet, teach AI to write code replacing specific optimized AI networks. Then automatically profile and optimize and unit test!
The AIs have shrinks now?
You can become one too! Get your certification here mt.cert.ccc.de
Someone put 69 to research. Nice trolling.
harryprayiv@infosec.pub 3 weeks ago
FreeBird@lemmy.dbzer0.com 3 weeks ago
Thanks
harryprayiv@infosec.pub 3 weeks ago
🙏
FundMECFSResearch@lemmy.blahaj.zone 2 weeks ago
Thanks for copypasting. It should be criminal to share a clickbait, non-descriptive headline without at least copying a couple of paragraphs for context.
kami@lemmy.dbzer0.com 3 weeks ago
Thanks for copypasting here. I wonder if the "prediction" is not as expected only in that case, when making rhymes. I also notice that its way of counting feels interestingly not too different from how I count when I need to quickly come up with an approximate sum.
pelespirit@sh.itjust.works 3 weeks ago
Isn’t that the “new math” everyone was talking about?
hikaru755@lemmy.world 2 weeks ago
How is this surprising, like, at all? LLMs predict only a single token at a time for their output, but to get the best results, of course it makes absolute sense to internally think ahead, come up with the full sentence you’re gonna say, and then just output the next token necessary to continue that sentence. It’s going to re-do that process for every single token which wastes a lot of energy, but for the quality of the results this is the best approach you can take, and that’s something I felt was kinda obvious these models must be doing on one level or another.
I'd be interested to see if there is massive potential for efficiency improvements by making the model able to access and reuse the "thinking" it has already done for previous tokens.
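For what it's worth, real transformer implementations already reuse the per-token computation for earlier tokens via a key/value cache; whether higher-level "plans" could be cached and reused is a different, open question. A toy sketch of the shape of that reuse, with a hypothetical encode_token standing in for the expensive per-token work:

```python
# Toy autoregressive loop that caches per-token work instead of redoing it.
# encode_token is a made-up stand-in for the expensive per-token computation.
def encode_token(token: str) -> float:
    return float(sum(ord(c) for c in token))

def generate(prompt: list[str], steps: int) -> list[str]:
    tokens = list(prompt)
    cache = [encode_token(t) for t in tokens]      # work for the prompt, done once
    for _ in range(steps):
        context = sum(cache)                       # reuse cached work, no recomputation
        next_token = f"tok{int(context) % 7}"      # dummy stand-in for the real prediction
        tokens.append(next_token)
        cache.append(encode_token(next_token))     # only the new token gets processed
    return tokens

print(generate(["the", "cat"], steps=3))
```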
voodooattack@lemmy.world 2 weeks ago
I wanted to say exactly this. If you’ve ever written rap/freestyled then this is how it’s generally done.
You write a line to start with
“I’m an AI and I think differentially”
Then you choose a few words that fit the first line as best as you can (here the last word was "differentially").
Then you try them out and see what clever shit you could come up with:
Then you sort them in a way that makes sense and come up with word play/schemes to embed it between, break up the rhyme scheme if you want (AABB, ABAB, AABA, etc)
You get the idea.
iAvicenna@lemmy.world 2 weeks ago
Well, because when you say things like "it plans ahead" or "our method is inspired by brain scanners", etc., it makes a connection between AI and real thinking and generates hype.
Neverclear@lemmy.dbzer0.com 3 weeks ago
This reminds me of learning a shortcut in math class but also knowing that the lesson didn't cover that particular method. So, I use the shortcut to get the answer on a multiple choice question, but I use the method from the lesson when asked to show my work (e.g. Pascal's Pyramid vs Binomial Expansion).
It might not seem like a shortcut for us, but something about this LLM’s training makes it easier to use heuristics. That’s actually a pretty big deal for a machine to choose fuzzy logic over algorithms when it knows that the teacher wants it to use the algorithm.
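In case the shortcut being referenced is unfamiliar (using the triangle rather than the pyramid, for the two-variable case): row n of Pascal's triangle gives the coefficients of (a+b)^n, so you can read the expansion off the triangle instead of multiplying everything out. A quick check in Python:

```python
from math import comb

def pascal_row(n: int) -> list[int]:
    """Build row n of Pascal's triangle: each entry is the sum of the two above it."""
    row = [1]
    for _ in range(n):
        row = [a + b for a, b in zip([0] + row, row + [0])]
    return row

# Row 4 gives the coefficients of (a + b)^4 = a^4 + 4a^3b + 6a^2b^2 + 4ab^3 + b^4
print(pascal_row(4))                   # [1, 4, 6, 4, 1]
print([comb(4, k) for k in range(5)])  # same thing via the binomial coefficient formula
```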
msage@programming.dev 2 weeks ago
My favourite part of the day: commenting LLMentalist under AI articles.
demonsword@lemmy.world 2 weeks ago
That was an insightful article, thanks for sharing.
Goretantath@lemm.ee 2 weeks ago
So it does the math in its head, gives the correct answer, and copies the answer sheet from the teacher's book into the "show your work" section. Pretty much what I would have done as a kid if I could have; instead I had to fight them and take a hit to my score for not showing my work.