
Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all.

438 likes

Submitted 11 hours ago by Allah@lemm.ee to technology@lemmy.world

https://archive.is/bJvH4

source

Comments

  • mavu@discuss.tchncs.de ⁨1⁩ ⁨hour⁩ ago

    No way!

    Statistical Language models don’t reason?

    But OpenAI, robots taking over!

    source
  • ZILtoid1991@lemmy.world ⁨1⁩ ⁨hour⁩ ago

Thank you Captain Obvious! Only those who think LLMs are like “little people in the computer” didn’t know this already.

    source
  • BlaueHeiligenBlume@feddit.org ⁨1⁩ ⁨hour⁩ ago

Of course, that is obvious to anyone with basic knowledge of neural networks, no?

    source
  • Nanook@lemm.ee ⁨10⁩ ⁨hours⁩ ago

lol, is this news? I mean, we call it AI, but it’s just an LLM and variants; it doesn’t think.

    source
    • Clent@lemmy.dbzer0.com ⁨27⁩ ⁨minutes⁩ ago

Proving it matters. Science is constantly testing things that people believe are obvious, because people have an uncanny ability to believe things that are false. Some people will believe things long after science has proven them false.

      source
    • MNByChoice@midwest.social ⁨9⁩ ⁨hours⁩ ago

      The “Apple” part. CEOs only care what companies say.

      source
      • kadup@lemmy.world ⁨8⁩ ⁨hours⁩ ago

        Apple is significantly behind and arrived late to the whole AI hype, so of course it’s in their absolute best interest to keep showing how LLMs aren’t special or amazingly revolutionary.

        They’re not wrong, but the motivation is also pretty clear.

        source
        • -> View More Comments
    • JohnEdwa@sopuli.xyz ⁨9⁩ ⁨hours⁩ ago

      "It’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, ‘that’s not thinking’." -Pamela McCorduck´. It’s called the AI Effect.

      source
      • vala@lemmy.world ⁨3⁩ ⁨hours⁩ ago

Yesterday I asked an LLM “how much energy is stored in a grand piano?” It responded by saying there is no energy stored in a grand piano because it doesn’t have a battery.

        Any reasoning human would have understood that question to be referring to the tension in the strings.

Another example is asking “does lime cause kidney stones?”. It didn’t assume I meant lime the mineral and went with lime the citrus fruit instead.

        Once again a reasoning human would assume the question is about the mineral.

        Ask these questions again in a slightly different way and you might get a correct answer, but it won’t be because the LLM was thinking.

        source
        • -> View More Comments
      • technocrit@lemmy.dbzer0.com ⁨7⁩ ⁨hours⁩ ago

        There’s nothing more pseudo-scientific than “intelligence” maximization. I’m going to write a program to play tic-tac-toe. If y’all don’t think it’s “AI”, then you’re just haters. Nothing will ever be good enough for y’all. You want scientific evidence of intelligence?!?! I can’t even define intelligence so there! \s

        source
        • -> View More Comments
      • kadup@lemmy.world ⁨8⁩ ⁨hours⁩ ago

        That entire paragraph is much better at supporting the precise opposite argument. Computers can beat Kasparov at chess, but they’re clearly not thinking when making a move - even if we use the most open biological definitions for thinking.

        source
        • -> View More Comments
    • Melvin_Ferd@lemmy.world ⁨9⁩ ⁨hours⁩ ago

      This is why I say these articles are so similar to how right wing media covers issues about immigrants.

There’s some weird media push to convince the left to hate AI. Think of all the headlines for these issues. There are so many similarities. They’re taking jobs. They are a threat to our way of life. The headlines talk about how they will sexually assault your wife, your children, you. Threats to the environment. There are articles like this where they take something known and twist it to make it sound nefarious, to keep the story alive and avoid decay of interest.

Then when they pass laws, we’re all primed to accept them removing whatever it is that advantages them and disadvantages us.

      source
      • hansolo@lemmy.today ⁨9⁩ ⁨hours⁩ ago

Because it’s a fear-mongering angle that still sells. AI has been a vehicle for scifi for so long that trying to convince Boomers that it won’t kill us all is the hard part.

I’m a moderate user of it for code and a skeptic of LLM abilities, but 5 years from now, when we are leveraging ML models for groundbreaking science and haven’t been nuked by SkyNet, all of this will look quaint and silly.

        source
        • -> View More Comments
      • technocrit@lemmy.dbzer0.com ⁨7⁩ ⁨hours⁩ ago

Then when they pass laws, we’re all primed to accept them removing whatever it is that advantages them and disadvantages us.

        You mean laws like this? jfc.

        www.inc.com/sam-blum/…/91198975

        source
        • -> View More Comments
  • surph_ninja@lemmy.world ⁨3⁩ ⁨hours⁩ ago

You assume humans do the opposite? We literally institutionalize humans who do not follow set patterns.

    source
    • petrol_sniff_king@lemmy.blahaj.zone ⁨2⁩ ⁨hours⁩ ago

      Maybe you failed all your high school classes, but that ain’t got none to do with me.

      source
      • surph_ninja@lemmy.world ⁨59⁩ ⁨minutes⁩ ago

        Funny how triggering it is for some people when anyone acknowledges humans are just evolved primates doing the same pattern matching.

        source
    • LemmyIsReddit2Point0@lemmy.world ⁨3⁩ ⁨hours⁩ ago

      We also reward people who can memorize and regurgitate even if they don’t understand what they are doing.

      source
    • silasmariner@programming.dev ⁨1⁩ ⁨hour⁩ ago

      Some of them, sometimes. But some are adulated and free and contribute vast swathes to our culture and understanding.

      source
  • vala@lemmy.world ⁨3⁩ ⁨hours⁩ ago

    No shit

    source
  • bjoern_tantau@swg-empire.de ⁨5⁩ ⁨hours⁩ ago

    Image

    source
  • crystalmerchant@lemmy.world ⁨1⁩ ⁨hour⁩ ago

    I mean… Is that not reasoning, I guess? It’s what my brain does-- recognizes patterns and makes split second decisions.

    source
    • mavu@discuss.tchncs.de ⁨1⁩ ⁨hour⁩ ago

      Yes, this comment seems to indicate that your brain does work that way.

      source
  • Jhex@lemmy.world ⁨6⁩ ⁨hours⁩ ago

    this is so Apple, claiming to invent or discover something “first” 3 years later than the rest of the market

    source
    • postmateDumbass@lemmy.world ⁨2⁩ ⁨hours⁩ ago

Trust Apple. Everyone else who was in the space first is lying.

      source
  • LonstedBrowryBased@lemm.ee ⁨5⁩ ⁨hours⁩ ago

Yeah, of course they do, they’re computers.

    source
    • finitebanjo@lemmy.world ⁨5⁩ ⁨hours⁩ ago

      That’s not really a valid argument for why, but yes the models which use training data to assemble statistical models are all bullshitting. TBH idk how people can convince themselves otherwise.

      source
      • EncryptKeeper@lemmy.world ⁨4⁩ ⁨hours⁩ ago

        TBH idk how people can convince themselves otherwise.

They don’t convince themselves. They’re convinced by the multi-billion-dollar corporations pouring unholy amounts of money into not only the development of AI, but its marketing. Marketing designed to not only convince them that AI is something it’s not, but also that anyone who says otherwise (like you) is just a luddite who will be “left behind”.

        source
        • -> View More Comments
      • turmacar@lemmy.world ⁨4⁩ ⁨hours⁩ ago

        I think because it’s language.

There’s a famous anecdote about Charles Babbage: when he presented his difference engine (a gear-based calculator), someone asked “if you put in the wrong figures, will the correct ones be output?”, and Babbage couldn’t understand how anyone could so thoroughly misunderstand that the machine is just a machine.

People are people; the main thing that’s changed since the cuneiform copper customer complaint is our materials science and networking ability. Most people just assume the things they interact with every day work the way they appear to on the surface.

        And nothing other than a person can do math problems or talk back to you. So people assume that means intelligence.

        source
        • -> View More Comments
  • technocrit@lemmy.dbzer0.com ⁨7⁩ ⁨hours⁩ ago

    Why would they “prove” something that’s completely obvious?

    thinking processes

    The abstract of their paper is completely pseudo-scientific from the first sentence.

    source
    • tauonite@lemmy.world ⁨2⁩ ⁨hours⁩ ago

      That’s called science

      source
    • TheRealKuni@midwest.social ⁨5⁩ ⁨hours⁩ ago

      Why would they “prove” something that’s completely obvious?

      I don’t want to be critical, but I think if you step back a bit and look and what you’re saying, you’re asking why we would bother to experiment and prove what we think we know.

That’s a perfectly normal and reasonable scientific pursuit. Yes, in a rational society the burden of proof would be on the grifters, but that’s never how it actually works. It’s always the doctors disproving the cure-all, not the snake oil salesmen failing to prove their own product.

      There is value in this research, even if it fits what you already believe on the subject. I would think you would be thrilled to have your hypothesis confirmed.

      source
      • postmateDumbass@lemmy.world ⁨2⁩ ⁨hours⁩ ago

The sticky wicket is proving that humans (functioning ‘normally’) do more than pattern-match.

        source
    • yeahiknow3@lemmings.world ⁨7⁩ ⁨hours⁩ ago

      They’re just using the terminology that’s widespread in the field. The paper’s purpose is to prove that this terminology is unsuitable.

      source
      • technocrit@lemmy.dbzer0.com ⁨7⁩ ⁨hours⁩ ago

        I understand that people in this field regularly use pseudo-scientific language. But the terminology has never been suitable so it shouldn’t be used in the first place. They’re just feeding into the grift. That’s how they get paid.

        source
    • Mbourgon@lemmy.world ⁨5⁩ ⁨hours⁩ ago

Not when large swaths of people are being told to use it every day. Upper management has bought into it.

      source
  • SplashJackson@lemmy.ca ⁨6⁩ ⁨hours⁩ ago

    Just like me

    source
    • alexdeathway@programming.dev ⁨6⁩ ⁨hours⁩ ago

      python code for reversing the linked list.

      source
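For reference, a minimal sketch of the reversal the comment above asks for, in plain Python; the `Node` class and variable names are illustrative, not something from the thread:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse(head):
    """Reverse a singly linked list iteratively and return the new head."""
    prev = None
    while head is not None:
        # Re-point the current node backwards, then step forward.
        head.next, prev, head = prev, head, head.next
    return prev

# Build 1 -> 2 -> 3, reverse it, and print 3, 2, 1.
node = reverse(Node(1, Node(2, Node(3))))
while node:
    print(node.value)
    node = node.next
```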
  • sev@nullterra.org ⁨9⁩ ⁨hours⁩ ago

    Just fancy Markov chains with the ability to link bigger and bigger token sets. It can only ever kick off processing as a response and can never initiate any line of reasoning. This, along with the fact that its working set of data can never be updated moment-to-moment, means that it would be a physical impossibility for any LLM to achieve any real "reasoning" processes.

    source
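To make the “fancy Markov chains” framing above concrete, here is a minimal word-level Markov text generator; the toy corpus and function names are illustrative only:

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each run of `order` consecutive words to the words seen after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def sample(chain, length=12):
    """Extend a random starting state by repeatedly looking up the last words."""
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(length):
        options = chain.get(tuple(out[-len(state):]))
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the model predicts the next word the model predicts nothing new"
print(sample(build_chain(corpus)))
```

The “bigger and bigger token sets” in the comment correspond to raising `order`; roughly speaking, an LLM replaces this explicit lookup table with a learned, compressed function over the context.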
    • kescusay@lemmy.world ⁨9⁩ ⁨hours⁩ ago

      I can envision a system where an LLM becomes one part of a reasoning AI, acting as a kind of fuzzy “dataset” that a proper neural network incorporates and reasons with, and the LLM could be kept real-time updated (sort of) with MCP servers that incorporate anything new it learns.

      But I don’t think we’re anywhere near there yet.

      source
      • homura1650@lemm.ee ⁨12⁩ ⁨minutes⁩ ago

        LLMs (at least in their current form) are proper neural networks.

        source
      • riskable@programming.dev ⁨6⁩ ⁨hours⁩ ago

        The only reason we’re not there yet is memory limitations.

        Eventually some company will come out with AI hardware that lets you link up a petabyte of ultra fast memory to chips that contain a million parallel matrix math processors. Then we’ll have an entirely new problem: AI that trains itself incorrectly too quickly.

        Just you watch: The next big breakthrough in AI tech will come around 2032-2035 (when the hardware is available) and everyone will be bitching that “chain reasoning” (or whatever the term turns out to be) isn’t as smart as everyone thinks it is.

        source
    • auraithx@lemmy.dbzer0.com ⁨8⁩ ⁨hours⁩ ago

      Unlike Markov models, modern LLMs use transformers that attend to full contexts, enabling them to simulate structured, multi-step reasoning (albeit imperfectly). While they don’t initiate reasoning like humans, they can generate and refine internal chains of thought when prompted, and emerging frameworks (like ReAct or Toolformer) allow them to update working memory via external tools. Reasoning is limited, but not physically impossible, it’s evolving beyond simple pattern-matching toward more dynamic and compositional processing.

      source
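For readers who want the “attend to full contexts” part of the comment above made concrete, here is a minimal single-head scaled dot-product attention in NumPy; the shapes and names are illustrative, not taken from any particular model:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: every position weighs every other position."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # (seq, seq) pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the whole context
    return weights @ V                                # each output mixes the full sequence

seq_len, d = 5, 8
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(seq_len, d)) for _ in range(3))
print(attention(Q, K, V).shape)  # (5, 8): each token's output depends on all 5 tokens
```

This is the mechanical sense in which a transformer conditions on the entire window at once, rather than only on a fixed-length suffix the way a classic n-gram lookup does; whether that amounts to “reasoning” is exactly what the replies below argue about.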
      • vrighter@discuss.tchncs.de ⁨3⁩ ⁨hours⁩ ago

Previous input goes in. A completely static, prebuilt model processes it and comes up with a probability distribution.

There is no “unlike Markov chains”. They are Markov chains. Ones with a long context (a Markov chain also makes use of all the context provided to it, so I don’t know what you’re on about there). LLMs are just a (very) lossy compression scheme for the state transition table. Computed once, applied blindly to any context fed in.

        source
        • -> View More Comments
      • spankmonkey@lemmy.world ⁨8⁩ ⁨hours⁩ ago

        Reasoning is limited

        Most people wouldn’t call zero of something ‘limited’.

        source
        • -> View More Comments
      • riskable@programming.dev ⁨6⁩ ⁨hours⁩ ago

        I’m not convinced that humans don’t reason in a similar fashion. When I’m asked to produce pointless bullshit at work my brain puts in a similar level of reasoning to an LLM.

        Think about “normal” programming: An experienced developer (that’s self-trained on dozens of enterprise code bases) doesn’t have to think much at all about 90% of what they’re coding. It’s all bog standard bullshit so they end up copying and pasting from previous work, Stack Overflow, etc because it’s nothing special.

The remaining 10% is “the hard stuff”. They have to read documentation, search the Internet, and then—after all that effort to avoid having to think—they sigh and actually start thinking in order to program the thing they need.

LLMs go through similar motions behind the scenes! Probably because they were created by software developers, but they still fail at that last 10%: the stuff that requires actual thinking.

        Eventually someone is going to figure out how to auto-generate LoRAs based on test cases combined with trial and error that then get used by the AI model to improve itself and that is when people are going to be like, “Oh shit! Maybe AGI really is imminent!” But again, they’ll be wrong.

AGI won’t happen until AI models get good at retraining themselves with something better than basic reinforcement learning. In order for that to happen you need the working memory of the model to be nearly as big as the hardware that was used to train it. That, and loads and loads of spare matrix math processors ready to go for handling that retraining.

        source
  • brsrklf@jlai.lu ⁨9⁩ ⁨hours⁩ ago

    You know, despite not really believing LLM “intelligence” works anywhere like real intelligence, I kind of thought maybe being good at recognizing patterns was a way to emulate it to a point…

But that study seems to prove they’re still not even good at that. At first I was wondering how hard the puzzles must have been, and then there’s a bit about LLMs finishing 100-move Towers of Hanoi (on which they were trained) and failing 4-move river crossings. Logically, those problems are very similar… They also fail to apply a step-by-step solution they were given.

    source
    • auraithx@lemmy.dbzer0.com ⁨8⁩ ⁨hours⁩ ago

      This paper doesn’t prove that LLMs aren’t good at pattern recognition, it demonstrates the limits of what pattern recognition alone can achieve, especially for compositional, symbolic reasoning.

      source
    • technocrit@lemmy.dbzer0.com ⁨7⁩ ⁨hours⁩ ago

Computers are awesome at “recognizing patterns” as long as the pattern is a statistical average of some possibly worthless data set.

      source
  • reksas@sopuli.xyz ⁨10⁩ ⁨hours⁩ ago

    does ANY model reason at all?

    source
    • 4am@lemm.ee ⁨9⁩ ⁨hours⁩ ago

      No, and to make that work using the current structures we use for creating AI models we’d probably need all the collective computing power on earth at once.

      source
      • SARGE@startrek.website ⁨8⁩ ⁨hours⁩ ago

        … So you’re saying there’s a chance?

        source
        • -> View More Comments
    • MrLLM@ani.social ⁨2⁩ ⁨hours⁩ ago

      I think I do. Might be an illusion, though.

      source
    • auraithx@lemmy.dbzer0.com ⁨8⁩ ⁨hours⁩ ago

      Define reason.

Like humans? Of course not. Models lack intent, awareness, and grounded meaning. They don’t “understand” problems, they generate token sequences.

      source
      • reksas@sopuli.xyz ⁨6⁩ ⁨hours⁩ ago

        as it is defined in the article

        source
  • Grizzlyboy@lemmy.zip ⁨3⁩ ⁨hours⁩ ago

    What a dumb title. I proved it by asking a series of questions. It’s not AI, stop calling it AI, it’s a dumb af language model. Can you get a ton of help from it, as a tool? Yes! Can it reason? NO! It never could and for the foreseeable future, it will not.

    It’s phenomenal at patterns, much much better than us meat peeps. That’s why they’re accurate as hell when it comes to analyzing medical scans.

    source
  • sp3ctr4l@lemmy.dbzer0.com ⁨9⁩ ⁨hours⁩ ago

    This has been known for years, this is the default assumption of how these models work.

You would have to prove that some kind of actual reasoning has arisen as some kind of emergent complexity phenomenon… not the other way around.

    Corpos have just marketed/gaslit us/themselves so hard that they apparently forgot this.

    source
    • riskable@programming.dev ⁨6⁩ ⁨hours⁩ ago

Define “reasoning”. For decades software developers have been writing code with conditionals. That’s “reasoning.”

      LLMs are “reasoning”… They’re just not doing human-like reasoning.

      source
      • sp3ctr4l@lemmy.dbzer0.com ⁨6⁩ ⁨hours⁩ ago

How about uh…

The ability to take a previously given set of knowledge, experiences, and concepts, and combine them in a consistent, non-contradictory manner, to generate hitherto unrealized knowledge or concepts, and then also be able to verify that those new knowledge and concepts are actually new and actually valid, or at least be able to propose how one could test whether or not they are valid.

        Arguably this is or involves meta-cognition, but that is what I would say… is the difference between what we typically think of as ‘machine reasoning’, and ‘human reasoning’.

        source
  • mfed1122@discuss.tchncs.de ⁨9⁩ ⁨hours⁩ ago

This sort of thing has been published a lot for a while now, but why is it assumed that this isn’t what human reasoning consists of? Isn’t all our reasoning ultimately a form of pattern memorization? I sure feel like it is. So to me all these studies that prove they’re “just” memorizing patterns don’t prove anything, unless coupled with research on the human brain to prove we do something different.

    source
    • technocrit@lemmy.dbzer0.com ⁨6⁩ ⁨hours⁩ ago

      why is it assumed that this isn’t what human reasoning consists of?

Because science doesn’t work like that. Nobody should assume wild hypotheses without any evidence whatsoever.

      source
      • mfed1122@discuss.tchncs.de ⁨6⁩ ⁨hours⁩ ago

Sorry, I can see why my original post was confusing, but I think you’ve misunderstood me. I’m not claiming that I know the way humans reason. In fact you and I are in total agreement that it is unscientific to assume hypotheses without evidence. This is exactly what I am saying is the mistake in the statement “AI doesn’t actually reason, it just follows patterns”. That is unscientific if we don’t know whether or not “actually reasoning” consists of following patterns, or something else. As far as I know, the jury is out on the fundamental nature of how human reasoning works. It’s my personal, subjective feeling that human reasoning works by following patterns. But I’m not saying “AI does actually reason like humans because it follows patterns like we do”. Again, I see how what I said could have come off that way. What I mean more precisely is:

        It’s not clear whether AI’s pattern-following techniques are the same as human reasoning, because we aren’t clear on how human reasoning works. My intuition tells me that humans doing pattern following seems equally as valid of an initial guess as humans not doing pattern following, so shouldn’t we have studies to back up the direction we lean in one way or the other?

        I think you and I are in agreement, we’re upholding the same principle but in different directions.

        source
    • LesserAbe@lemmy.world ⁨8⁩ ⁨hours⁩ ago

      Agreed. We don’t seem to have a very cohesive idea of what human consciousness is or how it works.

      source
      • technocrit@lemmy.dbzer0.com ⁨6⁩ ⁨hours⁩ ago

        … And so we should call machines intelligent? That’s not how science works.

        source
        • -> View More Comments
    • Endmaker@ani.social ⁨8⁩ ⁨hours⁩ ago

      You’ve hit the nail on the head.

      Personally, I wish that there’s more progress in our understanding of human intelligence.

      source
      • technocrit@lemmy.dbzer0.com ⁨6⁩ ⁨hours⁩ ago

        Their argument is that we don’t understand human intelligence so we should call computers intelligent.

        That’s not hitting any nail on the head.

        source
    • count_dongulus@lemmy.world ⁨8⁩ ⁨hours⁩ ago

      Humans apply judgment, because they have emotion. LLMs do not possess emotion. Mimicking emotion without ever actually having the capability of experiencing it is sociopathy. An LLM would at best apply patterns like a sociopath.

      source
      • mfed1122@discuss.tchncs.de ⁨7⁩ ⁨hours⁩ ago

        But for something like solving a Towers of Hanoi puzzle, which is what this study is about, we’re not looking for emotional judgements - we’re trying to evaluate the logical reasoning capabilities. A sociopath would be equally capable of solving logic puzzles compared to a non-sociopath. In fact, simple computer programs do a great job of solving these puzzles, and they certainly have nothing like emotions. So I’m not sure that emotions have any relevance to the topic of AI or human reasoning and problem solving.

As for analogizing LLMs to sociopaths, I think that’s a bit odd too. The reason why we (stereotypically) find sociopathy concerning is that a person has their own desires which, in combination with a disinterest in others’ feelings, incentivizes them to be deceitful or harmful in some scenarios. But LLMs are largely designed specifically to be servile, having no will or desires of their own. If people find it concerning that LLMs imitate emotions, then I think we’re giving them far too much credit as sentient autonomous beings - and this is coming from someone who thinks they think in the same way we do! They think like we do, IMO, but they lack a lot of the other subsystems that are necessary for an entity to function in a way that can be considered as autonomous/having free will/desires of its own choosing, etc.

        source
        • -> View More Comments
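As a point of comparison for the puzzle the comment above mentions, a complete Tower of Hanoi solver is only a few lines of ordinary code; the peg names here are arbitrary:

```python
def hanoi(n, source="A", spare="B", target="C", moves=None):
    """Return the full move list for n disks using the standard recursive solution."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, target, spare, moves)   # park the top n-1 disks on the spare peg
        moves.append((source, target))               # move the largest disk to the target
        hanoi(n - 1, spare, source, target, moves)   # bring the n-1 disks back on top of it
    return moves

solution = hanoi(7)
print(len(solution))  # 127 moves, i.e. 2**7 - 1
```

A deterministic recurrence like this needs nothing resembling emotion or judgement to succeed, which is the commenter's point about using such puzzles to probe logical reasoning in isolation.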
      • riskable@programming.dev ⁨6⁩ ⁨hours⁩ ago

        That just means they’d be great CEOs!

        source
  • flandish@lemmy.world ⁨10⁩ ⁨hours⁩ ago

    stochastic parrots. all of them. just upgraded “soundex” models.

    this should be no surprise, of course!

    source
  • WorldsDumbestMan@lemmy.today ⁨2⁩ ⁨hours⁩ ago

It has so much data it might as well be reasoning; it did help me with my problem.

    source
  • atlien51@lemm.ee ⁨8⁩ ⁨hours⁩ ago

    Employers who are foaming at the mouth at the thought of replacing their workers with cheap AI:

    🫢

    source
    • monkeyslikebananas2@lemmy.world ⁨6⁩ ⁨hours⁩ ago

      Can’t really replace. At best, this tech will make employees more productive at the cost of the rainforests.

      source
      • atlien51@lemm.ee ⁨2⁩ ⁨hours⁩ ago

        Yes but asshole employers haven’t realized this yet

        source
  • SattaRIP@lemmy.blahaj.zone ⁨11⁩ ⁨hours⁩ ago

    Why tf are you spamming rape stories?

    source
    • hybridep@lemmy.wtf ⁨10⁩ ⁨hours⁩ ago

      And this is relevant to this post in what regard?

90% of Lemmy comments lately are not subject-related and are only about how OP is not leftist enough, not pro-Israel or pro-Palestine enough, not pro-SJW enough. Is this what Lemmy aims to be?

      source
      • Allah@lemm.ee ⁨9⁩ ⁨hours⁩ ago

Thanks a lot, kind person, for taking my side.

        source
      • Melvin_Ferd@lemmy.world ⁨9⁩ ⁨hours⁩ ago

        It’s not relevant to the post… But what the fuck

        source
    • pulsewidth@lemmy.world ⁨11⁩ ⁨hours⁩ ago

Thanks for highlighting this. Blocked em. I know these horrible things happen, but if they’re happening on the other side of the world and there is literally nothing I can do to help, all they do is spread sadness and despair, and at worst provoke racism (as all the stories being shared are from the same country, yet these incidents happen worldwide).

      source
      • Allah@lemm.ee ⁨10⁩ ⁨hours⁩ ago

Did I do it here? Also, that’s where I live. If I can’t talk about women’s struggles then I apologize.

        source
        • -> View More Comments
    • catloaf@lemm.ee ⁨10⁩ ⁨hours⁩ ago

      Racism

      source
      • Allah@lemm.ee ⁨10⁩ ⁨hours⁩ ago

I am racist for speaking about my own culture’s problems?

        source
        • -> View More Comments
  • Blaster_M@lemmy.world ⁨9⁩ ⁨hours⁩ ago

I would like a link to the original research paper, instead of a link to a screenshot of a screenshot.

    source
  • hornedfiend@sopuli.xyz ⁨8⁩ ⁨hours⁩ ago

While I hate LLMs with a passion, and my opinion of them boils down to them being glorified search engines and data scrapers, I would ask Apple: how sour are the grapes, eh?

    source
  • 1rre@discuss.tchncs.de ⁨9⁩ ⁨hours⁩ ago

The difference between reasoning models and normal models is that reasoning models are two steps. To oversimplify it a little, they first prompt “how would you go about responding to this?” and then prompt “write the response”.

    It’s still predicting the most likely thing to come next, but the difference is that it gives the chance for the model to write the most likely instructions to follow for the task, then the most likely result of following the instructions - both of which are much more conformant to patterns than a single jump from prompt to response.

    source
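A rough sketch of the two-pass pattern described in the comment above; `complete()` stands in for whatever single completion call is available and is purely hypothetical, not a real API:

```python
def complete(prompt: str) -> str:
    """Placeholder for one LLM completion call (hypothetical, not a real library)."""
    raise NotImplementedError

def answer_with_reasoning(question: str) -> str:
    # Pass 1: ask the model to write out how it would approach the task.
    plan = complete(f"How would you go about responding to this?\n\n{question}")
    # Pass 2: ask it to write the final response while conditioning on that plan.
    return complete(f"Question: {question}\n\nPlan:\n{plan}\n\nNow write the response.")
```

Both passes are still next-token prediction; the intermediate "plan" text simply gives the second pass a more pattern-conformant prompt to continue from, which is the point the comment makes.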
  • MuskyMelon@lemmy.world ⁨10⁩ ⁨hours⁩ ago

I use LLMs as advanced search engines. Far fewer ads and sponsored results.

    source
  • Naich@lemmings.world ⁨9⁩ ⁨hours⁩ ago

    So they have worked out that LLMs do what they were programmed to do in the way that they were programmed? Shocking.

    source