Comment on 95% of Companies See ‘Zero Return’ on $30 Billion Generative AI Spend, MIT Report Finds
WhatAmLemmy@lemmy.world 15 hours ago
You do realise that everyone actually educated in statistical modeling knows that you have no idea what you’re talking about, right?
REDACTED@infosec.pub 15 hours ago
Note that I’m not one of the people talking about it on X, I don’t know who they are. I just linked it with a simple “this looks like reasoning to me”.
Traister101@lemmy.today 13 hours ago
They can’t reason. LLMs, the tech that all the latest and greatest models like GPT-5 still are, generate output by taking every previous token (simplified) and using them to predict the most likely next token. Thanks to their training this results in pretty good human-looking language, among other things like somewhat effective code output (thanks to sites like Stack Overflow being included in the training data).
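For concreteness, here’s a minimal sketch of that loop using the Hugging Face transformers library and GPT-2; the prompt and greedy decoding are just for illustration, not how any particular product is configured:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The MIT report found that", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits        # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()  # greedy: take the single most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # append it and repeat
print(tok.decode(ids[0]))
```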
Generating images works essentially the same way, but is more easily described as reverse JPG compression. You think I’m joking? No, really: they start out with static and then transform that static using a bunch of wave functions they came up with during training. LLMs and the image generation stuff are equally able to reason, that being not at all whatsoever.
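And a toy sketch of that “start from static and denoise it” loop; the stand-in function below is where a real diffusion model’s trained network would go:

```python
import numpy as np

rng = np.random.default_rng(0)

def predicted_noise(x, t):
    # stand-in: a real diffusion model uses a trained network here
    # to estimate the noise present in x at schedule step t
    return 0.02 * x

image = rng.normal(size=(64, 64))               # pure static
for t in reversed(range(1000)):                 # walk the noise schedule backwards
    image = image - predicted_noise(image, t)   # strip off a little predicted noise
print(f"std after denoising: {image.std():.4f}")
```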
REDACTED@infosec.pub 13 hours ago
You partly described reasoning tho
en.m.wikipedia.org/wiki/Reasoning_system
Traister101@lemmy.today 10 hours ago
If you truly believe that, you fundamentally misunderstand the definition of that word, or you’re being purposely disingenuous, as you AI brown-nose folk tend to be. Pretending for a second that you genuinely just don’t know how to read: LLMs, the most advanced “AI” they are trying to sell everybody, are as capable of reasoning as any compression algorithm, jpg, png, webp, zip, tar, whatever you want. They cannot reason. They take some input and generate an output deterministically. The reason the output changes slightly is because they put random shit in there on purpose, for complicated, important reasons.
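That “random shit” is mostly the sampling temperature. A self-contained sketch of how the same deterministic scores give varying output once you sample (the scores are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng()

def sample_next(logits, temperature):
    # temperature > 0 injects the randomness; as it approaches 0,
    # this collapses back to the deterministic argmax pick
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

logits = np.array([2.0, 1.5, 0.3])                  # fixed model scores for three tokens
print([sample_next(logits, 0.8) for _ in range(5)]) # varies between runs
print(int(logits.argmax()))                         # greedy pick: identical every run
```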
Again, to recap: LLMs and similar neural-network “AI” are as capable of reasoning as any other computer program you interact with, knowingly or unknowingly, that being not at all. Your silly Wikipedia page is about a very specific term, “reasoning system”, which would include stuff like standard video game NPC AI, such as the zombies in Minecraft. I hope you aren’t stupid enough to say those are capable of reasoning.
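For scale, here’s a hypothetical zombie-style NPC “brain”; under that Wikipedia page’s definition, even this rules-applied-to-facts toy arguably counts as a “reasoning system”:

```python
def zombie_action(is_daytime: bool, player_distance: float) -> str:
    # hard-coded rules, no understanding anywhere
    if is_daytime:
        return "burn"            # zombies catch fire in sunlight
    if player_distance < 16.0:
        return "chase_player"    # aggro anything within range
    return "wander"

print(zombie_action(is_daytime=False, player_distance=4.0))  # chase_player
```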
cmhe@lemmy.world 10 hours ago
This link is about reasoning systems, not reasoning. Reasoning involves actually understanding the knowledge, not just having it: testing and validating it where the knowledge is contradictory.
An LLM doesn’t understand the difference between the hard and soft rules of the world. Everything is up for debate; everything is just text and words that can be ordered with some probabilities.
It cannot check whether something is true; it just knows that someone on the internet talked about something, sometimes with and often without resolution.
It is a gossip machine that tries to “reason” about whatever it has heard people say.
WhatAmLemmy@lemmy.world 9 hours ago
Yes, your confidence in something you clearly know nothing about is apparent.
Have you ever considered that OpenAI, and most Xitter influencers, are lying for profit?