Comment on Major shifts at OpenAI spark skepticism about impending AGI timelines

LANIK2000@lemmy.world ⁨3⁩ ⁨months⁩ ago

Language models are literally incapable of reasoning beyond what is present in the dataset or the prompt. Try giving one a known riddle modified so it becomes trivial, for example: “With a boat, how can a man and a goat get across the river?” Despite it being a one-step solution, it’ll still try to shove in the original answer, and often enough it won’t even solve it. Best part: if you then ask it to explain its reasoning (not tell it what it did wrong, that’s new information you’re providing; just ask it why it did what it did), it’ll completely shit itself. There’s no evidence at all that they have any cognitive capacity.

I even managed to break one once through normal conversation: something happened in my life that was unusual enough to be absent from the dataset, and thus incomprehensible to the AI. It just wasn’t able to follow the events, no matter how many times I explained.
