I read an article that it can “think” in small chunks. They don’t know how much though. This was also months ago, it’s probably expanded by now.
FunnyUsername@lemmy.world 1 day ago
anything that claims it “thinks” in any way I immediately dismiss as an advertisement of some sort. these models are doing very interesting things, but it is in no way “thinking” as a sentient mind does.
LarmyOfLone@lemm.ee 1 day ago
You know they don’t think, even though “it’s a peculiar truth that we don’t understand how large language models (LLMs) actually work”?
It’s truly shocking to read this from a mess of connected neurons and synapses like yourself. You’re simply doing fancy word prediction of the next word /s
stephen01king@lemmy.zip 22 hours ago
Anybody who claims they don’t “think” before we completely figure out how they work, or even how human thoughts work, is just spreading anti-AI sentiment beyond what is logical.
If you want to prove your position on this matter, you should set a better example than an AI by arguing only from facts rather than from things you hallucinate.
pelespirit@sh.itjust.works 1 day ago
I wish I could find the article. It was researchers, and they were freaked out just as much as anyone else. The evidence that it “thought” was only slightly above chance, not some huge revolutionary leap.
FunnyUsername@lemmy.world 1 day ago
there has been a flood of these articles. everyone wants to sell their llm as “the smartest one closest to a real human,” even though the entire concept of calling them AI is a marketing misnomer
pelespirit@sh.itjust.works 1 day ago
Maybe? It didn’t seem like a sales job at the time, more like a warning. You could be right, though.