Comment on There once was a programmer

apinanaivot@sopuli.xyz 1 year ago
All ChatGPT is doing is guessing the next word.

You are saying that as if it’s a small feat. Accurately guessing the next word requires understanding of what the words and sentences mean in a specific context.

worldsayshi@lemmy.world 1 year ago
Yup. Accurately guessing the next thought (or action) is all brains need to do, so I don’t see what the alleged “magic” is supposed to solve.
blackbirdbiryani@lemmy.world 1 year ago
Don’t get me wrong, it’s incredible. But it’s still a variation on the Chinese room thought experiment: it’s not a real intelligence, just very good at pretending to be one. I might trust it more if there were variants based on strictly controlled datasets.
jadero@programming.dev 1 year ago
I have read more than is probably healthy about the Chinese room and variants since it was first published. I’ve gone back and forth on several ideas:
Since the advent of ChatGPT, or, more properly, my awareness of it, the confusion has only increased. My current thinking, which is by no means robust, is that humans may be little more than “meatGPT” systems. Admittedly, that is probably a cynical reaction to my sense that a lot of people seem to be running on automatic a lot of the time combined with an awareness that nearly everything new is built on top of or a variation on what came before.
I don’t use ChatGPT for anything (yet) for the same reasons I don’t depend too heavily on advice from others:
I’ve not yet seen anything to suggest that ChatGPT is reliably any better than a bullshitter. Which is not nothing, I guess, but is at least a little dangerous.
nogrub@lemmy.world 1 year ago
What often puts me off is that people almost never fact-check me when I tell them something, which also tells me they wouldn’t do the same with ChatGPT.
worldsayshi@lemmy.world 1 year ago
The Chinese room thought experiment doesn’t prove anything and probably confuses the discussion more than it clarifies.
For the Chinese room to convince an outside observer that it knows Chinese the way a person does, the room as a whole basically needs to be sentient and understand Chinese. The person in the room doesn’t need to understand Chinese. “The room” understands Chinese.
Fraylor@lemm.ee 1 year ago
So theoretically, could you train an AI strictly on verified programming textbooks, research, etc.? Is it currently possible to make an AI that would do far better at programming? I love the concepts around AI, but I know fuckall about ML and its actual intricacies, so sorry if it’s a dumb question.
PixelProf@lemmy.ca 1 year ago
Yeah, this is the approach people are trying to take more now. The problem is generally the amount of data needed and verifying that it’s high quality in the first place, and these systems are positive feedback loops, both in training and in use. If you train on higher quality code, it will write higher quality code, but it will be less able to handle edge cases, or to sensibly complete code that isn’t at the same quality bar or style as the training code.
On the use side, if you provide higher quality code as input when prompting, it is more likely to predict higher quality code, because it’s continuing what was written. Using standard approaches, documenting your code, and generally following good practice before sending it to the LLM will majorly improve results.
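To make that last point concrete, here’s a minimal sketch (the function names and prompt strings are hypothetical, not tied to any particular model or API): a completion model continues whatever text it is given, so the two prompts below would tend to elicit completions of very different quality.

```python
# Hypothetical sketch: two ways of prompting an LLM for the same task.
# A completion model continues the text it is given, so it tends to
# inherit the style and rigor of the surrounding code.

# Sloppy context: no types, no docs, vague naming.
sloppy_prompt = """\
def f(a, b):
    # merge them
"""

# Careful context: type hints, a docstring, and a doctest-style example
# give the model a much clearer specification to continue from.
careful_prompt = '''\
def merge_sorted(left: list[int], right: list[int]) -> list[int]:
    """Merge two already-sorted lists into one sorted list.

    >>> merge_sorted([1, 3], [2, 4])
    [1, 2, 3, 4]
    """
'''

# Either string would be sent as-is to a completion endpoint; the
# returned code continues from the last line, style included.
```

The same idea applies to chat-style usage: pasting in well-documented, consistently styled code before asking for changes gives the model a higher quality bar to match.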
Fraylor@lemm.ee 1 year ago
Interesting, that makes sense. Thank you for such a thoughtful response.