Comment on DeepMind’s cofounder: Generative AI is just a phase. What’s next is interactive AI.
SnipingNinja@slrpnk.net 1 year ago
I don’t have an exact example for you to test it out, so I’ll try to explain in general terms:
Let’s say you give ChatGPT a task that a human can do easily, but ChatGPT fails at it consistently. Isn’t that proof that it doesn’t understand?
It might be hard to grasp this without an example, but the problem with any specific example is that OpenAI can become aware of it and tweak the algorithm to correct just that one case.
One example I remembered while typing this is how it fails at giving you a list of words that fit a certain criterion, like having a specific number of letters. It’s not the best example I’ve come across, but it still seems to fail at this one.
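For contrast, the task itself is trivial to verify with a few lines of code; a minimal sketch (the word list here is just illustrative):

```python
# Filter a word list for words with exactly five letters --
# the kind of constraint the comment above says ChatGPT gets wrong.
words = ["apple", "banana", "grape", "melon", "fig"]
five_letter = [w for w in words if len(w) == 5]
print(five_letter)  # ['apple', 'grape', 'melon']
```

The point being that the failure isn’t because the task is hard in any computational sense.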
Anyway, hopefully you got my point about lack of understanding.
Barack_Embalmer@lemmy.world 1 year ago
Fair enough but it just seems like a fluffy distinction.
And I don’t think they “tweak the algorithm” so much as generate a load more training data for that one specific task to get it up to spec.
In any case, humans make mistakes on lots of stuff too, so if the criterion for “true” understanding is to make no mistakes then humans cannot be said to understand either.
SnipingNinja@slrpnk.net 1 year ago
As I said, my example wasn’t the best one, but you’re right that by that standard humans could be judged to lack understanding too.