Comment on ChatGPT bombs test on diagnosing kids’ medical cases with 83% error rate | It was bad at recognizing relationships and needs selective training, researchers say.

echo64@lemmy.world ⁨10⁩ ⁨months⁩ ago

Because you can talk to it and it’s programmed to make you think it knows a lot and is capable of doing so much more.

People expect it to do more because ChatGPT was trained to make people expect it to do more.

It’s all lies, of course. ChatGPT fails at anything more than the simplest of tasks, and it can’t use any new information because the internet is full of AI-generated text now, which is poison to training models. But it’s good at pretending.
