Honestly, a lot of the issues come from the fact that null results only exist in the gaps between information (unanswered questions, questions closed as unanswerable, searches that return no results, etc.), and so are essentially absent from training data. Models are therefore predisposed toward giving an answer of some kind, and if one doesn’t exist, they’ll make one up.
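As a toy illustration of that gap (hypothetical data and field names, not any real pipeline): if you build Q&A training pairs from a forum dump, the unanswered or closed questions just get skipped, so “no answer” never shows up as a target the model could learn to produce.

```python
# Toy sketch: assembling fine-tuning pairs from a Q&A dump.
# Field names and data are hypothetical, for illustration only.
questions = [
    {"title": "How do I parse JSON in Python?", "accepted_answer": "Use json.loads()."},
    {"title": "Why does my compiler segfault?", "accepted_answer": None},  # unanswered
    {"title": "Is P equal to NP?", "accepted_answer": None},               # closed as unanswerable
]

# Questions without an answer are dropped, so the training targets
# never contain anything like "there is no answer".
training_pairs = [
    (q["title"], q["accepted_answer"])
    for q in questions
    if q["accepted_answer"] is not None
]

print(training_pairs)
# [('How do I parse JSON in Python?', 'Use json.loads().')]
```

Every target the model ever sees is *an* answer, which is the predisposition described above.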
spankmonkey@lemmy.world 3 weeks ago
Unintentionally is the right word, because the people who designed it did not intend for it to produce bad information. They chose an approach that resulted in bad information because of the data they chose to train on and the steps they took throughout the process.
ilinamorato@lemmy.world 2 weeks ago
knightly@pawb.social 3 weeks ago
Incorrect. The people who designed it did not set out with the goal of producing a bot that regurgitates true information. If that’s what they wanted, they’d never have used a neural network architecture in the first place.