Comment on AI models propagate false race-based medical information, Stanford researchers find
jmd_akbar@aussie.zone 11 months ago
Output is based on the input…
stopthatgirl7@kbin.social 11 months ago
Which is what the article says? That’s why it also talks about diversifying the input.
Candelestine@lemmy.world 11 months ago
And last I knew, it’s not exactly checking anything in any way. So, if people said xyz, you get xyz.
It’s basically an advanced gossip machine. Which, tbf, also applies to a lot of us.
stevedidWHAT@lemmy.world 11 months ago
What are you talking about, GPT constantly filters and flags input.
You’re talking out of your ass.
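The input filtering being described can be illustrated with a toy sketch. This is a hypothetical keyword blocklist for illustration only, not OpenAI's actual moderation pipeline, which uses trained classifiers:

```python
# Toy illustration of input flagging. NOT how GPT actually moderates;
# real systems use trained classifiers, not keyword lists.

# Hypothetical placeholder terms standing in for a real blocklist.
FLAGGED_TERMS = {"blocked_term_a", "blocked_term_b"}

def moderate(text: str) -> dict:
    """Flag input that contains any term from the blocklist."""
    tokens = set(text.lower().split())
    hits = sorted(tokens & FLAGGED_TERMS)
    return {"flagged": bool(hits), "matched": hits}

print(moderate("hello world"))                  # not flagged
print(moderate("this has blocked_term_a in it"))  # flagged
```

The point of the sketch is only that a filter sits between raw input and the model, flagging or blocking some content before generation.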
Candelestine@lemmy.world 11 months ago
Fair. It’s just not that great at it yet.
stevedidWHAT@lemmy.world 11 months ago
According to who? Against what baselines?
What’s up with people just throwing blatantly false stuff out on the internet as if it’s fact?
It blocks a whole wide range of stuff and is very effective against it. There’s always room for improvement, but I reject the notion that this is done without science, without actual measurable facts and stats.
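The "measurable facts and stats" being appealed to would look something like evaluating a filter against a labeled test set. This is a minimal sketch with made-up illustrative labels and predictions, not real moderation data:

```python
# Hedged sketch: scoring a content filter against a labeled baseline.
# The labels and predictions below are invented for illustration.

labeled = [  # (item_id, should_be_blocked)
    (1, True), (2, True), (3, False), (4, False), (5, True),
]
predicted = {1: True, 2: False, 3: False, 4: True, 5: True}

tp = sum(1 for i, y in labeled if y and predicted[i])        # correctly blocked
fp = sum(1 for i, y in labeled if not y and predicted[i])    # wrongly blocked
fn = sum(1 for i, y in labeled if y and not predicted[i])    # wrongly allowed

precision = tp / (tp + fp)  # of what it blocked, how much deserved blocking
recall = tp / (tp + fn)     # of what deserved blocking, how much it caught

print(f"precision={precision:.2f} recall={recall:.2f}")
```

Precision/recall against a fixed labeled set is the standard baseline for claims like "effective" or "not that great at it yet".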