Comment on AI models propagate false race-based medical information, Stanford researchers find
Candelestine@lemmy.world 1 year ago
And last I knew, it’s not exactly checking anything in any way. So, if people said xyz, you get xyz.
It’s basically an advanced gossip machine. Which, tbf, also applies to a lot of us.
stevedidWHAT@lemmy.world 1 year ago
What are you talking about? GPT constantly filters and flags input.
You’re talking out of your ass.
Candelestine@lemmy.world 1 year ago
Fair. It’s just not that great at it yet.
stevedidWHAT@lemmy.world 1 year ago
According to who? Against what baselines?
What’s up with people just throwing blatantly false stuff out on the internet as if it’s fact?
It blocks a whole range of stuff and is very effective at it. There’s always room for improvement, but I reject the notion that it’s done without science, without actual measurable facts and stats.
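For context on what that filtering looks like in practice: one publicly documented piece of it is OpenAI’s Moderation API, which flags input against policy categories before or alongside generation. A minimal sketch, assuming the official openai Python client (v1+) and an API key in the OPENAI_API_KEY environment variable; the example prompt string is hypothetical:

```python
# Sketch: checking a prompt against OpenAI's Moderation API,
# one concrete instance of the input filtering/flagging described above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.moderations.create(
    model="text-moderation-latest",
    input="Example prompt that may or may not violate content policy",
)

result = response.results[0]
print("flagged:", result.flagged)          # True if any category tripped
print("categories:", result.categories)    # per-category booleans
print("scores:", result.category_scores)   # per-category confidence scores
```

Note this only shows that filtering exists and is measurable; it says nothing about how well it catches subtle misinformation, which is the point under dispute in this thread.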
bane_killgrind@kbin.social 1 year ago
According to this article that says it's propagating false medical information...
Candelestine@lemmy.world 1 year ago
So, are you unaware of how easy it is to get past its blocks and/or get misinformation from it? It’s being continuously updated and improved, of course, which I did not originally acknowledge. That was admittedly unfair of me.