A group of Stanford researchers found that large language models can propagate false race-based medical information.
A.I. will never be as good as humans at propagating false race-based information. They should focus on doing the things we suck at, like protein folding and talking to girls.
jmd_akbar@aussie.zone 1 year ago
Output is based on the input…
Candelestine@lemmy.world 1 year ago
And last I knew, it's not checking anything in any way. So if people said xyz, you get xyz.
It’s basically an advanced gossip machine. Which, tbf, also applies to a lot of us.
stevedidWHAT@lemmy.world 1 year ago
What are you talking about? GPT constantly filters and flags input.
You’re talking out of your ass.
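For anyone curious, OpenAI does publicly expose an input-screening mechanism as its Moderation endpoint; whether that's the same layer ChatGPT applies internally is an assumption on my part. A minimal sketch of flagging a piece of input, assuming the official openai Python client and an OPENAI_API_KEY in the environment:

```python
# Minimal sketch: screen user-supplied text with OpenAI's Moderation endpoint.
# Assumes the `openai` package (v1.x) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

result = client.moderations.create(
    model="omni-moderation-latest",
    input="Some user-supplied text to screen before it reaches the model.",
)

flagged = result.results[0].flagged  # True if any moderation category tripped
print("flagged:", flagged)
print(result.results[0].categories)  # per-category booleans (hate, violence, ...)
```

Note that this catches policy-violating input like hate speech; it's not a fact-checker, so it wouldn't catch false-but-polite medical claims like the ones in the article.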
stopthatgirl7@kbin.social 1 year ago
Which is what the article says? That’s why it also talks about diversifying the input.