Comment on ChatGPT Has Liberal Bias, Say Researchers

Phanatik@kbin.social 1 year ago
It will never be objective if its dataset is something like the internet. It will always be prone to bias; that's the double-edged sword of LLMs. They need vast quantities of data, and the only place to get that much is the internet, which is full of biased opinions.

luthis@lemmy.nz 1 year ago
It actually is biased though. UpperEchelon did a video exposing this. I swing to the left myself, but I would prefer if the LLMs were objective.
theywilleatthestars@lemmy.world 1 year ago
Political objectivity is impossible.
luthis@lemmy.nz 1 year ago
I would argue that asking a machine to list known information is not impossible.
Here’s a very clear example where chatGPT refused to answer a question regarding Biden but happily answered the exact same question for Trump.
youtu.be/_Klkr6PtYzI?t=520
And before anyone starts, NO! I’m not a supporter of the oompaloompa king.
Sekoia@lemmy.blahaj.zone 1 year ago
Mhm, but with the way LLMs work, it’s not possible to actually remove bias since it’s baked into the training data. Any adjustment towards “neutral” would be biased by what the adjuster considers neutral.
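The point can be illustrated with a toy frequency model (a deliberate simplification, nothing like a real LLM; the corpus, the viewpoint labels, and the 50/50 "neutral" target here are all invented for the sketch). The model just reproduces whatever skew its training data has, and "correcting" it means someone has to pick a target distribution, which is itself an editorial choice:

```python
from collections import Counter

def train(corpus):
    """Toy 'model': output preferences are just raw corpus frequencies."""
    return Counter(corpus)

def most_likely(model):
    """Return the single most frequent item, i.e. the model's default answer."""
    return model.most_common(1)[0][0]

# Skewed training data: one viewpoint appears more often than the other.
corpus = ["viewpoint_A"] * 7 + ["viewpoint_B"] * 3
model = train(corpus)
print(most_likely(model))  # the model inherits the corpus skew: viewpoint_A

# "Debiasing" = reweighting toward a chosen target distribution.
# The 50/50 split below is the adjuster's idea of neutral -- a value
# judgment, not something the data itself provides.
target = {"viewpoint_A": 0.5, "viewpoint_B": 0.5}
adjusted = Counter({word: target[word] for word in model})
```

Under this (very rough) analogy, RLHF-style adjustment is the `target` step: it can move the model away from the raw data's skew, but only toward whatever the adjusters defined as neutral.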
GigglyBobble@kbin.social 1 year ago
Only if emotions are involved. Of course it's impossible as long as we train our AI on flawed human-generated data, though.