China exports state propaganda with low-cost open source AI models
Submitted 3 days ago by Pro@programming.dev to technology@lemmy.world
Comments
romantired@shibanu.app 3 days ago
Oh, come on, doesn’t America spread its propaganda through its products?
mrdown@lemmy.world 2 days ago
I keep getting propaganda from the terrorist state of israel
brucethemoose@lemmy.world 3 days ago
Well, since they’re open models, that’s easy to fix. And has been.
huggingface.co/perplexity-ai/r1-1776
huggingface.co/microsoft/MAI-DS-R1
It is not cost prohibitive or hard.
You want to decensor “Open”AI though? Tough. They don’t even offer completion endpoints anymore, for crying out loud.
wampus@lemmy.ca 3 days ago
America / Trump’s EOs on AI basically say they need to tune their models to be as racist as, or more racist than, Grok currently is. So, I mean, pot, meet kettle.
At least with China’s approach they seem to be ‘saying’ the right thing with regards to open sourcing it and having a more collaborative approach internationally. The USA and Trump are just “NO DEI AT ALL!!! MAKE IT SUPPORT DEAR LEADER’S RIGHT-THINK OR NO GOVT CONTRACTS FOR YOU!!! THIS IS NOT BIAS, THIS IS US REMOVING BIAS!!!”
SuperFola@programming.dev 3 days ago
Joke’s on them, I don’t use this AI bullshit.
OsrsNeedsF2P@lemmy.ml 3 days ago
There are two things at play here. First, all models being released these days have safety built into the training. In the West, we might focus on preventing people from harming others or hacking; in China, they focus on preventing people from criticizing the government. But in a way, we are all “exporting” our propaganda.
Second, as called out in the article, these responses are clearly based on the training data. That is where the misinformation starts, and you can’t “fix” the problem without first fixing that data.
pycorax@sh.itjust.works 3 days ago
I don’t think anyone can say with a straight face that these two cases are equally propaganda. So-called “western propaganda” here is really just advising the user that maybe self harm, etc. is not such a good idea. It’s not explicitly feeding the user verifiably false claims.
insight06@lemmy.world 2 days ago
I wonder what China’s saying about Grok right now.