When DeepSeek-R1 receives prompts containing topics the CCP considers politically sensitive, the likelihood of it producing code with severe security vulnerabilities increases by up to 50%.
Submitted 4 months ago by King@blackneon.net to technology@lemmy.zip
Comments
BrikoX@lemmy.zip 4 months ago
Cooper8@feddit.online 4 months ago
Is this with or without the prompt including politically sensitive topics?
LibertyLizard@slrpnk.net 4 months ago
It’s pretty clear that DeepSeek is not open source, or at least shouldn’t be considered open source in spirit.
It seems like if we want truly open source LLMs, a new standard for transparency is needed.
Cooper8@feddit.online 4 months ago
Check out Apertus, the Swiss are showing how it should be done. 100% open: architecture, training data, weights, recipes, and final models are all publicly available and licensed Apache 2.0. https://ethz.ch/en/news-and-events/eth-news/news/2025/09/press-release-apertus-a-fully-open-transparent-multilingual-language-model.html
BroBot9000@lemmy.world 4 months ago
Until Swiss privacy laws change and the clankers start reporting to a new Fash government, with the older versions getting deleted.
Just don’t fucking use Ai.
rimu@piefed.social 4 months ago