California Governor Gavin Newsom on Monday signed the nation’s first law regulating artificial intelligence chatbots, defying White House calls for a hands-off approach. The measure requires chatbot operators to implement safeguards for user interactions and allows lawsuits if failures cause harm, said state Sen. Steve Padilla, the bill’s sponsor.
I feel these kinds of protections, against suicidal language etc., will just lead to even further self-censorship to avoid triggering safeguards, similar to how terms like “unalive” gained traction.
AI should be regulated, but forcing chatbots to do corpospeech and be ‘safe’ even harder than they already are in order to protect vulnerable children is not the way. I don’t like that being the direction any tech moves, and it’s part of why I’m on Lemmy in the first place.
The Character.ai case the article mentioned was already one where the AI failed to ‘pick up on’ (yes, I know that’s anthropomorphizing an algorithm) a euphemism for suicide. Filters would need to be ridiculous to catch every possible suicidal euphemism, and they’d produce a tonne of false positives.
PKscope@lemmy.world 13 hours ago
Good. Imagine the relative utopia we might live in if other states were as proactive about protecting the rights of their citizens.
I want digital privacy laws, right-to-repair, and the million other advocacy-driven things Cali has. It’ll never happen in a million years, but I can still want them.
I can’t wait to leave my shit-hole state and move West.
taiyang@lemmy.world 11 hours ago
I’d certainly welcome more people to my state, but it’s not perfect. CA does have some of the best policies and is often a trendsetter, but plenty of bad ones get through too, like propositions passed with sneaky language. We got flak for the prison-slavery proposition, for instance.
Still miles above some other states though. I genuinely am concerned for places like Oklahoma, Florida, Arkansas, Mississippi, etc., especially since I’m an education professional…