Comment on “A courts reporter wrote about a few trials. Then an AI decided he was actually the culprit.”
Stopthatgirl7@lemmy.world 2 months ago
If they aren’t liable for what their product does, who is? And do you think they’ll be incentivized to fix their glorified chat boxes if they know they won’t be held responsible for it?
If they aren’t liable for what their product does, who is?
The users who claim it’s fit for the purpose they are using it for. Now if the manufacturers themselves are making dodgy claims, that should stick to them too.
lunarul@lemmy.world 2 months ago
Their product doesn’t claim to be a source of facts. It’s a generator of human-sounding text. It’s great for that purpose and they’re not liable for people misusing it or not understanding what it does.
Stopthatgirl7@lemmy.world 2 months ago
So you think these companies should have no liability for the misinformation they spit out. Awesome. That’s gonna end well. Welcome to digital snake oil, y’all.
lunarul@lemmy.world 2 months ago
I did not say companies should have no liability for publishing misinformation. Of course, if someone uses AI to generate misinformation and tries to pass it off as factual information, they should be held accountable. But it doesn’t seem like anyone did that in this case. It was just a journalist putting his name into the AI to see what it generated. Nobody actually spread those results as fact.