Comment on A courts reporter wrote about a few trials. Then an AI decided he was actually the culprit.
Ilovethebomb@lemm.ee 4 days ago
I’d love to see more AI providers getting sued for the blatantly wrong information their models spit out.
catloaf@lemm.ee 4 days ago
I don’t think they should be liable for what their text generator generates. I think people should stop treating it like gospel. At most, they should be liable for misrepresenting what it can do.
RvTV95XBeo@sh.itjust.works 4 days ago
If these companies are marketing their AI as being able to provide “answers” to your questions they should be liable for any libel they produce.
If they market it as “come have our letter generator give you statistically associated collections of letters to your prompt” then I guess they’re in the clear.
TheFriar@lemm.ee 4 days ago
So you don’t think these massive megacompanies should be held responsible for making disinformation machines? Why not?
HeyThisIsntTheYMCA@lemmy.world 3 days ago
because when you provide computer code for money you don’t want there to be any liability assigned
medgremlin@midwest.social 3 days ago
Which is why, in many cases, there should be liability assigned. If a self-driving car kills someone, the programming of the car is at least partially to blame, and the company that made it should be liable for the wrongful death suit, and probably for criminal charges as well. Citizens United already determined that corporations are people…now we just need to put a corporation in prison for their crimes.
Ilovethebomb@lemm.ee 4 days ago
I want them to have more warnings and disclaimers than a pack of cigarettes. Make sure the users are very much aware they can’t trust anything it says.
Stopthatgirl7@lemmy.world 4 days ago
If they aren’t liable for what their product does, who is? And do you think they’ll be incentivized to fix their glorified chat boxes if they know they won’t be held responsible for it?
lunarul@lemmy.world 4 days ago
Their product doesn’t claim to be a source of facts. It’s a generator of human-sounding text. It’s great for that purpose and they’re not liable for people misusing it or not understanding what it does.
Stopthatgirl7@lemmy.world 4 days ago
So you think these companies should have no liability for the misinformation they spit out. Awesome. That’s gonna end well. Welcome to digital snake oil, y’all.
kibiz0r@midwest.social 4 days ago
If we’ve learned any lesson from the internet, it’s that once something exists it never goes away.
Sure, people shouldn’t believe the output of their prompt. But if you can generate that output, a site can use the API to generate a similar output for a similar request. A bot can generate it and post it to social media.
Yeah, don’t trust the first source you see. But if the search results are slowly being colonized by AI slop, it gets to a point where the signal-to-noise ratio is so poor it stops making sense to only blame the poor discernment of those trying to find the signal.