Comment on Why do people hate AI so much?
schnurrito@discuss.tchncs.de 15 hours ago
I don’t, not in general.
There are good and bad uses of AI. For example, I used AI to generate my profile picture here on Lemmy (would you have noticed?). In general, the creation of art is one of the best uses of AI I can think of; it doesn’t have serious consequences if it goes wrong, and a human can easily review whether it looks as it should.
But using AI to make actually meaningful business decisions without any human review at all? Using AI for customer service? Any company that does that deserves VERY negative consequences.
I don’t agree with talking points like “AI companies should be required to pay copyright holders of their training data” or “AI is bad because of the environmental impact” or “AI is bad because of RAM prices” or “AI companies should be legally responsible for any mistakes the AI makes (such as libel or encouraging users’ suicide)” or such things; I think all of these are nonsense.
I believe in general that AI gets too much attention in the media. It’s really not that impactful.
cheese_greater@lemmy.world 14 hours ago
There has to be a liability standard though, otherwise it completely does away with any possibility of even nominal accountability. If harm is caused because of a human, there is liability (whether directly or through whoever is responsible for that person’s actions). The same should be true for whoever employs an LLM for some purpose that results in harm. The LLM can’t really be jailed or “shut down”, so it’s incumbent upon the handler to assume liability for the activities they are involved with.
schnurrito@discuss.tchncs.de 14 hours ago
whoever employs an LLM
incumbent upon the handler to assume liability
I agree. If you make any kind of real-world decision based on the output of AI, you should be liable for it as if you’d made that decision yourself.
But I remember reading some news stories about cases where people (often minors) chatted with chatbots and managed to get those chatbots into states where the chatbots encouraged the users to harm themselves (in some cases even to commit suicide?). As tragic as that is, I don’t see how it’s morally right to hold the AI companies responsible for that unless it can be shown they did this on purpose. All the AI did in such cases was what it was advertised and understood to do: generate plausible-sounding text based on user input. Those are the cases I’m talking about.
cheese_greater@lemmy.world 14 hours ago
It’s a difficult issue, no doubt about it.
rabiezaater@piefed.social 11 hours ago
Glad to see some sanity for once on here. It’s definitely not all good, but it’s not all bad either, and when people attribute all the evils of the world to it, they are being disingenuous.