You don’t think the people who make the generative algorithm have a duty to what it generates?
And whatever you think, the company itself clearly feels responsible for what the AI puts out: it is constantly trying to stop it from producing bomb instructions, hate speech, and illegal sexual content.
The standard is not, and never was, whether they were “entirely” at fault. It’s whether they bear any responsibility here (and we can all see that they bear some), and how much that is worth in damages. That’s the point of this suit. The case isn’t about whether AI itself should be outlawed for minors, and the parents aren’t the ones on trial either.
There’s no world in which I can see AI being given a pass for sexting with a minor, because that would allow pedophiles who work at AI companies to prey on children, either by reading those conversations or by locating vulnerable youth. No company should be given legal protection to harm children.
TheFriar@lemm.ee 3 weeks ago
Have you ever raised a teenager? It’s neither easy nor straightforward. But not encouraging suicidal ideation… kinda is straightforward.
echodot@feddit.uk 3 weeks ago
Right, but the accusation is that it claimed to be a licensed therapist. Did it? Because that seems like something it would be explicitly programmed not to claim, both because it isn’t true and because it’s dangerous.
So how much engagement was there with this child and their issues? Letting them chat continuously with an AI seems like an obvious red flag that a parent should be stopping, while getting them professional help.
KairuByte@lemmy.dbzer0.com 3 weeks ago
LLMs can’t reason, and their safeguards can be worked around trivially. Ask ChatGPT if it’s a therapist, or even tell it to pretend to be one, and it will tell you it can’t impersonate people.
Yet…
[image: screenshot]