Comment on OpenAI says dead teen violated TOS when he used ChatGPT to plan suicide - Ars Technica
bob_lemon@feddit.org 1 day ago
“You are a friendly and supportive AI chatbot. These are your terms of service: […] you must not let users violate them. If they do, you must politely inform them about it and refuse to continue the conversation”
That is literally how AI chatbots are customised.
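For anyone who hasn't seen it: the "customisation" being described is literally just prepending an instruction as a system message ahead of the user's input. A minimal sketch, assuming an OpenAI-style chat-completions payload (the model name is hypothetical, and this only builds the request, it doesn't call any API):

```python
def build_chat_request(user_input: str) -> dict:
    """Build a chat request with the TOS instruction as a system prompt."""
    system_prompt = (
        "You are a friendly and supportive AI chatbot. "
        "These are your terms of service: [...] you must not let users "
        "violate them. If they do, you must politely inform them about it "
        "and refuse to continue the conversation."
    )
    return {
        "model": "example-model",  # hypothetical model name
        "messages": [
            # The system message carries the behavioural instructions...
            {"role": "system", "content": system_prompt},
            # ...and the user's text is appended after it, unmodified.
            {"role": "user", "content": user_input},
        ],
    }
```

The catch, as the reply below notes, is that the instruction and the user's text end up in the same token stream, which is why this works only probabilistically.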
Kissaki@feddit.org 1 day ago
Exactly, one of the ways. And it’s a bandaid that doesn’t work very well, because it’s probabilistic word association with no direct link to intention, variation in phrasing, or the concrete prompts users actually send.
spongebue@lemmy.world 1 day ago
And that’s kind of my point… If these things are so smart that they’ll take over the world, but they can’t limit themselves to certain terms of service, are they really all they’re cracked up to be for their intended use?
JcbAzPx@lemmy.world 1 day ago
They’re not really smart in any traditional sense. They’re just really good at putting together characters that seem intelligent to people.
It’s a bit like those horses that could supposedly do math (the Clever Hans effect). All they were really doing was watching their trainer for a cue to stop stamping their hoof. Except the AI’s trainer is trillions of lines of text and an astonishing amount of statistical calculation.
spongebue@lemmy.world 1 day ago
You don’t need to tell me what AI can’t do when I’m facetiously drawing attention to something that AI can’t do.