Comment on Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries

Signtist@lemm.ee ⁨6⁩ ⁨months⁩ ago

I highly doubt that OpenAI or any other AI developer would see any real repercussions, even if they had a security hole that someone managed to exploit to cause harm. Companies exist to make money, and OpenAI is no exception; if it’s more profitable to release a dangerous product than a safe one, and they won’t get in trouble for it, they’ll likely have no issues with releasing their product with security holes.

Unfortunately, the question can’t be “should we be charging them for this?” Nobody is going to force them to pay, and they have no reason to do it on their own. Barring an entire cultural revolution, the question instead must be “should we do it anyway to prevent this from being used in harmful ways?” And the answer is yes. The world exists to make money, usually for people who already have money, so if you’re working within the confines of that society, you need to factor that into your reasoning.

Companies long ago decided that ethics is nothing more than a burden getting in the way of their profits, and you’ll have a hard time going against the will of companies in a capitalist country.
