Comment on New report illuminates why OpenAI board said Altman “was not consistently candid”
xor@lemmy.blahaj.zone 11 months ago
But this isn’t an M-rated game, it’s a transformative new technology with potentially horrifying consequences if misused.
PsychedSy@sh.itjust.works 11 months ago
By answering questions? We are general intelligences that can answer questions. Oh shit, oh fuck, what am I doing talking?
photonic_sorcerer@lemmy.dbzer0.com 11 months ago
Hey, guess what: we general intelligences are capable of terrible things.
xor@lemmy.blahaj.zone 11 months ago
Okay, so let’s do a thought experiment, and take off all the safeguards.
Oops, you made:
Saying “don’t misuse it” isn’t enough to stop people from misusing it.
And that’s just with ChatGPT. AI isn’t just a question-and-answer machine. I suggest you read about “the paperclip maximiser” as a very good example of how misalignment of a general-purpose AI can go horribly wrong.
elbarto777@lemmy.world 11 months ago
I was going to say that a determined individual would find this information regardless. But the difference here is that having it so easily accessible would increase the risk of someone doing something reaaaally stupid by a factor of 100. Yikes.
PersnickityPenguin@lemm.ee 11 months ago
The last two already exist; it’s called Stable Diffusion. And for a while, Bing did it too.
Socsa@sh.itjust.works 11 months ago
ChatGPT was very far from the first publicly available generative AI. It didn’t even do images at first.
Also, there are plenty of YouTube channels which show you how to make all sorts of extremely dangerous explosives already.
xor@lemmy.blahaj.zone 11 months ago
But the concern isn’t which generative AI came first. Their “idea” was that AIs of all types, including generalised ones, should just be released as-is, with no further safeguards.
That doesn’t consider that OpenAI doesn’t only develop text-generation AIs. A generalised AI can do horrifying things, even through accidental misconfiguration (see the paperclip maximiser example).
But even an LLM like ChatGPT can be coerced into generating non-text data with the right prompting.
Even in that example, one can’t just dig up those sorts of videos without, at minimum, leaving a trail. But an unrestricted pretrained model can be distributed and run locally, and used without a trace to generate any content it’s capable of generating.
And with a generalised AI, the only constraint on the prompt “kill everybody except me” becomes available compute.