Comment on New report illuminates why OpenAI board said Altman “was not consistently candid”
seiryth@lemmy.world 11 months ago
The thing that shits me about this is Google appears to the public to be late to the party, but the reality is they DID put safety before profit when it came to AI. The sheer amount of research and papers they put out on AI should have proven to people that they know what they’re doing.
And then OpenAI threw caution to the wind and essentially made Google and others panic and knee-jerk, because there’s real money to be made, and now everyone seems to be throwing caution to the wind and pushing it into the mainstream before society is ready.
All in the name of shareholders.
blazeknave@lemmy.world 11 months ago
10k%! A friend works in brand marketing at Google. They’d been using it internally for months before market pressure forced them to start onboarding public end users. I’ve been in the earliest of the external betas (because I’ve given a lot of product feedback over the years?) and from the beginning the user experiences have been the most locked down of all the consumer LLMs.
Toes@ani.social 11 months ago
I think it’s not enough. Disable all the safeguards and let people decide if the output is what they want. I hate being treated like a child trying to buy an M-rated game.
xor@lemmy.blahaj.zone 11 months ago
But this isn’t an M-rated game; it’s a transformative new technology with potentially horrifying consequences if misused.
PsychedSy@sh.itjust.works 11 months ago
By answering questions? We are general intelligences that can answer questions. Oh shit oh fuck what am I doing talking.
photonic_sorcerer@lemmy.dbzer0.com 11 months ago
Hey guess what, we general intelligences are capable of terrible things.
xor@lemmy.blahaj.zone 11 months ago
Okay, so let’s do a thought experiment, and take off all the safeguards.
Oops, you made:
Saying “don’t misuse it” isn’t enough to stop people misusing it
And that’s just with ChatGPT. AI isn’t just a question-and-answer machine; I suggest you read about “the paperclip maximiser” as a very good example of how misalignment of a general-purpose AI can go horribly wrong.
hansl@lemmy.world 11 months ago
And while you’re at it, remove safety on guns. And seatbelts. And might as well get rid of those pesky boom gates. I can hear the trains just fine, I don’t like being treated like a child. /s
konalt@lemmy.world 11 months ago
Guns and car crashes may break my bones, but words will never hurt me
hansl@lemmy.world 11 months ago
That makes a great jingle, but it’s been proven that you’re more a product of the words around you than you want to admit in your comment.