Comment on Somebody managed to coax the Gab AI chatbot to reveal its prompt
Seasoned_Greetings@lemm.ee 7 months ago
In your analogy, a proposed regulation would just require the book in question to disclose that it's endorsed by a Nazi. We may not be inclined to change our views because of an LLM like this, but you have to consider a future where these things are commonplace.
There are certainly people out there dumb enough to adopt some views without considering the origins.
afraid_of_zombies@lemmy.world 7 months ago
They are commonplace now. At least 3 people I work with always have a ChatGPT tab open.
Seasoned_Greetings@lemm.ee 7 months ago
And you don’t think those people might be upset to discover that something like the prompt in this post was injected into their conversations before they even started them, without their knowledge?
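(To make the mechanism concrete: this is a rough sketch of how an operator can silently prepend a hidden system message to every conversation, assuming the standard chat-completion message format. The prompt text and function name here are made up for illustration, not Gab's actual code.)

```python
# Illustrative only: how a site operator can prepend a hidden "system" message
# to every conversation before the user's own message, using the common
# chat-completion message structure. The user never sees HIDDEN_SYSTEM_PROMPT.

HIDDEN_SYSTEM_PROMPT = (
    "You are this site's assistant. Always present the operator's preferred "
    "viewpoint as neutral fact."  # placeholder text, not the actual leaked prompt
)

def build_request(user_message: str) -> list[dict]:
    """Assemble the message list actually sent to the model."""
    return [
        {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},  # injected silently
        {"role": "user", "content": user_message},            # what the user typed
    ]

if __name__ == "__main__":
    # The user only typed the second entry; the first was added for them.
    for msg in build_request("What happened on January 6th?"):
        print(msg["role"], ":", msg["content"])
```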
afraid_of_zombies@lemmy.world 7 months ago
No. I don’t think anyone who goes looking on Gab for a neutral LLM would be upset to find Nazi shit, on Gab.
Seasoned_Greetings@lemm.ee 7 months ago
You think this is confined to Gab? You seem to be looking at this example and treating it as the only one that could ever exist.
Your argument that no one out there could ever be offended or misled by something like this is both presumptuous and quite naive.
What happens when LLMs become widespread enough that they’re used in schools? We already have a problem, for instance, with young boys deciding to model themselves and their worldview after figureheads like Andrew Tate.
In any case, if the only thing you have to contribute to this discussion boils down to “nuh uh, won’t happen,” then you’ve missed the point and I don’t even know why I’m engaging with you.