Comment on Somebody managed to coax the Gab AI chatbot to reveal its prompt
Wanderer@lemm.ee 10 months ago
I think it is good to make an unbiased, raw “AI”.
But unfortunately they didn’t manage that. At least in some ways it’s a balance to the other AIs.
AbidanYre@lemmy.world 10 months ago
Isn’t that what MS tried with Tay, and it quickly turned into a Nazi?
Wanderer@lemm.ee 10 months ago
Tay’s tweets were legendary.
That worked differently, though: they tried to get her to learn from users. I don’t think even ChatGPT works like that.
catloaf@lemm.ee 10 months ago
It can. OpenAI is pretty clear about using the things you say as training data. But they’re not directly feeding what you type back into the model, not least of all because then 4chan would overwhelm it with racial slurs and such, but also because continually retraining the model would be pretty inefficient.
ChairmanMeow@programming.dev 10 months ago
Tay was actively being manipulated by malicious users.
AbidanYre@lemmy.world 10 months ago
That’s fair. I just think it’s funny that the well-intentioned one turned into a Nazi and the Nazi one needs to be pretty heavy-handedly told not to turn into a decent “person”.