Nope, they mostly learn during training
Comment on AI Is a Total Grift
bobbyguy@lemmy.world 3 weeks ago
chatbots like gpt and gemini learn from conversations with users, so what we need is a virus that will pretend to be a user and flood their chats with pro-racism arguments and sexist remarks, which will rub off on the chatbots, making them unacceptable for public use
wuphysics87@lemmy.ml 3 weeks ago
Been there. Done that
bobbyguy@lemmy.world 3 weeks ago
what did you do?
atrielienz@lemmy.world 3 weeks ago
Yeah. GROK and Twitter have entered the chat. Seriously though, we’ve regressed pretty far in what the general public deems acceptable.
ExLisper@lemmy.curiana.net 3 weeks ago
How do models learn from conversations with users?
bobbyguy@lemmy.world 3 weeks ago
they look at your speech patterns and the specific words you use to make the way they talk seem more familiar. remember when twitter launched its own ai that would post tweets and learn from other posts? they had to take it down after about 15 hours because it became super racist and homophobic
ExLisper@lemmy.curiana.net 3 weeks ago
Training LLMs on tweets is one thing; training them on chats with users is something completely different. I don’t think this actually happens. The model would degrade extremely fast.
bobbyguy@lemmy.world 3 weeks ago
you’re right, I’m pretty sure I got that mixed up, sorry!
AwesomeLowlander@sh.itjust.works 3 weeks ago
So, just like actual users?
bobbyguy@lemmy.world 3 weeks ago
it would be easier to automate the process instead of using real people
AwesomeLowlander@sh.itjust.works 3 weeks ago
en.wikipedia.org/wiki/Tay_(chatbot)
You’re thinking it would require effort or coordination on the part of real people, instead of it being default behaviour for some