Comment on AI Is a Total Grift
bobbyguy@lemmy.world 1 day ago
chatbots like gpt and gemini learn from conversations with viewers, so what we need is a virus that will pretend to be a user and flood its chats with pro racism arguments and sexist remarks, which will rub off on the chatbots making them unacceptable for public use
ominouslemon@sh.itjust.works 17 hours ago
Nope, they mostly learn during training
bobbyguy@lemmy.world 10 hours ago
hmmmm damn alright
wuphysics87@lemmy.ml 1 day ago
Been there. Done that
bobbyguy@lemmy.world 17 hours ago
what did you do?
atrielienz@lemmy.world 18 hours ago
Yeah. GROK and Twitter have entered the chat. Seriously though, we’ve regressed pretty far in what the general public deems acceptable.
ExLisper@lemmy.curiana.net 1 day ago
How do models learn from conversations with users?
bobbyguy@lemmy.world 17 hours ago
they look at your speech patterns and the specific words you use to make the way they talk seem more familiar. remember when twitter launched its own ai that would post tweets and learn from other posts? they had to take it down after about 15 hours because it became super racist and homophobic
ExLisper@lemmy.curiana.net 17 hours ago
Training LLMs based on tweets is one thing, training it on chats with users is something completely different. I don’t think this actually happens. The model would degrade extremely fast.
bobbyguy@lemmy.world 17 hours ago
you're right, i'm pretty sure i got that mixed up, sorry!
AwesomeLowlander@sh.itjust.works 1 day ago
So, just like actual users?
bobbyguy@lemmy.world 17 hours ago
it would be easier to automate the process instead of using real people
AwesomeLowlander@sh.itjust.works 14 hours ago
en.wikipedia.org/wiki/Tay_(chatbot)
You’re thinking it would require effort or coordination on the part of real people, instead of it being default behaviour for some