Comment on Gemini AI tells the user to die — the answer appeared out of nowhere when the user asked Google's Gemini for help with his homework

kitnaht@lemmy.world 3 days ago

They don’t. The models are trained on sanitized data and don’t permanently “learn”. They have a large context window to pull from (reaching 200k ‘tokens’ in some instances), but lots of people misunderstand how this stuff works on a fundamental level.
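To make the “no permanent learning, just a context window” point concrete, here’s a rough Python sketch. The whitespace “tokenizer” and the 200k budget are stand-ins for illustration, not how Gemini actually tokenizes or what its limit is: the model keeps no state between calls, so the chat history gets resent every turn and trimmed to whatever fits in the window.

```python
# Illustrative only: real models use subword tokenizers, not whitespace splits.
CONTEXT_WINDOW = 200_000  # token budget per request (varies by model)

def count_tokens(text: str) -> int:
    # Stand-in tokenizer: one "token" per whitespace-separated word.
    return len(text.split())

def build_prompt(history: list[str], new_message: str) -> str:
    """The model keeps no state between calls, so every turn we resend
    as much prior conversation as fits in the context window."""
    messages = history + [new_message]
    kept: list[str] = []
    budget = CONTEXT_WINDOW
    # Walk backwards so the most recent turns survive truncation.
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if cost > budget:
            break
        kept.append(msg)
        budget -= cost
    return "\n".join(reversed(kept))

history = ["user: help with my homework", "model: sure, what subject?"]
prompt = build_prompt(history, "user: question 3 on the worksheet")
print(prompt)  # everything the model "remembers" is just this resent text
```

Once the conversation outgrows the budget, the oldest turns simply fall out of the prompt; nothing is written back into the model’s weights.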
