Comment on In heat
howrar@lemmy.ca 6 hours ago
A sentence saying she had her ovaries removed and one saying she is fertile don’t statistically belong together, so you’re not even getting that.
JcbAzPx@lemmy.world 6 hours ago
You think that because you understand the meaning of words. LLM AI doesn’t. It uses math, and math doesn’t care that it’s contradictory; it only cares which words usually came next in its training data.
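To make “which words usually came next” concrete, here’s a minimal toy sketch in Python (a bare bigram counter over a made-up corpus, nothing like a real LLM): the “prediction” is just whichever word most often followed the previous one, with no notion of whether it’s true or consistent.

```python
from collections import Counter, defaultdict

# Toy "training data": the counter never checks whether a continuation
# is true or consistent, only how often it followed the previous word.
corpus = [
    "she had her ovaries removed",
    "she is fertile",
    "she is fertile and healthy",
    "she is happy",
]

# Count which word follows each word across the corpus.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

# The "prediction" is just the most frequent continuation in the counts.
print(follows["is"].most_common(1))  # [('fertile', 2)] -- frequency, not truth
```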
howrar@lemmy.ca 5 hours ago
It has nothing to do with the meaning. If your training set consists of a bunch of strings of A’s and B’s together and another subset of C’s and D’s together (i.e. `[AB]+` and `[CD]+` in regex) and the LLM outputs “ABBABBBDA”, then that’s statistically unlikely because D’s don’t appear with A’s and B’s. I have no idea what the meaning of these sequences is, nor do I need to know to see that it’s statistically unlikely.

In the context of language and LLMs, “statistically likely” roughly means that some human somewhere out there is more likely to have written this than the alternatives, because that’s where the training data comes from. The LLM doesn’t need to understand the meaning. It just needs to be able to compute probabilities, and the probability of this excerpt should be low because the probability that a human would’ve written it is low.
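As a rough illustration of the point above (a toy character-bigram model, nowhere near an actual LLM), here’s a sketch that fits transition counts on strings matching `[AB]+` and `[CD]+` and then scores the output string; the B-to-D transition never occurs in training, so the whole string gets probability zero:

```python
from collections import Counter, defaultdict

# Training set: strings drawn only from [AB]+ or [CD]+, never mixed.
training = ["ABAB", "AABBA", "BABA", "CDCD", "DDCC", "CDDC"]

# Fit a character-bigram model: counts of next_char given current_char.
counts = defaultdict(Counter)
for s in training:
    for cur, nxt in zip(s, s[1:]):
        counts[cur][nxt] += 1

def prob(s):
    """Probability of a string under the bigram model (0 for unseen pairs)."""
    p = 1.0
    for cur, nxt in zip(s, s[1:]):
        total = sum(counts[cur].values())
        p *= counts[cur][nxt] / total if total else 0.0
    return p

print(prob("ABBABBB"))    # small but nonzero: every A/B transition was seen
print(prob("ABBABBBDA"))  # 0.0: "D after B" never appears in the training data
```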
monotremata@lemmy.ca 5 hours ago
Honestly this isn’t really all that accurate. Like, a common example when introducing the Word2Vec mapping is that if you take the vector for “king”, subtract the vector for “man”, and add the vector for “woman”, the closest matching vector is “queen”. So there are elements of “meaning” being captured there. Deep learning networks can capture a lot more abstraction than that, and the Attention mechanism introduced by the Transformer model greatly increased the ability of these models to interpret context clues.
You’re right that it’s easy to make the mistake of overestimating the level of understanding behind the writing. That’s absolutely something that happens. But saying “it has nothing to do with the meaning” is going a bit far. There is semantic processing happening; it’s just less sophisticated than the form of the writing could lead you to assume.
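If anyone wants to try that analogy themselves, here’s a minimal sketch using gensim’s pretrained Google News vectors (assumes the `gensim` package is installed; loading triggers a one-time download of the vectors, on the order of 1.6 GB):

```python
import gensim.downloader

# Pretrained Word2Vec vectors trained on Google News articles.
vectors = gensim.downloader.load("word2vec-google-news-300")

# The classic analogy: king - man + woman ≈ queen.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# Prints something like [('queen', 0.71...)]
```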
JcbAzPx@lemmy.world 3 hours ago
Unless they grabbed discussion forums that happened to include examples from multiple people. When fertility comes up, it’s pretty common for problems in that area to be brought up alongside it.
People can use context and meaning to avoid that mistake; LLMs have to be forced not to make it through much slower QC by real people (something Google hates to do).
_stranger_@lemmy.world 5 hours ago
It’s not even words; it “thinks” in “word parts” called tokens.
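A quick way to see those “word parts” for yourself (a minimal sketch assuming OpenAI’s `tiktoken` package; other models use different tokenizers with different splits):

```python
import tiktoken

# A BPE tokenizer used by several OpenAI models; others split differently.
enc = tiktoken.get_encoding("cl100k_base")

ids = enc.encode("infertility")
# Decode each token id individually to show the sub-word pieces.
print([enc.decode_single_token_bytes(i) for i in ids])
# Prints a list of byte-string fragments rather than whole words;
# the exact split depends on the tokenizer.
```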