Cordyceps@sopuli.xyz 15 hours ago
Comment on In heat
serenissi@lemmy.world 15 hours ago
How can she be fertile if her ovaries are removed?
And the text even ends with a mention of her being in early menopause…
_stranger_@lemmy.world 13 hours ago
Because you’re not getting an answer to a question, you’re getting characters selected to appear like they statistically belong together given the context.
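For a sense of what that selection step looks like mechanically, here's a minimal sketch, with a made-up vocabulary and made-up probabilities standing in for a real model:

```python
import random

# Toy stand-in for a language model: given the context so far,
# return a probability for each candidate next word.
# The words and numbers here are invented for illustration only.
def next_word_probs(context):
    if context[-1] == "ovaries":
        return {"removed": 0.6, "intact": 0.3, "fertile": 0.1}
    return {"she": 0.5, "ovaries": 0.3, "fertile": 0.2}

context = ["her", "ovaries"]
probs = next_word_probs(context)

# Pick the next word in proportion to those probabilities --
# nothing checks whether the result is factually consistent.
words, weights = zip(*probs.items())
context.append(random.choices(words, weights=weights, k=1)[0])
print(context)
```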
howrar@lemmy.ca 13 hours ago
A sentence saying she had her ovaries removed and a sentence saying she is fertile don’t statistically belong together, so you’re not even getting that.
JcbAzPx@lemmy.world 12 hours ago
You think that because you understand the meaning of the words. An LLM doesn’t. It uses math, and math doesn’t care that it’s contradictory; it cares that the words individually usually came next in its training data.
howrar@lemmy.ca 12 hours ago
It has nothing to do with the meaning. If your training set consists of one subset of strings made up of A’s and B’s together and another subset made up of C’s and D’s together (i.e. [AB]+ and [CD]+ in regex) and the LLM outputs “ABBABBBDA”, then that’s statistically unlikely, because D’s don’t appear with A’s and B’s. I have no idea what the meaning of these sequences is, nor do I need to know it to see that the output is statistically unlikely.

In the context of language and LLMs, “statistically likely” roughly means that some human somewhere out there is more likely to have written this than the alternatives, because that’s where the training data comes from. The LLM doesn’t need to understand the meaning. It just needs to be able to compute probabilities, and the probability of this excerpt should be low because the probability that a human would’ve written it is low.
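To make the toy example concrete, here's a sketch of a character-level bigram model built from those two kinds of strings. The counts are invented and this is nothing like how a real LLM works internally, but it shows why the mixed string comes out as improbable:

```python
from collections import Counter, defaultdict

# Toy "training data": strings matching [AB]+ and strings matching [CD]+.
training = ["ABAB", "BBAB", "ABBA", "CDCD", "DDCC", "CDDC"]

# Count how often each character follows another (a bigram model).
follows = defaultdict(Counter)
for s in training:
    for prev, nxt in zip(s, s[1:]):
        follows[prev][nxt] += 1

def prob(seq):
    """Probability of seq under the bigram counts (0 if a pair never occurred)."""
    p = 1.0
    for prev, nxt in zip(seq, seq[1:]):
        total = sum(follows[prev].values())
        p *= follows[prev][nxt] / total if total else 0.0
    return p

print(prob("ABBABB"))     # nonzero: only A/B pairs, all seen in training
print(prob("ABBABBBDA"))  # 0.0: "BD" and "DA" never occur in the training data
```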
_stranger_@lemmy.world 12 hours ago
It’s not even words: it “thinks” in “word parts” called tokens.
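A rough sketch of the idea, with a tiny invented vocabulary (real tokenizers like BPE learn their vocabulary from data, but the “word parts” notion is the same):

```python
# Tiny invented vocabulary of "word parts" (subword tokens).
vocab = ["fert", "ile", "in", "fertile", "ity", "un", " "]

def tokenize(text):
    """Greedy longest-match split of text into vocabulary pieces."""
    tokens = []
    i = 0
    while i < len(text):
        # Take the longest vocabulary piece that matches at this position.
        match = max(
            (piece for piece in vocab if text.startswith(piece, i)),
            key=len,
            default=None,
        )
        if match is None:
            tokens.append(text[i])  # fall back to a single character
            i += 1
        else:
            tokens.append(match)
            i += len(match)
    return tokens

print(tokenize("infertile"))  # ['in', 'fertile'] -- word parts, not whole words
```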