Comment on [deleted]
fubarx@lemmy.world 1 week ago
When LLMs first came out, I asked them a few fun logic puzzles. The kind that Martin Gardner used to publish in Scientific American.
Got total gibberish answers. A while later, tried again. This time, perfect word-for-word responses. Had LLMs become sentient and developed logic? Turned out they had found all the old Scientific American back issues to train on.
Guessing the same is going on with the carwash question. The more posts come out about it, the closer the LLM responses will get to the published answers.
Lather. Rinse. Repeat.
SuspciousCarrot78@lemmy.world 1 week ago
Possible. I do hope they take the more principled approach of solving the global problem for that class of question (I tried to) rather than cheating their way to a local maximum.
You want generalisability, not parroting.