unpossum@sh.itjust.works 1 week ago
GLM 4.5 is from August. Isn’t the real tl;dr that a seven month old open model, which was behind proprietary models at the time, did better than most humans would?
MHard@lemmy.world 1 week ago
The task described in this article is asking questions about a document that was provided to the LLM in its context.
I would hope that if you give a human a text and ask them to cite facts from it, they would do better than 99% correct.
Also, once the context exceeded 200k tokens, the LLM's error rate rose above 10%.
unpossum@sh.itjust.works 1 week ago
That’s literally what school exams are about, isn’t it?
The context window is a problem for all LLMs, though. It’s not easily solved, but it can be worked around to a certain extent.
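One common workaround is chunking: split the document into pieces that each fit within the context budget, then query each piece separately (or retrieve only the relevant ones). A minimal sketch, assuming a crude one-word-per-token approximation (real tokenizers count differently, and the function name and limits here are illustrative, not any specific model's API):

```python
def chunk_document(text: str, max_tokens: int = 200, overlap: int = 20) -> list[str]:
    """Split text into word-based chunks of at most max_tokens words,
    with `overlap` words repeated between consecutive chunks so a fact
    spanning a boundary isn't lost."""
    words = text.split()
    chunks = []
    step = max_tokens - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_tokens]))
        if start + max_tokens >= len(words):
            break
    return chunks

# A 500-word document split under a 200-word budget with 20 words of overlap.
doc = ("word " * 500).strip()
pieces = chunk_document(doc, max_tokens=200, overlap=20)
print(len(pieces))  # each piece stays within the budget
```

Real pipelines usually add a retrieval step so only the chunks likely to contain the answer are sent to the model, which is how long documents get handled without ever exceeding the window.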