Comment on An AI Thought Experiment on Substack Is Sending The Stock Market Spiraling
XLE@piefed.social 18 hours ago
To belabor the chess analogy: I would say a chessbot didn’t work if it randomly caused pieces to appear, or if it made exceedingly lousy moves. You’d apparently say it was working because it technically changed the board.
Literally nobody is saying the token predictor isn’t predicting tokens. It’s just predicting the wrong tokens, which normal people call “not working,” while tech evangelists prefer to call it “hallucination” or “misalignment,” depending on the narrative they’re aiming for.
Iconoclast@feddit.uk 18 hours ago
The goal of the token predictor is to produce coherent language - not factual information. If you can understand what it’s saying, it’s working - even if the content of what it says is factually inaccurate.
XLE@piefed.social 13 hours ago
Accuracy is the only thing people want, and the only thing AI companies talk about. The text was already legible, and it’s been that way for years. I think you’re alone in your quest to lower the bar for the word “works.”