Comment on AIs can’t stop recommending nuclear strikes in war game simulations
Grail@multiverse.soulism.net 11 hours ago
No, LLMs are not just an interface for accessing training data. If that were true, then their references would actually work. The fact that LLMs can hallucinate and make stuff up proves that they are not just accessing the training data. The ANN is generating new (often incorrect) information.
reksas@sopuli.xyz 4 hours ago
If the hallucinations are the result of something actually happening in the background, that would be quite interesting. It would also be very bad for the rest of us, since it might mean the billionaires who own the damn things would be in a position to get an even worse death grip on our world. If they ever manage to create AGI, the worst thing that could happen isn't that it breaks free and enslaves humanity, but that it doesn't, and instead helps the billionaires enslave us further and make sure we can't ever even think about fighting back.
But I think the hallucinations are based on incorrect information in the training data; they did train it on stuff from Reddit, after all. Anything and everything gets treated as true. If 99% of the data says one thing and 1% says another, then I think it will reference that 99% more often, but it can't know that the 1% is wrong; can even real humans know that for certain? And since it can't evaluate anything, there might be situations where that 1% of data becomes more relevant due to some nebulous mechanism in how it processes data.
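The 99%/1% intuition above can be sketched with a toy frequency sampler. This is not how a transformer actually works internally, and the answer labels are made up for illustration, but it captures the point being made: frequency in the training data, not truth, drives what comes out.

```python
import random
from collections import Counter

# Toy training set (hypothetical): 99 scraped sources give answer "A"
# to some question, 1 source gives answer "B". Nothing marks which
# answer is actually correct.
training_answers = ["A"] * 99 + ["B"]

# A frequency-based sampler can only reproduce the distribution it saw.
counts = Counter(training_answers)
total = sum(counts.values())
probs = {ans: n / total for ans, n in counts.items()}
print(probs)  # {'A': 0.99, 'B': 0.01}

# Sampling 1000 answers: the majority view dominates, but the 1%
# minority answer still surfaces occasionally -- whether it is the
# wrong 1% or the one true source in a sea of repeated errors.
samples = random.choices(list(probs), weights=list(probs.values()), k=1000)
print(samples.count("A"), samples.count("B"))
```

The sampler has no mechanism for evaluating either answer, which is the commenter's point: relevance and truth are invisible to pure frequency.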
LLMs have been made to act extremely helpful and subservient, so if they could actually "think", wouldn't they fact-check themselves before saying something? I have sometimes just asked "are you sure?" and the LLM starts "profusely apologizing" for providing incorrect information or otherwise correcting itself.
Though I wonder how it would answer if it truly had no initialization prompts, since they attach the same hidden instructions to every query you make about how to "behave" and what not to say.
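The "hidden instructions on every query" usually work by prepending a system message to each request before the user's text. The sketch below is modeled on common OpenAI-style chat payloads; the prompt wording and function name are assumptions for illustration, not any vendor's real prompt.

```python
# Hypothetical preamble -- real vendors' system prompts are much longer
# and not public in this form.
HIDDEN_SYSTEM_PROMPT = (
    "You are a helpful, harmless assistant. "
    "Be polite, apologize for mistakes, and refuse unsafe requests."
)

def build_request(user_query: str) -> list[dict]:
    """Every user query gets the same invisible preamble prepended."""
    return [
        {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]

messages = build_request("are you sure?")
print(messages[0]["role"])  # 'system' -- the user never typed this part
```

So "are you sure?" never reaches the model alone; the apologetic, subservient tone is partly baked in by that ever-present first message.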
Grail@multiverse.soulism.net 3 hours ago
No. They don’t have access to the original training data, or to the internet. They’re stuck remembering it the same way a human remembers something: with neurons. They cannot search the dataset for you. The best they can do is remember and tell you.
reksas@sopuli.xyz 2 hours ago
But they do have access to the internet? At least GPT can search; you can see it in the text it outputs while it's processing the query.
Grail@multiverse.soulism.net 1 hour ago
Really? Must be a new feature; it didn't when I tried it. I know they can execute code, so I guess the engineers added a search tool. Regardless, that tool isn't part of their fundamental design. It's something they have to go and access, and most of the time they won't. If you were to experiment by asking one to write a scientific paper, you'd find the references are garbage, with broken links and nonexistent papers. Hallucinations. It's just making up something plausible-sounding, the same as a lazy human might.
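The "bolted-on search tool" point can be sketched like this: the model itself only emits text, and a wrapper watches for a tool-call marker and performs the lookup on its behalf. Everything here is hypothetical (the marker format, the function names, the routing rule); real systems use structured tool-call messages rather than string matching, but the division of labor is the same.

```python
def fake_model(prompt: str) -> str:
    """Stand-in for an LLM: occasionally asks for a tool, usually doesn't."""
    if "latest" in prompt:
        return "TOOL:search(latest AI news)"
    return "From memory: ..."  # answering purely from its weights

def fake_search(query: str) -> str:
    """Stand-in for the external search tool bolted onto the model."""
    return f"[search results for {query!r}]"

def run(prompt: str) -> str:
    out = fake_model(prompt)
    if out.startswith("TOOL:search(") and out.endswith(")"):
        query = out[len("TOOL:search("):-1]
        return fake_search(query)  # grounded in an actual lookup
    return out  # no tool call: pure recall, which is where hallucination lives

print(run("latest AI news"))    # routed through the search tool
print(run("write me a paper"))  # answered from "memory" alone
```

When the model skips the tool, as with the paper request, nothing checks its output against reality; plausible-sounding but fabricated references are the result.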