Comment on "AI could already be conscious. Are we ready for it?"

phdepressed@sh.itjust.works 1 week ago

And a kid can insist they don’t need to pee until five minutes after you leave a rest stop.

Insisting upon something doesn’t make it true. Beyond the fact that LLMs often hallucinate and therefore can’t be trusted at baseline, an LLM’s text output can never be proof of anything about the LLM itself. The LLM framework is to regurgitate what exists in its training data in ways that sound correct. That’s why they can make up court cases, or claim a guy who investigated certain murders is himself the murderer.
