An LLM is a deterministic function that produces the same output for a given input - I’m using “deterministic” in the computer science sense. In practice there is some output variability due to race conditions in pipelined processing and in floating-point arithmetic, which are tolerated because they speed up computation. End users see far more variability because of pre-processing of the prompt, extra information LLM vendors inject when running the function, and the way output tokens are sampled from the model’s probabilities.
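To make that last bit concrete, here’s a toy Python sketch (fake logits, nothing like a real vendor pipeline) of the selection step: greedy decoding is repeatable, temperature sampling isn’t.

```python
import hashlib
import numpy as np

def next_token_logits(prompt: str) -> np.ndarray:
    # Stand-in for the model itself: the same prompt always yields the same logits.
    seed = int.from_bytes(hashlib.sha256(prompt.encode()).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    return rng.normal(size=50)  # pretend vocabulary of 50 tokens

def select_token(logits: np.ndarray, temperature: float = 0.0, seed=None) -> int:
    if temperature == 0.0:
        return int(np.argmax(logits))            # greedy: fully deterministic
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    rng = np.random.default_rng(seed)
    return int(rng.choice(len(probs), p=probs))  # sampling: varies run to run

logits = next_token_logits("Hello")
print(select_token(logits))                      # same answer every run
print(select_token(logits, temperature=0.8))     # can differ between runs
```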
I have a hard time considering something that has an immutable state as sentient, but since there’s no real definition of sentience, that’s a personal decision.
Enkers@sh.itjust.works 1 week ago
Technical challenges aside, there’s no explicit reason that LLMs can’t do self-reinforcement of their own models.
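Conceptually the loop is simple; a hand-wavy sketch, where every function here is a hypothetical stand-in rather than any real training API:

```python
from typing import Callable, List

def self_reinforce(generate: Callable[[str], str],
                   score: Callable[[str], float],
                   fine_tune: Callable[[List[str]], None],
                   prompts: List[str],
                   threshold: float = 0.8,
                   rounds: int = 3) -> None:
    # Model generates, a scorer filters, and the survivors become new training data.
    for _ in range(rounds):
        outputs = [generate(p) for p in prompts]
        keepers = [o for o in outputs if score(o) >= threshold]
        if keepers:
            fine_tune(keepers)  # the model updates on its own filtered outputs
```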
I think animal brains are also “fairly” deterministic, but their behaviour also depends on the presence of various neurotransmitters, which adds a temporal/contextual element: situationally, our emotions can affect our thoughts in a way LLMs don’t really have either.
I guess it’d be possible to feed an “emotional state” forward as part of the LLM’s context to emulate that sort of animal brain behaviour.
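Something like this toy sketch, where the “emotional state” is just extra text carried between turns and `call_llm` is a hypothetical stand-in for whatever model you’d actually call:

```python
def call_llm(prompt: str) -> str:
    return "..."  # placeholder for a real model call

emotional_state = {"valence": 0.2, "arousal": 0.7}  # e.g. anxious/alert

def respond(user_message: str, state: dict) -> str:
    # Prepend the current "emotional state" to the prompt so it colours the reply.
    context = (
        f"[internal emotional state: valence={state['valence']}, "
        f"arousal={state['arousal']}]\n"
        f"User: {user_message}\nAssistant:"
    )
    reply = call_llm(context)
    # The state itself could be nudged by each exchange before the next turn.
    state["arousal"] = max(0.0, state["arousal"] - 0.1)
    return reply
```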