Comment on Judge disses Star Trek icon Data’s poetry while ruling AI can’t author works

ProfessorScience@lemmy.world 2 weeks ago

LLMs, fundamentally, are incapable of sentience as we know it based on studies of neurobiology

Do you have an example I could check out? I’m curious how a study would show a process to be “fundamentally incapable” in this way.

LLMs do not synthesize. They do not have persistent context.

That seems like a really rigid way of putting it. LLMs do synthesize during their initial training. And they do have persistent context if you consider how "conversations" with an LLM work: all previous parts of the conversation are simply included in each new prompt. Isn't this analogous to short-term memory? Now suppose you were to take all of an LLM's conversations throughout the day, and then retrain it overnight using those conversations as additional training data. There's no technical reason that this can't be done, although in practice it's computationally expensive. Would you consider that LLM system to have persistent context?

On the flip side, would you consider a person with anterograde amnesia, who is unable to form new memories, to lack sentience?
