Comment on Why I don't use AI in 2025

General_Effort@lemmy.world 23 hours ago

Thank you for the long reply. I took some time to digest it. I believe I know what you mean.

I could also say that consciousness resides in a form of virtual reality in the brain, allowing us to manipulate reality in our minds and predict the outcomes of our actions.

We imagine what happens. Physicists use their imagination to understand physical systems. Einstein was famous for his thought experiments, such as imagining riding on a beam of light.

We also apply our physical intuition to unrelated things. In math or engineering, everything is a point in some space: a data point. An RGB color is a point in 3D color space. An image can be a single point in some high-dimensional space.
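To make that concrete, here is a minimal sketch (Python with NumPy; the image size is just an arbitrary example):

```python
import numpy as np

# An RGB color is a point in 3-dimensional color space.
color = np.array([255, 0, 128])                      # shape: (3,)

# A 64x64 RGB image can be treated as a single point in a
# 64 * 64 * 3 = 12288-dimensional space by flattening it into a vector.
image = np.random.randint(0, 256, size=(64, 64, 3))
point = image.reshape(-1)                            # shape: (12288,)

print(color.shape, point.shape)                      # (3,) (12288,)
```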

All our ancestors, back to the beginning of life, had to navigate an environment. Much of the evolution of our nervous system was occupied with navigating spaces and predicting physics. (This is why I believe language to be a much easier problem than self-driving cars. See Moravec’s paradox.)

One problem is that when I think abstract thoughts and concentrate, I tend to be much less aware of myself. I can’t spare the “CPU cycles”, so to speak. So I don’t think self-awareness is a necessary component of this “virtual environment”.

There are people who are bad at visualizing, a condition known as aphantasia. At the very least, there must be considerable diversity in the nature of this virtual environment.

Some ideas about brain architecture seem to be implied. It should be possible to test some of these ideas by reference to neurological experiments or case studies, such as the work on split-brain patients. Perhaps the phenomenon of blindsight is directly relevant.

I am reminded of the concept of latent representations in AI. Lately, as reasoning models have become all the rage, there have been attempts to let the reasoning happen in latent space.
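As a rough illustration of what a latent representation is (a toy autoencoder in PyTorch; the layer sizes are arbitrary and this is not any particular reasoning model): the encoder compresses the input into a small vector z, and “reasoning in latent space” means operating on vectors like z rather than on generated text tokens.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        # Encoder maps the high-dimensional input to a small latent vector.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder reconstructs the input from that latent vector.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)          # the latent representation
        return self.decoder(z), z

model = AutoEncoder()
x = torch.randn(1, 784)              # e.g. a flattened 28x28 image
reconstruction, z = model(x)
print(z.shape)                       # torch.Size([1, 16])
```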
