Comment on Why I don't use AI in 2025
Buffalox@lemmy.world 2 days ago
Just because you can’t make a mathematical proof doesn’t mean you don’t understand the very simple truth of the statement.
General_Effort@lemmy.world 2 days ago
If I can’t prove it, I don’t know how I can claim to understand it.
It’s axiomatic that equality is symmetric. It’s also axiomatic that 1+1=2. There is not a whole lot to understand. I have memorized that. Actually, having now thought about this for a bit, I think I can prove it.
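Something like this, writing 1 = S(0) and 2 = S(1), with the Peano addition axioms a + 0 = a and a + S(b) = S(a + b) (just a sketch, not a fully formal derivation):

$$1 + 1 = 1 + S(0) = S(1 + 0) = S(1) = 2$$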
What makes the difference between a human learning these things and an AI being trained for them?
Then how will you know the difference between strong AI and not-strong AI?
Buffalox@lemmy.world 2 days ago
I’ve already stated that that is a problem:
From a previous answer to you:
Because I don’t think we have a sure methodology.
“I think, therefore I am” is only good for the conscious mind itself.
I can’t prove that other people are conscious, although I’m 100% confident they are.
In exactly the same way we can’t prove when we have a conscious AI.
But we may be able to prove that it is NOT conscious, which I think is clearly the case with current-level AI. Although you don’t accept the example I provided, I believe it is clear evidence of a lack of consciousness behind the high level of intelligence it clearly has.
General_Effort@lemmy.world 2 days ago
I don’t think there’s an agreed definition.
Strong AI or AGI, or whatever you will, is usually talked about in terms of intellectual ability. It’s not quite clear why this would require consciousness. Some tasks are aided by or maybe even necessitate self-awareness; for example, chatbots. But it seems to me that you could leave out such tasks and still have something quite impressive.
Then, of course, there is no agreed definition of consciousness. Many will argue that the self-awareness of chatbots is not consciousness.
I would say most people take strong AI and similar to mean an artificial person, for which they take consciousness as a necessary ingredient. Of course, it is impossible to engineer an artificial person. It is like creating a technology to turn a peasant into a king. It is a category error. A less kind take could be that stochastic parrots string words together based on superficial patterns without any understanding.
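To make the “stochastic parrot” take concrete: a toy bigram model (far cruder than any real LLM; the corpus and seed word here are made up for illustration) strings words together from nothing but co-occurrence counts, with no understanding anywhere in the loop.

```python
# Toy "stochastic parrot": a bigram model that chains words together
# purely from surface co-occurrence statistics. Illustrative only.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog saw the cat".split()

# Record which word follows which in the corpus.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

# Generate text by repeatedly sampling a statistically plausible next word.
word = "the"
out = [word]
for _ in range(8):
    candidates = follows[word]
    word = random.choice(candidates) if candidates else random.choice(corpus)
    out.append(word)

print(" ".join(out))  # locally fluent-looking, but pure surface pattern
```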
Indeed, I do not see the relation between consciousness and reasoning in this example.
Self-awareness means the ability to distinguish self from other, which implies computing from sensory data what is oneself and what is not. That could be said to be a form of reasoning. But I do not see such a relation for the example.
By that standard, are all humans conscious?
FWIW, I asked GPT-4o mini via DDG.
[Screenshot: GPT-4o mini’s Peano-style derivation of 1+1=2]
I don’t know if that means it understands. It’s how I would have done it (yesterday, after looking up Peano Axioms in Wikipedia), and I don’t know if I understand it.
Buffalox@lemmy.world 1 day ago
You did it wrong: you provided the “answer” to the logic proposition, and got a parroted proof of it back. Completely different situation.
The AI must be able to figure this out in responses that require this very basic understanding. I don’t recall the exact example, but here is a similar one, where the AI fails to simply count the number of R’s in “strawberry”, claiming there are only 2 and refusing to accept there are 3. When it was explained that there is 1 in “straw” and 2 in “berry”, it made a very puzzling argument that counting the R in “straw” is some sort of clever trick.
This is fixed now, and had to do with how the input was tokenized. So you can’t “prove” this wrong by showing an example of a current AI that doesn’t make the mistake.
Unfortunately I can’t find a link to the original story, because I’m flooded with later results. But you can easily find the “2 R’s in strawberry” problem.
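The tokenization point is easy to see for yourself: the model is fed subword tokens, not letters, so counting letters is not a natural operation for it. A minimal sketch using OpenAI’s tiktoken library (the exact split depends on the encoding; cl100k_base is just one example):

```python
# LLMs see subword tokens, not characters, which is why letter-counting
# trips them up. Requires `pip install tiktoken`; the exact token split
# depends on the chosen encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")
pieces = [enc.decode([t]) for t in tokens]

print(pieces)  # a few multi-letter chunks, not ten individual letters
# Counting the R's only works once you drop to the character level:
print(sum(piece.count("r") for piece in pieces))  # 3
```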
Yes, but if you instruct a parrot or an LLM to say yes when asked if it is separate from its surroundings, it doesn’t mean it is, just because it says so.
So we need to figure out whether it actually understands what that means. Self-awareness on the human level requires a high level of logical thought and abstract understanding. My example shows this level of understanding clearly isn’t there.
As I wrote earlier, we really can’t prove consciousness. The way around that is to identify some of the mental abilities required for it; if those can be shown not to be present, we can conclude it’s probably not there.
When we have Strong AI, it may take a decade to be widely acknowledged. And this will stem from failure to disprove it, rather than actual proof.
You never asked how I define intelligence, self-awareness, or consciousness; you asked how I operationally define them, and that’s a very different question.
en.wikipedia.org/wiki/Operational_definition
I was a bit confused by that question, because consciousness is not a construct; the brain is, and consciousness is an emergent property of it.
Also:
It seems to me that being able to define that for consciousness would essentially mean possessing the knowledge necessary to replicate it.
Nobody on planet earth has that knowledge yet AFAIK.