You’ve edited this comment at least 3 times since I’ve replied, each time with more random shit that doesn’t make any sense. You just keep thumbing thru a thesaurus and replacing words with bigger words you clearly don’t understand.
This is probably why your posts/comments don’t make sense. Stop trying to sound intelligent and focus on communicating your point. Either way, I don’t have the patience to ever try and explain anything to you again.
Best of luck.
masterspace@lemmy.ca 2 days ago
I see what you’re saying but I think the problem is that you would need to test an AI while it’s unaware of being tested, or use a novel trick that it’s unaware of, to try and catch it producing non-human output.
If it’s aware that it’s being tested, then presumably it will try to pass the test and try to limit itself to human cognition to do so.
i.e. It’s possible that an AI’s intelligence includes enough human-like intelligence to completely mimic a human and pass a Turing test, but not enough to know to stay within those boundaries; it’s also possible that it knows both enough to mimic us and enough to keep to our bounds.
AbouBenAdhem@lemmy.world 2 days ago
In the original Turing test, the black box isn’t the machine; it’s the human. The test is to see whether a (known) machine is an accurate model of an unknown system.
While the tester is blind as to which is which, the experimenter knows the construction of the machine and can presumably tell if it’s artificially constraining itself. When I say “the inability to act otherwise”, I’m assuming the experimenter can distinguish a true inability from an induced one (even if the tester can’t).
masterspace@lemmy.ca 2 days ago
In the case of intelligences and neural networks that is not so straightforward. The humans and machines behind the curtain have to be motivated to try and replicate a human, or the test would fail, whether that’s because a human control is unhelpful or because the machine isn’t bothering to try to replicate a human.
AbouBenAdhem@lemmy.world 2 days ago
In a Turing test, yes. What I’m suggesting is to change the motivation, to see if the machine fails like a human even when motivated not to.