While the tester is blind as to which is which, the experimenter knows how the machine is constructed and can presumably tell whether it's artificially constraining itself.
In the case of intelligences and neural networks, that isn't so straightforward. The humans and machines behind the curtain have to be motivated to try to replicate a human, or the test fails, whether because a human control is being unhelpful or because the machine isn't bothering to imitate a human.
AbouBenAdhem@lemmy.world 3 days ago
In a Turing test, yes. What I'm suggesting is changing the motivation, to see whether the machine still fails like a human even when it's motivated not to.