I was thinking of the example of syntax: the ability of LLMs to produce syntactically well-formed sentences is taken as evidence that they’re producing sentences the same way humans do, but LLMs can also (with training) produce sentences in artificial languages whose syntax is totally unnatural to humans.
givesomefucks@lemmy.world 3 days ago
That doesn’t make any logical sense because even very young children are adept at pretending not to be human…
Like, I know what you’re trying to say, I’m just struggling to understand how you think it’s sensical
masterspace@lemmy.ca 3 days ago
I see what you’re saying but I think the problem is that you would need to test an AI while it’s unaware of being tested, or use a novel trick that it’s unaware of, to try and catch it producing non-human output.
If it’s aware that it’s being tested, then presumably it will try to pass the test and try to limit itself to human cognition to do so.
i.e. It’s possible that an AI’s intelligence includes enough human-like intelligence to completely mimic a human and pass a Turing test, but not enough to know to keep to those boundaries; but it’s also possible that it knows enough both to mimic us and to keep within our bounds.
AbouBenAdhem@lemmy.world 3 days ago
In the original Turing test, the black box isn’t the machine; it’s the human. The test is to see whether a (known) machine is an accurate model of an unknown system.
While the tester is blind as to which is which, the experimenter knows the construction of the machine and can presumably tell if it’s artificially constraining itself. When I say “the inability to act otherwise”, I’m assuming the experimenter can distinguish a true inability from an induced one (even if the tester can’t).
masterspace@lemmy.ca 3 days ago
While the tester is blind as to which is which, the experimenter knows the construction of the machine and can presumably tell if it’s artificially constraining itself.
In the case of intelligences and neural networks, that is not so straightforward. The humans and machines behind the curtain have to be motivated to try to replicate a human, or the test would fail, whether that’s because a human control is being unhelpful or because the machine isn’t bothering to try to replicate a human.
givesomefucks@lemmy.world 3 days ago
You’ve edited this comment at least 3 times since I replied, each time with more random shit that doesn’t make any sense. You just keep thumbing through a thesaurus and replacing words with bigger words you clearly don’t understand.
This is probably why your posts/comments don’t make sense. Stop trying to sound intelligent and focus on communicating your point. But I don’t have the patience to ever try and explain anything to you again.
Best of luck.
givesomefucks@lemmy.world 3 days ago
Literally the opposite of a Turing test… which it’s pretty clear you don’t understand to begin with…
And it has nothing to do with your post
Why are people upvoting that gibberish? Do they just not understand it and are blindly upvoting?
AbouBenAdhem@lemmy.world 3 days ago
The problem with the Turing test (like Ptolemy’s epicycles) is that the real unknown isn’t the machine being tested, but the system it’s supposed to be a model of.
A machine whose behavior is a superset of the target system isn’t a true model of the system.
CarbonIceDragon@pawb.social 3 days ago
I would assume that, since humans sometimes pretend not to be human, that would simply be a subset of human behavior, and so what would make the comment make the most sense wouldn’t be “looking for behavior atypical for humans”, but rather “looking for behavior that humans aren’t able to engage in no matter how hard they try”. What that would even be in a text-based system, though, I’m not sure. Typing impossibly fast, maybe?