Comment on Mind-reading AI can translate brainwaves into written text: Using only a sensor-filled helmet combined with artificial intelligence, a team of scientists has announced they can turn a person’s thou...

knightly@pawb.social 1 year ago

This seems like circular reasoning: SAT scores don’t measure intelligence because an LLM, which isn’t intelligent, can pass the test.

The purpose of the SAT isn’t to measure intelligence, it is to rank students on their ability to answer test questions.

A copy of the answer key could get a perfect score. Do you think that means its “intelligence” is equivalent to a person with a perfect SAT?

Why isn’t the LLM intelligent?

For the same reason that the SAT answer key or an instruction manual isn’t: the ability to answer questions is not the foundation of intelligence but an emergent property of it.

You still haven’t answered what intelligence is, or what an AI would be.

Computer scientists, neurologists, and philosophers can’t answer that either, or else we’d already have the algorithms we’d need to build human-equivalent AI.

Without a definition you just fall into the trap of “AI is whatever computers can’t do,” which has been going on for a while.

Exactly, and you’re just falling into the Turing Trap instead. Just because a company can convince you that its program is intelligent doesn’t mean it is, or else chatbots from 10 years ago would qualify.

There is one goalpost that has stayed steady: the Turing Test, which LLMs seem to have passed, at least for shorter conversations.

The Turing Test is just a slightly modified version of a Victorian-era social deduction game. It doesn’t measure intelligence, only the ability to mimic human conversation. Turing himself acknowledged this: smithsonianmag.com/…/turing-test-measures-somethi…
