Comment on Palantir CEO Says a Surveillance State Is Preferable to China Winning the AI Race
leftzero@lemmy.dbzer0.com 1 day ago
The what race? No one has been working on AI for a while… if he means LLMs and similar generative models, there’s only the race to see how long it takes for the models to be so poisoned by training on their own slop that they can no longer produce the illusion of giving useful results (the current generation seems almost there, already giving diminishing returns), and the race to extract as much money as possible from the economy before the first race ends and the bubble pops…
No one has been working on AI for a while…
It’s rage bait… Or the guy is the most egocentric, arrogant person alive, one who thinks reality consists only of his 15 headlines a day.
survirtual@lemmy.world 1 day ago
…what?
LLMs are AI. What is this?
I am asking seriously. Can someone explain the context of this nonsense?
Are we really entering a luddite phase again?
themurphy@lemmy.ml 1 day ago
Doesn’t matter if we take LLMs out of the equation. AI is being worked on in many forms, constantly.
Palantir is an example, which makes the statement laughable.
survirtual@lemmy.world 1 day ago
Right, I know.
AI has been worked on for generations. We’ve been benefiting from the fruits of that labor for a long time, mainly starting with search and translations.
Now we have the ability to have a conversation with machines and it is somehow not intelligence?
I am really confused.
Intelligence does not mean consciousness or being alive. It means intelligence, which can be summarized as advanced pattern matching and predictive behavior.
Like… a beetle is intelligent and alive. Is an LLM more intelligent than a beetle? What about an image-classifying model, like CLIP? It can perceive and describe objects in an image in natural language; what insect can do that?
This is a form of intelligence. It was artificially created. It is artificial intelligence. How are people this delusional?
themurphy@lemmy.ml 1 day ago
I understand where you come from with the beetle example, though I would still consider most living creatures more intelligent.
But it is a definition of intelligence we are debating now. The beetle’s intelligence is not interesting to us, but it sure handles image, sound, and movement at a much higher level, in real time.
JcbAzPx@lemmy.world 1 day ago
It’s really not though.
leftzero@lemmy.dbzer0.com 1 day ago
No, they’re not. They’re fancy autocomplete. Statistics engines. Far more expensive, but not particularly more capable, Markov chains.
Them being marketed as AI doesn’t make them AI, it just makes them a scam.
m532@lemmy.ml 1 day ago
The thousands of researchers researching it all conspired together, naming it wrong, just to fool you, the one true AI expert!
Or it’s just real AI.
Hammock_tann@lemmy.world 1 day ago
Technically, LLMs aren’t AI. What they do is basically predict relationships between words. They can’t reason or count or learn.
survirtual@lemmy.world 1 day ago
“Technically”? Wrong word. By every technical measure, they are 100% AI.
What you might be trying to say is they aren’t AGI (artificial general intelligence). I would argue they might just be AGI. For instance, they can reason about what they are better than you can, while also being able to draw a pelican riding a unicycle.
What they certainly aren’t is ASI (artificial super-intelligence). You can say they technically aren’t ASI and you would be correct. ASI would be capable of improving itself faster than a human could.
leftzero@lemmy.dbzer0.com 1 day ago
Exactly. Nothing technical about it: they simply produce the token that is statistically most likely (in their training data) to follow a given list of tokens.
Any information contained in their output (other than the fact that each token is probably the most statistically likely to appear after the previous ones in the texts used for training, which I imagine could be useful for philologists) is purely circumstantial, and was already contained in their training data.
There’s no reasoning involved in the process (other than possibly in the writing of the texts in their training data, if those predate LLMs, and if we’re feeling optimistic about human intelligence), nor any mechanism in the LLM for reasoning to take place.
They are as far from AI as Markov chains were, just slightly more correct in their token likelihood predictions and several orders of magnitude more costly.
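(The mechanism the comment above describes is essentially a bigram Markov chain. A minimal sketch of that idea — the toy corpus and function names here are purely illustrative, not anything from an actual LLM:)

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each token, which token follows it and how often."""
    tokens = text.split()
    follows = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        follows[cur][nxt] += 1
    return follows

def most_likely_next(follows, token):
    """Return the statistically most likely next token, or None if unseen."""
    if token not in follows:
        return None
    return follows[token].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigram(corpus)
print(most_likely_next(model, "the"))  # "cat" — it follows "the" most often
```

(An LLM differs mainly in scale and in conditioning on a long context rather than one preceding token, but the output step — pick a likely next token given what came before — is the same shape.)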
And them being sold as AI doesn’t make them any closer, it just means the people and companies selling them are scammers.
I_Clean_Here@lemmy.world 1 day ago
You are a pedant and a fool.
survirtual@lemmy.world 1 day ago
Careful, my other comment got removed because of a witty but still insightful dig.
They are very sensitive here about how the AI isn’t really AI.