blog.google/…/google-gemma-ai-cancer-therapy-disc… how did it do this?
Comment on Meta’s star AI scientist Yann LeCun plans to leave for own startup
just_another_person@lemmy.world 1 day ago
LLMs are just fast sorting and probability; they have no way to ever develop novel ideas or comprehension.
The system he’s talking about is more about using NNL, which builds new relationships to things that persist. It’s deferential relationship learning and data-path building. It doesn’t exist yet, so if he has some ideas, it may be interesting. It’s also more likely to be the thing that kills all humans.
communist@lemmy.frozeninferno.xyz 1 day ago
just_another_person@lemmy.world 1 day ago
Lol 🤣 I’m SO EMBARRASSED. You’re totally right and understand these things better than me after reading a GOOGLE BLOG ABOUT THEIR PRODUCT.
I’ll speak to this topic again since I’ve clearly been bested by your knowledge from a Google Blog.
communist@lemmy.frozeninferno.xyz 1 day ago
yes, google reported about their ai discovering a novel cancer treatment, of course they did?
now tell me about how it isn’t true.
just_another_person@lemmy.world 1 day ago
I sure do. Knowledge, and being in the space for a decade.
Here’s a fun one: go ask your LLM why it can’t create novel ideas, it’ll tell you right away 🤣🤣🤣🤣
LLMs have ZERO intentional logic that allows them to even comprehend an idea, let alone craft a new one and create relationships between ideas.
I can already tell from your tone you’re mostly driven by bullshit PR hype from people like Sam Altman, and are an “AI” fanboy, so I won’t waste my time arguing with you. You’re in love with human-made logic loops and datasets, bruh. There never was, and never will be, a way for any of it to become some supreme being of ideas and knowledge. You’re drunk on Kool-Aid, kiddo.
technocrit@lemmy.dbzer0.com 10 hours ago
Wow a corporate press release? The peak of science!!! jfc.
communist@lemmy.frozeninferno.xyz 3 hours ago
It doesn’t have to be peer-reviewed science to validate the claim. It proposed a novel hypothesis; that’s the easiest thing in the world to check.
nymnympseudonym@piefed.social 1 day ago
And how do you think animal brains develop comprehension…?
just_another_person@lemmy.world 1 day ago
Animal brains have pliable networks of neurons and synapses that build and persist new relationships between things. LLMs do not. This is why they can’t have novel or spontaneous ideation. They don’t “learn” anything, no matter what Sam Altman is pitching you.
Now…if someone develops this ability, then they might be able to move more towards that…which is the point of this article and why the guy is leaving to start his own project doing this thing.
So you sort of sarcastically answered your own stupid question 🤌
nymnympseudonym@piefed.social 1 day ago
This Nobel prize winner seems to disagree with you.
Neural nets do indeed learn new relationships. Maybe you are thinking of the fact that most architectures require training to be a separate process from interacting; that is not the case for all architectures.
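The distinction above can be made concrete with a toy sketch (my own illustration, not anything from the linked paper or any production LLM): a model whose weights update on every interaction, so "learning" is not a separate offline phase from use. All names here are made up for illustration.

```python
# Minimal sketch of online learning: a linear model that predicts,
# then immediately updates its weights from the error, one example
# at a time. Training and interacting are the same loop.

def predict(w, b, x):
    return w * x + b

def online_step(w, b, x, target, lr=0.1):
    """One prediction followed immediately by one gradient update
    (squared-error loss)."""
    err = predict(w, b, x) - target
    w -= lr * err * x
    b -= lr * err
    return w, b

# The model starts with no knowledge of the relationship y = 2x + 1...
w, b = 0.0, 0.0
for _ in range(200):
    for x in [0.0, 1.0, 2.0, 3.0]:
        w, b = online_step(w, b, x, 2 * x + 1)

# ...and acquires it purely through per-interaction updates:
# w approaches 2.0 and b approaches 1.0.
print(w, b)
```

This is the simplest form of the idea; continual-learning architectures apply the same principle at scale, which is different from the frozen-weights deployment most public chatbots use.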
just_another_person@lemmy.world 1 day ago
From your own linked paper:
Literally what I just said. It specifically addresses the problem I mentioned, and goes on in exacting specificity about why it doesn’t exist in production tools for the general public (it’ll never make money, and honestly, it’s slow). In fact, there’s a minor argument later on that developing a separate supporting system means the outcome shouldn’t even be called an LLM, and the referenced papers linked at the bottom dig even deeper into exactly the limitations I mentioned of models used this way.