I saw a short interview with him on France 24, and he mainly said he thinks the current direction of the research teams at Meta is wrong. He contrasted a top-down, push-to-deliver organization with a long-leash one that leaves researchers free to experiment. He said Meta shifted from the latter to the former, and he doesn’t agree with that approach.
Comment on Meta’s star AI scientist Yann LeCun plans to leave for own startup
tal@lemmy.today 4 months ago
Meta’s chief AI scientist and Turing Award winner Yann LeCun plans to leave the company to launch his own startup focused on a different type of AI called “world models,” the Financial Times reported.
World models are hypothetical AI systems that some AI engineers expect to develop an internal “understanding” of the physical world by learning from video and spatial data rather than text alone.
Sounds reasonable.
That being said, I am willing to believe that an LLM could be part of an AGI. It might well be an efficient way to incorporate a lot of knowledge about the world. Wikipedia helps provide me with a lot of knowledge, for example, though I don’t have a direct brain link to it. It’s just that I don’t expect the thing itself to be an LLM.
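To make the quoted “world models” idea concrete: one common recipe (a generic sketch of the approach, not LeCun’s actual architecture, and all names and shapes here are illustrative) is to encode video frames into latent vectors and train a dynamics network to predict the next latent from the current one.

```python
# Toy latent world model: encode each frame to a latent state, then
# learn to predict the next latent. Purely illustrative; real systems
# (e.g. JEPA-style models) are far more sophisticated.
import torch
import torch.nn as nn

class TinyWorldModel(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        # Frame encoder: 3x64x64 RGB frame -> latent vector
        # (two stride-2 convs leave a 14x14 spatial map)
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 14 * 14, latent_dim),
        )
        # Dynamics: current latent -> predicted next latent
        self.dynamics = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def loss(self, frame_t, frame_t1):
        z_pred = self.dynamics(self.encoder(frame_t))
        with torch.no_grad():          # target latent, no gradient through it
            z_next = self.encoder(frame_t1)
        return nn.functional.mse_loss(z_pred, z_next)

model = TinyWorldModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
pairs = torch.randn(8, 2, 3, 64, 64)   # fake batch of (frame_t, frame_t+1) pairs
loss = model.loss(pairs[:, 0], pairs[:, 1])
loss.backward()
opt.step()
```

The point of the design is that the model is graded on predicting what happens next in the world, not on predicting the next word.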
avidamoeba@lemmy.ca 4 months ago
UnderpantsWeevil@lemmy.world 4 months ago
Sounds reasonable.
Does it, though? Feels like we’re just rewriting the sales manual without thinking about what “learning from video” would actually entail.
Doesn’t make sense to build a lot of compute capacity, then spend fifteen years banging on research before you have something to utilize that capacity.
There’s an old book from back in 2008 - Killing Sacred Cows: Overcoming the Financial Myths That Are Destroying Your Prosperity - that a lot of the modern Techbros took perhaps too closely to heart. It posited that chasing the next generation of technological advancement was more important than keeping your existing revenue streams functional, and that you really should kill the golden goose if it means you’ve got a shot at a new one in the near future.
What these Tech Companies are chasing is the Next Big Thing, even when they don’t really understand what that is. And they’re so blindly devoted to advancing the technological curve that they really will blow a trillion dollars (mostly of other people’s money) on whatever they think it might be.
The real problem is that these guys are, largely, uncreative, incurious, and not particularly intelligent. So they leap on fads rather than pursuing meaningful Blue Sky Research. And that gives us this endless recycling of sci-fi tropes as a stand-in for material investment in productive next-generation infrastructure.
tomiant@piefed.social 4 months ago
Look, AGI would require basically a human brain. LLMs are a very specific subset mimicking one (important) part of the brain: our language module. There’s more, but I got interrupted by a drunk guy who needs my attention. I’ll be back.
krooklochurm@lemmy.ca 4 months ago
WHAT HAPPENED WITH THE DRUNK DUDE?
just_another_person@lemmy.world 4 months ago
LLMs are just fast sorting and probability; they have no way to ever develop novel ideas or comprehension.
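Concretely, generation is just this loop repeated: forward pass, softmax, sample. A toy sketch, with the model faked out by random logits:

```python
# Toy next-token loop: each step turns logits into a probability
# distribution over the vocabulary and samples from it. The "model"
# here is faked with random logits for illustration.
import torch

torch.manual_seed(0)
vocab = ["the", "cat", "sat", "on", "mat", "."]

def fake_logits(context):
    # Stand-in for a real transformer forward pass over the context.
    return torch.randn(len(vocab))

tokens = ["the"]
for _ in range(5):
    probs = torch.softmax(fake_logits(tokens), dim=0)
    next_id = torch.multinomial(probs, 1).item()   # sample the next token
    tokens.append(vocab[next_id])
print(" ".join(tokens))
```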
The system he’s talking about is more about using NNL, which builds new relationships to things that persist. It’s deferential relationship learning and data path building. It doesn’t exist yet, so if he has some ideas, it may be interesting. Also more likely to be the thing that kills all humans.
nymnympseudonym@piefed.social 4 months ago
LLMs are just fast sorting and probability, they have no way to ever develop novel ideas or comprehension
And how do you think animal brains develop comprehension…?
just_another_person@lemmy.world 4 months ago
Animal brains have pliable neuron networks and synapses to build and persist new relationships between things. LLMs do not. This is why they can’t have novel or spontaneous ideation. They don’t “learn” anything, no matter what Sam Altman is pitching you.
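Concretely: during ordinary inference nothing about the network changes, so nothing persists from one conversation to the next. A tiny illustration:

```python
# A deployed model's weights are frozen during use: no gradients,
# no updates. (nn.Linear stands in for a full LLM here.)
import torch
import torch.nn as nn

model = nn.Linear(16, 16)
model.eval()
before = model.weight.clone()
with torch.no_grad():                  # inference: no learning happens
    _ = model(torch.randn(4, 16))
assert torch.equal(model.weight, before)   # untouched by use
```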
Now…if someone develops this ability, then they might be able to move more towards that…which is the point of this article and why the guy is leaving to start his own project doing this thing.
So you sort of sarcastically answered your own stupid question 🤌
nymnympseudonym@piefed.social 4 months ago
Animal brains have pliable neuron networks and synapses to build and persist new relationships between things. LLMs do not. This is why they can’t have novel or spontaneous ideation
This Nobel Prize winner seems to disagree with you.
Neural nets do indeed learn new relationships. Maybe you are thinking of the fact that most architectures require training to be a separate process from interacting; that is not the case for all architectures.
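For example, nothing stops you from taking a gradient step as part of each interaction instead of in a separate offline phase. A minimal online-learning sketch (illustrative only; real continual-learning systems are much more involved):

```python
# Minimal online learning: the model is updated after every
# interaction rather than in a separate training phase.
import torch
import torch.nn as nn

model = nn.Linear(16, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

def interact_and_learn(x, feedback):
    pred = model(x)
    loss = nn.functional.mse_loss(pred, feedback)
    opt.zero_grad()
    loss.backward()
    opt.step()                        # weights change during the interaction
    return pred.detach()

for _ in range(10):                   # every "exchange" nudges the weights
    interact_and_learn(torch.randn(1, 16), torch.randn(1, 1))
```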
communist@lemmy.frozeninferno.xyz 4 months ago
blog.google/…/google-gemma-ai-cancer-therapy-disc…

how did it do this?
technocrit@lemmy.dbzer0.com 4 months ago
Wow, a corporate press release? The peak of science!!! jfc.
communist@lemmy.frozeninferno.xyz 4 months ago
It doesn’t have to be to invalidate the claim. It proposed a novel hypothesis, and that’s the easiest thing in the world to check.
just_another_person@lemmy.world 4 months ago
Lol 🤣 I’m SO EMBARRASSED. You’re totally right and understand these things better than me after reading a GOOGLE BLOG ABOUT THEIR PRODUCT.
I’ll speak to this topic again since I’ve clearly been bested by your knowledge from a Google blog.
communist@lemmy.frozeninferno.xyz 4 months ago
yes, google reported on their ai discovering a novel cancer treatment, of course they did?
now tell me about how it isn’t true.
chrash0@lemmy.world 4 months ago
he’s been salty about this for years now and frustrated at companies throwing training and compute scaling at LLMs hoping for another emergent breakthrough like GPT-3. i believe he’s the one that really tried to push the Llama models toward multimodality.