Comment on Meta’s star AI scientist Yann LeCun plans to leave for own startup
communist@lemmy.frozeninferno.xyz 3 weeks ago
yes, google reported about their ai discovering a novel cancer treatment, of course they did?
now tell me about how it isn’t true.
just_another_person@lemmy.world 3 weeks ago
I sure do. Knowledge, and being in the space for a decade.
Here’s a fun one: go ask your LLM why it can’t create novel ideas, it’ll tell you right away 🤣🤣🤣🤣
LLMs have ZERO intentional logic that allow it to even comprehend an idea, let alone craft a new one and create relationships between others.
I can already tell from your tone you’re mostly driven by bullshit PR hype from people like Sam Altman, and are an “AI” fanboy, so I won’t waste my time arguing with you. You’re in love with human-made logic loops and datasets, bruh. There is, and never was, a way for any of it to become some supreme being of ideas and knowledge. You’re drunk on Kool-Aid, kiddo.
communist@lemmy.frozeninferno.xyz 3 weeks ago
You sound drunk on kool-aid, this is a validated scientific report from yale, tell me a problem with the methodology or anything of substance.
just_another_person@lemmy.world 3 weeks ago
🤦🤦🤦 No…it really isn’t:
Not only is there no validation, they have only begun even looking at it.
Again: LLMs can’t make novel ideas. This is PR, and because you’re unfamiliar with how any of it works, you assume MAGIC.
Like every other bullshit PR release of its kind, this is simply a model being fed a ton of data and running through thousands of millions of iterative segments, testing outcomes of various combinations of things that would take humans years to do. It’s not that it is intelligent or making “discoveries”, it’s just moving really fast.
You feed it 10^2 combinations of amino acids, and it’s eventually going to find new chains needed for protein folding. The thing you’re missing there is:
It’s a tool for moving fast through data, a.k.a. A REALLY FAST SORTING MECHANISM
Nothing at any stage is developed, outputted, or validated by any models, because…they can’t do that.
BrundleFly2077@sh.itjust.works 3 weeks ago
Wow, if you really do know something about this subject, you’re being a real asshole about it 🙄
communist@lemmy.frozeninferno.xyz 3 weeks ago
You addressed that they haven’t tested the hypothesis completely while completely overlooking the fact that an AI suggested a novel hypothesis… even if it turns out to be wrong, it is still undeniably a novel hypothesis. This is what was validated by yale…
markon@lemmy.world 3 weeks ago
I was almost with you on the whole expert act until the part where you said we feed the model “10^2 combinations of amino acids.” You realize 10^2 is literally just 100, right? You are writing paragraphs acting like the smartest guy in the room, but you think protein folding gets solved by checking a list shorter than a grocery receipt. That is honestly hilarious. It kind of explains your whole point though. No wonder you think it is just a “simple sorting mechanism” if you think the dataset is that small. You might want to check the math before the next lecture because being off by about 300 zeros makes the arrogance look a bit silly.
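The scale gap here can be checked with back-of-envelope arithmetic. A minimal sketch, assuming the 20 standard amino acids and a modest 200-residue chain (both figures are illustrative assumptions, not from the comment above):

```python
from math import log10

# The "10^2" figure from the comment: literally just 100.
small_claim = 10 ** 2

# Possible sequences for a 200-residue chain built from 20 amino acids.
sequence_space = 20 ** 200

print(small_claim)                  # 100
print(round(log10(sequence_space))) # ~260 orders of magnitude
```

Roughly 10^260 possible sequences versus 10^2: off by about 260 zeros, in the ballpark of the "300 zeros" quip above, and far beyond anything a brute-force sort could enumerate.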
Eheran@lemmy.world 3 weeks ago
Wow, you stayed way cooler than I would have. Lemmy is extremely anti-LLM or AI in general.
technocrit@lemmy.dbzer0.com 3 weeks ago
Oof. Tell me you don’t understand science without telling me you don’t understand science.
communist@lemmy.frozeninferno.xyz 3 weeks ago
It is validated: yale confirmed that it is the case. It was not REPRODUCED, which is irrelevant to my claim that they created a novel hypothesis.
markon@lemmy.world 3 weeks ago
A decade in the space is impressive. It shows dedication and time invested. That alone deserves recognition.
Still, the points you are repeating are familiar. They are recycled claims from years ago. If the goal is to critique novelty, repeating the same arguments does not advance it.
You say LLMs have zero intentional logic. That is true if by intentional logic you mean human consciousness or goals. It is false if you mean emergent behaviors and the ability to combine information in ways no single source explicitly wrote. Eliminating nuance with absolute terms makes it easy to dismiss valid evidence.
Calling someone an AI fanboy signals preference for labels over analysis. That approach does not strengthen an argument. Specific examples do. Concrete failures, reproducible tests, or papers are what advance discussion.
It is also not accurate to suggest that anyone pitches LLMs as supreme beings. Most people treat them as complex tools that produce surprising results. Their speed, scale, and capacity to identify patterns exceed human ability, but they remain tools. Critiquing them as if they were gods is a straw man.
If you want this discussion to matter, show a single reproducible example where an LLM fails in a way your logic cannot explain. Otherwise, repeating slogans and metaphors only illustrates a resistance to evidence.
I am not here to argue for ideology. I am here to examine claims. That is a choice. It is also a choice to resist slogans and demand specificity. Fun, fun. Another fun day.