just_another_person
@just_another_person@lemmy.world
- Comment on How many of you get a boner from seeing this? 21 hours ago:
Still pretty rapey vibes
- Comment on How many of you get a boner from seeing this? 21 hours ago:
What in the actual fuck is this?
- Comment on Thunderbird Adds Native Microsoft Exchange Email Support - The Thunderbird Blog 1 day ago:
Did it…not have that already? I swear it did, but honestly I thought Exchange was dead long ago.
- Comment on Meta’s star AI scientist Yann LeCun plans to leave for own startup 1 day ago:
From your own linked paper:
To design a neural long-term memory module, we need a model that can encode the abstraction of the past history into its parameters. An example of this can be LLMs that are shown to be memorizing their training data [98, 96, 61]. Therefore, a simple idea is to train a neural network and expect it to memorize its training data. Memorization, however, has almost always been known as an undesirable phenomena in neural networks as it limits the model generalization [7], causes privacy concerns [98], and so results in poor performance at test time. Moreover, the memorization of the training data might not be helpful at test time, in which the data might be out-of-distribution. We argue that, we need an online meta-model that learns how to memorize/forget the data at test time. In this setup, the model is learning a function that is capable of memorization, but it is not overfitting to the training data, resulting in a better generalization at test time.
Literally what I just said. This passage specifically addresses the problem I mentioned, and then goes on with exacting specificity about why it doesn’t exist in production tools for the general public (it’ll never make money, and honestly, it’s slow). In fact, there’s a minor argument later on that bolting on a separate supporting memory system means the result shouldn’t even be called an LLM anymore, and the referenced papers linked at the bottom dig even deeper into the exact limitations I mentioned of models used this way.
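If you want to see what “an online meta-model that learns how to memorize/forget the data at test time” even means in miniature, here’s a toy numpy sketch of the concept (the shapes, constants, and linear form are my own stand-ins, nothing from the paper):
```python
import numpy as np

# Toy sketch of the quoted idea: a memory module whose parameters are
# updated by gradient descent at *test* time, with a decay ("forget") term.
# Shapes, constants, and the linear form are illustrative assumptions only.

rng = np.random.default_rng(0)
d = 16                               # key/value dimension (assumed)
P = np.flipud(np.eye(d))             # the "world": every key maps to its reverse
M = np.zeros((d, d))                 # memory parameters, written online

def memory_step(M, k, v, lr=0.02, forget=0.001):
    """One online update: decay old content, nudge M toward mapping k -> v."""
    grad = np.outer(M @ k - v, k)    # gradient of 0.5*||M k - v||^2 w.r.t. M
    return (1.0 - forget) * M - lr * grad

k_test = rng.normal(size=d)
print("error before:", np.linalg.norm(M @ k_test - P @ k_test))

# A stream of (key, value) pairs stands in for data seen only at test time.
for _ in range(500):
    k = rng.normal(size=d)
    M = memory_step(M, k, P @ k)

print("error after: ", np.linalg.norm(M @ k_test - P @ k_test))
```
The point is that the memory’s parameters get written during inference, which is exactly what a frozen, deployed LLM doesn’t do.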
- Comment on Meta’s star AI scientist Yann LeCun plans to leave for own startup 1 day ago:
It most certainly did not…because it can’t.
Find me a model that can take multiple disparate pieces of information and combine them into a new idea without being fed a pre-selected pattern, and I’ll eat my hat. The very basis of how these models operate is in complete opposition to the idea that they can spontaneously have a new and novel idea. New…that’s what novel means.
I could pointlessly link you to papers and blogs from researchers explaining this, or you could just ask one of these things yourself, but you’re not going to listen, which is on you for intentionally deciding to remain ignorant of how they function.
Here’s Terrence Kim describing how they set it up using GRPO: terrencekim.net/…/scaling-llms-for-next-generatio…
And then another researcher describing what actually took place: joshuaberkowitz.us/…/googles-cell2sentence-c2s-sc…
So you can obviously see…not novel ideation. They fed it a bunch of training data, and it correctly used pattern alignment across that data to say, “If it works this way elsewhere, it should work this way with this example.”
Sure, it’s not something humans had gotten to yet, but that’s the entire point of the tool. Good for the progress, certainly, but that’s its job. It didn’t come up with some new idea about anything, because it works from the data it’s given and the logic boundaries of the tasks it’s set to run. It’s not doing anything super special here, just doing it very efficiently.
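For reference, the “GRPO” part of Kim’s write-up boils down to a reward-shaping trick, not an ideation engine. Roughly this (the rewards are made up, just to show the mechanics):
```python
import numpy as np

# The group-relative advantage step at the core of GRPO: sample several
# answers to the same prompt, grade them, and normalize the rewards within
# the group (no learned value function). These rewards are invented for
# illustration.
rewards = np.array([0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0])  # pass/fail from a human-built grader
advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
print(advantages)  # above-average answers get positive advantage, below-average negative
```
Sample a group of answers, grade them with a human-built grader, push the model toward the above-average ones. Pattern alignment, not invention.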
- Comment on Meta’s star AI scientist Yann LeCun plans to leave for own startup 1 day ago:
Nah, I’m just not going to write a novel on Lemmy, ma dude.
I’m not even spouting anything that’s not readily available information anyway. This is all well known, hence everybody calling out the bubble.
- Comment on Meta’s star AI scientist Yann LeCun plans to leave for own startup 1 day ago:
🤦🤦🤦 No…it really isn’t:
Teams at Yale are now exploring the mechanism uncovered here and testing additional AI-generated predictions in other immune contexts.
Not only is there no validation, they have only begun even looking at it.
Again: LLMs can’t make novel ideas. This is PR, and because you’re unfamiliar with how any of it works, you assume MAGIC.
Like every other bullshit PR release of its kind, this is simply a model being fed a ton of data and running through millions upon millions of iterative passes, testing outcomes of various combinations of things that would take humans years to do. It’s not that it’s intelligent or making “discoveries”; it’s just moving really fast.
You feed it 10² combinations of amino acids, and it’s eventually going to find new chains needed for protein folding. The things you’re missing there are:
- All the logic programmed by humans
- The data collected and sanitized by humans
- The task groups set by humans
- The output validated by humans
It’s a tool for moving fast through data, a.k.a. A REALLY FAST SORTING MECHANISM (see the toy sketch below).
Nothing at any stage is developed, ideated, or validated by any models, because…they can’t do that.
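Here’s the shape of that loop in toy form (the alphabet, length, and scoring rule are made-up stand-ins; the real pipelines are this at enormous scale with fancier scoring):
```python
import itertools

# The "really fast sorting mechanism": enumerate human-defined candidates
# and rank them with a human-written scoring function.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard residues

def score(peptide):                    # human-written objective, not learned
    return peptide.count("C") + 2 * peptide.count("W") - peptide.count("P")

# 20**4 = 160,000 candidates scanned in a blink.
best = max(("".join(p) for p in itertools.product(AMINO_ACIDS, repeat=4)), key=score)
print(best, score(best))
```
Every piece of that (the candidates, the scoring function, the decision that the top result matters) is human-supplied; the model part just makes the scan absurdly fast.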
- Comment on Meta’s star AI scientist Yann LeCun plans to leave for own startup 1 day ago:
I sure do. Knowledge, and being in the space for a decade.
Here’s a fun one: go ask your LLM why it can’t create novel ideas, it’ll tell you right away 🤣🤣🤣🤣
LLMs have ZERO intentional logic that allows them to even comprehend an idea, let alone craft a new one and create relationships between ideas.
I can already tell from your tone you’re mostly driven by bullshit PR hype from people like Sam Altman, and are an “AI” fanboy, so I won’t waste my time arguing with you. You’re in love with human-made logic loops and datasets, bruh. There is no way, and there never was, for any of it to become some supreme being of ideas and knowledge. You’re drunk on Kool-Aid, kiddo.
- Comment on Meta’s star AI scientist Yann LeCun plans to leave for own startup 1 day ago:
Animal brains have pliable neuron networks and synapses to build and persist new relationships between things. LLMs do not. This is why they can’t have novel or spontaneous ideation. They don’t “learn” anything, no matter what Sam Altman is pitching you.
Now…if someone develops this ability, then they might be able to move more towards that…which is the point of this article and why the guy is leaving to start his own project doing this thing.
So you sort of sarcastically answered your own stupid question 🤌
- Comment on Meta’s star AI scientist Yann LeCun plans to leave for own startup 1 day ago:
Lol 🤣 I’m SO EMBARRASSED. You’re totally right and understand these things better than me after reading a GOOGLE BLOG ABOUT THEIR PRODUCT.
I’ll never speak to this topic again since I’ve clearly been bested by your knowledge from a Google Blog.
- Comment on Meta’s star AI scientist Yann LeCun plans to leave for own startup 1 day ago:
LLMs are just fast sorting and probability; they have no way to ever develop novel ideas or comprehension.
The system he’s talking about is more about using NNL, which builds new relationships between things that persist. It’s deferential relationship learning and data-path building. It doesn’t exist yet, so if he has some ideas, it may be interesting. It’s also more likely to be the thing that kills all humans.
- Comment on Rename the "shutdown" shortcut to "power off" in KDE Plasma? 1 day ago:
KRunner is the launcher that is responsible for launching all things, yes. You can’t edit system entries easily without patching some stuff in.
- Comment on Rename the "shutdown" shortcut to "power off" in KDE Plasma? 1 day ago:
The piece of software you’re asking about is KRunner, but I don’t think editing existing entries is supported. It’s possible, sure, but it’s probably a bigger mess than you’d like to deal with.
I would just make a shortcut for the shutdown action and let it populate in the results, then use that to trigger a shutdown (something like the entry below).
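Something like this dropped into ~/.local/share/applications/ should do the trick (the filename, label, and the systemctl call are assumptions; swap in whatever your setup actually uses to power down):
```ini
[Desktop Entry]
Type=Application
Name=Power Off
Comment=Shut down the computer
Exec=systemctl poweroff
Icon=system-shutdown
Terminal=false
```
KRunner indexes .desktop files from that directory, so typing “power off” will surface your entry without touching the system ones.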
- Comment on Google CEO: If an AI bubble pops, no one is getting out clean 1 day ago:
This is code for “Hey government, you better be ready to bail us all out”.
- Comment on Do you think there would eventually be technology to delete/replace memories (like the *Men In Black* device). How much do you fear such technology? (like misuse by governments/criminals) 2 days ago:
I’ll just leave this here and back away slowly…
Good luck to you.
- Comment on RIP Mac Pro, I guess. 3 days ago:
They’re overpriced crap now. You pay a 3x premium for the hardware, and you get the OS. That’s it.
- Comment on [deleted] 3 days ago:
Framework has a refurb store with deep discounts. No need to buy new.
- Comment on [deleted] 3 days ago:
Framework is the only correct answer here.
- Comment on what is the best fruit to leave in a fridge? 1 week ago:
Berries will last 3x longer in the fridge than on the counter. Longer if you give them a citric acid or diluted vinegar bath after you bring them home.
- Comment on It's 2025, And We're Getting A Brand New 8-Bit Game Console 1 week ago:
I don’t think that’s a great model for the maker who is clearly trying to SELL things.
Seems that might be a competing idea…
- Comment on It's 2025, And We're Getting A Brand New 8-Bit Game Console 1 week ago:
Is this…sarcasm?
- Comment on AI-powered consulting startups to watch into 2026 1 week ago:
getfucked.ai is super cool…I heard. All the best people are saying it.
- Comment on It's 2025, And We're Getting A Brand New 8-Bit Game Console 1 week ago:
Evercade distributes digitally.
- Comment on It's 2025, And We're Getting A Brand New 8-Bit Game Console 1 week ago:
It’s cool as a fun project, but I don’t see how this could possibly be commercially viable, especially with cartridges. The need for physical distribution alone is already a huge money burden on both the producer and the consumer.
- Comment on Authors Guild Asks Supreme Court to Hold Internet Providers Accountable for Copyright Theft 1 week ago:
Bit of a stretch the way this is angled…
- Comment on What are some good memoirs or autobiography about someone who had a rough childhood, especially victims of child abuse/neglect? 2 weeks ago:
Miyazaki maybe?
- Comment on when are the upcoming political elections held in america? 2 weeks ago:
Nov 4th (a few days ago). Democrats won everything.
- Comment on NVIDIA's H100 GPU Takes Data Centers to Space 2 weeks ago:
This will be another massive fucking failure in the AI space.
Instead of making the dumbass, inefficient stacks they’ve built all this crap on MORE efficient, they’re just like, “Space is cold…sounds good.”
This planet is doomed.
- Comment on OpenAI signs $38 billion compute deal with Amazon, partnering with cloud leader for first time 2 weeks ago:
These MFers are in SO MUCH TROUBLE it is absolutely insane. There is no way out of this by way of revenue, and this shit should be ILLEGAL. Idiots and morons have taken over the government, and are going to collapse the stock market because OpenAI can’t fucking stop making promises.
When we kick Trump out, these assholes should be put in jail.
- Comment on Is re-visiting a place of trauma a good idea? Have anyone done it? 2 weeks ago:
You should probably ask a therapist about this because the level of trauma is very subjective.