SuspciousCarrot78
@SuspciousCarrot78@lemmy.world
- Comment on How Much Do LLMs Hallucinate in Document Q&A Scenarios? A 172-Billion-Token Study Across Temperatures, Context Lengths, and Hardware Platforms [TLDR: 25%] 3 days ago:
Point 1 - no. LLM outputs are not always hallucinations (generally speaking - some are worse than others) but where they might veer off into fantasy, I’ve reinforced with programming. Think of it like giving your 8yr old a calculator instead of expecting them to work out 7532x565 in their head. And a dictionary. And encyclopedia. And Cliff’s notes. And watch. And compass. And a … you get the idea.
The role of the footer is to show you which tool it used (its own internal priors, what you taught it, calculator etc) and what ratio the answer is based on those. Those are router assigned. That’s just one part of it though.
Point 2 is a mis-read. These aren’t instructions or system prompts telling the model “don’t make things up” - that works about as well as telling a fat kid not to eat cake.
Instead, what happens is the deterministic elements fire first. The model gets the answer, and then builds context on top of it. That’s not guardrails on AI, that’s just not using AI where AI is the wrong tool. Whether that’s “real AI” is a philosophy question - what I do know and can prove is that it leads to fewer wrong answers.
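A minimal sketch of that “deterministic first” pattern, with the calculator example from above. All the names here are my own illustration, not the commenter’s actual stack:

```python
# Deterministic-first pipeline sketch: a trusted tool computes the answer,
# and the LLM only wraps it in prose. Hypothetical names throughout.

def calculator(expression: str) -> str:
    # Deterministic element: plain arithmetic, no model involved.
    # (Toy only - eval with stripped builtins, not production-safe.)
    return str(eval(expression, {"__builtins__": {}}))

def answer(question: str, expression: str) -> dict:
    result = calculator(expression)  # fires first, before any generation
    prompt = (
        f"The verified answer to {expression} is {result}. "
        f"Explain it briefly in reply to: {question}"
    )
    # llm(prompt) would go here; the model builds context on the answer,
    # it never produces the number itself.
    return {"answer": result, "source": "calculator", "prompt": prompt}

print(answer("What is 7532 x 565?", "7532*565"))
```

The footer ratio described above would then just be bookkeeping: record which tool produced each piece of the final answer.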
- Comment on GitHub hits CTRL-Z, decides it will train its AI with user data after all 4 days ago:
Thank you!
Done.
Also, go Codeberg.
- Comment on Microsoft blocks registry trick that unlocked performance-boosting native NVMe driver on Windows 11 — workarounds still exist to enable support, however 6 days ago:
Heh. Mass.gravel as the github repo always makes me do a double take.
- Comment on Microsoft blocks registry trick that unlocked performance-boosting native NVMe driver on Windows 11 — workarounds still exist to enable support, however 6 days ago:
I don’t understand the M$ endgame with Win 11.
Like, it would be very easy to paint this as intentional… but as Hanlon’s razor sez, “never attribute to malice that which is adequately explained by stupidity”.
Putting aside low hanging fruit (end stage capitalism, ai bad etc)…y u do this, Microsoft? You have good people there, right? Top. Men. Right?
I’d love to read something on this topic from a M$ insider / ex-pat. I’m trying to understand why M$ is doing the equivalent of Sideshow Bob stepping on garden rakes.
What’s up over there?
- Comment on Wikipedia has banned AI-generated text, with two exceptions 6 days ago:
AI already trains on Wikipedia.
- Comment on Did we win? 1 week ago:
- Make your own apps. “Fine, I’ll do it myself” BDE
- Comment on Did we win? 1 week ago:
- Fuck that, keep an old phone and don’t update it
- When it breaks, buy a Linux phone. Or a dumbphone.
- Only way to win? Don’t play their game
- Comment on Did we win? 1 week ago:
Narrator: No, they did not win.
- Comment on Yann LeCun just raised $1bn to prove the AI industry has got it wrong 1 week ago:
Yes, it has.
web.archive.org/…/yann-lecun-ami-labs-world-model…
Here - have the raw text, copy pasted.
Yann LeCun just raised $1bn to prove the AI industry has got it wrong
By Ana-Maria Stanciuc
The Turing Award winner left Meta four months ago convinced that large language models are a dead end. Today he announced $1.03 billion in seed funding, Europe’s largest ever, to build something different.
In November 2025, Yann LeCun walked into Mark Zuckerberg’s office and told his boss he was leaving. He had spent twelve years building Meta’s AI research operation into one of the most respected in the world, and had become one of the industry’s most vocal critics of the technology dominating it.
Large language models, he argued, were a statistical illusion. Impressive, yes. Intelligent, no. He thought he could build something better, and he thought he could do it faster outside Meta than inside it. On Tuesday, investors put $1.03 billion behind that conviction.
Advanced Machine Intelligence Labs, AMI, pronounced like the French word for “friend”, announced its seed round on 10 March 2026, just four months after its founding. The round values the company at $3.5 billion on a pre-money basis and is believed to be the largest seed round ever raised by a European startup.
Five firms co-led it: Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions, the vehicle through which Amazon founder Jeff Bezos makes personal investments. Nvidia, Toyota, Samsung, and Singapore’s Temasek also participated, alongside French VC firm Daphni, South Korean investor SBVA, and a long list of prominent individuals including Tim and Rosemary Berners-Lee, venture capitalist Jim Breyer, entrepreneur Mark Cuban, and former Google chief executive Eric Schmidt.
LeCun initially sought around €500 million, according to a leaked pitch deck reported by Sifted. Demand exceeded that figure significantly. He ended up with €890 million, roughly $1.03 billion, and told journalists this week that interest had been high enough that AMI could be selective about which investors it accepted.
The company’s headquarters are in Paris, with additional offices planned in New York, Montreal, and Singapore. LeCun, who holds dual French-American citizenship and remains a professor of computer science at New York University, will serve as executive chairman. Day-to-day operations will be led by Alexandre LeBrun, a French entrepreneur who previously founded and ran Nabla, the medical AI startup, and who now becomes AMI’s chief executive.
The rest of the founding team is drawn almost entirely from Meta’s AI research organisation. Michael Rabbat, Meta’s former director of research science, joins as vice president of world models. Laurent Solly, Meta’s former vice president for Europe, becomes chief operating officer. Pascale Fung, a former senior director of AI research at Meta, takes the role of chief research and innovation officer. Saining Xie, previously at Google DeepMind, becomes chief science officer.
What exactly is AMI building? The short answer is world models, a category of AI system that LeCun has been arguing for, and working on, for years. The longer answer requires understanding why he thinks the industry has taken a wrong turn.
The case against LLMs
Large language models learn by predicting which word comes next in a sequence. They have been trained on vast quantities of human-generated text, and the results have been remarkable: ChatGPT, Claude, and Gemini have demonstrated an ability to generate fluent, plausible language across an enormous range of subjects. But LeCun has spent years arguing, loudly and repeatedly, that this approach has fundamental limits.
His alternative is JEPA: the Joint Embedding Predictive Architecture, a framework he first proposed in 2022. Rather than predicting the future state of the world in pixel-perfect or word-by-word detail, the approach that makes generative AI both powerful and prone to hallucination, JEPA learns abstract representations of how the world works, ignoring unpredictable surface detail. The idea is to build systems that understand physical reality the way humans and animals do: not through language, but through embodied experience.
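The core distinction above, predicting in an abstract embedding space rather than reconstructing raw detail, can be caricatured in a few lines. This is a deliberately tiny numpy stand-in for the idea, not the real JEPA architecture:

```python
# Toy illustration of the JEPA idea: predict the *embedding* of the
# target, not the target itself, so unpredictable surface detail can be
# ignored. All shapes/weights here are arbitrary stand-ins.

import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(8, 16))   # shared encoder: 16-dim input -> 8-dim latent
W_pred = rng.normal(size=(8, 8))   # predictor operating in latent space

def encode(x):
    return np.tanh(W_enc @ x)

def jepa_loss(x_context, x_target):
    # A generative model would be scored on reconstructing x_target
    # pixel by pixel (or word by word). Here the loss lives entirely
    # in the abstract representation space instead.
    z_context = encode(x_context)
    z_target = encode(x_target)      # (in practice: a stop-gradient/EMA encoder)
    z_predicted = W_pred @ z_context
    return float(np.mean((z_predicted - z_target) ** 2))

x_now = rng.normal(size=16)
x_next = x_now + 0.1 * rng.normal(size=16)   # noisy "next state"
print(jepa_loss(x_now, x_next))
```

The point of the caricature: nothing forces the encoder to preserve the noise term in `x_next`, which is exactly the freedom LeCun argues generative objectives lack.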
Within one to two years, LeCun told AFP, AMI plans to begin discussions with corporate partners. Within three to five years, he said, the goal is to produce “fairly universal intelligent systems” capable of being deployed across almost any domain requiring machine intelligence. He wants AMI, he added, to become “the main provider of intelligent systems.”
The timing and geography of the announcement are not coincidental. LeCun has been explicit about AMI’s positioning as a European, and specifically French, counter to the American and Chinese AI giants. “We are one of the few frontier AI labs that are neither Chinese nor American,” he has said. The choice of Paris as headquarters, and the involvement of French investors Cathay Innovation and Daphni, reflects that framing.
Whether that ambition is achievable remains genuinely open. AMI has no product, no revenue, and no near-term prospect of either. LeCun acknowledged to journalists this week that the company would spend its first year focused entirely on research and development. World models, by his own account, are a long-term scientific project, not the kind of AI startup that ships a product in three months and posts revenue in six.
What the $1.03 billion seed round demonstrates, for now, is that the investors backing it are willing to wait. LeCun has one of the most credible research records in AI, he shared the Turing Award in 2018 for work on convolutional neural networks that underpins most of modern machine vision, and his argument that LLMs have fundamental architectural limits has been consistent enough, and long enough, that dismissing it is no longer the safe assumption it once was. The question is whether being right about the problem is the same as being right about the solution.
- Comment on Why do people hate AI so much? 1 week ago:
Hope it helped.
- Comment on Why do people hate AI so much? 1 week ago:
I don’t think people hate AI per se - they hate big tech, and what big tech is doing with it. That’s a legitimate gripe, but it’s not the same thing as the technology being bad.
AI used well can be genuinely useful. I’ve dropped a couple of examples in other threads I won’t rehash here, but the short version is: there are real world uses for this tech (world modelling, medicine, robotics).
Hell, I built a clinical notes pipeline that takes the tedium of charting from 15-20 mins down to about 3, with a policy gate that rejects LLM output before it ever reaches me if it fails criteria I defined. None of it looks anything like the slop-firehose corporate rollout most people are reacting to.
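The “policy gate” idea can be sketched like this. The criteria, section names, and thresholds are my own illustration of the pattern, not the commenter’s actual rules:

```python
# Policy gate sketch: LLM output is checked against hard criteria and
# rejected before a human ever sees it. Criteria here are illustrative.

import re

REQUIRED_SECTIONS = ("Subjective", "Objective", "Assessment", "Plan")

def gate(note: str) -> tuple[bool, list[str]]:
    """Return (accepted, reasons). Rejects notes that fail any rule."""
    reasons = []
    for section in REQUIRED_SECTIONS:
        if section not in note:
            reasons.append(f"missing section: {section}")
    if re.search(r"\b(as an AI|I cannot)\b", note, re.IGNORECASE):
        reasons.append("model boilerplate leaked into note")
    if len(note) < 40:
        reasons.append("note too short to be a usable chart entry")
    return (not reasons, reasons)

good = "Subjective: ... Objective: ... Assessment: ... Plan: follow up in 2 weeks."
bad = "As an AI, I cannot chart this."
print(gate(good))  # accepted
print(gate(bad))   # rejected, with reasons
```

The design point is that rejection is deterministic and auditable: the model never gets to argue its way past a failed check.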
lemmy.world/post/42920187/22058968
lemmy.world/post/44188294/22635793
Worth noting too: taking a black-and-white position on anything is just less cognitively expensive than arriving at a nuanced one. That’s not a character flaw, that’s called “being human”. But that doesn’t mean the nuanced position is wrong.
PS: The electricity/water data centre stuff is maybe more complicated than the headline takes suggest. This might be worth actually reading before treating it as settled.
blog.andymasley.com/…/a-cheat-sheet-for-conversat…
YMMV and ICBW
- Comment on Spotify playing ads for paid subscribers 1 week ago:
Thank you for this!
- Comment on I am an American. I used to be proud of my country. Now it feels like a turd circling the drain. Is there anything going on behind the scene that America is actually doing good in? 1 week ago:
I don’t know. You (as an American) are in a better place to judge that than I.
What I do know is this: people are people. And for every rotten son of a bitch, there’s someone else, quietly, moving heaven and earth to do good - both in big ways and small. If we’re going to tilt at windmills, we may as well tilt at windmills together.
- Comment on Yann LeCun just raised $1bn to prove the AI industry has got it wrong 1 week ago:
- Comment on Tech hobbyist makes shoulder-mounted guided missile prototype with $96 in parts and a 3D printer — DIY MANPADS includes Wi-Fi guidance, ballistics calculations, optional camera for tracking 1 week ago:
- Comment on Why are people so rude on Reddit compared to the Fediverse? 1 week ago:
^ this
Reddit has converged on that as the recipe for success. There’s even a book on it
jacobdesforges.com/you-should-quit-reddit-publish…
Karma, like buttons, up-arrows etc are all the same slot machine. I stand by my “yeet into garbage pile of history” comment, and so do (some) of the people responsible for it.
- Comment on Why are people so rude on Reddit compared to the Fediverse? 1 week ago:
Aww. Can we be luxury gay space communists then?
- Comment on The 49MB Web Page 1 week ago:
taps temple
Ads won’t load if browser literally can’t load em.
- Comment on Why are people so rude on Reddit compared to the Fediverse? 1 week ago:
Oh, I had my run-in with anti-AI folks already. Probably we’re talking about the same “lobster”, non?
Wrt insane troll logic: I have an old friend who made a good distinction. “The difference between a glutton for punishment and a gourmand for punishment is that the latter can eat garbage and transmute it into energy.”
I dunno if he was right, but it does remind me to go outside, touch grass and wrestle with my kids from time to time.
- Comment on Why are people so rude on Reddit compared to the Fediverse? 1 week ago:
I mean…that’s just Tuesday on the internet in 2026. Sadly.
“Wrong noises” is a good framing. But it can be good too, in enforcing careful posting discipline (ala “belt and suspenders” - cross your t’s and dot your i’s).
It’s sad that we have to assume defensive posture as s.o.p…but yeah, here we are.
To say it in the language of my people: shit’s fucked.
- Comment on Why are people so rude on Reddit compared to the Fediverse? 1 week ago:
Theory:
People on Lemmy self select to be here, usually as a direct backlash to prevailing Reddit culture, management or behaviour.
Reddit is mainstream, discoverable, friction free for the masses.
OTOH, there is a small (albeit deliberate) friction in engagement here, that hearkens back to USENET days. It’s analog, messy and human. There are some bots here (to be sure) so I don’t know how long the Golden Age of Lemmy will exist, but clearly this space was designed by someone who knows the old magics. It shows.
Therefore, if you posit an inverse correlation between “is an utter cunt” and “wants to interact on niche social media forum called Lemmy”, I think you’d have a safe bet.
It’s not a hard gate by any means, but it gambles (correctly) on friction keeping the biggest trolls away. The ROI for being a cunt is demonstrably higher on Reddit, Tiktok etc
Result: Lemmy is a nicer place to visit. For now.
Also, yes: I am Australian. Does it show? Cunt cunt cunty cunt cuntington III.
- Comment on YSK What are you eating 1 week ago:
Being a fan of the Pareto principle, could a lot of this be summarized with
“Eat food. Not too much. Mostly plants.” - Pollan
?
- Comment on The 49MB Web Page 1 week ago:
Better?
- Comment on Goodbye Google - I self-host everything now on 4 tiny PCs in a 3D printed rack (CaptainRedsLab) 2 weeks ago:
stern nods
We just became blood brothers. R’amen.
- Comment on Goodbye Google - I self-host everything now on 4 tiny PCs in a 3D printed rack (CaptainRedsLab) 2 weeks ago:
2 x 2GB. Bargain, really.
- Comment on Goodbye Google - I self-host everything now on 4 tiny PCs in a 3D printed rack (CaptainRedsLab) 2 weeks ago:
Oh I am right there with you, beratna
- Comment on Goodbye Google - I self-host everything now on 4 tiny PCs in a 3D printed rack (CaptainRedsLab) 2 weeks ago:
“Why did you climb Mt Everest?”
"Because it was there"
- George Mallory
But also
“Simplicity is the ultimate sophistication” - some dude named after a Ninja turtle
- Comment on Goodbye Google - I self-host everything now on 4 tiny PCs in a 3D printed rack (CaptainRedsLab) 2 weeks ago:
Of course. I posted this for inspiration, because he walks it through step by step. As for crazy spec…well…you tell me
• 12U KWS Rack V2 (3D printed — designed by Ilan Kushnir) • Lenovo ThinkCentre M720q Cluster (3x nodes running Proxmox) • Lenovo ThinkCentre M920q running pfSense (router/firewall) • Terramaster D5-310 HDD Enclosure (12TB + 18TB + NVMe SSDs) • 10-Port 2.5G/10G Ethernet Switch • Google Coral USB Accelerator (AI inference)
Probably only the 4th one down is the exxy one…and someone should tell him the Coral USB accelerator is for Vision not inference (IIRC).
- Comment on Goodbye Google - I self-host everything now on 4 tiny PCs in a 3D printed rack (CaptainRedsLab) 2 weeks ago:
Don’t let the perfect be the enemy of the good. Also, I agree with phant. It’s punk as fuck.
- Goodbye Google - I self-host everything now on 4 tiny PCs in a 3D printed rack (CaptainRedsLab)www.youtube.com ↗Submitted 2 weeks ago to selfhosted@lemmy.world | 36 comments