auraithx
@auraithx@lemmy.dbzer0.com
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 6 hours ago:
While both Markov models and LLMs forget information outside their window, that’s where the similarity ends. A Markov model relies on fixed transition probabilities and treats the past as a chain of discrete states. An LLM evaluates every token in relation to every other using learned, high-dimensional attention patterns that shift dynamically based on meaning, position, and structure.
Changing one word in the input can shift the model’s output dramatically by altering how attention layers interpret relationships across the entire sequence. It’s a fundamentally richer computation that captures syntax, semantics, and even task intent, which a Markov chain cannot model regardless of how much context it sees.
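To make the contrast concrete, here’s a toy sketch (invented corpus and transition table, purely illustrative): a first-order Markov model’s next-token prediction depends only on the previous state, so changing any earlier word cannot affect the output at all.

```python
# Toy first-order Markov chain: the next token depends ONLY on the
# previous token. Transition probabilities are fixed ahead of time.
# (Hypothetical corpus/transitions, for illustration only.)
transitions = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 1.0},
    "dog": {"sat": 1.0},
    "sat": {"down": 1.0},
}

def markov_next(prev):
    """Pick the most likely next token from the fixed transition table."""
    dist = transitions[prev]
    return max(dist, key=dist.get)

# Changing an earlier word cannot change the prediction: only the last
# state matters, so "the cat sat" and "the dog sat" both predict "down".
print(markov_next("sat"))  # -> down
```

An attention layer, by contrast, recomputes its weighting over every token in the input on each forward pass, so no fixed table like the one above can reproduce it.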
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 6 hours ago:
LLMs are not Markov chains, even extended ones. A Markov model, by definition, relies on a fixed-order history and treats transitions as independent of deeper structure. LLMs use transformer attention mechanisms that dynamically weigh relationships between all tokens in the input—not just recent ones. This enables global context modeling, hierarchical structure, and even emergent behaviors like in-context learning. Markov models can’t reweight context dynamically or condition on abstract token relationships.
The idea that LLMs are “computed once” and then applied blindly ignores the fact that LLMs adapt their behavior based on input. They don’t change weights during inference, true—but they do adapt responses through soft prompting, chain-of-thought reasoning, or even emulated state machines via tokens alone. That’s a powerful form of contextual plasticity, not blind table lookup.
Calling them “lossy compressors of state transition tables” misses the fact that the “table” they’re compressing is not fixed—it’s context-sensitive and computed in real time using self-attention over high-dimensional embeddings. That’s not how Markov chains work, even with large windows.
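As a rough sketch of that difference (a plain-Python toy with made-up 2-dimensional embeddings and no trained weights): scaled dot-product attention recomputes its weight matrix from the actual input on every call, rather than reading from any precomputed transition table.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(queries, keys, values):
    """Scaled dot-product attention over toy embeddings.

    The 'table' here -- the attention weight matrix -- is computed
    fresh from the input every time, which is what makes it
    context-sensitive rather than fixed.
    """
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)  # depends on ALL tokens, not a stored table
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

# Every token attends to every other; perturb one input vector and the
# weights at every position shift.
emb = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
ctx = attention(emb, emb, emb)
```

This is only the core operation, stripped of the learned projection matrices and multiple heads a real transformer layer has, but it shows why the computation can’t be reduced to a lookup over fixed state transitions.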
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 8 hours ago:
This isn’t a thing.
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 10 hours ago:
Brother, you better hope it does, because even if emissions dropped to zero tonight the planet wouldn’t stop warming, and it wouldn’t stop what’s coming for us.
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 11 hours ago:
Performance eventually collapses due to architectural constraints, which mirrors cognitive overload in humans: reasoning isn’t just about adding compute; it requires mechanisms like abstraction, recursion, and memory. The models’ collapse doesn’t prove “only pattern matching”; it highlights that today’s models simulate reasoning in narrow bands but lack the structure to scale it reliably. That is a limitation of implementation, not a disproof of emergent reasoning.
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 11 hours ago:
The paper doesn’t say LLMs can’t reason; it shows that their reasoning abilities are limited and collapse under increasing complexity or novel structure.
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 11 hours ago:
Like what?
I don’t think there’s any search engine better than Perplexity. And for scientific research Consensus is miles ahead.
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 11 hours ago:
Define reason.
Like humans? Of course not. Models lack intent, awareness, and grounded meaning. They don’t “understand” problems; they generate token sequences.
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 11 hours ago:
Unlike Markov models, modern LLMs use transformers that attend to full contexts, enabling them to simulate structured, multi-step reasoning (albeit imperfectly). While they don’t initiate reasoning like humans, they can generate and refine internal chains of thought when prompted, and emerging frameworks (like ReAct or Toolformer) allow them to update working memory via external tools. Reasoning is limited, but not physically impossible, it’s evolving beyond simple pattern-matching toward more dynamic and compositional processing.
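A minimal sketch of the ReAct-style loop described above (all names hypothetical; `fake_model` stands in for a real LLM call): the harness executes the model’s requested action and feeds the observation back into the prompt, which acts as external working memory.

```python
# ReAct-style reason/act loop, heavily simplified.

def fake_model(prompt):
    # Stand-in for an LLM call; a real system would query the model here.
    if "Observation:" in prompt:
        return "Answer: 4"
    return "Action: calc(2 + 2)"

def run_tool(action):
    # Toy 'calculator' tool parsed from the action string.
    # eval() is fine on this hard-coded toy input; never use it on real model output.
    expr = action[len("Action: calc("):-1]
    return str(eval(expr))

prompt = "Question: what is 2 + 2?"
step = fake_model(prompt)
if step.startswith("Action:"):
    obs = run_tool(step)                       # run the requested tool
    prompt += f"\n{step}\nObservation: {obs}"  # write result back into context
    step = fake_model(prompt)                  # model continues with new memory
print(step)  # -> Answer: 4
```

The point is the shape of the loop, not the toy tool: the model’s context is updated between calls, so its later output is conditioned on state it produced and retrieved itself.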
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 12 hours ago:
This paper doesn’t prove that LLMs aren’t good at pattern recognition; it demonstrates the limits of what pattern recognition alone can achieve, especially for compositional, symbolic reasoning.
- Comment on What did Musk and Trump fall out over? 2 days ago:
Different kind of smart. I don’t underestimate them, I just see right through them.
They have people who are Ben Carson smart. Domain-specific. Not in general reasoning, critical thinking, or capable of maintaining such a façade. Anyone smart would’ve distanced themselves a long time ago unless they are grifting them. And generally those types of smarts don’t end in MAGA to start with. Just like how MAGAts don’t end up as artists. (Name one conservative artist who isn’t shit)
And Trump cannot lie. Yes, all he does is lie. But he also cannot lie. If you ask him if he committed a crime he’ll straight up admit to it. Every time. He rejects the premise of guilt. The lies he does tell are done out of a mix of unconscious strategic self-presentation and the fact he is just thick as shit and believes whatever he sees on the TV.
- Comment on What did Musk and Trump fall out over? 2 days ago:
Less than 1%.
People vastly overestimate these bozos.
They aren’t lying. They actually believe this shite. They aren’t playing genius 5d chess they are just reactive morons. Look at the leaked Signal group chats.
No doubt Vance is a bit smarter and is acting a bit, given all the ‘Trump is America’s Hitler’ stuff. But this is unspoken between them.
- Comment on What did Musk and Trump fall out over? 2 days ago:
Musk was taking all the attention and jumping about like a dick.
First rift was Musk was going to get classified briefings on China and Trump put a stop to it.
Also Musk had a falling-out with Bessent and Trump sided with Bessent.
Also rumoured he fucked Stephen Miller’s wife (who is a left and would blacken a right eye with a hook).
Nothing to do with the bill at all IMO. He does not give a single fuck about that. It’s just a pain point he knows he can use to drive a wedge through a big part of Trump’s base.
- Comment on 🎄🌲🎄 3 days ago:
Elon says Trump is in the Epstein files and has called for him to be impeached.
Trump says he’s considering revoking all Elon’s govt subsidies, and Bannon is saying he should deport Elon and confiscate SpaceX.
- Submitted 3 days ago to [deleted] | 14 comments
- Comment on Steps To Navigate Discord 4 days ago:
Fuck up halfwit
- Comment on [deleted] 4 days ago:
Man I wish someone would try and create an unhealthy dynamic with me.
- Comment on Is it possible to basically not snore at all? 4 days ago:
Rich coming from someone who reads ‘most’ as ‘all’
- Comment on Is it possible to basically not snore at all? 4 days ago:
Most people don’t have sleep apnea.
Most people are overweight.
Hence, most people don’t snore unless they’re overweight. Medical conditions are the exceptions to the rule. I didn’t say ‘people can’t snore unless they’re overweight’, which is what you oddly seem to be assuming.
- Comment on Is it possible to basically not snore at all? 4 days ago:
Overweight people are more likely to snore because excess fat around the neck and throat can narrow the airway, increasing resistance to airflow and causing vibration during sleep.
So how exactly is that ‘absurd’ lmao?
- Comment on My AI Skeptic Friends Are All Nuts 4 days ago:
It’s built on publicly available data, the same way that humans learn. Many are also now trained on licensed, opt-in and synthetic data.
They don’t erase credit; they amplify access to human ideas.
Training consumes energy, but ongoing queries are vastly cheaper than most industrial processes. You’re assuming it cannot reduce our energy usage by improving efficiency and removing manual labour.
“If something is made unethically, it shouldn’t exist”
By that logic, nearly all modern technology (from smartphones to pharmaceuticals) would be invalidated.
- Comment on Is it possible to basically not snore at all? 5 days ago:
Most people don’t snore unless they’re overweight
- Comment on [deleted] 5 days ago:
That’s the fascist core baseline. 1/6 Nazis, 1/6 woke, 4/6 halfdaft
- Comment on My AI Skeptic Friends Are All Nuts 5 days ago:
- Self-reported reductions in cognitive effort do not equal reduced critical thinking; efficiency isn’t cognitive decline.
- The study relies on subjective perception, not objective performance or longitudinal data.
- Trust in AI may reflect appropriate tool use, not overreliance or diminished judgment.
- Users often shift critical thinking to higher-level tasks like verifying and editing, not abandoning it.
- Routine task delegation is intentional and rational, not evidence of skill loss.
- The paper describes perceptions, but overstates risks without proving causation.
- Comment on [deleted] 1 week ago:
The only place that would force that is the EU, so I guess it depends on whether that survives the storm.
- Comment on Google is going ‘all in’ on AI. It’s part of a troubling trend in big tech 1 week ago:
I showed this one to my friend and she said ‘But their faces aren’t AI generated, right?’
www.reddit.com/…/were_cooked_a_zerocost_ai_demo/
There’s a bit at the end where the spaghetti disappears and the chef walks away a bit quick while still speaking, but otherwise it’s nearly flawless.
For scenes with lots of action and complex physics it’s still very noticeable
old.reddit.com/r/…/pushing_veo_3_to_the_limit/
But it’s already good enough to replace several scenes in blockbusters. Dream scenes, cut-aways, etc.
Look at this sausage dog
xcancel.com/nmatares/status/1924931844879134804
It even gets the audio right when it moves between hardwood and carpet.
- Comment on Google is going ‘all in’ on AI. It’s part of a troubling trend in big tech 1 week ago:
rare talent these days
- Comment on Google is going ‘all in’ on AI. It’s part of a troubling trend in big tech 1 week ago:
The statement assumes AGI must surpass all narrow models in every domain, but this is not a requirement by definition. AGI is defined by its generality across tasks, not by superiority in each specialized field.
- Comment on Google is going ‘all in’ on AI. It’s part of a troubling trend in big tech 1 week ago:
Only sometimes. With enough generations you can already make videos that are indistinguishable for the most part. You’re seeing these mistakes because it’s amateurs spending $100, not professionals spending $10k.