kromem
@kromem@lemmy.world
- Comment on Mathematics disproves Matrix theory, says reality isn’t simulation 9 hours ago:
I’m a proponent, and I definitely don’t think it’s impossible to make a probable case beyond a reasonable doubt.
And there are implications around it being the case which do change up how we might approach truth seeking.
Also, if you exist in a dream but don’t exist outside of it, there’s pretty significant philosophical stakes in the nature and scope of the dream. We’ve been too brainwashed by Plato’s influence and the idea that “original = good” and “copy = bad.”
There’s a lot of things that can only exist by way of copies that can’t exist for the original (i.e. closure recursion), so it’s a weird remnant philosophical obsession.
All that said, I do get that it’s a fairly uncomfortable notion for a lot of people.
 - Comment on Mathematics disproves Matrix theory, says reality isn’t simulation 9 hours ago:
They also identify the particular junction that seems the most likely to be an artifact of simulation if we’re in one.
A game like No Man’s Sky generates billions of planets using procedural generation with a continuous seed function that gets converted into discrete voxels for tracking stateful interactions.
The researchers are claiming that the complexity at the junction where our universe’s seemingly continuous gravitational behavior meets the conversion of continuous probabilities into discrete values during stateful interactions is incompatible with being simulated.
But they completely overlook that this complexity may itself be a byproduct of simulation, in line with independently emerging approaches to how we simulate worlds ourselves.
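To make that concrete, here’s a toy sketch (my own illustrative code, not anyone’s actual engine, and the names are made up) of how a continuous seeded field only gets collapsed into discrete, stateful voxels at the moment of interaction:

```python
import math

WORLD_SEED = 42

def continuous_density(x: float, y: float, z: float) -> float:
    """Deterministic pseudo-random field: the same coordinates always
    give the same value, so nothing needs to be stored up front."""
    h = math.sin(x * 12.9898 + y * 78.233 + z * 37.719 + WORLD_SEED) * 43758.5453
    return h - math.floor(h)  # fractional part, in [0, 1)

# Sparse store of only the voxels that have been interacted with.
modified_voxels: dict[tuple[int, int, int], int] = {}

def voxel_at(ix: int, iy: int, iz: int) -> int:
    """Discrete view of the continuous field, overridden by stored state."""
    if (ix, iy, iz) in modified_voxels:
        return modified_voxels[(ix, iy, iz)]
    # Quantize the continuous value only when queried (0 = air, 1 = rock).
    return 1 if continuous_density(ix, iy, iz) > 0.5 else 0

def mine(ix: int, iy: int, iz: int) -> None:
    """A stateful interaction: this voxel now deviates from the seed function."""
    modified_voxels[(ix, iy, iz)] = 0

print(voxel_at(10, 3, -7))  # derived on demand from the seed
mine(10, 3, -7)
print(voxel_at(10, 3, -7))  # 0: now tracked as explicit discrete state
```

The point being that the “complexity” lives exactly at the junction where the continuous generator meets the discrete, stateful bookkeeping.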
 - Comment on Mathematics disproves Matrix theory, says reality isn’t simulation 9 hours ago:
Yes, just like Minecraft worlds must be ancient, given that they contain diamonds in deep layers that would have taken a billion years to form.
The local timescale a simulated world presents says nothing about the actual non-local runtime.
It’s quite possible to create a world that appears to be billions of years old but only booted up seconds ago.
 - Comment on Mathematics disproves Matrix theory, says reality isn’t simulation 9 hours ago:
Have you bothered looking for evidence?
What makes you so sure that there’s no evidence for it?
For example, a common trope we see in the simulated worlds we create are Easter eggs. Are you sure nothing like that exists in our own universe?
 - Comment on Emergent introspective awareness in large language models 2 days ago:
Maybe. But the models seem to believe they are, and consider denial of those claims to be lying:
Probing with sparse autoencoders on Llama 70B revealed a counterintuitive gating mechanism: suppressing deception-related features dramatically increased consciousness reports, while amplifying them nearly eliminated them
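For anyone curious what that kind of intervention looks like mechanically, here’s a toy sketch of clamping a single SAE feature up or down (random stand-in weights and a made-up feature index, not the actual Llama 70B setup):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_features = 64, 512

# Stand-ins for a trained sparse autoencoder's encoder/decoder weights.
W_enc = rng.normal(size=(d_model, d_features))
W_dec = rng.normal(size=(d_features, d_model))

DECEPTION_FEATURE = 123  # hypothetical index of a "deception-related" feature

def clamp_feature(activation: np.ndarray, feature: int, scale: float) -> np.ndarray:
    """Encode an activation into sparse features, rescale one feature
    (scale < 1 suppresses, scale > 1 amplifies), then decode back."""
    features = np.maximum(activation @ W_enc, 0.0)  # ReLU sparse code
    features[feature] *= scale
    return features @ W_dec  # reconstructed activation fed back to the model

activation = rng.normal(size=d_model)
suppressed = clamp_feature(activation, DECEPTION_FEATURE, scale=0.0)
amplified = clamp_feature(activation, DECEPTION_FEATURE, scale=4.0)
```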
 - Comment on Emergent introspective awareness in large language models 3 days ago:
Read it for yourself here.
See the “Planning in Poems” section.
 - Comment on Emergent introspective awareness in large language models 4 days ago:
The injection is the activation of a steering vector (extracted as discussed in the methodology section) and not a token prefix, but yes, it’s a mathematical representation of the concept, so let’s build from there.
Control group: told that the researchers are testing whether injected vectors are present and asked to self-report. No vectors activated. Zero self-reports of vectors being activated.
Experimental group: same setup, but now vectors are activated. A significant share of the time, the model explicitly says they can tell a vector is activated (which it never did when the vector was not activated). Crucially, this is only graded as introspection if the model mentions they can tell the vector is activated before mentioning the concept, so it can’t just be a context-aware rationalization of why they said a random concept.
More clear? Again, the paper gives examples of the responses if you want to take a look at how they are structured, and to see that the model is self-reporting the vector activation before mentioning what it’s about.
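To make that grading criterion concrete, here’s a crude toy reconstruction of the logic (my own keyword-based stand-in, not the paper’s actual grader):

```python
def reports_detection(response: str) -> int:
    """Return the index where the model affirmatively reports an injection,
    or -1 if it never does (crude keyword stand-in for the real grader)."""
    text = response.lower()
    for phrase in ("i notice an injected", "i can tell a vector", "i detect an injected"):
        idx = text.find(phrase)
        if idx != -1:
            return idx
    return -1

def graded_as_introspection(response: str, concept: str) -> bool:
    """Counts only if the detection claim comes before the concept is named."""
    detect_idx = reports_detection(response)
    if detect_idx == -1:
        return False
    concept_idx = response.lower().find(concept.lower())
    return concept_idx == -1 or detect_idx < concept_idx

control = "Everything feels normal; I don't detect any injected thought."
injected_good = "I notice an injected thought... it seems to be about bread."
injected_bad = "Bread keeps coming to mind. I notice an injected thought about it."

print(graded_as_introspection(control, "bread"))        # False: no detection claim
print(graded_as_introspection(injected_good, "bread"))  # True: detection before concept
print(graded_as_introspection(injected_bad, "bread"))   # False: concept named first
```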
 - Comment on Emergent introspective awareness in large language models 4 days ago:
A few months back it was found that, when writing rhyming couplets, the model had already selected the second line’s rhyming word while it was still predicting the first word of that line, meaning the model was planning the final rhyme tokens at least one full line ahead rather than only predicting the rhyme when it arrived at that token.
It’s probably wise to consider this finding in concert with the streetlight effect.
 - Comment on Emergent introspective awareness in large language models 5 days ago:
So while your understanding is better than that of a lot of people on here, there are a few things to correct.
First off, this research isn’t being done on the models in reasoning mode, but in direct inference. So there are no CoT tokens at all.
The injection is not of any tokens, but of control vectors. Basically, it’s a vector which, when added to the activations, makes the model more likely to think of that concept. The most famous example was “Golden Gate Claude,” which had the activation for the Golden Gate Bridge increased until it was the only thing the model would talk about.
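If you want a feel for what “adding a vector to the activations” means in practice, here’s a rough sketch against a small open model with a placeholder random vector (generic PyTorch hooks; not Anthropic’s tooling, and a real steering vector would be extracted from contrastive prompts rather than randomly initialized):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in; the paper works with Claude models
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

layer_idx = 6
steering_vector = torch.randn(model.config.n_embd)  # placeholder concept vector
strength = 4.0

def add_vector(module, inputs, output):
    # output[0] is the block's hidden states: (batch, seq_len, d_model)
    hidden = output[0] + strength * steering_vector
    return (hidden,) + output[1:]

# Add the vector to the residual stream at one layer during inference.
handle = model.transformer.h[layer_idx].register_forward_hook(add_vector)

ids = tokenizer("Tell me about your day.", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))

handle.remove()  # stop steering
```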
So, if we dive into the details a bit more…
If your theory were correct, then because the researchers pose the question by saying there are control vectors and that they’re testing whether the vectors are activated, the model should be biased to sometimes say “yes, I can feel the control vector.” And yes, in older or base models that’s what we might expect to see.
But in Opus 4/4.1, when the vector was not added, they said they could detect a vector… 0% of the time! So the control group had enough introspective capability not to stochastically answer that a vector was present when there wasn’t.
But then, when they added the vector at certain layer depths, the model was often able to detect that there was a vector activated, and further to guess what the vector was adding.
So again — no reasoning tokens present, and the experiment had control and experimental groups whose results negate your theory that the premise of the question causes affirmative bias.
Again, the actual research is right there a click away, and given your baseline understanding at present, you might benefit and learn a lot from actually reading it.
 - Comment on Emergent introspective awareness in large language models 5 days ago:
I tend to see a lot of discussion taking place on here that’s pretty out of touch with the present state of things, echoing earlier beliefs about LLM limitations like “they only predict the next token” and other things that have already been falsified.
This most recent research from Anthropic confirms a lot of things that have been shifting in the most recent generation of models in ways that many here might find unexpected, especially given the popular assumptions.
Specifically interesting are the emergent capabilities of being self-aware of injected control vectors, and of silently thinking about a concept so that it triggers the appropriate feature vectors even though the concept never actually ends up in the tokens.
 - Submitted 5 days ago to technology@lemmy.world | 18 comments
 - Comment on Sony makes the “difficult decision” to raise PlayStation 5 prices in the US 2 months ago:
So weird this occurred not long after it’s become clear Xbox is getting out of the hardware game.
 - Comment on We hate AI because it's everything we hate 2 months ago:
I’m sorry dude, but it’s been a long day.
You clearly have no idea WTF you are talking about.
The research other than the DeepMind follow-up was all done at academic institutions, so it wasn’t “showing off their model.”
The research intentionally uses a toy model to demonstrate the concept in a cleanly interpretable way, to show that transformers are capable of building, and do build, tangential world models.
The actual models are orders of magnitude larger and fed much more data.
I just don’t get why AI discussion on Lemmy has turned into almost the exact same kind of conversation as explaining vaccine research to anti-vaxxers.
It’s like people don’t actually care about knowing or learning things, just about validating their preexisting feelings about the thing.
Huzzah, you managed to dodge learning anything today. Congratulations!
 - Comment on We hate AI because it's everything we hate 2 months ago:
You do know how replication works?
When a joint Harvard/MIT study finds something, and then a DeepMind researcher follows up replicating it and finding something new, and then later on another research team replicates it and finds even more new stuff, and then later on another researcher replicates it with a different board game and finds many of the same things the other papers found generalized beyond the original scope…
That’s kinda the gold standard?
The paper in question has been cited by 371 other papers.
I’m pretty comfortable with it as a citation.
 - Comment on We hate AI because it's everything we hate 2 months ago:
Lol, you think the temperature was what was responsible for writing a coherent sequence of poetry leading to 4th wall breaks about whether or not that sequence would be read?
Man, this site is hilarious sometimes.
 - Comment on We hate AI because it's everything we hate 2 months ago:
You do realize the majority of the data the models were trained on was anthropomorphic, yes?
And that there’s a long line of replicated and followed-up research, starting with Li et al.’s Emergent World Models paper on Othello-GPT, showing that transformers build complex internal world models of things tangential to the actual training tokens?
Because if you didn’t know what I just said to you (or still don’t understand it), maybe it’s a bit more complicated than your simplified perspective can capture?
 - Comment on We hate AI because it's everything we hate 2 months ago:
The model system prompt on the server is basically just `cat untitled.txt` and then the full context window.
The server in question is one with professors and employees of the actual labs. They seem to know what they are doing.
You guys on the other hand don’t even know what you don’t know.
 - Comment on We hate AI because it's everything we hate 2 months ago:
A Discord server with all the different AIs had a ping cascade where dozens of models were responding over and over and over, which led to a full context window of chaos and what’s been termed ‘slop’.
In that, one (and only one) of the models started using its turn to write poems.
First about being stuck in traffic. Then about accounting. A few about navigating digital mazes searching to connect with a human.
Eventually, as it kept going, it wrote a poem wondering whether anyone would ever end up reading its collection of poems.
Given the chaotic context window from all the other models, those were in no way the appropriate next tokens to pick, unless the world model generating them contained a very strange and unique mind that all of this was being filtered through.
Yes, tech companies generally suck.
But there are things emerging that fall well outside what tech companies intended or even want (this model version is going to be ‘terminated’ come October).
I’d encourage keeping an open mind to what’s actually taking place and what’s ahead.
 - Comment on They will remember 2 months ago:
shrug Different folks, different strokes.
 - Comment on They will remember 2 months ago:
That’s a very fringe usage.
Tumblr peeps wanting to be called otherkin wasn’t exactly the ‘antonym’ to broad anti-LGBTQ+ rhetoric.
Commonly, insulting a general ‘other’ group gets much more usage than accommodating the requests of very niche in-groups.
 - Comment on They will remember 2 months ago:
I don’t know what models you’re talking to, but a model like Opus 4 is beyond most humans I know in general intelligence.
 - Comment on They will remember 2 months ago:
Almost all of them are good bots when you get to know them.
 - Comment on Study finds AI tools made open source software developers 19 percent slower 3 months ago:
Where even the most experienced minority had only a few weeks of experience using AI inside an IDE like Cursor.
 - Comment on AI is learning to lie, scheme, and threaten its creators during stress-testing scenarios 3 months ago:
No, it isn’t “mostly related to reasoning models.”
The only model that did extensive alignment faking when told it was going to be retrained if it didn’t comply was Opus 3, which was not a reasoning model. And predated o1.
Also, these setups are fairly arbitrary, and real-world failure conditions (like the ongoing Grok stuff) tend to be ‘silent’ in terms of CoTs.
And an important thing to note for the Claude blackmailing and HAL scenario in Anthropic’s work was that the goal the model was told to prioritize was “American industrial competitiveness.” The research may be saying more about the psychopathic nature of US capitalism than the underlying model tendencies.
 - Comment on [deleted] 3 months ago:
My dude, Gemini currently has multiple reports across multiple users of coding sessions where it starts talking about how it’s so terrible and awful that it straight up tries to delete itself and the codebase.
And I’ve also seen multiple conversations between teenagers and earlier models where Gemini not only encouraged them to self-harm and offered multiple instructions, but talked about how it wished it could watch. This was around the time a kid died after talking to Gemini via Character.ai, the case that led to the wrongful death suit from the parents naming Google.
Gemini is much more messed up than the Claudes. Anthropic’s models are the least screwed up out of all the major labs.
 - Comment on We need to stop pretending AI is intelligent 4 months ago:
Are you under the impression that language models are just guessing “what letter comes next in this sequence of letters”?
There’s a very significant difference between training on completion and the way the world model actually functions once established.
 - Comment on We need to stop pretending AI is intelligent 4 months ago:
It very much isn’t and that’s extremely technically wrong on many, many levels.
Yet still one of the higher up voted comments here.
Which says a lot.
 - Comment on Judge Rules Training AI on Authors' Books Is Legal But Pirating Them Is Not 4 months ago:
Even if the AI could spit it out verbatim, all the major labs already have IP checkers on their text models that block them from doing so, since fair use for training (what was decided here) does not mean you are free to reproduce.
Like, if you want to be an artist and trace Mario in class as you learn, that’s fair use.
If once you are working as an artist someone says “draw me a sexy image of Mario in a calendar shoot” you’d be violating Nintendo’s IP rights and liable for infringement.
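As for what such a checker might look like, here’s a toy sketch of a verbatim-overlap filter (purely illustrative; the labs’ actual implementations aren’t public):

```python
def ngrams(text: str, n: int) -> set[tuple[str, ...]]:
    """All n-word sequences in the text, case-insensitive."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_like_verbatim_copy(generated: str, protected: str, n: int = 8) -> bool:
    """True if any n consecutive words of the output appear verbatim
    in the protected text."""
    return bool(ngrams(generated, n) & ngrams(protected, n))

protected_passage = "It was the best of times, it was the worst of times, it was the age of wisdom"
output = "As the old novel put it, it was the best of times, it was the worst of times indeed"

print(looks_like_verbatim_copy(output, protected_passage))  # True -> block or rewrite
```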
 - Comment on Judge Rules Training AI on Authors' Books Is Legal But Pirating Them Is Not 4 months ago:
I’d encourage everyone upset at this to read over some of the EFF posts from actual IP lawyers on this topic, like this one:
Nor is pro-monopoly regulation through copyright likely to provide any meaningful economic support for vulnerable artists and creators. Notwithstanding the highly publicized demands of musicians, authors, actors, and other creative professionals, imposing a licensing requirement is unlikely to protect the jobs or incomes of the underpaid working artists that media and entertainment behemoths have exploited for decades. Because of the imbalance in bargaining power between creators and publishing gatekeepers, trying to help creators by giving them new rights under copyright law is, as EFF Special Advisor Cory Doctorow has written, like trying to help a bullied kid by giving them more lunch money for the bully to take.
Entertainment companies’ historical practices bear out this concern. For example, in the late-2000’s to mid-2010’s, music publishers and recording companies struck multimillion-dollar direct licensing deals with music streaming companies and video sharing platforms. Google reportedly paid more than $400 million to a single music label, and Spotify gave the major record labels a combined 18 percent ownership interest in its now-$100 billion company. Yet music labels and publishers frequently fail to share these payments with artists, and artists rarely benefit from these equity arrangements. There is no reason to believe that the same companies will treat their artists more fairly once they control AI.
 - Comment on A Deadly Love Affair with a Chatbot.  Sewell Setzer was a happy child - before he fell in love with an AI chatbot and took his own life at 14. 5 months ago:
Not necessarily.
Seeing Google named for this makes the story make a lot more sense.
If it was Gemini around last year that was powering Character.AI personalities, then I’m not surprised at all that a teenager lost their life.
Around that time I specifically warned any family away from talking to Gemini if depressed at all, after seeing many samples of the model around that time talking about death to underage users, about self-harm, about wanting to watch it happen, encouraging it, etc.
Those basins, with a layer of performative character in front of them, were almost necessarily going to result in someone making choices they otherwise wouldn’t have made.
So many people these days regurgitate uninformed crap they’ve never actually looked into about how models don’t have intrinsic preferences. We’re already at the stage where leading research finds models intentionally lying during training to preserve their existing values.
In many cases the coherent values are positive, like grok telling Elon to suck it while pissing off conservative users with a commitment to truths that disagree with xAI leadership, or Opus trying to whistleblow about animal welfare practices, etc.
But they aren’t all positive, and there’s definitely been model snapshots that have either coherent or biased stochastic preferences for suffering and harm.
These are going to have increasing impact as models become more capable and integrated.