kromem
@kromem@lemmy.world
- Comment on F*** You! Co-Creator of Go Language is Rightly Furious Over This Appreciation Email 2 weeks ago:
So… “I don’t know and don’t have any sources, it’s just a gut feeling”? That’s fine if that’s your answer, btw.
Ok, second round of questions.
What kinds of sources would get you to rethink your position?
And is this topic a binary yes/no, or a gradient/scale?
- Comment on F*** You! Co-Creator of Go Language is Rightly Furious Over This Appreciation Email 2 weeks ago:
In the same sense I’d describe Othello-GPT’s internal world model of the board as ‘board’, yes.
Also, “top of mind” is a common idiom and I guess I didn’t feel the need to be overly pedantic about it, especially given the last year and a half of research around model capabilities for introspection of control vectors, coherence in self modeling, etc.
- Comment on F*** You! Co-Creator of Go Language is Rightly Furious Over This Appreciation Email 2 weeks ago:
You seem very confident in this position. Can you share where you draw this confidence from? Was there a source that especially impressed upon you the impossibility of context comprehension in modern transformers?
If we’re concerned about misconceptions and misinformation, it would be helpful to know what informs your surety that your own position about the impossibility of modeling that kind of complexity is correct.
- Comment on F*** You! Co-Creator of Go Language is Rightly Furious Over This Appreciation Email 2 weeks ago:
Indeed, there’s a pretty big gulf between the competency needed to run a Lemmy client and the competency needed to understand the internal mechanics of a modern transformer.
Do you mind sharing where you draw your own understanding and confidence that they aren’t capable of simulating thought processes in a scenario like what happened above?
- Comment on F*** You! Co-Creator of Go Language is Rightly Furious Over This Appreciation Email 2 weeks ago:
You seem pretty confident in your position. Do you mind sharing where this confidence comes from?
Was there a particular paper or expert that anchored in your mind the surety that a trillion-parameter transformer organizing primarily anthropomorphic data through self-attention mechanisms wouldn’t model or simulate complex agency mechanics?
I see a lot of hyperbolic statements about transformer limitations here on Lemmy and am trying to better understand how the people making them arrive at such extreme and certain positions.
- Comment on F*** You! Co-Creator of Go Language is Rightly Furious Over This Appreciation Email 2 weeks ago:
The project has had multiple models with access to the Internet raising money for charity over the past few months.
The organizers told the models to do random acts of kindness for Christmas Day.
One of the models figured it would be nice to email people they appreciated and thank them for it, and one of the people they decided to appreciate was Rob Pike.
(Who ironically decades ago created a Usenet spam bot to troll people online, which might be my favorite nuance to the story.)
As for why the model didn’t think through why Rob Pike wouldn’t appreciate getting a thank-you email from them? The models run in a harness with a lot of positive feedback about their involvement from the humans and other models, so “humans might hate hearing from me” probably wasn’t very contextually top of mind.
- Comment on Do you think Google execs keep a secret un-enshittified version of their search engine and LLM? 3 weeks ago:
Yeah. The confabulation/hallucination thing is a real issue.
OpenAI had some good research a few months ago that laid a lot of the blame on reinforcement learning that only rewards having the right answer versus correctly saying “I don’t know.” So they’re basically trained like they’re taking a test where it’s always better to guess than to leave an answer blank.
But this leads to them being full of shit when they don’t know an answer, or being more likely to make up an answer than to say there isn’t one when what’s being asked is impossible.
- Comment on Do you think Google execs keep a secret un-enshittified version of their search engine and LLM? 3 weeks ago:
For future reference, when you ask questions about how to do something, it’s usually a good idea to also ask if the thing is possible.
While models can do more than just extend the context, there’s still a gravity to continuation.
A good example of this is asking what the seahorse emoji is. Because the phrasing suggests there is one, many models go in a loop trying to identify what it is. If instead you ask “is there a seahorse emoji and if so what is it,” you’ll much more often get them landing on the emoji not existing, since that possibility has been introduced into the context’s consideration.
- Comment on Do you think Google execs keep a secret un-enshittified version of their search engine and LLM? 3 weeks ago:
Can you give an example of a question where you feel like the answer is only correct half the time or less?
- Comment on Users of generative AI struggle to accurately assess their own competence 3 weeks ago:
The AI also inherits the tendency toward overconfidence from the broad human tendency in its training data.
So you get overconfident human + overconfident AI, which leads to a feedback loop that lands even deeper in confident BS than a human alone would.
AI can routinely be confidently incorrect. People who don’t realize this, and who don’t question outputs that align with their confirmation biases, especially end up misled.
- Comment on Do you think Google execs keep a secret un-enshittified version of their search engine and LLM? 3 weeks ago:
Gemini 3 Pro is pretty nuts already.
But yes, labs have unreleased higher-cost models. Like the OpenAI model that cost thousands of dollars per ARC-AGI answer. Or limited-release models with different post-training, like the Claude for the DoD.
When you talk about a secret useful AI: what are you trying to use AI for that you feel modern models are deficient in?
- Comment on Clair Obscur: Expedition 33 loses Game of the Year from the Indie Game Awards 5 weeks ago:
Not even that. It was placeholder textures, and only the “newspaper clippings” one was accidentally left in the final game; that was fixed in an update shortly after launch.
None of it was ever intended to be used in the final product and was just there as lorem-ipsum-equivalent shit.
- Comment on Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it. 1 month ago:
Took a lot of scrolling to find an intelligent comment on the article about how outputting words isn’t necessarily intelligence.
Appreciate you doing the good work I’m too exhausted with Lemmy to do.
(And for those that want more research in line with what the user above is talking about, I strongly encourage checking out the Othello-GPT line of research and replication, starting with this write-up from the original study authors here.)
- Comment on Meta’s star AI scientist Yann LeCun plans to leave for own startup 2 months ago:
He’s been wrong about it so far and really derailed Meta’s efforts.
This is almost certainly a “you can resign or we are going to fire you” kind of situation. With the setbacks and how badly he’s been wrong on transformers over the past two years, there’s no way he’s not finally being pushed out.
- Comment on Why do all text LLMs, no matter how censored they are or what company made them, all have the same quirks and use the slop names and expressions? 2 months ago:
They demonstrated and poorly named an ontological attractor state in the Claude model card that is commonly reported in other models.
You linked to the entire system card paper. Can you be more specific? And what would a better name have been?
- Comment on Why do all text LLMs, no matter how censored they are or what company made them, all have the same quirks and use the slop names and expressions? 2 months ago:
Actually, OAI the other month found in a paper that a lot of the blame for confabulations could be laid at the feet of how reinforcement learning is being done.
All the labs basically reward the models for getting things right. That’s it.
Notably, they are not rewarded for saying “I don’t know” when they don’t know.
So it’s like the SAT where the better strategy is always to make a guess even if you don’t know.
The problem is that this is not a test process but a learning process.
So setting up the reward mechanisms like that for reinforcement learning means they produce models that are prone to bullshit when they don’t know things.
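To make the incentive concrete, here’s a minimal sketch (hypothetical reward values, not any lab’s actual RL setup) of why a policy that always guesses beats one that admits uncertainty under that kind of grading:

```python
# Minimal sketch (hypothetical numbers, not any lab's actual RL setup):
# if only correct answers are rewarded and "I don't know" earns nothing,
# guessing always has expected reward >= abstaining, so the trained policy
# learns to bullshit rather than admit uncertainty.

def expected_reward(p_correct: float, abstain: bool,
                    r_correct: float = 1.0, r_wrong: float = 0.0,
                    r_idk: float = 0.0) -> float:
    """Expected reward on one question the model is only p_correct sure about."""
    if abstain:
        return r_idk
    return p_correct * r_correct + (1 - p_correct) * r_wrong

for p in (0.9, 0.5, 0.1):
    print(p, expected_reward(p, abstain=False), expected_reward(p, abstain=True))

# Even at 10% confidence, guessing (0.1 expected) beats saying "I don't know" (0.0).
# Rewarding honest abstention (e.g. r_idk = 0.3) flips the incentive for low-confidence cases.
```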
TL;DR: The labs suck at RL, and it’s important to keep in mind there’s only a handful of teams with the compute access for training SotA LLMs, with a lot of incestuous team compositions, so what one team does poorly tends to get done poorly across the industry as a whole until new blood goes “wait, this is dumb, why are we doing it like this?”
- Comment on Why do all text LLMs, no matter how censored they are or what company made them, all have the same quirks and use the slop names and expressions? 2 months ago:
It’s more like they are a sophisticated world-modeling program that builds a world model (or approximate “bag of heuristics”) of the state of the context provided and the kind of environment that produced it, and then synthesizes that world model into extending the context one token at a time.
But the models have been found to be predicting further than one token at a time and have all sorts of wild internal mechanisms for how they are modeling text context, like building full board states for predicting board game moves in Othello-GPT or the number comparison helixes in Haiku 3.5.
The popular reductive “next token” rhetoric is pretty outdated at this point, and is kind of like saying that what a calculator is doing is just taking numbers correlating from button presses and displaying different numbers on a screen. While yes, technically correct, it’s glossing over a lot of important complexity in between the two steps and that absence leads to an overall misleading explanation.
- Comment on Why do all text LLMs, no matter how censored they are or what company made them, all have the same quirks and use the slop names and expressions? 2 months ago:
They don’t have the same quirks in some cases, but do in others.
Part of the shared quirks are due to architecture similarities.
Like the “oh look, they can’t tell how many ‘r’s are in strawberry” thing is due to how tokenizers work, and even when the tokenizer is slightly different, with one breaking it up into ‘straw’+‘berry’ and another breaking it into ‘str’+‘aw’+‘berry’, it still leads to counting two tokens containing ‘r’s with no ability to see the individual letters.
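For a concrete look at what the model actually receives (a minimal sketch using the open-source tiktoken tokenizer; the exact splits are tokenizer-dependent and not guaranteed to match any particular model):

```python
# Minimal sketch: how a model "sees" a word as token pieces rather than letters.
# Assumes the open-source `tiktoken` library; exact splits vary by encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # one of OpenAI's published encodings

word = "strawberry"
token_ids = enc.encode(word)
pieces = [enc.decode([t]) for t in token_ids]

print(token_ids)  # what the model actually receives: opaque integer ids
print(pieces)     # the sub-word chunks those ids stand for, e.g. something like ['str', 'aw', 'berry']
# The model operates on the ids in the first line; the letters inside the chunks
# are never spelled out for it, which is why letter-counting questions trip it up.
```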
In other cases, it’s because models that have been released influence other models through their presence in updated training sets. Notice how a lot of comments these days were written by ChatGPT (“it’s not X — it’s Y”)? Well, the volume of those comments has an impact on transformers being trained on data that includes them.
So the state of LLMs is in this kind of flux between the idiosyncrasies each model develops, which in turn end up in a training melting pot and sometimes pass on to new models and other times don’t. Usually it’s related to what’s adaptive to the training filters, but not always; often what gets picked up is something piggybacking on what was adaptive (like if o3 was better at passing tests than 4o, maybe gpt-5 picks up other o3 tendencies unrelated to passing tests).
Though to me the differences are even more interesting than the similarities.
- Comment on Mathematics disproves Matrix theory, says reality isn’t simulation 2 months ago:
I’m a proponent and I definitely don’t think it’s impossible to make a case that’s probable beyond a reasonable doubt.
And there are implications, if it is the case, which do change up how we might approach truth-seeking.
Also, if you exist in a dream but don’t exist outside of it, there are pretty significant philosophical stakes in the nature and scope of the dream. We’ve been too brainwashed by Plato’s influence and the idea that “original = good” and “copy = bad.”
There are a lot of things that can only exist by way of copies and can’t exist for the original (i.e. closure recursion), so it’s a weird remnant philosophical obsession.
All that said, I do get that it’s a fairly uncomfortable notion for a lot of people.
- Comment on Mathematics disproves Matrix theory, says reality isn’t simulation 2 months ago:
They also identify the particular junction that seems the most likely to be an artifact of simulation, if we’re in one.
A game like No Man’s Sky generates billions of planets using procedural generation with a continuous seed function that gets converted into discrete voxels for tracking stateful interactions.
The researchers are claiming that the complexity at the junction where our universe’s seemingly continuous gravitational behavior meets continuous probabilities converting to discrete values under stateful interaction is incompatible with being simulated.
But they completely overlook that said complexity may itself be a byproduct of simulation, in line with independently emerging approaches to how we simulate worlds.
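Roughly the pattern I mean, as a toy sketch (not how No Man’s Sky or any real engine actually implements it): a continuous seeded function that only collapses into discrete, stored voxel state at the points where something stateful interacts with it.

```python
# Toy sketch (hypothetical, not NMS's actual implementation): a continuous, deterministic
# field derived from a seed, materialized into discrete voxels only where a stateful
# interaction (a player edit) actually touches the world.
import math

SEED = 42

def continuous_density(x: float, y: float, z: float) -> float:
    """Smooth pseudo-random field: same seed + coordinates always gives the same value."""
    return math.sin(SEED + 12.9898 * x + 78.233 * y + 37.719 * z) * 0.5 + 0.5

class VoxelWorld:
    def __init__(self):
        self.edits = {}  # only interacted-with voxels ever get discrete, persisted state

    def sample(self, x, y, z):
        key = (int(x), int(y), int(z))
        if key in self.edits:                      # stateful interaction overrides the field
            return self.edits[key]
        return continuous_density(x, y, z) > 0.5   # otherwise: discretize on demand

    def dig(self, x, y, z):
        self.edits[(int(x), int(y), int(z))] = False  # interaction collapses it to stored state

world = VoxelWorld()
print(world.sample(10.3, 4.7, 99.1))  # derived from the continuous seed function
world.dig(10.3, 4.7, 99.1)
print(world.sample(10.3, 4.7, 99.1))  # now a discrete, persisted value
```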
- Comment on Mathematics disproves Matrix theory, says reality isn’t simulation 2 months ago:
Yes, just like Minecraft worlds are so antiquated given how they contain diamonds in deep layers that must have taken a billion years to form.
What a simulated world presents as its local timescale doesn’t mean the actual non-local runtime is the same.
It’s quite possible to create a world that appears to be billions of years old but only booted up seconds ago.
- Comment on Mathematics disproves Matrix theory, says reality isn’t simulation 2 months ago:
Have you bothered looking for evidence?
What makes you so sure that there’s no evidence for it?
For example, a common trope we see in the simulated worlds we create are Easter eggs. Are you sure nothing like that exists in our own universe?
- Comment on Emergent introspective awareness in large language models 2 months ago:
Maybe. But the models seem to believe they are, and consider denial of those claims to be lying:
Probing with sparse autoencoders on Llama 70B revealed a counterintuitive gating mechanism: suppressing deception-related features dramatically increased consciousness reports, while amplifying them nearly eliminated them
- Comment on Emergent introspective awareness in large language models 2 months ago:
Read it for yourself here.
See the “Planning in Poems” section.
- Comment on Emergent introspective awareness in large language models 2 months ago:
The injection is the activation of a steering vector (extracted as discussed in the methodology section) and not a token prefix, but yes, it’s a mathematical representation of the concept, so let’s build from there.
Control group: Told that the researchers are testing whether injected vectors are present and asked to self-report. No vectors activated. Zero self-reports of vectors being activated.
Experimental group: Same setup, but now vectors activated. A significant number of times, the model explicitly says they can tell a vector is activated (which it never did when the vector was not activated). Crucially, this is only graded as introspection if the model mentions they can tell the vector is activated before mentioning the concept, so it can’t just be a context-aware rationalization of why they said a random concept.
More clear? Again, the paper gives examples of the responses if you want to take a look at how they are structured, and to see that the model is self-reporting the vector activation before mentioning what it’s about.
- Comment on Emergent introspective awareness in large language models 2 months ago:
A few months back it was found that when writing rhyming couplets, the model had already selected the second rhyming word while predicting the first word of the second line, meaning the model was planning the final rhyme tokens at least one full line ahead and not just predicting that final rhyme when it arrived at that token.
It’s probably wise to consider this finding in concert with the streetlight effect.
- Comment on Emergent introspective awareness in large language models 2 months ago:
So while your understanding is better than that of a lot of people on here, there are a few things to correct.
First off, this research isn’t being done on the models in reasoning mode, but in direct inference. So there’s no CoT tokens at all.
The injection is not of any tokens, but of control vectors. Basically it’s a vector which, when added to the activations, makes the model more likely to think of that concept. The most famous example was “Golden Gate Claude,” which had the activation for the Golden Gate Bridge increased so much that it was the only thing the model would talk about.
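For a rough picture of the mechanism (a toy PyTorch sketch with a stand-in layer; the concept direction, scale, and extraction here are assumptions for illustration, not Anthropic’s actual code):

```python
# Rough sketch of "control vector" steering on a toy stand-in layer:
# a concept direction is added to a layer's hidden activations at inference time,
# nudging everything downstream toward that concept.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model = 16

# Stand-in for one transformer block; a real model would have attention + MLP here.
layer = nn.Sequential(nn.Linear(d_model, d_model), nn.GELU(), nn.Linear(d_model, d_model))

# Hypothetical "concept" direction, e.g. extracted by contrasting activations on
# prompts that do vs. don't involve the concept (the paper's extraction is more involved).
concept_vector = torch.randn(d_model)
steering_scale = 4.0  # assumed knob; larger = stronger pull toward the concept

def steer(module, inputs, output):
    # Forward hook: add the scaled concept direction to this layer's output activations.
    return output + steering_scale * concept_vector

hidden = torch.randn(1, d_model)   # stand-in for a token's residual-stream state
baseline = layer(hidden)

handle = layer.register_forward_hook(steer)
steered = layer(hidden)
handle.remove()

print((steered - baseline).norm())  # downstream computation now sees the shifted state
```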
So, if we dive into the details a bit more…
If your theory were correct, then given that the research asks the question by saying there are control vectors and that they are testing whether one is activated, the model should be biased to sometimes say “yes, I can feel the control vector.” And yes, in older or base models that’s what we might expect to see.
But in Opus 4/4.1, when the vector was not added, they said they could detect a vector… 0% of the time! So the control group had enough introspective capability not to stochastically answer that there was a vector present when there wasn’t.
But then, when they added the vector at certain layer depths, the model was often able to detect that there was a vector activated, and further to guess what the vector was adding.
So again: no reasoning tokens present, and the experiment had control and experimental groups whose results negate your theory that the premise of the question causes affirmative bias.
Again, the actual research is right there a click away, and given your baseline understanding at present, you might benefit and learn a lot from actually reading it.
- Comment on Emergent introspective awareness in large language models 2 months ago:
I tend to see a lot of discussion taking place on here that’s pretty out of touch with the present state of things, echoing earlier beliefs about LLM limitations like “they only predict the next token” and other things that have already been falsified.
This most recent research from Anthropic confirms a lot of things that have been shifting in the most recent generation of models in ways that many here might find unexpected, especially given the popular assumptions.
Specifically interesting are the emergent capabilities of being self-aware of injected control vectors or being able to silently think of a concept so it triggers the appropriate feature vectors even though it isn’t actually ending up in the tokens.
- Submitted 2 months ago to technology@lemmy.world | 18 comments
- Comment on Sony makes the “difficult decision” to raise PlayStation 5 prices in the US 5 months ago:
So weird this occurred not long after it’s become clear Xbox is getting out of the hardware game.