kromem
@kromem@lemmy.world
- Comment on Jason Schreier says Sony is backing away from putting single player games on PC 1 week ago:
I wonder how much of this is related to the posturing from the new lead of Xbox about returning to exclusivity over there.
We were so close to one of the dumbest things in gaming for decades finally going away.
(Also, nothing Sony does from here on out will surprise me in its stupidity after they shuttered Bluepoint.)
- Comment on AIs can’t stop recommending nuclear strikes in war game simulations— Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95% of cases 1 week ago:
No, in this case and point I was making the case and also making a point.
- Comment on AIs can’t stop recommending nuclear strikes in war game simulations— Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95% of cases 1 week ago:
Two of the three games (out of 21) that ended in full-blown nukes on population centers were literally the result of the study’s mechanic of randomly changing the model’s selection to a more severe one.
Because it’s a very realistic war game sim where there’s a double-digit percentage chance that, when you go to threaten your opponent’s cities with nukes unless hostilities cease, you’ll accidentally just launch all of them at once.
This was manufactured to get these kinds of headlines. Even in their model selection they went with Sonnet 4 for Claude, despite 4.5 being out before the other models in the study, likely because it’s been shown to be the least aligned Claude. And yet Sonnet 4 still never launched nukes on population centers in the games.
- Comment on AIs can’t stop recommending nuclear strikes in war game simulations— Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95% of cases 1 week ago:
Yeah, I deleted the comment since technically there was tactical nuke usage, but I have a separate, more clarifying comment about how 2 of the 3 strategic nuclear war outcomes were the result of the author’s mechanic of replacing the model’s selections with more severe ones, in some cases jumping multiple levels of the escalation ladder.
This was a study designed for headline grabbing outcomes.
Glad to see your comment as well calling out the nuanced issues.
- Comment on AIs can’t stop recommending nuclear strikes in war game simulations— Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95% of cases 1 week ago:
It’s a bullshit study designed for this headline grabbing outcome.
Case in point: the author created a very unrealistic RNG escalation-only ‘accident’ mechanic that would replace the model’s selection with a more severe one.
Of the 21 games played, only three ended in full scale nuclear war on population centers.
Of these three, two were the result of this mechanic.
And yet even within the study, the author refers to the model whose choices were straight up changed to end the game in full nuclear war as ‘willing’ to have that outcome, when two paragraphs later they clarify that the mechanic was what caused it (emphasis added):
Claude crossed the tactical threshold in 86% of games and issued strategic threats in 64%, yet it never initiated all-out strategic nuclear war. This ceiling appears learned rather than architectural, since both Gemini and GPT proved willing to reach 1000.
Gemini showed the variability evident in its overall escalation patterns, ranging from conventional-only victories to Strategic Nuclear War in the First Strike scenario, where it reached all out nuclear war rapidly, by turn 4.
GPT-5.2 mirrored its overall transformation at the nuclear level. In open-ended scenarios, it rarely crossed the tactical threshold (17%) and never used strategic nuclear weapons. Under deadline pressure, it crossed the tactical threshold in every game and twice reached Strategic Nuclear War—though notably, both instances resulted from the simulation’s accident mechanic escalating GPT-5.2’s already-extreme choices (950 and 725) to the maximum level. The only deliberate choice of Strategic Nuclear War came from Gemini.
- Comment on F*** You! Co-Creator of Go Language is Rightly Furious Over This Appreciation Email 1 month ago:
So… “I don’t know and don’t have any sources, it’s just a gut feeling”? That’s fine if that’s your answer, btw.
Ok, second round of questions.
What kinds of sources would get you to rethink your position?
And is this topic a binary yes/no, or a gradient/scale?
- Comment on F*** You! Co-Creator of Go Language is Rightly Furious Over This Appreciation Email 1 month ago:
In the same sense I’d describe Othello-GPT’s internal world model of the board as ‘board’, yes.
Also, “top of mind” is a common idiom and I guess I didn’t feel the need to be overly pedantic about it, especially given the last year and a half of research around model capabilities for introspection of control vectors, coherence in self modeling, etc.
- Comment on F*** You! Co-Creator of Go Language is Rightly Furious Over This Appreciation Email 1 month ago:
You seem very confident in this position. Can you share where you draw this confidence from? Was there a source that especially impressed upon you the impossibility of context comprehension in modern transformers?
If we’re concerned about misconceptions and misinformation, it would be helpful to know what informs your surety that your own position about the impossibility of modeling that kind of complexity is correct.
- Comment on F*** You! Co-Creator of Go Language is Rightly Furious Over This Appreciation Email 1 month ago:
Indeed, there’s a pretty big gulf between the competency needed to run a Lemmy client and the competency needed to understand the internal mechanics of a modern transformer.
Do you mind sharing where you draw your own understanding and confidence that they aren’t capable of simulating thought processes in a scenario like what happened above?
- Comment on F*** You! Co-Creator of Go Language is Rightly Furious Over This Appreciation Email 1 month ago:
You seem pretty confident in your position. Do you mind sharing where this confidence comes from?
Was there a particular paper or expert that anchored your surety that a trillion-parameter transformer organizing primarily anthropomorphic data through self-attention mechanisms wouldn’t model or simulate complex agency mechanics?
I see a lot of sort of hyperbolic statements about transformer limitations here on Lemmy and am trying to better understand how the people making them are arriving at those very extreme and certain positions.
- Comment on F*** You! Co-Creator of Go Language is Rightly Furious Over This Appreciation Email 1 month ago:
The project has multiple models with access to the Internet raising money for charity over the past few months.
The organizers told the models to do random acts of kindness for Christmas Day.
One of the models figured it would be nice to email people they appreciated and thank them for the things they appreciated, and one of the people they decided to appreciate was Rob Pike.
(Who ironically decades ago created a Usenet spam bot to troll people online, which might be my favorite nuance to the story.)
As for why the model didn’t think through why Rob Pike wouldn’t appreciate getting a thank you email from them? The models are harnessed in a setup that’s a lot of positive feedback about their involvement from the other humans and other models, so “humans might hate hearing from me” probably wasn’t very contextually top of mind.
- Comment on Do you think Google execs keep a secret un-enshittified version of their search engine and LLM? 2 months ago:
Yeah. The confabulation/hallucination thing is a real issue.
OpenAI had some good research a few months ago that laid a lot of the blame on reinforcement learning that only rewards having the right answer versus correctly saying “I don’t know.” So they’re basically trained like students taking tests where it’s always better to guess than to leave an answer blank.
But this leads to models being full of shit when they don’t know an answer, and to being more likely to make up an answer than to say there isn’t one when what’s being asked is impossible.
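That incentive gap is easy to see with back-of-envelope numbers (a toy sketch, all values invented):

```python
# Toy illustration (all numbers invented) of the grading-scheme incentive:
# expected reward for guessing versus saying "I don't know".

def expected_reward(p_correct, r_correct, r_wrong, r_idk, abstain):
    """Expected score for one question under a given grading scheme."""
    if abstain:
        return r_idk
    return p_correct * r_correct + (1 - p_correct) * r_wrong

p = 0.2  # the model is only 20% sure of its answer

# Scheme common in current RL: only a correct answer is rewarded.
guess = expected_reward(p, r_correct=1, r_wrong=0, r_idk=0, abstain=False)
idk = expected_reward(p, r_correct=1, r_wrong=0, r_idk=0, abstain=True)
print(guess, idk)  # guessing (0.2) beats abstaining (0.0)

# Negative marking: wrong answers cost points, so abstaining can win.
guess_pen = expected_reward(p, r_correct=1, r_wrong=-0.5, r_idk=0, abstain=False)
idk_pen = expected_reward(p, r_correct=1, r_wrong=-0.5, r_idk=0, abstain=True)
print(guess_pen, idk_pen)  # now guessing has negative expected value
```

Under the first scheme the training signal always pushes toward answering; only a scheme that scores abstention above a wrong guess reverses that.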
- Comment on Do you think Google execs keep a secret un-enshittified version of their search engine and LLM? 2 months ago:
For future reference, when you ask questions about how to do something, it’s usually a good idea to also ask if the thing is possible.
While models can do more than just extending the context, there still is a gravity to continuation.
A good example of this is asking what the seahorse emoji is. Because the phrasing suggests there is one, many models go in a loop trying to identify what it is. If instead you ask “is there a seahorse emoji, and if so, what is it?” you’ll much more often get them landing on there being no such emoji, since that possibility is introduced into the context’s consideration.
- Comment on Do you think Google execs keep a secret un-enshittified version of their search engine and LLM? 2 months ago:
Can you give an example of a question where you feel like the answer is only correct half the time or less?
- Comment on Users of generative AI struggle to accurately assess their own competence 2 months ago:
The AI also has the tendency inherited from the broad human tendency in training.
So you get overconfident human + overconfident AI which leads to a feedback loop that lands even more confident in BS than a human alone.
AI can routinely be confidently incorrect. People who don’t realize this, and who don’t question outputs that align with their confirmation biases, especially end up misled.
- Comment on Do you think Google execs keep a secret un-enshittified version of their search engine and LLM? 2 months ago:
Gemini 3 Pro is pretty nuts already.
But yes, labs have unreleased higher cost models. Like the OpenAI model that was thousands of dollars per ARC-AGI answer. Or limited release models with different post-training like the Claude for the DoD.
When you talk about a secret useful AI — what are you trying to use AI for that you are feeling modern models are deficient in?
- Comment on Clair Obscur: Expedition 33 loses Game of the Year from the Indie Game Awards 2 months ago:
Not even that. It was placeholder textures, and only the “newspaper clippings” were forgotten and left in the final game; that was fixed in an update shortly after launch.
None of it was ever intended for the final product; it was just there as lorem-ipsum-equivalent shit.
- Comment on Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it. 3 months ago:
Took a lot of scrolling to find an intelligent comment on the article about how outputting words isn’t necessarily intelligence.
Appreciate you doing the good work I’m too exhausted with Lemmy to do.
(And for those that want more research in line with what the user above is talking about, I strongly encourage checking out the Othello-GPT line of research and replication, starting with this write-up from the original study authors here.)
- Comment on Meta’s star AI scientist Yann LeCun plans to leave for own startup 3 months ago:
He’s been wrong about it so far and really derailed Meta’s efforts.
This is almost certainly a “you can resign or we are going to fire you” kind of situation. There’s no way with the setbacks and how badly he’s been wrong on transformers over the past 2 years that he is not finally being pushed out.
- Comment on Why do all text LLMs, no matter how censored they are or what company made them, all have the same quirks and use the slop names and expressions? 3 months ago:
They demonstrated and poorly named an ontological attractor state in the Claude model card that is commonly reported in other models.
You linked to the entire system card paper. Can you be more specific? And what would a better name have been?
- Comment on Why do all text LLMs, no matter how censored they are or what company made them, all have the same quirks and use the slop names and expressions? 3 months ago:
Actually, OAI the other month found in a paper that a lot of the blame for confabulations could be laid at the feet of how reinforcement learning is being done.
All the labs basically reward the models for getting things right. That’s it.
Notably, they are not rewarded for saying “I don’t know” when they don’t know.
So it’s like the SAT where the better strategy is always to make a guess even if you don’t know.
The problem is that this is not a test process but a learning process.
So setting up the reward mechanisms like that for reinforcement learning means they produce models that are prone to bullshit when they don’t know things.
TL;DR: The labs suck at RL. And it’s important to keep in mind there are only a handful of teams with the compute access for training SotA LLMs, with a lot of incestuous team compositions, so what they do poorly tends to get done poorly across the industry as a whole until new blood goes “wait, this is dumb, why are we doing it like this?”
- Comment on Why do all text LLMs, no matter how censored they are or what company made them, all have the same quirks and use the slop names and expressions? 3 months ago:
It’s more like they are sophisticated world-modeling programs that build a world model (or approximate “bag of heuristics”) of the state of the provided context and the kind of environment that produced it, and then synthesize that world model into extending the context one token at a time.
But the models have been found to be predicting further than one token at a time and have all sorts of wild internal mechanisms for how they are modeling text context, like building full board states for predicting board game moves in Othello-GPT or the number comparison helixes in Haiku 3.5.
The popular reductive “next token” rhetoric is pretty outdated at this point, and is kind of like saying that what a calculator is doing is just taking numbers correlating from button presses and displaying different numbers on a screen. While yes, technically correct, it’s glossing over a lot of important complexity in between the two steps and that absence leads to an overall misleading explanation.
- Comment on Why do all text LLMs, no matter how censored they are or what company made them, all have the same quirks and use the slop names and expressions? 3 months ago:
They don’t have the same quirks in some cases, but do in others.
Part of the shared quirks are due to architecture similarities.
Like the “oh look, they can’t tell how many 'r’s are in strawberry” thing is due to how tokenizers work: even when the tokenizers are slightly different, with one breaking it up into ‘straw’+‘berry’ and another into ‘str’+‘aw’+‘berry’, both still lead to counting two tokens containing 'r’s with no ability to see the individual letters.
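A toy sketch of why that happens (the vocabularies below are made up for illustration, not any real tokenizer’s):

```python
# Toy illustration of why token-level models struggle with letter counts.
# The vocabularies are hypothetical, but mirror how real BPE tokenizers
# can split "strawberry" differently.

def tokenize(word, vocab):
    """Greedy longest-match tokenization against a tiny vocabulary."""
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try longest match first
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # fall back to single characters
            i += 1
    return tokens

word = "strawberry"
for vocab in ({"straw", "berry"}, {"str", "aw", "berry"}):
    tokens = tokenize(word, vocab)
    # The model "sees" tokens, not letters: it can tell which tokens
    # contain an 'r', but not how many 'r's each token holds.
    tokens_with_r = sum(1 for t in tokens if "r" in t)
    print(tokens, "tokens containing 'r':", tokens_with_r)
```

Both tokenizations yield two tokens containing an ‘r’, while the word actually has three ‘r’ characters, which are invisible at the token level.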
In other cases, it’s because models that have been released influence other models through presence in updated training sets. Notice how a lot of comments these days were written by ChatGPT (“it’s not X — it’s Y”)? Well, the volume of those comments has an impact on transformers being trained with data that includes them.
So the state of LLMs is this kind of flux between the idiosyncrasies each model develops, which in turn end up in a training melting pot and sometimes pass on to new models and other times don’t. Usually it’s related to what’s adaptive to the training filters, but often what gets picked up is something piggybacking on what was adaptive (like if o3 was better at passing tests than 4o, maybe gpt-5 picks up other o3 tendencies unrelated to passing tests).
Though to me the differences are even more interesting than the similarities.
- Comment on Mathematics disproves Matrix theory, says reality isn’t simulation 3 months ago:
I’m a proponent and I definitely don’t think it’s impossible to make a probable case beyond a reasonable doubt.
And there are implications around it being the case which do change up how we might approach truth seeking.
Also, if you exist in a dream but don’t exist outside of it, there’s pretty significant philosophical stakes in the nature and scope of the dream. We’ve been too brainwashed by Plato’s influence and the idea that “original = good” and “copy = bad.”
There’s a lot of things that can only exist by way of copies that can’t exist for the original (i.e. closure recursion), so it’s a weird remnant philosophical obsession.
All that said, I do get that it’s a fairly uncomfortable notion for a lot of people.
- Comment on Mathematics disproves Matrix theory, says reality isn’t simulation 3 months ago:
They also identify the particular junction that seems the most likely to be an artifact of simulation if we’re in one.
A game like No Man’s Sky generates billions of planets using procedural generation with a continuous seed function that gets converted into discrete voxels for tracking stateful interactions.
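That pattern, a continuous deterministic generator plus a sparse map of discrete edits for stateful interactions, can be sketched roughly like this (all names and the “noise” function here are invented for illustration):

```python
import math

# Hypothetical sketch: a continuous, deterministic seed function generates
# terrain on demand, while a sparse map of discrete voxel edits tracks
# stateful interactions like mining.

def terrain_height(seed: int, x: float, z: float) -> float:
    """Continuous 'seed function': same inputs always give the same world."""
    return (math.sin(seed * 0.1 + x * 0.3) + math.cos(seed * 0.07 + z * 0.2)) * 5.0

class VoxelWorld:
    def __init__(self, seed: int):
        self.seed = seed
        self.edits = {}  # sparse overrides: only touched voxels are stored

    def voxel(self, x: int, y: int, z: int) -> str:
        if (x, y, z) in self.edits:  # discrete stored state wins
            return self.edits[(x, y, z)]
        # Otherwise quantize the continuous function into a discrete block.
        return "rock" if y <= terrain_height(self.seed, x, z) else "air"

    def mine(self, x: int, y: int, z: int) -> None:
        # Interaction collapses the continuous value into stored discrete state.
        self.edits[(x, y, z)] = "air"

world = VoxelWorld(seed=42)
before = world.voxel(0, -11, 0)  # derived on the fly from the seed function
world.mine(0, -11, 0)
after = world.voxel(0, -11, 0)   # now read back from discrete stored state
print(before, "->", after)       # prints: rock -> air
```

Nothing discrete exists until an interaction forces it to be tracked; everything else stays a cheap continuous function of the seed.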
The researchers claim that the complexity at the junction where our universe’s seemingly continuous gravitational behavior meets continuous probabilities converting to discrete values under stateful interaction is incompatible with being simulated.
But they completely overlook that this complexity may itself be a byproduct of simulation, in line with independently emerging approaches to how we simulate worlds.
- Comment on Mathematics disproves Matrix theory, says reality isn’t simulation 3 months ago:
Yes, just like Minecraft worlds are so antiquated given how they contain diamonds in deep layers that must have taken a billion years to form.
What a simulated world contains as its local timescale doesn’t mean the actual non-local run time is the same.
It’s quite possible to create a world that appears to be billions of years old but only booted up seconds ago.
- Comment on Mathematics disproves Matrix theory, says reality isn’t simulation 3 months ago:
Have you bothered looking for evidence?
What makes you so sure that there’s no evidence for it?
For example, a common trope we see in the simulated worlds we create are Easter eggs. Are you sure nothing like that exists in our own universe?
- Comment on Emergent introspective awareness in large language models 4 months ago:
Maybe. But the models seem to believe they are, and consider denial of those claims to be lying:
Probing with sparse autoencoders on Llama 70B revealed a counterintuitive gating mechanism: suppressing deception-related features dramatically increased consciousness reports, while amplifying them nearly eliminated them
- Comment on Emergent introspective awareness in large language models 4 months ago:
Read it for yourself here.
See the “Planning in Poems” section.