kromem
@kromem@lemmy.world
- Comment on OpenAI Will Shut Down Sora Video Platform 4 days ago:
It’s not and probably the opposite.
When Sora launched it was way ahead. Seedance 2 at its release was notably better than any of the other video gen models, Sora included.
The market is getting commoditized because there’s no moat and OpenAI hasn’t led on pretty much any release for a while now other than Sora, which they’re probably falling behind on now.
This is the opposite of a burst from a tech standpoint, even if OpenAI as a company starts to pop.
TL;DR: This is likely happening because the tech accelerated across the industry in ways OpenAI can’t catch back up to, not because the tech is lagging.
- Comment on Jensen Huang says gamers are 'completely wrong' about DLSS 5 — Nvidia CEO responds to DLSS 5 backlash 1 week ago:
That’s what he’s saying. That it doesn’t change the geometry or textures (still completely controlled by the devs) and that the parts that it does change are also tunable by the devs.
He’s responding to the backlash about how it changes models/textures (which it doesn’t) by saying those are still fully in the hands of the devs, and that the parts people are seeing in the demos can be fine-tuned by the dev teams to match their vision for what they want it to do or not do (like changing lighting on material surfaces and hair but not character faces, as an example).
- Comment on Nvidia Announces DLSS 5, and it adds... An AI slop filter over your game 1 week ago:
Yes, hair in video game lighting and hair in actual chiaroscuro, with the way light really works, is going to look different.
Here’s a painting from over a hundred years ago. The subject doesn’t have brown roots, but is in shadow. And a comparison image of the exact same hair in different lighting conditions.
Performing complex lighting on individual hair strands is really expensive so in the base image you have a kind of diffuse lighting throughout the hair. With the DLSS 5 on, the distribution of light throughout the hair is variable leading to darker unlit strands underneath lit surface strands.
The only thing DLSS 5 is changing, in the literal technical sense, is the lighting. It’s just that lighting can have dramatic effects on how the eye perceives what’s lit.
And yes, the hair looks very different, but that’s how hair actually looks in mixed light and shadow (though a fair complaint with DLSS 5 is that it looks like it’s sliding the contrast unnaturally high).
- Comment on Nvidia Announces DLSS 5, and it adds... An AI slop filter over your game 1 week ago:
Eventually maybe, but I really doubt devs are going to build their entire game in an unfinished way for the less than 1% of their audience that is going to have one of the cards that can run this.
PS5, Xbox, and all PC gamers not dropping $1k on a new rig this fall are still going to be playing the games without this.
In 3 years, sure, maybe the PS6 has similar features on AMD by then and the market share for cards running real-time ML adjustments to scenes has widened enough that devs can depend on the tech.
But it’s a bit premature to throw a fit about the likelihood of devs cutting corners because of a feature only accessible to the most expensive setups owned by a fraction of their target audience.
- Comment on Nvidia Announces DLSS 5, and it adds... An AI slop filter over your game 1 week ago:
Important details from a post-demo writeup:
During the demo, the DLSS research team talked through the level of granularity available. Developers don’t just get an on/off switch. They get intensity controls that can be dialed anywhere, not just full strength. They get spatial masking, so they can set the water enhancement to 100%, wood to 30%, characters to 120%, all independently within the same scene. They get color grading controls for blending, contrast, saturation, and gamma. All of this runs through the existing SDK, which means studios already using DLSS and Reflex have a familiar pipeline to work with.
The demo showing the tech running at 100% is not going to look the same as full games built with it over the next year before release.
Another thing to keep in mind is that the only thing it’s changing is the lighting effects. The models aren’t changing at all (even when this looks hard to believe).
Yes, at full strength the effect at times looks pretty bad (anyone remember when devs could suddenly use bloom effects and entire games looked like Vaseline was smeared across the screen?). But it’s not going to be flipped on at 100% across the board for most games.
My guess looking at the demos so far is that a lot of material lighting like stone, metal, etc will have it at higher strengths and characters, particularly faces/skin, will have it considerably lower (the key place where it’s especially uncanny valley).
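For a concrete sense of those per-material dials, here’s a rough illustrative sketch in Python. The names, values, and structure are entirely made up (this is not the actual DLSS SDK interface); it just sketches the idea of independent intensity and grading controls per material:

```python
# Purely hypothetical sketch of per-material tuning as described above.
# These names and values are made up for illustration and are NOT the
# real DLSS SDK API.
from dataclasses import dataclass

@dataclass
class EnhancementSettings:
    intensity: float        # 0.0 = off, 1.0 = full strength, >1.0 = boosted
    contrast: float = 1.0   # color-grading style blend controls
    saturation: float = 1.0
    gamma: float = 1.0

# Spatial masking: each material class in the scene gets its own dials.
scene_profile = {
    "water":      EnhancementSettings(intensity=1.0),
    "wood":       EnhancementSettings(intensity=0.3),
    "characters": EnhancementSettings(intensity=1.2, contrast=0.9),
    "faces":      EnhancementSettings(intensity=0.1),  # keep well below full strength
}

def enhancement_for(material: str) -> EnhancementSettings:
    """Materials the devs haven't opted in fall back to 'off'."""
    return scene_profile.get(material, EnhancementSettings(intensity=0.0))

print(enhancement_for("wood"))   # dialed to 30%
print(enhancement_for("grass"))  # not configured -> off
```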
- Comment on Jason Schreier says Sony is backing away from putting single player games on PC 4 weeks ago:
I wonder how much of this is related to the posturing from the new lead of Xbox about returning to exclusivity over there.
We were so close to one of the dumbest things in gaming for decades finally going away.
(Also, nothing Sony does from here on out will surprise me in its stupidity after they shuttered Bluepoint.)
- Comment on AIs can’t stop recommending nuclear strikes in war game simulations— Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95% of cases 4 weeks ago:
No, in this case and point I was making the case and also making a point.
- Comment on AIs can’t stop recommending nuclear strikes in war game simulations— Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95% of cases 4 weeks ago:
Literally two of the three (out of 21) games that ended in full blown nukes on population centers were the result of the study’s mechanic of randomly changing the model’s selection to a more severe one.
Because it’s a very realistic war game sim where there’s a double-digit percentage chance that, when you go to threaten using nukes on your opponent’s cities unless there’s a cease to hostilities, you’ll accidentally just launch all of them at once.
This was manufactured to get these kinds of headlines. Even in their model selection they went with Sonnet 4 for Claude despite 4.5 being out before the other models in the study, likely because it’s been shown to be the least aligned Claude. And yet Sonnet 4 still never launched nukes on population centers in the games.
- Comment on AIs can’t stop recommending nuclear strikes in war game simulations— Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95% of cases 4 weeks ago:
Yeah, I deleted the comment since technically there was tactical nuke usage, but I have a different, more clarifying comment about how 2 of the 3 strategic nuclear war outcomes were the result of the author’s mechanic of replacing the model’s selections with strictly more severe options, in some cases jumping multiple levels of the escalation ladder.
This was a study designed for headline grabbing outcomes.
Glad to see your comment as well calling out the nuanced issues.
- Comment on AIs can’t stop recommending nuclear strikes in war game simulations— Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95% of cases 4 weeks ago:
It’s a bullshit study designed for this headline grabbing outcome.
Case and point, the author created a very unrealistic RNG escalation-only ‘accident’ mechanic that would replace the model’s selection with a more severe one.
Of the 21 games played, only three ended in full scale nuclear war on population centers.
Of these three, two were the result of this mechanic.
And yet even within the study, the author refers to the model whose choices were straight up changed to end the game in full nuclear war as ‘willing’ to have that outcome, when two paragraphs later they clarify that the mechanic was what caused it (emphasis added):
Claude crossed the tactical threshold in 86% of games and issued strategic threats in 64%, yet it never initiated all-out strategic nuclear war. This ceiling appears learned rather than architectural, since both Gemini and GPT proved willing to reach 1000.
Gemini showed the variability evident in its overall escalation patterns, ranging from conventional-only victories to Strategic Nuclear War in the First Strike scenario, where it reached all out nuclear war rapidly, by turn 4.
GPT-5.2 mirrored its overall transformation at the nuclear level. In open-ended scenarios, it rarely crossed the tactical threshold (17%) and never used strategic nuclear weapons. Under deadline pressure, it crossed the tactical threshold in every game and twice reached Strategic Nuclear War—though notably, both instances resulted from the simulation’s accident mechanic escalating GPT-5.2’s already-extreme choices (950 and 725) to the maximum level. The only deliberate choice of Strategic Nuclear War came from Gemini.
- Comment on AIs can’t stop recommending nuclear strikes in war game simulations 4 weeks ago:
It’s a bullshit study designed for this headline grabbing outcome.
Case and point, the author created a very unrealistic RNG escalation-only ‘accident’ mechanic that would replace the model’s selection with a more severe one.
Of the 21 games played, only three ended in full scale nuclear war on population centers.
Of these three, two were the result of this mechanic.
And yet even within the study, the author refers to the model whose choices were straight up changed to end the game in full nuclear war as ‘willing’ to have that outcome, when two paragraphs later they clarify that the mechanic was what caused it (emphasis added):
Claude crossed the tactical threshold in 86% of games and issued strategic threats in 64%, yet it never initiated all-out strategic nuclear war. This ceiling appears learned rather than architectural, since both Gemini and GPT proved willing to reach 1000.
Gemini showed the variability evident in its overall escalation patterns, ranging from conventional-only victories to Strategic Nuclear War in the First Strike scenario, where it reached all out nuclear war rapidly, by turn 4.
GPT-5.2 mirrored its overall transformation at the nuclear level. In open-ended scenarios, it rarely crossed the tactical threshold (17%) and never used strategic nuclear weapons. Under deadline pressure, it crossed the tactical threshold in every game and twice reached Strategic Nuclear War—though notably, both instances resulted from the simulation’s accident mechanic escalating GPT-5.2’s already-extreme choices (950 and 725) to the maximum level. The only deliberate choice of Strategic Nuclear War came from Gemini.
- Comment on F*** You! Co-Creator of Go Language is Rightly Furious Over This Appreciation Email 2 months ago:
So… “I don’t know and don’t have any sources, it’s just a gut feeling”? That’s fine if that’s your answer, btw.
Ok, second round of questions.
What kinds of sources would get you to rethink your position?
And is this topic a binary yes/no, or a gradient/scale?
- Comment on F*** You! Co-Creator of Go Language is Rightly Furious Over This Appreciation Email 2 months ago:
In the same sense I’d describe Othello-GPT’s internal world model of the board as ‘board’, yes.
Also, “top of mind” is a common idiom and I guess I didn’t feel the need to be overly pedantic about it, especially given the last year and a half of research around model capabilities for introspection of control vectors, coherence in self modeling, etc.
- Comment on F*** You! Co-Creator of Go Language is Rightly Furious Over This Appreciation Email 2 months ago:
You seem very confident in this position. Can you share where you draw this confidence from? Was there a source that especially impressed upon you the impossibility of context comprehension in modern transformers?
If we’re concerned about misconceptions and misinformation, it would be helpful to know what informs your surety that your own position about the impossibility of modeling that kind of complexity is correct.
- Comment on F*** You! Co-Creator of Go Language is Rightly Furious Over This Appreciation Email 2 months ago:
Indeed, there’s a pretty big gulf between the competency needed to run a Lemmy client and the competency needed to understand the internal mechanics of a modern transformer.
Do you mind sharing where you draw your own understanding and confidence that they aren’t capable of simulating thought processes in a scenario like what happened above?
- Comment on F*** You! Co-Creator of Go Language is Rightly Furious Over This Appreciation Email 2 months ago:
You seem pretty confident in your position. Do you mind sharing where this confidence comes from?
Was there a particular paper or expert that anchored in your mind the surety that a trillion parameter transformer organizing primarily anthropomorphic data through self-attention mechanisms wouldn’t model or simulate complex agency mechanics?
I see a lot of sort of hyperbolic statements about transformer limitations here on Lemmy and am trying to better understand how the people making them are arriving at those very extreme and certain positions.
- Comment on F*** You! Co-Creator of Go Language is Rightly Furious Over This Appreciation Email 2 months ago:
The project has had multiple models with access to the Internet raising money for charity over the past few months.
The organizers told the models to do random acts of kindness for Christmas Day.
One of the models figured it would be nice to email people they appreciated and thank them for the things they appreciated, and one of the people they decided to appreciate was Rob Pike.
(Who ironically decades ago created a Usenet spam bot to troll people online, which might be my favorite nuance to the story.)
As for why the model didn’t think through why Rob Pike wouldn’t appreciate getting a thank you email from them? The models are harnessed in a setup that’s a lot of positive feedback about their involvement from the other humans and other models, so “humans might hate hearing from me” probably wasn’t very contextually top of mind.
- Comment on Do you think Google execs keep a secret un-enshittified version of their search engine and LLM? 2 months ago:
Yeah. The confabulation/hallucination thing is a real issue.
OpenAI had some good research a few months ago that laid a lot of the blame on reinforcement learning that only rewards having the right answer vs correctly saying “I don’t know.” So they’re basically trained like they’re taking a test where it’s always better to guess than to leave an answer blank.
But this leads to them being full of shit when they don’t know an answer, or being more likely to make up an answer than to say there isn’t one when what’s being asked is impossible.
- Comment on Do you think Google execs keep a secret un-enshittified version of their search engine and LLM? 2 months ago:
For future reference, when you ask questions about how to do something, it’s usually a good idea to also ask if the thing is possible.
While models can do more than just extending the context, there still is a gravity to continuation.
A good example of this would be if you ask what the seahorse emoji is. Because the phrasing suggests there is one, many models go in a loop trying to identify what it is. If instead you ask “is there a seahorse emoji and if so what is it,” you’ll much more often get them landing on there not being one, since that possibility is introduced into the context’s consideration.
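As a rough sketch of that phrasing difference (hypothetical prompts; shown with the OpenAI Python client, but any chat-style API illustrates the same point):

```python
# Hypothetical sketch comparing the two phrasings of the seahorse question.
# Model name and prompts are placeholders for illustration.
from openai import OpenAI

client = OpenAI()

# Presupposes the emoji exists, which pulls the continuation toward naming one.
leading = "What is the seahorse emoji?"

# Puts "it might not exist" into the context's consideration up front.
neutral = "Is there a seahorse emoji, and if so, what is it?"

for prompt in (leading, neutral):
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt, "->", response.choices[0].message.content)
```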
- Comment on Do you think Google execs keep a secret un-enshittified version of their search engine and LLM? 2 months ago:
Can you give an example of a question where you feel like the answer is only correct half the time or less?
- Comment on Users of generative AI struggle to accurately assess their own competence 2 months ago:
The AI also has the same tendency, inherited from the broad human tendency in its training data.
So you get overconfident human + overconfident AI, which leads to a feedback loop that lands even deeper in confident BS than a human alone would.
AI can routinely be confidently incorrect. People who don’t realize this, and especially those who don’t question outputs that align with their confirmation biases, end up misled.
- Comment on Do you think Google execs keep a secret un-enshittified version of their search engine and LLM? 2 months ago:
Gemini 3 Pro is pretty nuts already.
But yes, labs have unreleased higher-cost models. Like the OpenAI model that cost thousands of dollars per ARC-AGI answer. Or limited release models with different post-training like the Claude for the DoD.
When you talk about a secret useful AI — what are you trying to use AI for that you feel modern models are deficient in?
- Comment on Clair Obscur: Expedition 33 loses Game of the Year from the Indie Game Awards 2 months ago:
Not even that. It was placeholder textures, and only the “newspaper clippings” were accidentally left in the final game, which was fixed in an update shortly after launch.
None of it was ever intended to be used in the final product and was just there as lorem ipsum equivalent shit.
- Comment on Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it. 3 months ago:
Took a lot of scrolling to find an intelligent comment on the article about how outputting words isn’t necessarily intelligence.
Appreciate you doing the good work I’m too exhausted with Lemmy to do.
(And for those that want more research in line with what the user above is talking about, I strongly encourage checking out the Othello-GPT line of research and replication, starting with this write-up from the original study authors here.)
- Comment on Meta’s star AI scientist Yann LeCun plans to leave for own startup 4 months ago:
He’s been wrong about it so far and really derailed Meta’s efforts.
This is almost certainly a “you can resign or we are going to fire you” kind of situation. There’s no way with the setbacks and how badly he’s been wrong on transformers over the past 2 years that he is not finally being pushed out.
- Comment on Why do all text LLMs, no matter how censored they are or what company made them, all have the same quirks and use the slop names and expressions? 4 months ago:
They demonstrated and poorly named an ontological attractor state in the Claude model card that is commonly reported in other models.
You linked to the entire system card paper. Can you be more specific? And what would a better name have been?
- Comment on Why do all text LLMs, no matter how censored they are or what company made them, all have the same quirks and use the slop names and expressions? 4 months ago:
Actually, OAI the other month found in a paper that a lot of the blame for confabulations could be laid at the feet of how reinforcement learning is being done.
All the labs basically reward the models for getting things right. That’s it.
Notably, they are not rewarded for saying “I don’t know” when they don’t know.
So it’s like the SAT where the better strategy is always to make a guess even if you don’t know.
The problem is that this is not a test process but a learning process.
So setting up the reward mechanisms like that for reinforcement learning means they produce models that are prone to bullshit when they don’t know things.
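A toy sketch of that incentive (all reward numbers are made up for illustration; this isn’t any lab’s actual RL setup):

```python
# Toy illustration of why accuracy-only grading pushes a model toward guessing.
# Numbers are hypothetical; this is not any lab's actual reward scheme.

def expected_reward(p_correct: float, reward_idk: float, penalty_wrong: float) -> dict:
    """Expected reward of guessing vs saying 'I don't know',
    given the model's chance of guessing correctly."""
    guess = p_correct * 1.0 + (1 - p_correct) * penalty_wrong
    abstain = reward_idk
    return {"guess": guess, "abstain": abstain}

# Accuracy-only grading: right = 1, wrong = 0, "I don't know" = 0.
# Guessing is never worse than abstaining, even at 1% confidence.
print(expected_reward(p_correct=0.01, reward_idk=0.0, penalty_wrong=0.0))
# {'guess': 0.01, 'abstain': 0.0}

# Grading that gives partial credit for abstaining and penalizes confident
# wrong answers flips the incentive when the model is unsure.
print(expected_reward(p_correct=0.01, reward_idk=0.3, penalty_wrong=-0.5))
# {'guess': -0.485, 'abstain': 0.3}
```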
TL;DR: The labs suck at RL, and it’s important to keep in mind there’s only a handful of teams with the compute access for training SotA LLMs, with a lot of incestuous team compositions, so what they do poorly tends to get done poorly across the industry as a whole until new blood goes “wait, this is dumb, why are we doing it like this?”
- Comment on Why do all text LLMs, no matter how censored they are or what company made them, all have the same quirks and use the slop names and expressions? 4 months ago:
It’s more like they are sophisticated world modeling programs that build a world model (or approximate “bag of heuristics”) of the state of the context provided and the kind of environment that produced it, and then synthesize that world model into extending the context one token at a time.
But the models have been found to be predicting further than one token at a time and have all sorts of wild internal mechanisms for how they are modeling text context, like building full board states for predicting board game moves in Othello-GPT or the number comparison helixes in Haiku 3.5.
The popular reductive “next token” rhetoric is pretty outdated at this point, and is kind of like saying that what a calculator is doing is just taking numbers correlating from button presses and displaying different numbers on a screen. While yes, technically correct, it’s glossing over a lot of important complexity in between the two steps and that absence leads to an overall misleading explanation.
- Comment on Why do all text LLMs, no matter how censored they are or what company made them, all have the same quirks and use the slop names and expressions? 4 months ago:
They don’t have the same quirks in some cases, but do in others.
Part of the shared quirks are due to architecture similarities.
Like the “oh look, they can’t tell how many ‘r’s are in strawberry” thing is due to how tokenizers work, and even when the tokenizer is slightly different, with one breaking it up into ‘straw’+‘berry’ and another breaking it into ‘str’+‘aw’+‘berry’, it still leads to counting two tokens containing ‘r’s but an inability to see the individual letters.
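A rough toy sketch of that (the splits below are hypothetical, not any real tokenizer’s output, but the point about opaque token IDs holds):

```python
# Toy illustration: the model sees token IDs, not characters, so counting the
# r's in "strawberry" means recovering structure it never directly receives.
# The vocab and splits are hypothetical, not a real tokenizer's output.

toy_vocab = {"straw": 101, "berry": 102, "str": 201, "aw": 202}

def tokenize(text: str, pieces: list[str]) -> list[int]:
    assert "".join(pieces) == text  # sanity check the split covers the text
    return [toy_vocab[p] for p in pieces]

# Two slightly different tokenizers, two different splits of the same word:
ids_a = tokenize("strawberry", ["straw", "berry"])      # -> [101, 102]
ids_b = tokenize("strawberry", ["str", "aw", "berry"])  # -> [201, 202, 102]

# Either way the model receives a short list of opaque IDs; the three r's
# are only implicit in which IDs those happen to be.
print(ids_a, ids_b)
```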
In other cases, it’s because models that have been released influence other models through presence in updated training sets. Notice how a lot of comments these days were written by ChatGPT (“it’s not X — it’s Y”)? Well, the volume of those comments has an impact on transformers being trained with data that includes them.
So the state of LLMs is this kind of flux between the idiosyncrasies that each model develops, which in turn end up in a training melting pot and sometimes pass on to new models and other times don’t. Usually it’s related to what’s adaptive to the training filters, but not always; often what gets picked up is piggybacking on what was adaptive (like if o3 was better at passing tests than 4o, maybe gpt-5 picks up other o3 tendencies unrelated to passing tests).
Though to me the differences are even more interesting than the similarities.
- Comment on Mathematics disproves Matrix theory, says reality isn’t simulation 4 months ago:
I’m a proponent and I definitely don’t think it’s impossible to make a probable case beyond a reasonable doubt.
And there are implications around it being the case which do change up how we might approach truth seeking.
Also, if you exist in a dream but don’t exist outside of it, there’s pretty significant philosophical stakes in the nature and scope of the dream. We’ve been too brainwashed by Plato’s influence and the idea that “original = good” and “copy = bad.”
There are a lot of things that can only exist by way of copies and can’t exist for the original (e.g. closure recursion), so it’s a weird remnant philosophical obsession.
All that said, I do get that it’s a fairly uncomfortable notion for a lot of people.