Submitted 1 day ago by 1984@lemmy.today to technology@lemmy.world
https://futurism.com/artificial-intelligence/sam-altman-smut-response
Guess we can always rely on the good old-fashioned ways to make money…
Honestly, I think it's pretty awful, but I'm not surprised.
For a company that was seemingly so close to “AGI”, pivoting to porn is a weird priority.
Legend has it that every new technology is first used for something related to sex or pornography. That seems to be the way of humankind.
— Tim Berners-Lee, inventor of the World Wide Web, HTML, URLs, and HTTP.
If you recall that whole hullabaloo where Hollywood split into schisms, some studios backing Blu-ray Disc and others backing HD DVD: whichever format porn backs is usually the one that ends up the most successful.
Ah, yes, the fabled Horny Text Markup Language.
Tbh it’s less weird than pivoting to weapons-grade uranium
They want to create a fuck bot at the end of the day.
I doubt that OpenAI themselves will do so, but I'm absolutely confident that not only will someone be banging on this, they probably already have. In fact, IIRC from an earlier discussion, someone was already selling sex dolls with said integration, and I doubt that they were including local parallel-compute hardware for it.
kagis
I don’t think that this is the one I remember, but doesn’t really matter; I’m sure that there’s a whole industry working on it.
scmp.com/…/chinese-sex-doll-maker-sees-jump-2025-…
Chinese sex doll maker sees jump in 2025 sales as AI boosts adult toys’ user experience
The LLM-powered dolls are expected to cost from US$100 to US$200 more than existing versions, which are currently sold between US$1,500 and US$2,000.
WMDoll – based in Zhongshan, a city in southern Guangdong province – embeds the company’s latest MetaBox series with an AI module, which is connected to cloud computing services hosted on data centres across various markets where the LLMs process the information from each toy.
According to the company, it has adopted several open-source LLMs, including Meta Platforms’ Llama AI models, which can be fine-tuned and deployed anywhere.
I.e. porn.
Leave it to Sam Altman to turn a pervert problem into a pervert opportunity.
Controversial take:
A lot of porn production, from ‘borderline softcore’ like a lot of Instagram, to OnlyFans, to professional hardcore, is some combination of shady/gross/exploitive.
…I will not complain if that industry gets smashed.
As for the ‘corporate control’ aspect, all this is racing towards being locally run anyway. DrawThings on iOS is already kind of incredible, and that's massively unoptimized.
This isn't refuting your point at all, but if we complain that porn gives young men unrealistic expectations for sex now, wait until they can generate literally anything and that becomes the new baseline for their expectations.
(To be fair, they’ve always been able to do this if their imaginations were big enough)
Maybe there’s a crossover point where it becomes fantasy?
I'm playing devil's advocate here. But I do feel like hyper-reality and animation and robots could be easier to psychologically separate from real life than ‘real’ film or camgirls or whatever. Especially if the curtain is pulled back and all their knobs are exposed.
At… low points in my life, I've used locally run LLMs as sounding boards in lieu of family or whatever, and this is where I'm coming from. Even mentally compromised, all the technical setup/troubleshooting and knobs make it obvious I'm talking to a tool, not a person. I feel a lot of AI would be healthier if presented that way, including the inevitable pornbots, instead of as the magic oracles Tech Bros (and their apps) like to paint them as.
I'd generally agree; the industry itself seems high-risk for exploitation because of its very nature.
Realistically, though, this isn't going to help that much; most AI-generated content that tries to look real ends up quite uncanny. So this is probably more likely to cannibalise some of the least problematic parts of the industry, like digital art.
You haven’t seen the newest Flux/Qwen/Wan workflows, much less all the new video models coming out. Perfect, no, but porn is one of those industries where ‘good enough’ is good enough for most, when combined with convenience.
most AI-generated content that tries to look real ends up quite uncanny
I think that a lot of people who say this have looked at a combination of material produced by early models and operated by humans who haven’t spent time adapting to any limitations that can’t be addressed on the software side. And, yeah, they had limitations (“generative AI can’t do fingers!”) but those have rapidly been getting ironed out.
I remember posting one of the first images I generated with Flux to a community here: a jaguar lying next to a white cat. This was me just playing around. I wouldn't have been able to tell you that it wasn't a photograph. And that was some time back, and I'm not a full-time user professionally aiming to make use of the stuff.
kagis
Yeah, here we are.
“Cats”
lemmy.today/…/b97e6455-2c37-4343-bdc4-5907e26b1b5…
I could not distinguish between that and a photograph. It doesn’t have the kind of artifacts that I could identify. That’s using a model that’s over a year old — forever, at the rate things are changing — from a non-expert on just local hardware, and was just a first-pass, not a “generate 100 and pick the best” or something that had any tweaking involved.
Flux was not especially amenable, as diffusion models go, to the generation of pornography last I looked, but I am quite certain that there will be photography-oriented and real-video oriented models that will be.
And that was done with the limited resources available in the past. There is now a lot of capital going towards advancing the field, and a lot of scale coming.
This is where I’ve been at on it. On top of that, the stuff that’s not directly exploiting human actors (like drawn or animated content) or pushing their boundaries is still coming out of studios that aren’t exactly known for healthy work/life balance. To say nothing of the kind of fetish content that might come out of those places too, which surely takes its own toll on creators.
If we can offload all of that potential trauma onto computers, I’m all for it.
Yeah… I never even thought about how all that commercial video hentai gets made.
Except these AI models need data to train on; they cannot improve without an industry to leech off of.
As if we didn't already have more than enough pornographic material on all the hard drives worldwide for training. There's nothing new coming out of this industry in terms of image material; porn is infinite repetition.
Except these AI models need data to train on; they cannot improve without an industry to leech off of.
Not anymore.
The new trend in ML is training on synthetic data, alongside smaller sets of highly curated data.
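For the curious, the recipe is roughly: have a strong “teacher” model generate a big pile of cheap examples, then mix them with a much smaller hand-curated set, usually upweighted. Here's a toy Python sketch of just the mixing step; teacher_generate and all the numbers are hypothetical stand-ins, not any real pipeline:

```python
import random

def teacher_generate(n):
    # Hypothetical stand-in: in reality this would be a large "teacher"
    # model sampling prompts and completions; here it just fabricates
    # toy (input, label) pairs.
    return [(x := random.random(), 2 * x) for _ in range(n)]

# Small, high-quality hand-curated set (toy values).
curated = [(0.1, 0.2), (0.5, 1.0), (0.9, 1.8)]

# Large, cheap synthetic set.
synthetic = teacher_generate(10_000)

# Common recipe: upweight the curated data when mixing.
training_mix = synthetic + curated * 100
random.shuffle(training_mix)
print(len(training_mix))  # 10,300 examples, ready for a "student" model
```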
As for the ‘corporate control’ aspect, all this stuff is racing towards being locally run anyway (since it's free).
I am not at all sure about that. I use an RX 7900 XTX and a Framework Desktop with an AI Max 395+, both of which I got to run LLMs and diffusion models locally, so I've certainly no personal aversion to local compute.
But there are a number of factors pulling in different directions. I am very far from certain that the end game here is local compute.
In favor of local
Privacy.
Information security. It's not that there aren't attacks that can be performed using just distribution of static models (If Anyone Builds It, We All Die has some interesting theoretical attacks along those lines), but if you're running important things at an institution that depend on some big, outside service, you're creating attack vectors into your company's systems. Not to mention that even if you trust the AI provider and whatever government has access to their servers, you may not trust them to be able to keep attackers out of their infrastructure. True, this also applies to many other cloud-based services, but there are a number of places that run services internally for exactly this reason.
No network dependency for operation, in terms of uptime. Especially for things like, say, voice recognition for places with intermittent connection, this is important.
Good latency. And no bandwidth restrictions. Though a lot of uses today really are not very sensitive to either.
For some locales, regulatory restrictions. Let's say that one is generating erotica with generative AI, which is a popular application. The Brits just made portraying strangulation in pornography illegal. I suspect that if some random cloud service permits generation of erotic material involving strangulation, they're probably open to trouble. A random Brit running a model locally may well not be in compliance with the law (I don't recall if it covers only commercial provision or not), but in practical terms it's probably not particularly enforceable. That may be a very substantial factor based on where someone lives. And the Brits are far from the most severe; Iranian law, for example, permits execution for producing pornography involving homosexuality.
In favor of cloud
Power usage. This is, in 2025, very substantial. A lot of people have phones or laptops that run off batteries of limited size, and current parallel-compute hardware that runs powerful models at a useful rate can be pretty power-hungry. My RX 7900 XTX can pull 355 watts, which is wildly outside the power budget of portable devices; an Nvidia H100 is 700W, and there are systems that use a bunch of those (see the back-of-envelope sketch after this list). Even if you need to spend some power to transfer data, it's massively outweighed by getting the parallel compute off the battery. My guess is that even if people shift some compute to be local (e.g. offline speech recognition), it may be very common for people with smartphones to use a lot of software that talks to remote servers for heavy-duty parallel compute.
Cooling. Even if you have a laptop plugged into wall power, you need to dissipate the heat. You can maybe use eGPU accelerators for laptops — I kind of suspect that eGPUs might see some degree of resurgence for this specific market, if they haven’t already — but even then, it’s noisy.
Proprietary models. If proprietary models wind up dominating, which I think is a very real possibility, AI service providers have a very strong incentive to keep their models private, and one way to do that is to not distribute the model.
Expensive hardware. Right now, a lot of the hardware is really expensive. It looks like an H100 runs maybe $30k at the moment, maybe $45k. A lot of the applications are “bursty”: you need access to an H100, but not the kind of sustained access that would keep that expensive hardware busy. As long as the costs and applications look like that, there's a very strong incentive to time-share hardware: buy a pool of them and share them among users. If I'm using my hardware 1% of the time, I only need to pay something like 1% as much if I'm willing to use shared hardware (again, see the sketch after this list). We used to do this back when all computers were expensive: dumb terminals and teletypes connected to “real” computers, with multiple users sharing access to the hardware. That could very much become the norm again. It's true that I expect hardware capable of a given level of parallel compute will probably come down in price (though there's a lot of unfilled demand to meet), and that the software can probably be made more hardware-efficient than it is today. Those argue for costs coming down. But it's also true that the software guys can probably produce better output and more interesting applications if they get more powerful hardware to play with, and that argues for upward pressure.
National security restrictions. One possible world we wind up in is one where large parallel-compute systems are restricted, because it's too dangerous to permit people to run around with artificial superintelligences. In the Yudkowsky book I mention above, for example, the authors want international law to entirely prohibit beefy parallel-compute capability from being available to pretty much anyone, due to the risks of artificial superintelligence, and I'm pretty sure that there are also people who just want physical access to parallel compute restricted, which would be a lot easier if the only people who could get the hardware were regulated datacenters. I am not at all sure that this will actually happen, but there are people who have real security concerns here, and it might be that that position becomes a consensus one in the future. Note that I think we may already be “across the line” with existing hardware if parallel compute can be sharded to a sufficient degree across many smaller systems; your crypto-mining datacenter running racks of Nvidia 3090s might already be enough, if you can design a superintelligence that can run on it.
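To put rough numbers on the battery and time-sharing points above (the battery capacity and utilization figures are ballpark assumptions, not measurements):

```python
# Battery point: minutes a phone-sized battery could sustain a desktop GPU.
phone_battery_wh = 15        # assumed ~4,000 mAh at ~3.85 V
gpu_draw_w = 355             # RX 7900 XTX board power, as cited above
print(phone_battery_wh / gpu_draw_w * 60)   # ~2.5 minutes

# Time-sharing point: effective hardware cost at 1% utilization.
h100_price_usd = 30_000      # low-end H100 figure cited above
utilization = 0.01           # "using my hardware 1% of the time"
print(h100_price_usd * utilization)         # ~$300 of shared-H100 cost
```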
I disagree with a lot of points, positive and negative.
…Honestly, the average person does not care about privacy, security, or being offline. They have shown, repeatedly, that they will gladly trade all that away for cheap convenience.
Nor do they care about power usage or cooling. They generally do not understand thermodynamics, and your 395 (much less an iPhone GPU) would be a rounding error on their bill.
I’m not trying to disparage folks here, but that’s how they are. We’re talking ‘average people.’
As for proprietary models: even if we don't get a single new open-source release, not one, the models we have right now (with a little finetuning/continued training) are good enough for tons of porn.
On hardware, I'm talking smartphones, and only smartphones. They're powerful enough already; they just need a lot of software work and a bit more RAM, and everyone already has one.
Regulatory restrictions at either end are quite interesting, and I’m honestly not sure how it will pan out. Though I’m skeptical of any ‘superintelligence’ danger from commodity hardware, as the current software architectures are just not leading to that.
But it was Facebook that torrented petabytes of porn??
That was all for “personal use”.
Goonerbook.
Yeah, but corpos get a pass on anything and everything.
The rest of us peons don't.
If we let them then yes.
The point was it was Meta, not OpenAI, so why is Altman doing this instead of the Zucc?
Is it voyeur for OpenAI employees to read a person’s sexting chats with their AI? 🤔
Just a few months back Sammy was dunking on Musk for allowing porn on Grok. What changed now?
They realized how much revenue they were leaving on the table
Investors rang up
They realized that, realistically, the main use of AI for creative work is making niche fetish stuff; otherwise the human-produced equivalent is always superior.
On one hand, more power to the people. On the other hand, ChatGPT is operated by a company run by Sam Altman.
Honestly, good for them. Companies that ban mature content are the worst.
Sam Altman is pretty awful, modern AI is pretty awful, but what specifically is pretty awful about this?
I guess I don't like the idea of AI getting popular with humanity because it can generate porn.
I was hoping for more, I guess. :)
So desperate to make money, they are going to prostitute their child. Pardon me while I go throw up.
sure, AI will replace us all, solve climate change and cure cancer… but let's do a little porn on the side, because why not? right?
they are desperate for some revenue increase to keep cooking the books a tad longer