riskable
@riskable@programming.dev
- Comment on Drive safe 5 days ago:
These are the same people that would download a car!
- Comment on Maybe the RAM shortage will make software less bloated? 1 week ago:
Big AI is a bubble but AI in general is not.
If anything, the DRAM shortages will apply pressure on researchers to come up with more efficient AI models rather than more efficient (normal) software overall.
I suspect that as more software gets AI-assisted development we’ll actually see less efficient software at first, but eventually more efficient software as adoption of AI coding assistance becomes more mature (and probably more formalized/automated).
I say this because of experience: If you ask an LLM to write something for you it often does a terrible job with efficiency. However, if you ask it to analyze an existing code base to make it more efficient, it often does a great job. The dichotomy is due to the nature of AI prompting: It works best if you only give it one thing to do at a time.
In theory, if AI code assist becomes more mature and formalized, the “optimize this” step will likely be built-in, rather than something the developer has to ask for after the fact.
- Comment on If reincarnation exists, suicide could make things much much worse. 1 week ago:
Who says you get reincarnated right away? It could be 1,000 years between your death and rebirth!
That’s how I set it up in my silly comedy Isekai, Maizy’s Tails (it’s free to read on the web if you care… Just search it, it’ll be the first link): After death souls need to be “aged” at least 1000 years before they can be put in a new body. The gods think it’s a multiversal rule but the MC figures out a workaround 😁
It actually opens with the gods bidding on souls from Earth… A world that ended about a million years prior to the auction (because that’s how long it took to sort and categorize them all) 🤣
- Comment on Survey reveals most people are holding onto their phones for a long time, and it makes sense 1 week ago:
FYI: Speech recognition is an AI feature and it gets (marginally) better with the newer chips. For example, in noisy environments.
That’s probably the most-used AI thing that nearly everyone uses on occasion. Older phones had to send your speech to the cloud but with the new chips all that processing can be handled locally.
- Comment on Survey reveals most people are holding onto their phones for a long time, and it makes sense 1 week ago:
You have to keep it for two more years! Because even Samsung can’t get Samsung to sell Samsung DRAM for new phones!
- Comment on Survey reveals most people are holding onto their phones for a long time, and it makes sense 1 week ago:
There’s innovation! What are you even talking about‽
I just upgraded my phone two months ago and now two of the four cameras (which is the same number as my old phone that I bought four years ago) have something like 20% more pixels!
Also—now that I have the latest chip—I can talk to my phone in like three more languages. I don’t speak any of them, but… Innovation!
My new phone is also significantly heavier than the old one and the battery life is like 10% better than my old phone when it was new! Also, my display has a few extra lines of resolution on the top and bottom!
No innovation? Hah!
- Comment on Survey reveals most people are holding onto their phones for a long time, and it makes sense 1 week ago:
Time to move smartphones into the “durable goods” category.
- Comment on G-Assist is ‘real’: NVIDIA unveils NitroGen, open-source AI model that can play 1000+ games for you 1 week ago:
I doubt that. New services that host the open models are cropping up all the time. They’re like VPS hosting providers (in fact, existing VPS hosts will soon break out into that space too).
It’s not like Big AI has some huge advantage over the open source models. In fact, for images they’re a little bit behind!
The FOSS coding models are getting pretty fantastic and they get better all the time. It seems like once a month a new, free model comes out that eclipses the previous generation.
- Comment on AI-generated code contains more bugs and errors than human output 1 week ago:
The mistakes it makes depend on the model and the language. GPT5 models can make horrific mistakes though, where they randomly remove huge swaths of code for no reason. Every time it happens I’m like, “what the actual fuck?” Undoing the last change and trying again usually fixes it though 🤷
They all make horrific security mistakes quite often. Though, that’s probably because they’re trained on human code that is *also* chock full of security mistakes (former security consultant, so I’m super biased on that front haha).
- Comment on AI-generated code contains more bugs and errors than human output 1 week ago:
Schrödinger’s AI: It is both useless shit that can only generate “slop” while at the same time being so effective, it is the reason behind 50,000 layoffs/going to take everyone’s jobs.
- Comment on AI-generated code contains more bugs and errors than human output 1 week ago:
You want to see someone using, say, VS Code to write something using, say, Claude Code?
There’s probably a thousand videos of that.
More interesting: I watched someone who was super cheap trying to use multiple AIs to code a project because he kept running out of free credits. Every now and again he’d switch accounts and use up those free credits.
That was an amazing dance, let me tell ya! Glorious!
I asked him which one he’d pay for if he had unlimited money and he said Claude Code. He has the $20/month plan but only uses it in special situations because he’ll run out of credits too fast. $20 really doesn’t get you much with Anthropic 🤷
That inspired me to try out all the code assist AIs and their respective plugins/CLI tools. He’s right: Claude Code was the best by a HUGE margin.
Gemini 3.0 is supposed to be nearly as good but I haven’t tried it yet so I dunno.
Now that I’ve said all that: I am severely disappointed in this article because it doesn’t say which AI models were used. In fact, the study authors don’t even know what AI models were used. So it’s 430 pull requests of random origin, made at some point in 2025.
For all we know, half of those could’ve been made with the Copilot gpt5-mini that everyone gets for free when they install the Copilot extension in VS Code.
- Comment on G-Assist is ‘real’: NVIDIA unveils NitroGen, open-source AI model that can play 1000+ games for you 1 week ago:
Good games are orthogonal to AI usage. It’s possible to have a great game that was written with AI using AI-generated assets. Just as much as it’s possible to have a shitty one.
If AI makes creating games easier, we’re likely to see 1000 shitty games for every good one. But at the same time we’re also likely to see successful games made by people who had great ideas but never had the capital or skills to bring them to life before.
I can’t predict the future of AI but it’s easy to imagine a state where everyone has the power to make a game for basically no cost. Good or bad, that’s where we’re heading.
If making great games doesn’t require a shitton of capital, the ones who are most likely to suffer are the rich AAA game studios. Basically, the capitalists. Because when capital isn’t necessary to get something done anymore, capital becomes less useful.
Effort builds skill but it does not build quality. You could put in a ton of effort and still fail or just make something terrible. What breeds success is iteration (and luck). Because AI makes iteration faster and easier, it’s likely we’re going to see a lot of great things created using it.
- Comment on G-Assist is ‘real’: NVIDIA unveils NitroGen, open-source AI model that can play 1000+ games for you 1 week ago:
FYI: Stuff like this is for automated testing, not “playing games for you” 🤣
Also, I won’t consider it realistic until it can type out, “lol git gud scrub” after ganking someone who just spawned.
- Comment on Why do we produce so much porn? 2 weeks ago:
Listen: Our endless output of porn is the reason why aliens continue to leave us alone! They don’t want to mess up this good thing that just keeps coming.
- Comment on Why do we produce so much porn? 2 weeks ago:
What industry isn’t abusive and exploitative these days?
I kept thinking about it and was like, “What about… No. Wait! No, not that one either…”
- Comment on New Kochikame Anime Project Announced for the First Time in 10 Years Ahead of the Series’ 50th Anniversary 2 weeks ago:
And I thought the gap between Overlord seasons was long!
- Comment on What do other languages use for "magic" words; or names and titles in fantasy and sci-fi novels or cinema? 2 weeks ago:
In situations like this, it’s best to remember why dead languages are dead: Nobody speaks these languages anymore because everyone kept accidentally casting spells!
- Comment on I cannot imagine what lawsuit led to this 2 weeks ago:
Clearly, you do not understand THE POWER of corrugated cardboard!
- Comment on On the wall in every kitchen 2 weeks ago:
Impossible to untangle the knots in it.
- Comment on Larian CEO Responds to Divinity Gen AI Backlash: 'We Are Neither Releasing a Game With Any AI Components, Nor Are We Looking at Trimming Down Teams to Replace Them With AI' 2 weeks ago:
Data centers typically use closed loop cooling systems but those do still lose a bit of water each day that needs to be replaced. It’s not much—compared to the size of the data center—but it’s still a non-trivial amount.
A study recently came out (it was talked about extensively on the Science VS podcast) that said that a long conversation with an AI chat bot (e.g. ChatGPT) could use up to half a liter of water—in the worst case scenario.
This statistic has been used in the news quite a lot recently but it’s a bad statistic: That water usage counts the water used by the power plant (for its own cooling). That’s typically water from ponds and reservoirs built right alongside the power plant (your classic “cooling pond”). So it’s not like the data centers are using 0.5L of fresh water that could be going to people’s homes.
For reference, the actual data center water usage is 12% of that 0.5L: 0.06L of water (for a long chat). Also remember: This is the worst-case scenario with a very poorly-engineered data center.
Another stat from the study that’s relevant: Generating images uses much less energy/water than chat. However, generating videos uses up an order of magnitude more than both (combined).
So if you want the lowest possible energy usage of modern, generative AI: Use fast (low parameter count), open source models… To generate images 👍
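The arithmetic behind those figures is simple enough to sanity-check. A minimal sketch, assuming the numbers as quoted above (0.5 L worst-case total per long chat, with 12% of that attributable to the data center itself; the function name is just for illustration):

```python
# Sanity check of the water-use figures quoted above.
# Assumed inputs (from the study as summarized here, not authoritative):
#   - worst-case total water per long chat: 0.5 L
#   - share used by the data center itself: 12%
#     (the remaining 88% is cooling water at the power plant)

def data_center_water(total_liters: float, dc_share: float = 0.12) -> float:
    """Water attributable to the data center itself, excluding the power plant."""
    return total_liters * dc_share

worst_case_total = 0.5  # liters, long conversation, worst-case scenario
print(round(data_center_water(worst_case_total), 2))  # → 0.06
```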
- Comment on Larian CEO Responds to Divinity Gen AI Backlash: 'We Are Neither Releasing a Game With Any AI Components, Nor Are We Looking at Trimming Down Teams to Replace Them With AI' 2 weeks ago:
The power use from AI is orthogonal to renewable energy. From the news, you’d think that AI data centers have become the number one cause of global warming. Yet, they’re not even in the top 100. Even at the current pace of data center buildouts, they won’t make the top 100… ever.
AI data center power utilization is a regional problem specific to certain localities. It’s a bad idea to build such a data center in certain places but companies do it anyway (for economic reasons that are easy to fix with regulation). It’s not a universal problem across the globe.
Aside: I’d like to point out that the fusion reactor designs currently being built and tested were created using AI. Much of the advancements in that area are thanks to “AI data centers”. If fusion power becomes a reality in the next 50 years it’ll have more than made up for any emissions from data centers. From all of them, ever.
- Comment on The wheels of fate can be harsh 2 weeks ago:
Without Made in Abyss this comic wouldn’t make any sense. Would you have preferred if I used copyrighted characters from some anime?
- Submitted 2 weeks ago to animemes@ani.social | 7 comments
- Comment on Larian CEO Responds to Divinity Gen AI Backlash: 'We Are Neither Releasing a Game With Any AI Components, Nor Are We Looking at Trimming Down Teams to Replace Them With AI' 2 weeks ago:
It’s even more complicated than that: “AI” is not even a well-defined term. Back when Quake 3 was still in beta (“the demo”), id Software held a competition to develop “bot AIs” that could be added to a server so players would have something to play against while they waited for more people to join (or you could have players VS bots style matches).
That was over 25 years ago. What kind of “AI” do you think was used back then? 🤣
The AI hater extremists seem to be in two camps:
- Data center haters
- AI-is-killing-jobs
The data center haters are the strangest, to me. Because there’s this default assumption that data centers can never be powered by renewable energy and that AI will never improve to the point where it can all be run locally on people’s PCs (and other, personal hardware).
Yet every day there’s news suggesting that local AI is performing better and better. It seems inevitable—to me—that “big AI” will go the same route as mainframes.
- Comment on Larian CEO Responds to Divinity Gen AI Backlash: 'We Are Neither Releasing a Game With Any AI Components, Nor Are We Looking at Trimming Down Teams to Replace Them With AI' 2 weeks ago:
Most people—even obsessive gamers—don’t give two shits about AI. There’s a very loud minority that gets in everyone’s face saying all AI is evil like we’re John Connor or something. They are so obsessive and extreme about it, it often makes the news (like this article).
The market has already determined that if a game is fun, people will play it. How much AI was used to make it is irrelevant.
- Comment on What steps can be taken to prevent AI training and scraping of my public facing website? 3 weeks ago:
We learned this lesson in the 90s: If you put something on the (public) Internet, assume it will be scraped (and copied and used in various ways without your consent). If you don’t want that, don’t put it on the Internet.
There’s all sorts of clever things you can do to prevent scraping but none of them are 100% effective and all have negative tradeoffs.
For reference, the big AI players aren’t scraping the Internet to train their LLMs anymore. That creates too many problems, not the least of which is making yourself vulnerable to poisoning. If an AI is scraping your content at this point it’s either amateurs or they’re just indexing it like Google would (or both) so the AI knows where to find it without having to rely on 3rd parties like Google.
Remember: Scraping the Internet is everyone’s right. Trying to stop it is futile and only benefits the biggest of the big search engines/companies.
- Comment on US could ask tourists for five-year social media history before entry 3 weeks ago:
It depends… What color is your skin?
- Comment on Activist groups urge Congress to pause US datacenter buildouts 3 weeks ago:
Mandate that data centers self-power via renewable energy already! It’s such a simple fucking solution that solves basically all the problems of data centers.
Relevant note: The research that came out a while back saying that a long conversation with an AI chatbot could use up to half a liter of water included the water used to cool the power plant that’s powering the data center. The same paper spelled out that the actual water use of the data center itself is only 12% of that. So if we force data centers to be powered via renewable energy, a long conversation with a chatbot would only use 0.06 liters of water, which is basically negligible. Especially when you consider that the 0.5L was a worst-case scenario (older data centers letting all the water evaporate).
- Comment on I need help finishing the SHITPO phonetic alphabet 3 weeks ago:
D should be Dookie.
- Comment on [Discussion] Which character is the cutest for you? 3 weeks ago:
The shoulder fairy in Leadale, Kuu:
Fairy Kuu sitting on Caina’s shoulder
She’s not the cutest looking fairy ever but the way she uses body language to mimic/mock Caina’s speech is cute AF. For example, wagging her finger when Caina is lecturing her children who have been behaving badly. Or just generally popping out of Caina’s hair to stick her tongue out at an enemy 😁