riskable
@riskable@programming.dev
- Comment on Giving University Exams in the Age of Chatbots 1 day ago:
This is super interesting. I think academia is going to need to clearly divide “learning” into two categories:
- What you need to memorize.
- What you need to understand.
If you’re being tested on how well you memorized something, using AI to answer questions is cheating.
If you’re being tested on how well you understand something, using AI during an exam isn’t going to help you much unless it’s something that could be understood very quickly. In which case, why are you bothering to test for that knowledge?
If a student has an hour to answer ten questions about a complex topic, and they can somehow understand it well enough by asking AI about it, it either wasn’t worthy of teaching or that student is wasting their time in school; they clearly learn better on their own.
- Comment on What if brains are eggs? 1 day ago:
There’s a whole community about the body being just a shell and the brain being an egg that needs to crack: EGG IRL
- Comment on Amazon Doubles Down on AI Dubs for Anime Despite Backlash: Creative Director Wanted 1 day ago:
They totally fucked this up. AI dubs “aren’t there yet.” They’re still years away from being decent enough to get the job done.
If they really want to save money using AI, the correct way, with today’s technology, is to use an AI voice changer! Hire a real voice actor and then make them do all the voices, then use the AI voice changer to make them sound like each character.
…but they’re so fucking cheap and lazy they won’t even do that.
- Comment on See for yourself just how massive Meta’s Hyperion data center is 4 days ago:
I used to live down the street from a great big data center. It wasn’t a big deal. It’s basically just a building full of servers with extra AC units.
Inside? Loud AF (think: Jet engine. Wear hearing protection).
Outside: The hum of lots of industrial air conditioning units. Only marginally louder than a big office building.
A data center this big is going to have a lot more AC units than normal but they’ll be spread all around the building. It’s not like living next to an airport or busy train tracks (that’s like 100x worse).
- Comment on [Video] Bunny betrayal 6 days ago:
Now show us “the stick” method.
- Comment on "Not A Single Pixel" Of The New Ecco Game Will Be Generated By AI, Insists Series Creator 1 week ago:
This is my take as well, but not just for gaming… AI is changing the landscape for all sorts of things. For example, if you wanted serious, professional grammar and consistency checks of your novel, you used to have to pay thousands of dollars for a professional editor to go over it.
Now you can just paste a single chapter at a time into a FREE AI tool and get all that and more.
Yet here we are: Still seeing grammatical mistakes, copy & paste oversights, and similar in brand new books. It costs nothing! Just use the AI FFS.
Checking a book with an AI chat bot uses up as much power/water as like 1/100th of streaming a YouTube Short. It’s not a big deal.
The Nebula Awards recently banned books that used AI for grammar checking. My take: “OK, so only books from big publishers are allowed, then?”
- Comment on The AI explosion isn't just hurting the prices of computers and consoles – it's coming for TVs and audio tech too 1 week ago:
Every modern monitor has some memory in it. They have timing controllers and image processing chips that need DRAM to function. Not much, but it is standard DDR3/DDR4 or LPDDR RAM.
- Comment on The AI explosion isn't just hurting the prices of computers and consoles – it's coming for TVs and audio tech too 1 week ago:
No shit. There’s easier ways to open the fridge.
- Comment on AI’s Memorization Crisis | Large language models don’t “learn”—they copy. And that could change everything for the tech industry. 2 weeks ago:
unless you consider every single piece of software or code ever to be just “a way of giving instructions to computers”
Yes. Yes I do. That’s exactly what code is: instructions. That’s literally how computers work. That’s what people like me (software developers) do when we write software: We’re writing down instructions.
When you click or move your mouse, you’re giving the computer instructions (well, the driver is). When you type a key, that’s resulting in an instruction being executed (dozens to thousands, actually).
When I click “submit” on this comment, I’m giving a whole bunch of computers some instructions.
Insert meme of, “you mean computers are just running instructions?” “Always have been.”
- Comment on AI’s Memorization Crisis | Large language models don’t “learn”—they copy. And that could change everything for the tech industry. 2 weeks ago:
In Kadrey v. Meta (court case), a group of authors sued Meta for copyright infringement, but the case was thrown out by the judge because they couldn’t actually produce any evidence of infringement beyond, “Look! This passage is similar.” They asked for more time so they could keep trying thousands (millions?) of different prompts until they finally got one that matched closely enough that they might have some real evidence.
In Getty Images v. Stability AI (UK), the court threw out the case for the same reason: It was determined that even though it was possible to generate an image similar to something owned by Getty, that didn’t meet the legal definition of infringement.
Basically, the courts ruled in both cases, “AI models are not just lossy/lousy compression.”
IMHO: What we really need a ruling on is, “who is responsible?” When an AI model does output something that violates someone’s copyright, is it the owner/creator of the model that’s at fault, or the person who instructed it to do so? Even then, does generating something for an individual even count as “distribution” under the law? I don’t think it does, because to me that’s just like using a copier to copy a book. Anyone can do that (legally) for any book they own, but if they start selling/distributing that copy, then they’re violating copyright.
Even then, there are differences between distributing an AI model that people can run on their own PCs (like Stable Diffusion) vs. using an AI service to do the same thing. Just because the model can be used for infringement should be meaningless, because anything (e.g. a computer, Photoshop, etc.) can be used for infringement. The actual act of infringement needs to be something someone does by distributing the work.
You know what? Copyright law is way too fucking complicated, LOL!
- Comment on AI’s Memorization Crisis | Large language models don’t “learn”—they copy. And that could change everything for the tech industry. 2 weeks ago:
Hmmm… That’s all an interesting argument but it has nothing to do with my comparison to YouTube/Netflix (or any other kind of video) streaming.
If we were to compare a heavy user of ChatGPT to a teenager who spends a lot of time streaming videos, the ChatGPT side of the equation wouldn’t even amount to 1% of the power/water used by streaming. In fact, if you add up all the popular AI services’ power/water usage, it still doesn’t amount to much compared to video streaming.
- Comment on AI’s Memorization Crisis | Large language models don’t “learn”—they copy. And that could change everything for the tech industry. 2 weeks ago:
Sell? Only “big AI” is selling it. Generative AI has infinite uses beyond ChatGPT, Claude, Gemini, etc.
Most generative AI research/improvement is academic in nature, and it’s being done by a bunch of poor college students trying to earn graduate degrees. The discoveries of those people are then used by big AI to improve their services.
You seem to be making some argument from the standpoint that “AI” == “big AI” but this is not the case. Research and improvements will continue regardless of whether or not ChatGPT, Claude, etc continue to exist. Especially image AI where free, open source models are superior to the commercial products.
- Comment on AI’s Memorization Crisis | Large language models don’t “learn”—they copy. And that could change everything for the tech industry. 2 weeks ago:
but we can reasonably assume that Stable Diffusion can render the image on the right partly because it has stored visual elements from the image on the left.
No, you cannot reasonably assume that. It absolutely did not store the visual elements. What it did was store some floating point values associated with the keywords the source image was pre-classified with. During training, it increases or decreases those floating point values a small amount each time it encounters further images that use those same keywords.
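To make that concrete, here’s a toy sketch (nothing like real Stable Diffusion internals — the “gradient,” features, and sizes are all made up for illustration) of how training nudges floating point weights a tiny bit per example instead of storing pixels:

```python
import random

random.seed(0)

# Hypothetical weight vector associated with one caption keyword.
weights = [0.0] * 8

def toy_gradient(image_features, weights):
    # Made-up "gradient": how far the weights are from this example's features.
    return [f - w for f, w in zip(image_features, weights)]

learning_rate = 0.01

# "Train" on 1000 images that share the same keyword.
for _ in range(1000):
    # Each image contributes only a small, noisy feature vector...
    image_features = [random.gauss(0.5, 0.1) for _ in range(8)]
    grad = toy_gradient(image_features, weights)
    # ...which nudges the weights a tiny amount. No pixels are kept.
    weights = [w + learning_rate * g for w, g in zip(weights, grad)]

# The weights end up near the *average* of the training features (~0.5);
# no individual training image is recoverable from them.
print([round(w, 2) for w in weights])
```

The flip side, which is what the article is poking at: if thousands of training images for one keyword are near-identical, that “average” ends up looking a lot like any one of them.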
What the examples demonstrate is a lack of diversity in the training set for those very specific keywords. There’s a reason why they chose Stable Diffusion 1.4 and not Stable Diffusion 2.0 (or later versions)… Because they drastically improved the model after that. These sorts of problems (with not-diverse-enough training data) are considered flaws by the very AI researchers creating the models. It’s exactly the type of thing they don’t want to happen!
The article seems to be implying that this is a common problem that happens constantly and that the companies creating these AI models just don’t give a fuck. This is false. It’s flaws like this that leave your model open to attack (and letting competitors figure out your weights; not that it matters with Stable Diffusion since that version is open source), not just copyright lawsuits!
Here’s the part I don’t get: Clearly nobody is distributing copyrighted images by asking AI to do its best to recreate them. When you do this, you end up with severely shitty hack images that nobody wants to look at. Basically, if no one is actually using these images except to say, “aha! My academic research uncovered this tiny flaw in your model that represents an obscure area of AI research!” why TF should anyone care?
They shouldn’t! The only reason why articles like this get any attention at all is because it’s rage bait for AI haters. People who severely hate generative AI will grasp at anything to justify their position. Why? I don’t get it. If you don’t like it, just say you don’t like it! Why do you need to point to absolutely, ridiculously obscure shit like finding a flaw in Stable Diffusion 1.4 (from years ago, before 99% of the world had even heard of generative image AI)?
Generative AI is just the latest way of giving instructions to computers. That’s it! That’s all it is.
Nobody gave a shit about this kind of thing when Star Trek was pretending to do generative AI in the Holodeck. Now that we’ve got the pre-alpha version of that very thing, a lot of extremely vocal haters are freaking TF out.
Do you want the cool shit from Star Trek’s imaginary future or not? This is literally what computer scientists have been dreaming of for decades. It’s here! Have some fun with it!
Generative AI uses up less power/water than streaming YouTube or Netflix (yes, it’s true). So if you’re about to say it’s bad for the environment, I expect you’re just as vocal about streaming video, yeah?
- Comment on Newer AI Coding Assistants Are Failing in Insidious Ways 2 weeks ago:
Correction: Newer versions of ChatGPT (GPT-5.x) are failing in insidious ways. The article has no mention of the other popular services or the dozens of open source coding assist AI models (e.g. Qwen, gpt-oss, etc).
The open source stuff is amazing and gets better just as quickly as the big AI options. Yet they’re boring so they don’t make the news.
- Comment on Musk’s Grok AI Generated Thousands of Undressed Images Per Hour on X 2 weeks ago:
Well, the CSAM stuff is unforgivable but I seriously doubt even the soulless demon that is Elon Musk wants his AI tool generating that. I’m sure they’re working on it (it’s actually a hard computer science sort of problem because the tool is supposed to generate what the user asks for and there’s always going to be an infinite number of ways to trick it since LLMs aren’t actually intelligent).
Porn itself is not illegal.
- Comment on Musk’s Grok AI Generated Thousands of Undressed Images Per Hour on X 2 weeks ago:
I don’t know, man… Have you even seen Amber? It might be worth an alert 🤷
- Comment on Musk’s Grok AI Generated Thousands of Undressed Images Per Hour on X 2 weeks ago:
I don’t know how to tell you this but… Every body gives a shit. We’re born shitters.
- Comment on Journalistic Malpractice: No LLM Ever ‘Admits’ To Anything, And Reporting Otherwise Is A Lie 2 weeks ago:
Good catch!
- Comment on Musk’s Grok AI Generated Thousands of Undressed Images Per Hour on X 2 weeks ago:
The real problem here is that Xitter isn’t supposed to be a porn site (even though it’s hosted loads of porn since before Musk bought it). They basically deeply integrated a porn generator into their very publicly-accessible “short text posts” website. Anyone can ask it to generate porn inside of any post and it’ll happily do so.
It’s like showing up at Walmart and seeing everyone naked (and many fucking), all over the store. That’s not why you’re there (though: Why TF are you still using that shithole of a site‽).
The solution is simple: Everyone everywhere needs to classify Xitter as a porn site. It’ll get blocked by businesses and schools and the world will be a better place.
- Comment on Sony AI patent will see PlayStation games play themselves when players are stuck 2 weeks ago:
“To solve this puzzle, you have to get your dog to poop in the circle…”
- Comment on Sony AI patent will see PlayStation games play themselves when players are stuck 2 weeks ago:
Yep. Stadia also had a feature like this (that no one ever used).
Just another example of why software patents should not exist.
- Comment on Is there anything of any interests for the tech bros in Greenland? 2 weeks ago:
It’s cold outside all year round and there’s abundant hydroelectric power. Basically, it’s the perfect place to build data centers.
- Comment on Drive safe 3 weeks ago:
These are the same people that would download a car!
- Comment on Maybe the RAM shortage will make software less bloated? 4 weeks ago:
Big AI is a bubble but AI in general is not.
If anything, the DRAM shortages will apply pressure on researchers to come up with more efficient AI models rather than more efficient (normal) software overall.
I suspect that as more software gets AI-assisted development we’ll actually see less efficient software at first but, eventually, more efficient software as adoption of AI coding assist becomes more mature (and probably more formalized/automated).
I say this because of experience: If you ask an LLM to write something for you it often does a terrible job with efficiency. However, if you ask it to analyze an existing code base to make it more efficient, it often does a great job. The dichotomy is due to the nature of AI prompting: It works best if you only give it one thing to do at a time.
In theory, if AI code assist becomes more mature and formalized, the “optimize this” step will likely be built-in, rather than something the developer has to ask for after the fact.
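The “one thing at a time” pattern I’m describing can be sketched as two separate prompts. Note that `ask_llm()` below is a made-up placeholder, not any real API — swap in whatever chat service or local model you actually use:

```python
# Hypothetical two-pass workflow; ask_llm() is a stand-in, NOT a real library.
def ask_llm(prompt: str) -> str:
    # Placeholder so the sketch runs; replace with a real model call.
    return f"<model response to: {prompt[:40]}>"

def generate_then_optimize(task: str) -> str:
    # Pass 1: one job only -- write the code.
    draft = ask_llm(f"Write code that does the following: {task}")
    # Pass 2: one job only -- make the existing code more efficient,
    # without changing its behavior.
    return ask_llm(
        "Analyze this code and rewrite it for efficiency, "
        f"keeping the same behavior:\n{draft}"
    )

print(generate_then_optimize("parse a CSV file into a list of dicts"))
```

Asking for “fast, correct code” in one prompt tends to get you neither; splitting it into generate-then-optimize plays to how these models actually behave.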
- Comment on If reincarnation exists, suicide could make things much much worse. 4 weeks ago:
Who says you get reincarnated right away? It could be 1,000 years between your death and rebirth!
That’s how I set it up in my silly comedy Isekai, Maizy’s Tails (it’s free to read on the web if you care… Just search it, it’ll be the first link): After death souls need to be “aged” at least 1000 years before they can be put in a new body. The gods think it’s a multiversal rule but the MC figures out a workaround 😁
It actually opens with the gods bidding on souls from Earth… A world that ended about a million years prior to the auction (because that’s how long it took to sort and categorize them all) 🤣
- Comment on Survey reveals most people are holding onto their phones for a long time, and it makes sense 4 weeks ago:
FYI: Speech recognition is an AI feature and it gets (marginally) better with the newer chips. For example, in noisy environments.
That’s probably the most-used AI thing that nearly everyone uses on occasion. Older phones had to send your speech to the cloud but with the new chips all that processing can be handled locally.
- Comment on Survey reveals most people are holding onto their phones for a long time, and it makes sense 4 weeks ago:
You have to keep it for two more years! Because even Samsung can’t get Samsung to sell Samsung DRAM for new phones!
- Comment on Survey reveals most people are holding onto their phones for a long time, and it makes sense 4 weeks ago:
There’s innovation! What are you even talking about‽
I just upgraded my phone two months ago and now two of the four cameras (which is the same number as my old phone that I bought four years ago) have something like 20% more pixels!
Also—now that I have the latest chip—I can talk to my phone in like three more languages. I don’t speak any of them, but… Innovation!
My new phone is also significantly heavier than the old one and the battery life is like 10% better than my old phone when it was new! Also, my display has a few extra lines of resolution on the top and bottom!
No innovation? Hah!
- Comment on Survey reveals most people are holding onto their phones for a long time, and it makes sense 4 weeks ago:
Time to move smartphones into the “durable goods” category.
- Comment on G-Assist is ‘real’: NVIDIA unveils NitroGen, open-source AI model that can play 1000+ games for you 4 weeks ago:
I doubt that. New services that host the open models are cropping up all the time. They’re like VPS hosting providers (in fact, existing VPS hosts will soon break into that space too).
It’s not like Big AI has some huge advantage over the open source models. In fact, for images they’re a little bit behind!
The FOSS coding models are getting pretty fantastic and they get better all the time. It seems like once a month a new, free model comes out that eclipses the previous generation.