riskable
@riskable@programming.dev
- Comment on UN approves 40-member scientific panel on the impact of artificial intelligence over US objections 1 day ago:
Of all the things to object to… This?
It’s so silly! No matter what the panel concludes, the current Trump regime will just ignore it anyway. Just like they ignore the IPCC or polling or basic economics.
It’d be like the administration objecting to a panel being formed about vaccines, pollution, corruption, or authoritarianism. Clearly, they don’t care so why bother? It just draws attention to their incompetence.
Just another loud demonstration of it, actually. Otherwise they wouldn’t have objected.
- Comment on Manipulating AI memory for profit: The rise of AI Recommendation Poisoning 2 days ago:
This is why web browsers like Firefox need their own AI. Local AI for not only creating summaries but for detecting bullshit like this.
Yes, creating summaries is kinda lame but without local AI you’re at the mercy of big corporations. It’s a new arms race. Not some bullshit feature that no one needs.
- Comment on Start-up idea 6 days ago:
…and burns people’s homes down due to lack of safety features.
…and children choke to death from easily removable small parts.
…and people get electrocuted because of a lack of warning label telling them not to use it in the bath.
- Comment on Women's razor ads use bare legs but cleaning products don't use clean floors. 1 week ago:
You want political toilet paper?
- Comment on You won: Microsoft is walking back Windows 11’s AI overload — scaling down Copilot and rethinking Recall in a major shift 1 week ago:
It’s called dogfooding and it’s what you’re supposed to do to improve your product.
- Comment on You won: Microsoft is walking back Windows 11’s AI overload — scaling down Copilot and rethinking Recall in a major shift 1 week ago:
Market share percentage is irrelevant. What matters is the total number of users.
If you make a product and there’s a million people on a platform who could buy it, the costs to port that product (and support it) need to be low for it to be worthwhile.
If the total number of people on that platform increases to 10 million, the cost to port/support becomes a minuscule expense rather than a difficult decision.
When you reach 100 million there’s no excuse. There’s a lot of money to be made!
For reference, the current estimated number of desktop Linux users globally is somewhere between 60 and 80 million. In English-speaking countries, the total is around 19-20 million.
It’s actually a lot more complicated than this, but you get the general idea: There’s a threshold where any given software company (including games) is throwing money away by not supporting Linux.
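For illustration, that threshold argument can be put into back-of-envelope form. Every number below (port cost, support cost, buy rate, price) is a made-up assumption, purely to show the shape of the math, not a real figure:

```python
# Back-of-envelope sketch of the "porting threshold" argument.
# All numbers are invented assumptions for illustration only.

def porting_profit(platform_users, port_cost, yearly_support_cost,
                   buy_rate, price):
    """Rough expected profit from porting to a platform of a given size."""
    revenue = platform_users * buy_rate * price
    return revenue - (port_cost + yearly_support_cost)

# Assumed: $200k to port, $50k/yr support, 0.5% of users buy at $30.
for users in (1_000_000, 10_000_000, 100_000_000):
    profit = porting_profit(users, 200_000, 50_000, 0.005, 30)
    print(f"{users:>11,} users -> profit ${profit:,.0f}")
```

With these toy inputs, 1 million users is a loss, 10 million is clearly profitable, and 100 million leaves no excuse, which is the same shape as the argument above.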
Also keep in mind that even if Linux had 50% market share, globally, Tim Sweeney would still not allow Epic to support it. I bet he’d rather start selling their own consoles that run Windows instead!
- Comment on You won: Microsoft is walking back Windows 11’s AI overload — scaling down Copilot and rethinking Recall in a major shift 1 week ago:
“One thing for certain, Microsoft will not stop using Copilot to develop their software in house.”
You’re wrong, but I think you’ll be OK with that because the reality of the situation is actually hilarious:
theverge.com/…/microsoft-claude-code-anthropic-pa…
“Turns out Copilot sucks so let’s just use our competitor’s superior product but that’s no reason we can’t keep foisting the inferior garbage on the masses!”
- Comment on 1 week ago:
Having Sonny Boy listed as a “masterpiece” has me shaking my head. Much in the same way I shook my head after watching Sonny Boy.
Sonny Boy is an anime you recommend people watch to mess with them.
Any normal person who watches it will step away at the end thinking, “WTF did I just watch‽”
Unrelated: Demon Slayer is worth watching. The hype is irrelevant.
- Comment on Not even lottery jackpots are enough to buy apartments in Seoul 1 week ago:
Seems like a short term problem. The birth rate in Good Korea is so low that before long they’ll have vastly more housing than people to live in it.
- Comment on Can anyone explain why? 2 weeks ago:
Wait until you see Gen Alpha’s spending on alcohol!
- Comment on I hope hell is like a microwave so if you find the right spot, you're ok 2 weeks ago:
Yeah it’s a common thought: An afterlife where people gather before going on to the next.
Usually, people assume that the quality of your options for the next life will be judged by whatever criteria they considered most important in this one. Someone who went out of their way to be nice will believe it’s based on how nice you were, whereas someone who spent their life accumulating money/power will assume it’s based on that.
For all we know, though, your “afterlife score” could be based on how many different sorts of food you tried, how many buttons you pressed, how far you traveled from where you were born, etc.
I actually have a novel idea about this concept: Dude dies and gets the red carpet treatment in the afterlife. He’s very happy about it, but he doesn’t understand… He never got married and spent most of his life doing data entry and courtroom stenography.
Turns out, he got the high score in “button pressing.” He’s at the top of the leaderboard and this qualifies him for all sorts of “premium” reincarnation options. Not only that, but the gods intend to put his talents to use right away on “pressing issues.”
- Comment on I hope hell is like a microwave so if you find the right spot, you're ok 2 weeks ago:
In hell, they just use Crow Pilot for this sort of thing.
- Comment on I hope hell is like a microwave so if you find the right spot, you're ok 2 weeks ago:
Me, at life’s exit interview…
“Sooo… In regards to my, er, contributions to the good of the world… Does open source software count? What about all those times I made witty comments that made a few people smile? 😬”
- Comment on I hope hell is like a microwave so if you find the right spot, you're ok 2 weeks ago:
Speaking of assumptions about the afterlife: people who believe in reincarnation typically believe that after you die, you get reincarnated. The assumption there is that it happens right away. What if it happens like a thousand years after you die, or an entire universe goes by first?
- Comment on UK proposes forcing Google to let publishers opt out of AI summaries 2 weeks ago:
Wrong way to handle AI summaries: Google crawls the article and presents its summary.
Right way to handle AI summaries: Your own browser uses a local AI model on your PC to generate the summary.
The first is easy to stop with legislation; the second is impossible to stop, and if you try, you’re a fucking asshole trying to tell people what they can and cannot do with their own hardware. That’s straight up villain behavior.
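A real browser feature like this would run a small on-device language model, but the core property, that the page text never leaves your machine, can be illustrated with even a trivial, dependency-free extractive summarizer. This is a toy sketch of the idea, not real AI:

```python
# Toy illustration of "summarize locally": a naive extractive summarizer
# that scores sentences by word frequency. A real browser feature would
# use a small on-device model, but the privacy property is the same:
# the article text is never sent to anyone's server.
import re
from collections import Counter

def summarize(text, n_sentences=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)

    # Score each sentence by the frequency of the words it contains.
    def score(s):
        return sum(freq[w] for w in re.findall(r"[a-z']+", s.lower()))

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Emit the chosen sentences in their original order.
    return " ".join(s for s in sentences if s in top)
```

Everything here runs on the user’s own hardware, which is exactly why legislation aimed at the crawler model wouldn’t touch it.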
- Comment on [Episode] GANSO! BanG Dream Chan • Ganso! Bandori-chan - Episode 17 discussion 2 weeks ago:
Does anyone actually watch these BanG Dream! shows?
- Comment on How would you spell the sound Transformers make when they transform? 2 weeks ago:
Well, there’s five beats, so…
Ts-che-chu-chu-chk
- Comment on Giving University Exams in the Age of Chatbots 3 weeks ago:
This is super interesting. I think academia is going to need to clearly divide “learning” into two categories:
- What you need to memorize.
- What you need to understand.
If you’re being tested on how well you memorized something, using AI to answer questions is cheating.
If you’re being tested on how well you understand something, using AI during an exam isn’t going to help you much unless it’s something that could be understood very quickly. In which case, why are you bothering to test for that knowledge?
If a student has an hour to answer ten questions about a complex topic, and they can somehow understand it well enough by asking AI about it, it either wasn’t worthy of teaching or that student is wasting their time in school; they clearly learn better on their own.
- Comment on What if brains are eggs? 3 weeks ago:
There’s a whole community about the body being just a shell and the brain being an egg that needs to crack: EGG IRL
- Comment on Amazon Doubles Down on AI Dubs for Anime Despite Backlash: Creative Director Wanted 3 weeks ago:
They totally fucked this up. AI dubs “aren’t there yet.” They’re still years away from being decent enough to get the job done.
If they really want to save money using AI, the correct way, with today’s technology, is to use an AI voice changer! Hire a real voice actor, have them do all the voices, then use the AI voice changer to make them sound like each character.
…but they’re so fucking cheap and lazy they won’t even do that.
- Comment on See for yourself just how massive Meta’s Hyperion data center is 3 weeks ago:
I used to live down the street from a great big data center. It wasn’t a big deal. It’s basically just a building full of servers with extra AC units.
Inside: Loud AF (think: jet engine; wear hearing protection).
Outside: The hum of lots of industrial air conditioning units. Only marginally louder than a big office building.
A data center this big is going to have a lot more AC units than normal but they’ll be spread all around the building. It’s not like living next to an airport or busy train tracks (that’s like 100x worse).
- Comment on [Video] Bunny betrayal 3 weeks ago:
Now show us “the stick” method.
- Comment on "Not A Single Pixel" Of The New Ecco Game Will Be Generated By AI, Insists Series Creator 4 weeks ago:
This is my take as well, but not just for gaming… AI is changing the landscape for all sorts of things. For example, if you wanted serious, professional grammar and consistency checks of your novel, you used to have to pay thousands of dollars for a professional editor to go over it.
Now you can just paste a single chapter at a time into a FREE AI tool and get all that and more.
Yet here we are: Still seeing grammatical mistakes, copy & paste oversights, and similar in brand new books. It costs nothing! Just use the AI FFS.
Checking a book with an AI chatbot uses something like 1/100th of the power/water of streaming a YouTube Short. It’s not a big deal.
The Nebula Awards recently banned books that used AI for grammar checking. My take: “OK, so only books from big publishers are allowed, then?”
- Comment on The AI explosion isn't just hurting the prices of computers and consoles – it's coming for TVs and audio tech too 4 weeks ago:
Every modern monitor has some memory in it. They have timing controllers and image processing chips that need DRAM to function. Not much, but it is standard DDR3/DDR4 or LPDDR RAM.
- Comment on The AI explosion isn't just hurting the prices of computers and consoles – it's coming for TVs and audio tech too 4 weeks ago:
No shit. There’s easier ways to open the fridge.
- Comment on AI’s Memorization Crisis | Large language models don’t “learn”—they copy. And that could change everything for the tech industry. 5 weeks ago:
“unless you consider every single piece of software or code ever to be just ‘a way of giving instructions to computers’”
Yes. Yes I do. That’s exactly what code is: instructions. That’s literally how computers work. That’s what people like me (software developers) do when we write software: We’re writing down instructions.
When you click or move your mouse, you’re giving the computer instructions (well, the driver is). When you press a key, that results in instructions being executed (dozens to thousands of them, actually).
When I click “submit” on this comment, I’m giving a whole bunch of computers some instructions.
Insert meme of, “you mean computers are just running instructions?” “Always have been.”
- Comment on AI’s Memorization Crisis | Large language models don’t “learn”—they copy. And that could change everything for the tech industry. 5 weeks ago:
In Kadrey v. Meta (court case), a group of authors sued Meta for copyright infringement, but the case was thrown out by the judge because they couldn’t actually produce any evidence of infringement beyond, “Look! This passage is similar.” They asked for more time so they could keep trying thousands (millions?) of different prompts until they finally got one that matched closely enough that they might have some real evidence.
In Getty Images v. Stability AI (UK), the court threw out the case for the same reason: It was determined that even though it was possible to generate an image similar to something owned by Getty, that didn’t meet the legal definition of infringement.
Basically, the courts ruled in both cases, “AI models are not just lossy/lousy compression.”
IMHO: What we really need is a ruling on “who is responsible?” When an AI model does output something that violates someone’s copyright, is the owner/creator of the model at fault, or the person who instructed it to do so? Even then, does generating something for an individual even count as “distribution” under the law? I don’t think it does, because to me that’s just like using a copier to copy a book. Anyone can do that (legally) with any book they own, but if they start selling/distributing that copy, then they’re violating copyright.
Even then, there are differences between distributing an AI model that people can run on their own PCs (like Stable Diffusion) vs. using an AI service to do the same thing. The fact that a model can be used for infringement should be meaningless, because anything (e.g. a computer, Photoshop, etc.) can be used for infringement. The actual act of infringement needs to be something someone does by distributing the work.
You know what? Copyright law is way too fucking complicated, LOL!
- Comment on AI’s Memorization Crisis | Large language models don’t “learn”—they copy. And that could change everything for the tech industry. 5 weeks ago:
Hmmm… That’s all an interesting argument but it has nothing to do with my comparison to YouTube/Netflix (or any other kind of video) streaming.
If we were to compare a heavy user of ChatGPT to a teenager who spends a lot of time streaming videos, the ChatGPT side of the equation wouldn’t even amount to 1% of the power/water used by streaming. In fact, even if you add up the power/water usage of all the popular AI services, it still doesn’t amount to much compared to video streaming.
- Comment on AI’s Memorization Crisis | Large language models don’t “learn”—they copy. And that could change everything for the tech industry. 5 weeks ago:
Sell? Only “big AI” is selling it. Generative AI has infinite uses beyond ChatGPT, Claude, Gemini, etc.
Most generative AI research/improvement is academic in nature, and it’s being done by a bunch of poor college students trying to earn graduate degrees. The discoveries of those people are then used by big AI to improve their services.
You seem to be arguing from the standpoint that “AI” == “big AI”, but this is not the case. Research and improvements will continue regardless of whether ChatGPT, Claude, etc. continue to exist. Especially image AI, where free, open source models are superior to the commercial products.
- Comment on AI’s Memorization Crisis | Large language models don’t “learn”—they copy. And that could change everything for the tech industry. 5 weeks ago:
“but we can reasonably assume that Stable Diffusion can render the image on the right partly because it has stored visual elements from the image on the left.”
No, you cannot reasonably assume that. It absolutely did not store the visual elements. What it did was store some floating point values related to keywords the source image had been pre-classified with. During training, it increases or decreases those floating point values by a small amount each time it encounters further images that use those same keywords.
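For what it’s worth, that “nudge the floating point values a small amount” process is just gradient descent. Here’s a minimal toy sketch of a single training step; the numbers are made up and have nothing to do with any real model, which does this across billions of weights:

```python
# Minimal sketch of one gradient-descent training step: each weight is
# nudged a small amount in the direction that reduces the error for the
# current training example. No pixels are stored anywhere; only these
# floating point values drift slightly with every example seen.

def train_step(weights, gradient, learning_rate=0.01):
    """Move each weight a small step against its gradient."""
    return [w - learning_rate * g for w, g in zip(weights, gradient)]

weights = [0.5, -0.2, 0.1]   # assumed toy weight values
gradient = [1.0, -2.0, 0.0]  # gradient from one training example
weights = train_step(weights, gradient)
print(weights)               # each value has been nudged slightly
```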
What the examples demonstrate is a lack of diversity in the training set for those very specific keywords. There’s a reason they chose Stable Diffusion 1.4 and not Stable Diffusion 2.0 (or later versions): the model was drastically improved after that. These sorts of problems (with not-diverse-enough training data) are considered flaws by the very AI researchers creating the models. It’s exactly the type of thing they don’t want to happen!
The article seems to be implying that this is a common problem that happens constantly and that the companies creating these AI models just don’t give a fuck. This is false. It’s flaws like this that leave your model open to attack (and letting competitors figure out your weights; not that it matters with Stable Diffusion since that version is open source), not just copyright lawsuits!
Here’s the part I don’t get: Clearly nobody is distributing copyrighted images by asking AI to do its best to recreate them. When you do this, you end up with severely shitty hack images that nobody wants to look at. Basically, if no one is actually using these images except to say, “aha! My academic research uncovered this tiny flaw in your model that represents an obscure area of AI research!” why TF should anyone care?
They shouldn’t! The only reason why articles like this get any attention at all is because it’s rage bait for AI haters. People who severely hate generative AI will grasp at anything to justify their position. Why? I don’t get it. If you don’t like it, just say you don’t like it! Why do you need to point to absolutely, ridiculously obscure shit like finding a flaw in Stable Diffusion 1.4 (from years ago, before 99% of the world had even heard of generative image AI)?
Generative AI is just the latest way of giving instructions to computers. That’s it! That’s all it is.
Nobody gave a shit about this kind of thing when Star Trek was pretending to do generative AI in the Holodeck. Now that we’ve got the pre-alpha version of that very thing, a lot of extremely vocal haters are freaking TF out.
Do you want the cool shit from Star Trek’s imaginary future or not? This is literally what computer scientists have been dreaming of for decades. It’s here! Have some fun with it!
Generative AI uses up less power/water than streaming YouTube or Netflix (yes, it’s true). So if you’re about to say it’s bad for the environment, I expect you’re just as vocal about streaming video, yeah?