riskable
@riskable@programming.dev
- Comment on Teachers Are Not OK 3 days ago:
Correction: Education is not OK.
AI is just giving poor kids the same opportunities rich kids have had for decades: opportunities to cheat a system that was designed not to give students the best education possible but to bring them up to the bare minimum required to become factory workers.
Except we don’t have very many factories any more. And we don’t have jobs for all these graduates that pay a living wage.
The banks are going to have to get involved soon. They’re going to have to figure out a way to load up working-age people with long term debt without college being involved.
- Comment on Meta rolled back protections. Now hate is surging. 1 week ago:
To me, this is like saying, “4chan has turned into a cesspool!” Yeah: It was like that from the start. YOU were the ones that assumed it was ever safe!
You’re posting stuff on the public Internet to a website for adults where literally anyone can sign up and comment FFS.
If you want good moderation you need community moderation from people in that community. Not some giant/evil megacorp!
There are all sorts of tools and platforms that do this properly, easily, and for free. If you don’t like Meta’s websites, move off of them already!
- Comment on Microsoft’s New Xbox Strategy Starts with Windows and Ends with No Console 1 week ago:
Mods on Xbox only exist for games where the developer officially added mod support. I mean, sure, it’s great when a game maker does that, but official mod support usually isn’t as good as community-made mod support, where mods don’t require approval and can’t be censored or removed just because the vendor doesn’t like them.
Remember: Microsoft’s vision of mods is what you get with the Bedrock version of Minecraft. Yet the mods available in the Java version are so vastly superior the difference is like night and day.
Console players—who are used to living without mods—don’t understand. Once mods become a regular thing that you expect in popular games, going without them feels like going back to the dark ages.
- Comment on Microsoft’s New Xbox Strategy Starts with Windows and Ends with No Console 1 week ago:
All the fun of Windows gaming with the locked-down ecosystem of a console (no mods). What could go wrong?
It’s Windows Mobile all over again.
- Comment on We went from LEARN TO CODE to NO ONE LEARN TO CODE GET A CONSTRUCTION JOB in about a 3 year span. 1 week ago:
Interestingly, that’s how it works for construction jobs too!
Things will break and they will be back.
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 2 weeks ago:
To be fair, the world of JavaScript is such a clusterfuck… Can you really blame the LLM for needing constant reminders about the specifics of your project?
When a programming language has five hundred bazillion absolutely terrible ways of accomplishing a given thing—and endless absolutely awful code examples on the Internet to “learn from”—you’re just asking for trouble. Not just from trying to get an LLM to produce what you want, but also from trying to get humans to do it.
This is why LLMs are so fucking good at writing Rust and Python: There’s only so many ways to do a thing and the larger community pretty much always uses the same solutions.
JavaScript? How can it even keep up? You’re using yarn today but in a year you’ll probably be like, “fuuuuck this code is garbage… I need to convert this all to <new thing>.”
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 2 weeks ago:
Define “reasoning.” For decades software developers have been writing code with conditionals. That’s “reasoning.”
LLMs are “reasoning”… They’re just not doing human-like reasoning.
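To make that concrete, here’s a toy sketch of the point about conditionals (the function name, status codes, and retry limit are all made up purely for illustration): even a single hard-coded conditional encodes a crude inference step.

```python
# Toy illustration only: a plain conditional already encodes a crude inference step.
# The rule, status codes, and retry limit below are invented for the example.
def should_retry(status_code: int, attempts: int) -> bool:
    """'If the failure looks transient and we haven't retried too often, try again'
    is a tiny, hard-coded act of reasoning."""
    transient = status_code in (429, 500, 502, 503, 504)
    return transient and attempts < 3

print(should_retry(503, attempts=1))  # True: transient error, few attempts so far
print(should_retry(404, attempts=1))  # False: not a transient failure
```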
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 2 weeks ago:
That just means they’d be great CEOs!
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 2 weeks ago:
I’m not convinced that humans don’t reason in a similar fashion. When I’m asked to produce pointless bullshit at work my brain puts in a similar level of reasoning to an LLM.
Think about “normal” programming: An experienced developer (self-trained on dozens of enterprise code bases) doesn’t have to think much at all about 90% of what they’re coding. It’s all bog-standard bullshit, so they end up copying and pasting from previous work, Stack Overflow, etc., because it’s nothing special.
The remaining 10% is “the hard stuff”. They have to read documentation, search the Internet, and then—after all that effort to avoid having to think—they sigh and actually start thinking in order to program the thing they need.
LLMs go through similar motions behind the scenes! Probably because they were created by software developers. But they still fail at that last 10%: the stuff that requires actual thinking.
Eventually someone is going to figure out how to auto-generate LoRAs based on test cases combined with trial and error, which then get used by the AI model to improve itself, and that’s when people are going to be like, “Oh shit! Maybe AGI really is imminent!” But again, they’ll be wrong.
AGI won’t happen until AI models get good at retraining themselves with something better than basic reinforcement learning. For that to happen, the working memory of the model needs to be nearly as big as the hardware that was used to train it. That, and loads and loads of spare matrix math processors ready to go for handling that retraining.
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 2 weeks ago:
The only reason we’re not there yet is memory limitations.
Eventually some company will come out with AI hardware that lets you link up a petabyte of ultra fast memory to chips that contain a million parallel matrix math processors. Then we’ll have an entirely new problem: AI that trains itself incorrectly too quickly.
Just you watch: The next big breakthrough in AI tech will come around 2032-2035 (when the hardware is available) and everyone will be bitching that “chain reasoning” (or whatever the term turns out to be) isn’t as smart as everyone thinks it is.
- Comment on This Week in Plasma: Plasma 6.4 is nigh 2 weeks ago:
I love KDE!
- Comment on Looking for the perfect 5 year anniversary gift? 2 weeks ago:
This just proves that Google’s AI is a cut above the rest!
- Comment on I don't like the Linux clipboard situation 3 weeks ago:
Just use KDE and you get Klipper, which is an amazing clipboard manager. It keeps track of everything you’ve copied to the clipboard (raise the history size to 999).
Configure it to pop up at the cursor location, and whenever you press the keyboard shortcut you’ll have access to everything.
I personally prefer that it not paste immediately upon selection, but there are a number of options like that to set it up to your liking.
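If you ever want to script it, Klipper also exposes its history over DBus. Here’s a minimal sketch, assuming the usual org.kde.klipper service and a qdbus binary on your PATH (it may be named qdbus6 or qdbus-qt5 depending on your distro); you can list the actual method names yourself with `qdbus org.kde.klipper /klipper`.

```python
# Minimal sketch: read Klipper's clipboard history over DBus using qdbus.
# Assumes Klipper is running and a `qdbus` binary is available; the service,
# path, and method names below are the ones Klipper typically exposes.
import subprocess

def klipper(method: str, *args: str) -> str:
    """Call a method on Klipper's DBus interface and return its output (best effort)."""
    cmd = ["qdbus", "org.kde.klipper", "/klipper", method, *args]
    result = subprocess.run(cmd, capture_output=True, text=True, check=False)
    return result.stdout.strip()

# The current clipboard entry
print("current:", klipper("getClipboardContents"))

# Walk back through recent history (index 0 is the newest entry)
for i in range(10):
    item = klipper("getClipboardHistoryItem", str(i))
    if not item:
        break
    print(f"{i}: {item[:60]}")
```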
- Comment on [deleted] 4 weeks ago:
So… You were lovin’ it and they were having it their way.
- Comment on Eventually, old computers and operating systems will likely be referred to as dumb computers or dumb terminals or similar, because they don't have artificial intelligence. 5 weeks ago:
Haha, that’s amazing.
Guess I need to go buy “an inflator” 🤣
- Comment on Eventually, old computers and operating systems will likely be referred to as dumb computers or dumb terminals or similar, because they don't have artificial intelligence. 5 weeks ago:
Well… Don’t leave us hanging!
Did the instructions work? You’d think it would be harder than inflating a tablet-style phone… Because of the hinge.
- Comment on Microsoft is putting AI actions into the Windows File Explorer 5 weeks ago:
The big difference is that updates in Linux happen in the background and aren’t very intrusive. Your hard drive will get used here and there as packages unpack, but the difference between, say, apt and Windows Update is stark. Windows Update slows everything down quite a lot.
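For reference, this is roughly how Ubuntu-based distros (Kubuntu included) enable those background updates, assuming the stock unattended-upgrades setup; the file path and keys below are the usual defaults, so double-check them on your own install:

```
# /etc/apt/apt.conf.d/20auto-upgrades  (typical defaults; verify on your system)
APT::Periodic::Update-Package-Lists "1";   # refresh package lists in the background
APT::Periodic::Unattended-Upgrade "1";     # install security updates automatically
```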
- Comment on It's Breathtaking How Fast AI Is Screwing Up the Education System 5 weeks ago:
“her goal isn’t to get them to stop, it’s to get them to recognize what garbage writing is and how to fix it so it isn’t garbage anymore.”
I wish English teachers did this instead of… Whatever TF they’re doing now.
This is something they should’ve been doing all along. Long before the invention of LLMs or computers.
- Comment on Duolingo CEO says AI is a better teacher than humans—but schools will exist ‘because you still need childcare’ 5 weeks ago:
Move to Japan 👍
- Comment on What's a good HTPC OS and software? 5 weeks ago:
I use Kubuntu with KDE Connect. It lets me control everything using my phone 👍
I can play/pause whatever from my lock screen and can use my phone’s keyboard like it’s connected to the computer. It’s fantastic 👍
- Comment on A shapeshifter standing in front of a mirror for hours making subtle adjustments to their disguises like a gamer in the character creator. 5 weeks ago:
Nah just own it. You only need to kill this one person to obtain your shapeshifter dreams!
Admit it, “I’d say sorry but I’d still pull the trigger.” 😁
- Comment on Why don't these code-writing AIs just output straight up machine code? 5 weeks ago:
Umm… AI has been used to improve compilers dating all the way back to 2004:
github.com/…/Artificial-Intelligence-in-Compiler-…
Sorry that I had to prove you wrong so overwhelmingly, so quickly 🤷
- Comment on Why don't these code-writing AIs just output straight up machine code? 5 weeks ago:
To add to this: It’s much more likely that AI will be used to improve compilers—not replace them.
Aside: AI is so damned slow already. Imagine AI compile times… Yeesh!
- Comment on Audible unveils plans to use AI voices to narrate audiobooks 1 month ago:
I just wrote a novel (finished first draft yesterday). There’s no way I can afford professional audiobook voice actors—especially for a hobby project.
What I was planning on doing was handling the audiobook on my own—using an AI voice changer for all the different characters.
That’s where I think AI voices can shine: If someone can act, they can use a voice changer to handle more characters and bring in a wide variety of speech styles while retaining the careful pauses and dramatic elements (e.g. a voice cracking during an emotional scene) that you’d get from regular voice acting.
I’m not saying I will be able to pull that off but surely it will be better than just telling Amazon’s AI, “Hey, go read my book.”
- Comment on [deleted] 1 month ago:
Ah, so cute. Here’s what you need to do: Find what you like to do, then pick a realistic career path that you think you could do and do that instead. I highly recommend picking a career that lets you work from home. That way, you can skip the commute, do your laundry, and face existential dread at the same time; it’s better for the environment 👍
Comfort and happiness are what happen after work. You have to work to attain them! It’s the capitalist dream.
- Comment on Why is nobody mad about TGI Fridays taking the lords name in vain? 1 month ago:
That’s actually an unholey workaround!
“o” contains 100% more hole than “-”.
- Comment on Meta’s Reality Labs Has Now Lost Over $60 Billion Since 2020 - Slashdot 1 month ago:
Are there any other websites that still let you put in your AIM screen name and ICQ number? Or brag about your super low user ID? 19437 BTW 🤣
- Comment on Glue used to be rare and magical. 2 months ago:
With the ghost of solvents past.
- Comment on Jack Dorsey and Elon Musk would like to ‘delete all IP law’ | TechCrunch 2 months ago:
If you hired someone to copy Ghibli’s style, then fed that into an AI as training data, it would completely negate your entire argument.
It is not illegal for an artist to copy someone else’s style. They can’t copy another artist’s work—that’s a derivative—but copying their style is perfectly legal. You can’t copyright a style.
All of that is irrelevant, however. The argument is that—somehow—training an AI with anything is a violation of copyright. It is not. It is absolutely 100% not a violation of copyright to do that!
Copyright is all about distribution rights. Anyone can download whatever TF they want and they’re not violating anyone’s copyright. It’s the entity that sent the person the copyrighted material that violated the law. Therefore, Meta, OpenAI, et al can host enormous libraries of copyrighted data in their data centers and use that to train their AI. It’s not illegal at all.
When some AI model produces a work that’s so similar to an original that anyone would recognize it (“yeah, that’s from Spirited Away”), then yes: they violated Ghibli’s copyright.
If the model produces an image of some random person in the style of Studio Ghibli, that is not violating anyone’s copyright. It is not illegal, nor is it immoral. No one is deprived of anything in such a transaction.
- Comment on Jack Dorsey and Elon Musk would like to ‘delete all IP law’ | TechCrunch 2 months ago:
“I think your understanding of generative AI is incorrect. It’s not just ‘logic and RNG’…”
If it runs on a computer, it’s literally “just logic and RNG”. It’s all transistors, memory, and an RNG.
The data used to train an AI model is copyrighted. It’s impossible for something to exist without copyright (in the past 100 years). Even public domain works had copyright at some point.
“if any of the training data is copyrighted, then attribution must be given, or at the very least permission to use this data must be given by the current copyright holder.”
This is not correct. Every artist ever has been trained with copyrighted works, yet they don’t have to recite every single picture they’ve seen or book they’ve ever read whenever they produce something.