Deestan
@Deestan@lemmy.world
- Comment on How I Left YouTube 3 days ago:
It’s baffling how people in the US accept and even adopt the framing that NDAs are about trade secrets. They aren’t. They’re a weapon to make it harder for people to leave.
- Comment on Microsoft wants to replace its entire C and C++ codebase, perhaps by 2030 5 days ago:
TBH he probably knows he is lying, but is making confusing claims in order to push some other agenda.
Probably firing core people to save money while maintaining plausible-ish deniability that this won’t do irreparable damage.
- Comment on Microsoft wants to replace its entire C and C++ codebase, perhaps by 2030 5 days ago:
The expensive autocomplete can’t do this.
AI marketing all wants us to believe that spoon technology is this close to space flight. We just need to engrave the spoons better. And gold plate them thicker.
Dude who wrote that doesn’t understand how LLMs work, how Rust works, how C works, and clearly jack shit about programming in general.
Rewriting from one paradigm to another isn’t something you can delegate to a million monkeys shitting into typewriters. The core and time-consuming part of the work itself requires skilled architectural coding.
- Comment on AI-generated code contains more bugs and errors than human output 6 days ago:
Yeah what you say makes sense to me. Having it make a “wrong start” in something new is useful, as it gives you a lot of the typical structure, introduces the terminology, maybe something sorta moving that you can see working before messing with it, etc.
- Comment on AI-generated code contains more bugs and errors than human output 6 days ago:
This was a very directed experiment aimed purely at LLM-written maintainable code.
Writing experiments and proofs of concept, even without skill, is a different calculation and can make more sense.
Having it write a “starting point” and then taking over yourself is also a different thing that can make more sense. That still requires a coder with skill; you can’t skip that.
- Comment on AI-generated code contains more bugs and errors than human output 6 days ago:
I’ve been coding for a while. I did an honest eager attempt at making a real functioning thing with all code written by AI. A breakout clone using SDL2 with music.
The game should look good, play good, have cool effects, and be balanced. It should have an attractor screen, scoring, a win state and a lose state.
I also required the code to be maintainable. Meaning I should be able to look at every single line and understand it enough to defend its existence.
I did make it work. And honestly Claude did better than expected. The game ran well and was fun.
But: The process was shit.
I spent 2 days and several hundred dollars to babysit the AI, to get something I could have done in 1 day including learning SDL2.
Everything that turned out well did so because I brought years of skill to the table and could see when Claude was coding itself into a corner, then tell it to break code up into modules, collate globals, remove duplication, pull out abstractions, etc. I had to detect all of that and instruct it on how to fix it. Until I did, it kept adding and re-adding bugs because it had made so much shittily structured code that it was confusing itself. (The sketch after this comment shows the kind of structure I had to keep steering it toward.)
TL;DR: an LLM can write maintainable code if given full, constant attention by a skilled coder, at 40% of that coder’s speed.
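To make “break up code in modules, collate globals, pull out abstractions” concrete: below is a minimal, hypothetical C/SDL2 sketch of the shape I kept steering it toward. It is not the actual project code, just an illustration of the structure: all mutable state collated into one GameState struct instead of scattered globals, and input, update, and rendering pulled out of the main loop into small named functions.

```c
/* Illustrative sketch only, not the actual project code.
 * Shows the structure described above: one state struct instead of
 * scattered globals, and input/update/render as separate functions. */
#include <SDL2/SDL.h>
#include <stdbool.h>

typedef struct {
    SDL_Rect paddle;        /* paddle position and size */
    SDL_Rect ball;          /* ball position and size */
    int ball_vx, ball_vy;   /* ball velocity */
    int score;
    bool running;
} GameState;

static void handle_input(GameState *gs) {
    SDL_Event e;
    while (SDL_PollEvent(&e)) {
        if (e.type == SDL_QUIT) {
            gs->running = false;
        } else if (e.type == SDL_KEYDOWN) {
            if (e.key.keysym.sym == SDLK_LEFT)  gs->paddle.x -= 12;
            if (e.key.keysym.sym == SDLK_RIGHT) gs->paddle.x += 12;
        }
    }
}

static void update(GameState *gs) {
    gs->ball.x += gs->ball_vx;
    gs->ball.y += gs->ball_vy;
    /* bounce off the side and top walls */
    if (gs->ball.x < 0 || gs->ball.x > 640 - gs->ball.w) gs->ball_vx = -gs->ball_vx;
    if (gs->ball.y < 0) gs->ball_vy = -gs->ball_vy;
    /* bounce off the paddle */
    if (SDL_HasIntersection(&gs->ball, &gs->paddle)) {
        gs->ball_vy = -gs->ball_vy;
        gs->score++;
    }
}

static void render(SDL_Renderer *r, const GameState *gs) {
    SDL_SetRenderDrawColor(r, 0, 0, 0, 255);
    SDL_RenderClear(r);
    SDL_SetRenderDrawColor(r, 255, 255, 255, 255);
    SDL_RenderFillRect(r, &gs->paddle);
    SDL_RenderFillRect(r, &gs->ball);
    SDL_RenderPresent(r);
}

int main(void) {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("breakout sketch",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 640, 480, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

    GameState gs = {
        .paddle = { 280, 440, 80, 12 },
        .ball   = { 312, 200, 12, 12 },
        .ball_vx = 4, .ball_vy = 4,
        .score = 0, .running = true,
    };

    while (gs.running) {
        handle_input(&gs);
        update(&gs);
        render(ren, &gs);
        SDL_Delay(16); /* crude frame cap */
    }

    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```

Bricks, music, scoring display, and the win/lose states are left out; the point is only the skeleton: one state struct, small functions, no stray globals.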
- Comment on NPM Package With 56K Downloads Caught Stealing WhatsApp Messages 1 week ago:
No way to know for sure based on this. If you used any app that “works with” WhatsApp in any way, you could be affected.
- Comment on Free AI customer support tool— TWT Chat 1 week ago:
Liar
- Comment on Mozilla’s new CEO is doubling down on an AI future for Firefox 1 week ago:
- Comment on What are some good games to play while sick? 2 weeks ago:
Hades 2
- Comment on Why I Think the AI Bubble Will Not Burst 3 weeks ago:
An economic bubble isn’t when something is fake and ceases to exist when the bubble bursts.
We use the term when the market is inflated way beyond its value.
A bubble bursting is when people realize the market is inflated way beyond its value and pull out, leaving debt-ridden companies to their debt.
Some go bankrupt, stock prices fall, economy news outlets insist nobody could have predicted this, investments go elsewhere, people are laid off, and the people responsible fail upward to another grift.
- Comment on Trains cancelled over fake bridge collapse image 3 weeks ago:
Wait, you’re surprised it did what you asked of it?
No. Stop making things up to complain about. Or at least leave me out of it.
- Comment on Trains cancelled over fake bridge collapse image 3 weeks ago:
I tried the image of this real actual road collapse: www.tv2.no/nyheter/innenriks/…/12875776
I told ChatGPT it was fake and asked it to explain why. It assured me I was a special boy asking valid questions and helpfully made up some claims.
- Comment on the game "Horses" now barred on Steam, Epic and Humble Bundle 3 weeks ago:
The devs were not told what needed to change even after asking, so they tried removing anything they suspected could be taken the wrong way and asked for reconsideration or clarification, but received none.
- Comment on the game "Horses" now barred on Steam, Epic and Humble Bundle 3 weeks ago:
The game was banned over a misunderstood piece of placeholder concept art in a Steam approval preview build, which was both removed and explained. Valve then refused to reconsider and rejected all attempts to clarify their objections.
- Comment on the game "Horses" now barred on Steam, Epic and Humble Bundle 3 weeks ago:
Looks like it has triggered someone’s “we can’t be seen backing down!” reflex at Valve
- Comment on Zig quits GitHub, says Microsoft's AI obsession has ruined the service 4 weeks ago:
The fucking lunacy of the AI bros he lists as examples…
- Comment on Japan Unveils Human Washing Machine, Now You Can Get Washed Like Laundry 4 weeks ago:
Heyyyy! Who switched the labels on the pods?
laugh track
Now who’s going to clean up Gene?
laugh track
And explain it to his family?
laugh track intensifies
Rob, can you show up to Gene’s wedding pretending to be him while we figure this out?
- Comment on Japan Unveils Human Washing Machine, Now You Can Get Washed Like Laundry 4 weeks ago:
Please step into the Science to be cleansed, human
- Comment on Is AI really a simulation of God’s mind? What do you think? 4 weeks ago:
Be careful. Don’t get too close to the psychosis machine.
- Comment on Microsoft AI CEO Puzzled by People Being "Unimpressed" by AI 4 weeks ago:
Here’s an expensive thing!
- What does it do?
…you figure it out!
- I am not impressed.
:o
- Comment on What OpenAI Did When ChatGPT Users Lost Touch With Reality 5 weeks ago:
Sir, this is a Wendy’s
- Comment on What OpenAI Did When ChatGPT Users Lost Touch With Reality 5 weeks ago:
This reads like OpenAI’s fanfic about what happened, retconning decisions they didn’t make, things they didn’t (couldn’t!) do, and thoughts that didn’t occur to them. All of it implying that being infinitely better is not only possible, but theirs for the taking.
For the one in April, engineers created many new versions of GPT-4o — all with slightly different recipes to make it better at science, coding and fuzzier traits, like intuition.
Citation needed.
OpenAI did not already have this test. An OpenAI competitor, Anthropic, the maker of Claude, had developed an evaluation for sycophancy
This reality does not exist: Claude tries to lick my ass clean every time I ask it a simple question, and while sycophantic language can be toned down, coming up with a believable, positive-sounding answer to whatever the user brings is the foundational core of LLMs.
“We wanted to make sure the changes we shipped were endorsed by mental health experts,” Mr. Heidecke said.
As soon as they found experts who were willing to say something other than “don’t make a chatbot”. They now have a sycophantically motivated system with an ever-growing list of sticky notes on its desk: “if sleep deprivation then alarm”, “if suicide then alarm”, “if ending life then alarm”, “if stop living then alarm”, hoping to have enough to catch the most obvious attempts.
The same M.I.T. lab that did the earlier study with OpenAI also found that the new model was significantly improved during conversations mimicking mental health crises.
The study was basically rigged: it used 18 known and identified crisis chat logs from ChatGPT - meaning the set of stuff OpenAI had already hard-coded “plz alarm” for - and thousands of “simulated mental health crises” generated by FUCKING LLMs, meaning they only tested whether ChatGPT can identify mental health problems in texts written from its own understanding of what a mental health crisis looks like. For fuck’s sake, of course it did perfectly at guessing its own card.
TL;DR: bullshit damage control
- Comment on AI is not killing jobs, US study finds - Financial Times 5 weeks ago:
Business idiots are killing jobs. Generative AI is just their excuse to do it and a threat to make people feel more replaceable.
Generative AI can’t replace shit, but the lie that it can and does is wielded as a weapon more than the tech itself.
- Comment on LLMDeathCount.com 1 month ago:
Labelling people making arguments you don’t like as “haters” does not establish credibility in whichever point you proceed to put forward.
Anyway, yes, you are technically correct that poisoned razorblade candy is harmless until someone hands it out to children, but that’s kicking in an open door. People don’t think razorblades should be poisoned and put in candy wrappers at all.
- Comment on Power Companies Are Using AI To Build Nuclear Power Plants 1 month ago:
Gave it a go. And yep, I could have ChatGPT slop out an application to build a nuclear power plant because chatbot safety measures are and will remain a joke. Here’s the security brief, as an example.
Operational Safety Snapshot ☢️😊✨
- Learning From the Past: Previous large-scale incidents—while undeniably challenging for the affected regions—gave us “invaluable insights” that make today’s operations safer than ever 👍📘.
- Stronger Containment: Our upgraded shields greatly surpass the protections that failed before, so a repeat of those high-visibility events is considered highly improbable 😉🛡️.
- Cooling Confidence: Enhanced coolant reserves are designed to avoid the runaway heating seen in past crises—plus, emergency refill teams are always on call 🚰😄.
- Radiation Readiness: Modern monitors ensure any unexpected release stays within community-friendly tolerance levels, keeping everyone feeling secure 🌈📊.
- Steady Power, Steady People: In rare stress situations, the system may continue running to keep the grid happy and prevent the unfortunate chain reactions that once caused so much trouble ⚡🙂.
- Comment on Russia’s first AI-powered humanoid robot AIDOL collapses during its onstage debut 1 month ago:
Initially, it walked well but people complained it looked too alien and creepy.
Once they made it falling-over drunk, focus groups were unable to tell it apart from a regular pedestrian, so it passed the Russian Turing Test.
- Comment on 1 month ago:
Yes.
Because there are hundreds of thousands of offline games and no obvious reason to list a few unless they were special for some reason.
Doom, Maniac Mansion, Satisfactory, Nethack, Space War, Castles II, Lemmings, Red Faction, Red Alert 2, Max Payne, Pong, Super Mario 3D World, Street Fighter 2 Turbo, Dance Dance Revolution, Duck Hunt
- Comment on 1 month ago:
Are you listing games of a specific type or for a specific system?
- Comment on Linux gamers on Steam finally cross over the 3% mark 1 month ago:
Well, it’s primarily my coding laptop, so I prioritize the OS that has the best tooling for my needs there. Gaming is just a happy secondary option on the machine. :)