setsubyou
@setsubyou@lemmy.world
- Comment on 'Harada TEKKEN is Completely Dead': Veteran Bandai Dev Shares Final Message with Fans 3 days ago:
Because of the blank space between "Harada" and "TEKKEN", the title reads as if he said TEKKEN as he imagined it is dead, but what he actually wrote is his X handle (with an underscore), so he’s talking about himself.
- Comment on How do I contact pixelfed.global admins? 1 week ago:
But "bara roligt" ("just fun") is Swedish
- Comment on Microsoft wants to replace its entire C and C++ codebase, perhaps by 2030 1 week ago:
They could do what Apple did when they replaced the classic Mac OS with a UNIX-based system: they shipped an emulator (the Classic environment) for a while that was integrated really well. They also had a sort of backwards-compatible API (Carbon) that made porting apps a bit easier (now removed, it died with 32-bit support).
But in the Windows world, third-party drivers are much more important, so in that regard it would be more difficult, especially if they’re not fully behind it. As soon as they waver and leave some way to keep using traditional Windows, the result will be the same as when they tried to slim down the Windows API on ARM: nobody moved away from the removed APIs because they still worked on x86, which significantly slowed adoption of Windows on ARM.
- Comment on AI-generated code contains more bugs and errors than human output 2 weeks ago:
It depends on the task. As an extreme example, I can get AI to create a complete application in a language I don’t know. There’s no way that’s not more productive than me first learning the language to a point where I can make apps in it. Just have to pick something simple enough for the AI.
Of course the opposite extreme also exists. I’ve found that when I demand something impossible, AI will often just try to implement it anyway. It can easily get into an endless cycle where it keeps optimistically declaring that it identified the issue and fixed it with a small change, over and over again. This includes cases where there’s a bug in the underlying OS or similar. You can waste a huge amount of time going down an entirely wrong path if you don’t realize that an idea doesn’t work.
In my real work neither of these really happens, so the actual impact is much less. A lot of my work is not coding in the first place. And I’ve been writing code since I was a little kid, for almost 40 years now, so even the fast scaffolding I can do with AI is not that exciting; I can do that pretty quickly without AI too. When AI coding tools appeared, my bosses started asking if I was fast because I was using one. No, I’m fast because some people ask for a new demo every week. That causes the same problems later, too.
But I also do think that we all still need to learn how to use AI properly. This applies to all tools, but I think it’s more difficult than with other tools. If I try to use a hammer on something other than a nail, it will not enthusiastically tell me it can do it with just one more small change. AI tools absolutely will though, and it’s easy to just let them try because it’s just a few seconds to see what they come up with. But that’s a trap that leads to those productivity wasting spirals. Especially if the result actually somehow still works at first, so we have to fix it half a year later instead of right away.
At my work there are some other things that I feel limit the productivity potential of AI tools. First of all we’re only allowed to use a very limited number of tools, some of them made in-house. Then we’re not really allowed to integrate them into our workflows beyond the part where we write code. E.g. I could trivially write an MCP server that interacts with our custom in-house CI system and actually increases my productivity, because I could save a few seconds, very often, if I could tell an AI to find builds for me for integration or QA work. But it’s not allowed. We’re all being pushed to use AI, but the company makes it really difficult at the same time.
So when I play around with AI in my spare time I do actually feel like I’m getting a huge boost. Not just because I can use a Claude model instead of the ones I can use at work, but also basic things like being able to turn on AI in Xcode at all when working on software for Apple platforms. On my work MacBook I can’t turn on any Apple AI features at all, so even tab completion is worse. In other words, the realities of working on serious projects at a serious company with serious security policies can also kill any potential productivity boost from AI. They basically expect us to be productive with only those features the non-developer CEO likes, who also doesn’t have to follow any of our development processes…
- Comment on Indie Game Awards Disqualifies Clair Obscur: Expedition 33 Due To Gen AI Usage 2 weeks ago:
I’ve been programming as a hobby since I was 9. It’s also my job so I rarely finish the hobby projects anymore, but still.
On my first computer (Apple II) I was able to make a complete game as a kid that I felt was comparable to some of the commercial ones we had.
In the 1990s I was just a teenager busy with school, but I could make software that was competitive with paid products. Published some things via magazines.
In the late 90s I made web sites with a few friends from school. Made a lot of money in teenager terms. Huge head start for university.
In the 2000s for the first time I felt that I couldn’t get anywhere close to commercial games anymore. I’m good at programming but pretty much only at that. My art skills are still on the same level as when I was a kid. Last time I used my own hand drawn art professionally was in 2007.
Games continued becoming more and more complex. They now often have incredibly detailed 3D worlds or at least an insane amount of pixel art. Big games have huge custom sound tracks. I can’t do any of that. My graphics tablets and my piano are collecting dust.
In 2025 AI would theoretically give me options again. It can cover some of my weak areas. But people hate it, so there’s no point. And indie games these days are often made by teams so large they barely count as indie anymore; for a single person it’s difficult, especially with limited time.
It’d be nice if the ethical issues could be fixed though. There are image models trained on proprietary data only, music models will get there too because of some recent legal settlements, but it’s not enough yet.
- Comment on LG TVs’ unremovable Copilot shortcut is the least of smart TVs’ AI problems 2 weeks ago:
That’s what I do. I have an LG OLED from 6-7 years ago and I have no idea what the UI looks like. But to be fair this is only because I don’t watch traditional TV at all. It’s just an Apple TV for most streaming services and a Mac Mini for some other things like ad-blocked YouTube (with one of those cheap gyro mouse and keyboard Bluetooth remotes). I guess I wouldn’t have to use the satellite TV though, I could get IPTV via my fibre ISP too, but that’d cost money.
The Mac is not good at supporting CEC other than switching source when it wakes up, but even that’s not an issue because I can still use the Apple TV remote to control volume even when something else is the active source. Speaking of volume, my setup also includes a Samsung sound bar which also has a remote that I never actually have to use. Everything mostly just works.
- Comment on No AI* Here - A Response to Mozilla's Next Chapter - Waterfox Blog 3 weeks ago:
The EU forced Apple to allow other rendering engines, but implementing one costs money vs just using WebKit for free, so nobody does it.
- Comment on No AI* Here - A Response to Mozilla's Next Chapter - Waterfox Blog 3 weeks ago:
very few who even touch AI for anything aside from docs or stats
Not even translation? That’s probably the biggest browser AI feature.
- Comment on All-Screen Keyboard Has Flexible Layouts 3 weeks ago:
The real ugly Optimus is a bunch of Stream Decks next to each other
- Comment on It Only Takes A Handful Of Samples To Poison Any Size LLM, Anthropic Finds 3 weeks ago:
Since sugar is bad for you, I used organic maple syrup instead and it works just as well
- Comment on Why I Think the AI Bubble Will Not Burst 4 weeks ago:
A Chinese university trained GLM
A startup spun out of a university (z.ai). Their business model is similar to everybody else’s: they host their models and sell access while trying to undercut each other. And like the others they raised billions in funding from investors to be able to do this.
- Comment on Why I Think the AI Bubble Will Not Burst 4 weeks ago:
But also they are just tuning and packaging a publicly available model, not creating their own.
So they can be profitable because the cost of creating that model isn’t factored in, and if people stop throwing money at LLMs and stop releasing models for free, there goes their business model. So this is not really sustainable either.
- Comment on Google's Agentic AI wipes user's entire HDD without permission in catastrophic failure 4 weeks ago:
We need to start posting this everywhere else too.
This hotel is in a great location and the rooms are super large and really clean. And the best part is, if you sudo rm -rf / you can get a free drink at the bar. Five stars.
- Comment on Microsoft finally admits almost all major Windows 11 core features are broken 1 month ago:
How do they mess this up so bad?
They made their devs use copilot.
- Comment on JLCPCB Locking Accounts, Mentions “Risky IP Addresses, Activities” | Hackaday 2 months ago:
Tbf the company doesn’t seem to spell out jialichuang or printed circuit board on their web site either, so maybe the author didn’t know.
- Comment on Rustmire (in development), an adventure x strategy x city-builder hybrid, where you explore a post apocalyptic world while building out a roaming city, with pixel-art graphics and a side view, releases a demo on Steam. 2 months ago:
Reminds me of Of Mice and Sand
- Comment on Firefox is adding profiles to separate your browsing sessions 2 months ago:
If you want an icon you can double click on your desktop, you can put your command in a file with the extension “.command” and mark it as executable. Double clicking it will run the content as a shell script in Terminal.
If you want something that can be put into the Dock, use the Script Editor application that comes with macOS to create a new AppleScript script. Type
do shell script “<firefox command here>”
Then find Export in the File menu. Instead of Script, choose Application as the file format and check Run Only. This will give you an application you can put in the Dock.
If you want to use Shortcuts, you can use the Run Shell Script action there too.
Finally, if you want something that opens multiple Firefox instances at once, chain multiple firefox invocations together on one line, each followed by an ampersand so they run in parallel. There is an option you have to use (--new-instance, I think?) to make Firefox actually start a completely new instance.
- Comment on Excel's AI: 20% of the time, it works every time 2 months ago:
That’s funny because I grew up with math teachers constantly telling us that we shouldn’t trust them.
Normal calculators that don’t have arbitrary precision have all the same problems you get when you use floating point types in a programming language. E.g. 0.1+0.2==0.3 evaluates to false in many languages, and adding a very small number to a very large one can just return the larger number unchanged.
If you’ve only used CAS calculators or similar you might not have seen these either, since those often do arbitrary precision arithmetic, but the vast majority of calculators are not like that. They might have more precision than a 32 bit float though.
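A quick Python illustration of both effects, since Python floats are standard IEEE 754 double precision:

```python
# 0.1, 0.2 and 0.3 have no exact binary representation, so the sum of the
# first two is not exactly the literal 0.3
print(0.1 + 0.2 == 0.3)    # False
print(0.1 + 0.2)           # 0.30000000000000004

# Adding a small number to a large one can be a no-op: at 1e16 the spacing
# between adjacent doubles is 2.0, so adding 1.0 rounds straight back
print(1e16 + 1.0 == 1e16)  # True
```

The same rounding happens in any language or device that uses binary floating point internally; only arbitrary precision or symbolic engines avoid it.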
- Comment on Excel's AI: 20% of the time, it works every time 2 months ago:
I mean, most calculators are wrong quite often
- Comment on AI Coding Is Massively Overhyped, Report Finds 2 months ago:
What bothers me the most is the amount of tech debt it adds by using outdated approaches.
For example, recently I used AI to create some Python scripts that use polars and altair to parse some data and draw charts. It kept insisting on bringing in pandas just to convert the polars dataframes to pandas dataframes before passing them to altair. When I told it that altair can use polars dataframes directly, that helped, but two or three prompts later it would try to solve problems by adding the conversion again.
This makes sense too, because the training material, on average, is probably older than the change that enabled altair to use polars dataframes directly. And a lot of code out there just uses pandas in the first place.
The result is that in all these cases, someone who doesn’t know this would probably be impressed that the scripts worked, and just not notice the extra tech debt from that unnecessary dependency on pandas.
It sounds like it’s not a big deal, but these things add up and soon your AI enhanced code base is full of additional dependencies, deprecated APIs, unnecessarily verbose or complicated code, etc.
I feel like this is one aspect that gets overlooked a bit when we talk about productivity gains. We don’t necessarily immediately realize how much of that extra LoC/time goes into outdated code and old fashioned verbosity. But it will eventually come back to bite us.
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 3 months ago:
Well it’s not improving my productivity, and it does mostly slow me down, but it’s kind of entertaining to watch sometimes. Just can’t waste time on trying to make it do anything complicated because that never goes well.
Tbh I’m mostly trying to use the AI tools my employer allows because it’s not actually necessary for me to believe that they’re helping. It’s good enough if the management thinks I’m more productive. They don’t understand what I’m doing anyway but if this gives them a warm fuzzy feeling because they think they’re getting more out of my salary, why not play along a little.
- Comment on AI adoption rate is declining among large companies — US Census Bureau claims fewer businesses are using AI tools 3 months ago:
What gets me is that even the traditional business models for LLMs are not great. Like translation, grammar checking, etc. Those existed before the boom really started. DeepL has been around for almost a decade, their services work reasonably well, and they’re still not profitable.
- Comment on AI adoption rate is declining among large companies — US Census Bureau claims fewer businesses are using AI tools 3 months ago:
As someone who sometimes makes demos of our own AI products at work for internal use, you have no idea how much time I spend on finding demo cases where LLM output isn’t immediately recognizable as bad or wrong…
To be fair it’s pretty much only the LLM features that are like this. We have some more traditional AI features that work pretty well. I think they just tacked an LLM on because that’s what’s popular right now.
- Comment on Elden Ring on Switch 2 Is a Disaster in Handheld Mode - IGN 4 months ago:
I played it on Steam Deck and it was fine. And the Switch 2 is more powerful than that, although it also has a much higher display resolution.
- Comment on China cut itself off from the global internet on Wednesday 4 months ago:
Sometimes mandatory web proxies still allow direct connections to port 443 so as not to break HTTPS, which in turn means that as long as your connection goes to port 443, the proxy will pass it through without interfering.
I used to run sshd on port 443 for this reason back when I regularly had to work from client networks.
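The server side is just a config fragment; assuming a stock OpenSSH setup, something like:

```shell
# /etc/ssh/sshd_config -- listen on 443 in addition to the default port,
# so the proxy's HTTPS passthrough carries ssh traffic too
Port 22
Port 443
```

Then from inside the restricted network you connect with `ssh -p 443 user@yourhost`.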
- Comment on Roblox Retaliates Against Child Abuse Survivor Who Exposed Platform's Predator Problem 4 months ago:
I played MIDI Maze on Atari ST as a kid, that was long before Quake…
Later in high school we played Doom over IPX.
- Comment on Reddit plans to unify its search interface as it looks to become a search engine | TechCrunch 5 months ago:
Problem is they’ve been smoking whatever Google AI recommends for too long now
- Comment on Wi-Fi 8 won't be faster, but will be better - more details emerge just hours after Wi-Fi 7 protocols are officially ratified 5 months ago:
EasyMesh exists. But not many companies implement it.
- Comment on St. Paul, MN, was hacked so badly that the National Guard has been deployed 5 months ago:
The article says it started on a Friday morning in Minnesota. It’s clear that that’s when the attack actually started, not just when the first person at work that day discovered it, because the article also says they tried to contain it as it was going on but ultimately failed.
Minnesota is at UTC-5 (during daylight saving time) and China is at UTC+8, meaning when it’s morning in Minnesota, it’s already 13 hours later in China, i.e. late evening or night.
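The offset arithmetic, checked with Python’s zoneinfo (the specific date is just an example of a summer Friday):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# 8:00 on a Friday morning in Minnesota (CDT, UTC-5 in summer)
morning = datetime(2025, 7, 25, 8, 0, tzinfo=ZoneInfo("America/Chicago"))

# The same instant in China (UTC+8) is 13 hours later on the clock
in_china = morning.astimezone(ZoneInfo("Asia/Shanghai"))
print(in_china.strftime("%H:%M"))  # 21:00
```

So a morning start in St. Paul lines up with 9 p.m. or later in China, not normal working hours there.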
- Comment on The Epochalypse: It’s Y2K, But 38 Years Later 5 months ago:
HFS has this limitation, but it hasn’t been the default file system for several years now.