setsubyou
@setsubyou@lemmy.world
- Comment on [deleted] 1 week ago:
It was called 世界でいちばん透きとおった物語 (“The Most Transparent Story in the World”) by Hikaru Sugi, but I don’t think there’s an English translation, because this kind of gimmick works a lot better in scripts where all characters are the same size, and a translation that ends up with a comparable arrangement of those letters would be a major pain too.
- Comment on The rise of Moltbook suggests viral AI prompts may be the next big security threat 1 week ago:
I don’t think it means that by definition. Not knowing how to do things yourself is a choice. And it’s the same choice we’ve been making ever since human civilization became too complex for one person to be an expert at everything. We routinely choose not to learn how to do jobs that we can have someone else, or a machine, handle. If we choose wisely, we can greatly increase our capacity to get things done.
When I went to school in the ’90s, other students were asking the same question about math, because calculators existed. I don’t think they were 100% right, because at least a basic understanding of math is generally useful even now with AI. But our teachers, who said we shouldn’t rely on calculators because they have limits and we won’t always have one with us, certainly weren’t right either.
Personally I don’t like AI for everything either. But also, current AI assistants are just not trustworthy, and for me that’s the more important point. I do write e-mails myself, but I don’t see a conceptual difference between letting an AI do it and letting a human secretary do it, which is not exactly unheard of. I just don’t trust current models or the companies that operate them enough to let them handle something so personal. Similarly, even though I’ve always been interested in learning languages, I don’t see a big conceptual difference between using AI for translation and asking a human to do it, which is what most people did in the past. And so on.
- Comment on [deleted] 1 week ago:
I read one once where being able to slightly see through the pages was a key part of the plot
- Comment on DuckDuckGo poll says 90% responders don't want AI 2 weeks ago:
The article already notes that
privacy-focused users who don’t want “AI” in their search are more likely to use DuckDuckGo
But the opposite is also true. Maybe it’s not 90% to 10% elsewhere, but I’d expect the same general imbalance, because some people who would answer yes to AI in a survey on a search web site don’t go to search web sites in the first place. They go to ChatGPT or whatever.
- Comment on France will replace Microsoft Teams, Google Meet, Zoom, Webex and others with its own sovereign video conferencing application "Visio" for public officials 2 weeks ago:
It’s also a French word that means video conference (as a shortened form of visioconférence).
- Comment on At Davos, NVIDIA, Microsoft CEOs deny AI bubble 3 weeks ago:
The bubble thing is more about the financial aspect. None of these AI companies are profitable, and they don’t have a clear path to profit either. For some time, OpenAI’s business plan was literally to develop advanced AI and then let the AI figure out how to make money. Yet these companies attract huge amounts of investment and are responsible for basically all of the economic growth in the US.
Nobody thinks there are no uses at all for LLMs or image generation etc., or that people in general hate all AI. It’s a bubble because a lot of money is being invested in something that nobody has managed to make profitable yet, so if the investment stops, these companies will all implode.
- Comment on Wine 11 runs Windows apps in Linux and macOS better than ever 4 weeks ago:
There were some last year specifically for games on SteamOS vs. Windows, like this: arstechnica.com/…/games-run-faster-on-steamos-tha…
- Comment on Google offers bargain: Sell your soul to Gemini, and it'll give you smarter answers 4 weeks ago:
Even that is just confusing. I sometimes use Perplexity (because Pro comes with my bank account - neobanks have zero focus). And by default it remembers things you say. So when I ask a question, sometimes it will randomly decide to bring in something else I asked about before. E.g. I sometimes use it to look up programming-related stuff, and then when I ask something else it will randomly research whatever language it thinks I like now in that context too, and do things like suggest an anime based on my recent interest in Rust for no good reason.
- Comment on 4 weeks ago:
Tbh I think the Sun Ray thin terminals were pretty cool at the time. Not really cloud, because it was an enterprise product 20 years ago, so they used servers hosted by the enterprise. But at the time, the idea of taking your entire desktop session with you via your employee badge felt pretty cool. Of course, only supporting X11 sessions on Solaris meant that nobody outside Sun wanted it, but that’s not really a problem with the concept as such.
- Comment on 4 weeks ago:
In October 2025
- Comment on 4 weeks ago:
Yeah, it’s a major pain at my work because our cloud doesn’t support Macs (the way e.g. AWS does), so we run a server room with a bunch of Macs that we wouldn’t otherwise need.
- Comment on 4 weeks ago:
You could also just only use Macs. In theory ARM Macs let you build and test for macOS (host or vm), Linux (containers or vm), Windows (vm), iOS (simulator or connected device), and Android (multiple options), both ARM and x86-64.
At least in theory. I think in practice I’d go mad. Not from the Linux part though. That part just works because podman on ARM Macs will transparently use emulation for x86 containers by default. (You can get the same thing on Linux too with qemu-user-static btw., for a lot more architectures too.)
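A minimal sketch of what that emulation looks like in practice (assuming podman is installed; the image name is just an example):

```shell
# On an ARM Mac, ask podman for an x86-64 image explicitly;
# it pulls the amd64 variant and runs it under emulation.
podman run --rm --platform linux/amd64 alpine uname -m
# prints: x86_64

# On an ARM Linux host you get the same thing after registering
# the user-mode emulators (package names vary by distro), e.g.:
#   sudo apt install qemu-user-static binfmt-support
```

Without --platform, podman defaults to the host architecture, so native ARM images still run at full speed.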
- Comment on 4 weeks ago:
Damn, you’re running a whole production pipeline and it only takes two minutes? That’s pretty good. I’ve worked with projects that take tens of minutes, if not hours, just to compile.
At work we have CI runs that take almost a week. On fairly powerful systems too. Multiple decades of a “no change without a test case” policy in a large project combined with instrumented debug builds…
- Comment on MySQL users be warned: git commits in mysql-server significantly declined 2025 5 weeks ago:
I’m not sure I’m on board with this “fewer CVEs reported means the product is more secure” logic in this article…
- Comment on 'Harada TEKKEN is Completely Dead': Veteran Bandai Dev Shares Final Message with Fans 1 month ago:
Due to the space between Harada and TEKKEN, the title reads like he said TEKKEN as he imagined it is dead, but what he actually wrote was his X handle (with the underscore), so he’s talking about himself.
- Comment on How do I contact pixelfed.global admins? 1 month ago:
But bara roligt (“just fun”) is Swedish
- Comment on Microsoft wants to replace its entire C and C++ codebase, perhaps by 2030 1 month ago:
They could do what Apple did when they replaced the classic Mac OS with a UNIX-based system: they shipped an emulator (the Classic environment) for a while that was integrated really well. They also had a sort of backwards-compatible API (Carbon) that made porting apps a bit easier (now removed, it died with 32-bit support).
But in the Windows world, third-party drivers are much more important, so in that regard it would be more difficult. Especially if they’re not fully behind it. As soon as they waver and there is some way to keep using traditional Windows, the result will be the same as when they tried to slim down the Windows API on ARM: nobody moved away from the APIs that were removed, because they still worked on x86, which significantly slowed adoption of Windows on ARM.
- Comment on AI-generated code contains more bugs and errors than human output 1 month ago:
It depends on the task. As an extreme example, I can get AI to create a complete application in a language I don’t know. There’s no way that’s not more productive than me first learning the language to a point where I can make apps in it. Just have to pick something simple enough for the AI.
Of course the opposite extreme also exists. I’ve found that when I demand something impossible, AI will often just try to implement it anyway. It can easily get into an endless cycle where it keeps optimistically declaring that it identified the issue and fixed it with a small change, over and over again. This includes cases where there’s a bug in the underlying OS or similar. You can waste a huge amount of time going down an entirely wrong path if you don’t realize that an idea doesn’t work.
In my real work, neither of these really happens, so the actual impact is much less. A lot of my work is not coding in the first place. And I’ve been writing code since I was a little kid, for almost 40 years now. So even the fast scaffolding I can do with AI is not that exciting; I can do that pretty quickly without AI too. When AI coding tools appeared, my bosses started asking if I was fast because I was using one. No, I’m fast because some people ask for a new demo every week. That causes the same problems later too.
But I also do think that we all still need to learn how to use AI properly. This applies to all tools, but I think it’s more difficult than with other tools. If I try to use a hammer on something other than a nail, it will not enthusiastically tell me it can do it with just one more small change. AI tools absolutely will, though, and it’s easy to just let them try because it only takes a few seconds to see what they come up with. But that’s a trap that leads to those productivity-wasting spirals. Especially if the result actually somehow still works at first, so we have to fix it half a year later instead of right away.
At my work there are some other things that I feel limit the productivity potential of AI tools. First of all, we’re only allowed to use a very limited number of tools, some of them made in-house. Then we’re not really allowed to integrate them into our workflows beyond the part where we write code. E.g. I could trivially write an MCP server that interacts with our custom in-house CI system and actually increases my productivity, because I could save a small number of seconds very often if I could tell an AI to find builds for me for integration or QA work. But it’s not allowed. We’re all being pushed to use AI, but the company makes it really difficult at the same time.
So when I play around with AI in my spare time, I do actually feel like I’m getting a huge boost. Not just because I can use a Claude model instead of the ones I can use at work, but also basic things like being able to turn on AI in Xcode at all when working on software for Apple platforms. On my work MacBook I can’t turn on any Apple AI features at all, so even tab completion is worse. In other words, the realities of working on serious projects at a serious company with serious security policies can also kill any potential productivity boost from AI. They basically expect us to be productive with only those features the non-developer CEO likes, who also doesn’t have to follow any of our development processes…
- Comment on Indie Game Awards Disqualifies Clair Obscur: Expedition 33 Due To Gen AI Usage 1 month ago:
I’ve been programming as a hobby since I was 9. It’s also my job so I rarely finish the hobby projects anymore, but still.
On my first computer (Apple II) I was able to make a complete game as a kid that I felt was comparable to some of the commercial ones we had.
In the 1990s I was just a teenager busy with school, but I could make software that was competitive with paid products. Published some things via magazines.
In the late ’90s I made web sites with a few friends from school. Made a lot of money in teenager terms. Huge head start for university.
In the 2000s for the first time I felt that I couldn’t get anywhere close to commercial games anymore. I’m good at programming but pretty much only at that. My art skills are still on the same level as when I was a kid. Last time I used my own hand drawn art professionally was in 2007.
Games continued becoming more and more complex. They now often have incredibly detailed 3D worlds, or at least an insane amount of pixel art. Big games have huge custom soundtracks. I can’t do any of that. My graphics tablets and my piano are collecting dust.
In 2025, AI would theoretically give me options again. It can cover some of my weak areas. But people hate it, so there’s no point. Indie developers now need large teams to count as indie; for a single person it’s difficult, especially with limited time.
It’d be nice if the ethical issues could be fixed though. There are image models trained on proprietary data only, music models will get there too because of some recent legal settlements, but it’s not enough yet.
- Comment on LG TVs’ unremovable Copilot shortcut is the least of smart TVs’ AI problems 1 month ago:
That’s what I do. I have an LG OLED from 6-7 years ago and I have no idea what the UI looks like. But to be fair, this is only because I don’t watch traditional TV at all. It’s just an Apple TV for most streaming services and a Mac Mini for some other things like ad-blocked YouTube (with one of those cheap gyro mouse-and-keyboard Bluetooth remotes). I guess I wouldn’t need the satellite TV though; I could get IPTV via my fibre ISP too, but that’d cost money.
The Mac is not good at supporting CEC other than switching source when it wakes up, but even that’s not an issue because I can still use the Apple TV remote to control volume even when something else is the active source. Speaking of volume, my setup also includes a Samsung sound bar which also has a remote that I never actually have to use. Everything mostly just works.
- Comment on No AI* Here - A Response to Mozilla's Next Chapter - Waterfox Blog 1 month ago:
The EU forced Apple to allow other rendering engines, but implementing one costs money vs just using WebKit for free, so nobody does it.
- Comment on No AI* Here - A Response to Mozilla's Next Chapter - Waterfox Blog 1 month ago:
very few who even touch AI for anything aside from docs or stats
Not even translation? That’s probably the biggest browser AI feature.
- Comment on All-Screen Keyboard Has Flexible Layouts 1 month ago:
The real ugly Optimus is a bunch of Stream Decks next to each other
- Comment on It Only Takes A Handful Of Samples To Poison Any Size LLM, Anthropic Finds 1 month ago:
Since sugar is bad for you, I used organic maple syrup instead and it works just as well
- Comment on Why I Think the AI Bubble Will Not Burst 2 months ago:
A Chinese university trained GLM
A startup spun out of a university (z.ai). Their business model is similar to what everybody else does: they host their models and sell access while trying to undercut each other. And like the others, they raised billions in funding from investors to be able to do this.
- Comment on Why I Think the AI Bubble Will Not Burst 2 months ago:
But also they are just tuning and packaging a publicly available model, not creating their own.
So they can be profitable because the cost of creating that model isn’t factored in, and if people stop throwing money at LLMs and stop releasing models for free, there goes their business model. So this is not really sustainable either.
- Comment on Google's Agentic AI wipes user's entire HDD without permission in catastrophic failure 2 months ago:
We need to start posting this everywhere else too.
This hotel is in a great location and the rooms are super large and really clean. And the best part is, if you sudo rm -rf / you can get a free drink at the bar. Five stars.
- Comment on Microsoft finally admits almost all major Windows 11 core features are broken 2 months ago:
How do they mess this up so bad?
They made their devs use copilot.
- Comment on JLCPCB Locking Accounts, Mentions “Risky IP Addresses, Activities” | Hackaday 3 months ago:
Tbf the company doesn’t seem to spell out JiaLiChuang or printed circuit board on their web site either, so maybe the author didn’t know.
- Comment on Rustmire (in development), an adventure x strategy x city-builder hybrid, where you explore a post apocalyptic world while building out a roaming city, with pixel-art graphics and a side view, releases a demo on Steam. 3 months ago:
Reminds me of Of Mice and Sand