Comment on AI boom could falter without wider adoption, Microsoft chief Satya Nadella warns

frog_brawler@lemmy.world 4 days ago

I’m going to express a perspective that goes against a lot of the Lemmy hive-mind, so I’m sure I’ll be downvoted, but here goes anyway, because I don’t give a fuck about downvotes…

When I first heard about LLMs (around 2021, I think), I was pretty neutral on them. “Meh, sounds like some annoying bullshit to piss off customers and attempt to take call-center jobs away. Probably going to flop…” was my perspective at the time of my first introduction.

Around 2022 or 2023, the place I was working at the time, where I was a Cloud Engineer, started heavily pushing people to use it. There wasn’t a lot of justification as to “why” we should be using it, other than “it’ll make things easier.” My org pushing large numbers of engineers and developers into using something without demonstrating an actual benefit, or a reason why it would help, immediately raised red flags in my brain and made me suspicious of it. My neutrality shifted into an ANTI-AI sentiment.

By the end of 2023 (or so), I was pretty vehemently against AI. I don’t need to elaborate on that too much; we all know the reasons why. Towards the end of last year, I found myself in a weird spot: starting a business and not knowing wtf I was doing, at all. That was the first time I had a good experience using Claude. It guided me through the process of creating an LLC, and a bunch of other bullshit associated with that. I’ve had a few good experiences with it over the past few months… it’s a lot better than it was.

At this point, I’ve come to the conclusion that a large problem with AI was what happened in what I’m calling the early days (2020-2021): pushing people to adopt some bullshit that was wholly unsubstantiated and, quite frankly, sucked. The expectation of a “boom” was greatly miscalculated.

If companies were starting to push people to use it for the first time in 2025, I think we’d be having a much different conversation about AI / LLMs in 2026. I think it has some viable uses, and it does some of the technical aspects of my role substantially faster than I can do them. However, my concerns about the economic, environmental, and political implications of AI still have me maintaining the perspective that it’s more trouble than it’s worth. In short: “AI isn’t all bad on its own… the system we’re trying to add AI to is already fucked, and AI is making it worse.”
