andallthat
@andallthat@lemmy.world
- Comment on EU considers tariffs on digital services Big Tech 19 hours ago:
yes, there are clearly unfair trade practices here. The EU has been making money for Google and Amazon, but US users are not using our services. I hear the best solution to this is tariffs: EU users have to pay to use Gmail until enough US users start using EU email providers and we rebalance the services trade!
- Comment on Stop calling them tech companies: GenAI and SaaS — are they really tech? It’s time to call a spade a spade. 2 days ago:
You make a great point. But just to stay on the example of cars: besides the innovation on EVs, there’s this horrible tendency to consider cars as tablets on wheels, both in the sense that you can forget about repairing them and in the sense that they are now increasingly considered low-margin hardware to run higher margin subscription services (or that the car itself becomes something you pay by use instead of owning). If anything warrants high valuation for a car company it would arguably be the innovation on EVs, rather than the SaaS model.
I hope the idea of Cars as a Service or Car Software As a Service dies before becoming too widespread. But if it doesn’t, maybe car companies wouldn’t become “Tech” companies, just more shitty subscription vendors. And their stock should be valued as such, not for the largely unwanted “Tech innovation”.
- Comment on Stop calling them tech companies: GenAI and SaaS — are they really tech? It’s time to call a spade a spade. 2 days ago:
By that measure shouldn’t Disney be considered a Tech company too? Or I guess banks and insurance companies.
I hadn’t thought of it that way, but maybe the article (at least the small part I can read with no paywall) is on to something. Companies that sell access to technology, or rely on technology to sell something else (he does give the example of e-commerce), should not be “Tech” companies.
The part I didn’t get to is where the author draws the line to tell which companies ARE Tech. I guess OpenAI or Google would qualify. They sell services, but they are services they invented and made, with considerable research and investment. But what about Amazon or Netflix?
- Comment on DOGE Plans to Rebuild SSA Codebase in Months, Risking Benefits and System Collapse 5 days ago:
With Grok looking more and more like the only one working for Musk with enough (digital) balls to stand up to his boss, that might be better than the alternative of “Big Balls” and the rest of the Digital Oblivious Goons of Elon
- Comment on What could possibly go wrong? DOGE to rapidly rebuild Social Security codebase. 5 days ago:
Hahaha, good point. On the other hand, “cheap” depends on the perspective. From Musk’s it’s an incredibly cheap way to get a big payoff…huge ROI!
- Comment on What could possibly go wrong? DOGE to rapidly rebuild Social Security codebase. 5 days ago:
“Hey we said ‘rapidly’, nobody said anything about it still working when we’re done”
- Comment on What if there really was a "pee tape"? 1 week ago:
No. If there was a pee tape, Trump himself would sell it and his voters would buy it too. This is a man who sold his own mug shot.
If Russia has something on him, it is dear old fucktons of money. Or we have to prepare for the scarier scenario: he’s not Putin’s puppet because he’s somehow forced to be, but because he really wants to be.
- Comment on Majority of AI Researchers Say Tech Industry Is Pouring Billions Into a Dead End 2 weeks ago:
I want to believe that commoditization of AI will happen as you describe, with AI made by devs for devs. So far what I see is “developer productivity is now up and 1 dev can do the work of 3? Good, fire 2 devs out of 3. Or you know what? Make it 5 out of 6, because the remaining ones should get used to working 60 hours/week.”
All that increased dev capacity needs to translate into new useful products. Right now the “new useful product” that all energies are poured into is… AI itself. Or even worse, shoehorning “AI-powered” features into all existing products, whether it makes sense or not (welcome, AI features in MS Notepad!). Once this masturbatory stage is over and the dust settles, I’m pretty confident that something new and useful will remain, but for now the level of hype is tremendous!
- Comment on Majority of AI Researchers Say Tech Industry Is Pouring Billions Into a Dead End 2 weeks ago:
It’s not that LLMs aren’t useful as they are. The problem is that they won’t stay as they are today, because they are massively expensive. There are two ways for this to go (or an eventual combination of both):
- Investors believe LLMs are going to get better and keep pouring money into “AI” companies, allowing them to operate at a loss for longer. That’s tied to the promise of an actual “intelligence” emerging out of a statistical model.
- Investors stop believing, the bubble bursts and companies need to make money out of LLMs in their current state. To do that, they need to massively cut costs and monetize. I believe that’s called enshittification.
- Comment on After 40 years of being free Microsoft has added a paywall to Notepad 5 weeks ago:
the news is more that they are trying to shoehorn AI into effing Notepad to make sure even those little snippets of text can be used for training
- Comment on Who needs a sneaker bot when AI can hallucinate a win for you? - EQL Blog 5 weeks ago:
I think that using large language models to summarize email (especially marketing), news, social media posts or any type of content that uses a lot of formulaic writing is going to generate lots of errors.
The way I understand large language models, they create chains of words statistically, based on “what is this most likely to say based on my training material”?
In marketing emails, the same boilerplate language is used to say very different things. “You have been selected” emails have similar wording to “sorry this time you have not won but…”. Same cheery “thanks for being such a wonderful sucker” tone and 99% similar verbiage except for a crucial “NOT” here and there.
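That “mostly identical except for a crucial NOT” point can be sketched with a toy example (the email text below is made up for illustration): two boilerplate messages that mean opposite things still share most of their word pairs, so a model leaning on word statistics alone gets very little signal to tell them apart.

```python
from collections import Counter

# Two "different" marketing emails that differ in one crucial spot.
win  = "congratulations you have been selected as our lucky winner claim your prize now"
lose = "congratulations you have not been selected as our lucky winner better luck next time"

def bigrams(text):
    """Count adjacent word pairs, the crudest version of 'likely next word' statistics."""
    words = text.split()
    return Counter(zip(words, words[1:]))

b_win, b_lose = bigrams(win), bigrams(lose)

# Fraction of the "win" email's word pairs that also appear in the "lose" email.
shared = sum((b_win & b_lose).values())
overlap = shared / sum(b_win.values())
print(f"bigram overlap: {overlap:.0%}")  # → bigram overlap: 58%
```

Even in this tiny toy, more than half of the word pairs are identical across the two emails; with real boilerplate the overlap is far higher, and the decisive token (“not”) carries almost none of the statistical weight.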
- Comment on Rust is Eating JavaScript 1 month ago:
well, joke’s on you. Since I rewrote her in Rust, my mum runs the 100 meters hurdles in 14 seconds
- Comment on Study of 8k Posts Suggests 40+% of Facebook Posts are AI-Generated 1 month ago:
But if half of the engagement is from AI, isn’t that a grift on advertisers? Why should I pay for an ad on Facebook that is going to be “seen” by AI agents? AIs don’t buy products (yet?)
- Comment on Russian TV companies demand 2 undecillion rubles from Google 5 months ago:
on the other hand, when Putin’s done killing off most of his own present and future workforce in a senseless war and completely tanking his own economy, that might be the equivalent of like $3
- Comment on Would you trust AI to scan your genitals for STIs? 5 months ago:
I’m not sure we, as a society, are ready to trust ML models to do things that might affect lives. This is true for self-driving cars and I expect it to be even more true for medicine. In particular, we can’t accept ML failures, even when they get to a point where they are statistically less likely than human errors.
I don’t know if this is currently true or not, so please don’t shoot me for this specific example, but IF we were to have reliable stats that, everything else being equal, self-driving cars cause fewer accidents than humans, a machine error will always be weird and alien and harder for us to justify than a human one. “He was drinking too much because his partner left him”, “she was suffering from a health condition and had an episode while driving”… we have the illusion that we understand humans and (to an extent) that this understanding helps us predict who we can trust not to drive us to our death or not to misdiagnose some STI and have our genitals wither. But machines? Even if they were 20% more reliable than humans, how would we know which ones we can trust?