“You don’t need a DEA warrant or a Justice Department subpoena to see the trend: Europe’s 90‑plus‑percent dependency on US cloud infrastructure, as former European Commission advisor Cristina Caffarra put it, is a single‑shock‑event security nightmare waiting to rupture the EU’s digital stability.”
This is everyone’s risk: the digital equivalent of thinking ICE won’t be interested in me, so I’ll wait and watch instead of doing something now.
stinkytofuisgood@lemmy.ca 1 hour ago
Data sovereignty and social media sovereignty are things I’d love to see worked on more. The shift is already happening to a large degree, as you know if you’re reading this comment.
It’s a sad state of affairs when the average Joe needs to consider these things - or maybe it’s a wake-up call about our relative complacency over the past decades?
I prefer to keep my data outside of the US: Canada for some of it, specific EU countries for the rest. But bills like Chat Control are threatening even other nations’ long-standing privacy norms. The burnout will be real for some; those who didn’t care may not until it directly affects them. Others will hopefully find a balance they believe suits them.
The floor is lava! Have to keep jumping around!
jjlinux@lemmy.zip 14 minutes ago
I don’t know how these laws work when everything you and your environment use is self-hosted, but where I am, self-hosting is the one thing you can do to control your data, by law. Not even a warrant allows checking personally owned or company-owned devices. However, most, if not all, institutions and companies rely entirely on US-based tech hosted in the US. That shit needs to end.
glog78@digitalcourage.social 1 hour ago
@stinkytofuisgood @kalkulat Has anyone ever thought that the bad results are a problem of the business model?
Why should OpenAI (competition aside) make the AI answer correctly and with the minimal number of tokens, if they get paid per token?
I only just had this thought reading this post.
stinkytofuisgood@lemmy.ca 28 minutes ago
When it comes to LLM AI usage, there are definitely a few things to consider.
I’ll reference OpenAI since you mentioned them, but this applies to others as well, to varying degrees.
First, OpenAI’s goal is to make a profit, but the market is getting more saturated, and the cost of running data centres climbs as model complexity and the user base grow.
Funding is being pumped in by investors; the US government and non-AI companies are banking on AI to bolster the economy and their profits, respectively. For products like ChatGPT they offer paid consumer plans, but these are a small fraction of revenue.
The stats show they are losing money quickly. Investors want profits, the company wants dominance, and the government seems to be taking a “too big to fail” approach.
So we’re slowly seeing shifts: in the US, ads are being experimented with in ChatGPT, models that use fewer tokens and less computing power are being favoured, etc.
There have also been studies showing the diminishing returns of the “a bigger model is better” mindset.
There’s also the question of how much they care about correct answers. They are surely aware that most don’t understand how an LLM AI works, that most will not do much research to fact-check answers, that most will consider convenience to be king.
Their token system is a huge balancing act and I’m not fully convinced they know what works and what doesn’t.
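To put the per-token incentive glog78 raised into rough numbers, here’s a back-of-envelope sketch. All prices and token counts here are made-up illustrative figures, not OpenAI’s actual rates:

```python
# Hypothetical per-output-token billing, purely for illustration.
PRICE_PER_1K_OUTPUT_TOKENS = 0.01  # made-up rate, in USD

def api_revenue(output_tokens: int) -> float:
    """Revenue a provider books for one answer billed per output token."""
    return output_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS

concise = api_revenue(150)  # short, direct answer
verbose = api_revenue(900)  # padded, rambling answer

print(f"concise: ${concise:.4f}, verbose: ${verbose:.4f}")
# Under pure per-token billing, the verbose answer bills 6x more,
# even if the concise one is just as correct.
```

Whatever the real rates are, the shape of the incentive is the same: a longer answer bills proportionally more, so correctness and brevity only pay off indirectly, through reputation and retention.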
AI is being propped up, and it shows signs of a bubble. That’s not to say AI is going anywhere, but when the pop inevitably happens, the LLM AI landscape will be left with a lot of failure and monetary loss, and a few winners. Think dot-com bubble, but in a whole new era of computing.
The circular dealmaking going on and the investment from private and US government entities can only keep this train on the tracks for so long.
If I were to subjectively answer your question in a phrase: I don’t think they care all that much at the moment.