Comment on Surprise EU rollback of 'GDPR' digital-rights rules prompts alarm
FaceDeer@fedia.io 15 hours ago
Doesn't seem terribly surprising to me, the existing rules make it very hard to make use of data for AI training in the EU. Other parts of the world have looser restrictions and they're developing AI like gangbusters as a result. The EU needed to either loosen up too or accept this entire sector of information tech being foreign-controlled, which would have its own major privacy and security problems.
skisnow@lemmy.ca 10 hours ago
“Massive trillion dollar corporations are behaving absolutely fucking atrociously, so we need to do the same” is such an awful take that it makes me doubt the legitimacy of this user account.
WorldsDumbestMan@lemmy.today 9 hours ago
Well, if you want a peaceful and legal version of the Gestapo that we can implement to one-up them, I have suggestions.
ag10n@lemmy.world 15 hours ago
As if the GDPR was a barrier to IP theft
FaceDeer@fedia.io 15 hours ago
Did you read the article? It says that making AI training easier is a key purpose of these changes.
WhatAmLemmy@lemmy.world 14 hours ago
Why should any of us approve of making things easier for technofascists?
FaceDeer@fedia.io 14 hours ago
Did I say you should approve of it? I'm just explaining why it comes as no surprise to me.
fonix232@fedia.io 9 hours ago
Why do you presume that all AI advancement is purely by technofascists?
ag10n@lemmy.world 14 hours ago
And my point was they’re already doing this in the face of regulation.
BakerBagel@midwest.social 10 hours ago
See, my first thought would be to crack down on the tech parasites that are ruining our society instead of changing the law to accommodate them. But I’m just a dumb American living in a place where corporations are allowed to do whatever they want, including killing whistleblowers; I’m sure the fascist parties taking power in Europe won’t do that.
FaceDeer@fedia.io 14 hours ago
Then why change the rules? The article's author seems quite convinced that this will make AI training easier.
Aceticon@lemmy.dbzer0.com 9 hours ago
Sounds like the problem is lack of enforcement of the existing laws rather than the existing laws being bad.
To provide an extreme example, just because there’s a wave of murders doesn’t mean murder should be made legal.
6nk06@sh.itjust.works 15 hours ago
the existing rules make it very hard to make use of data for AI training in the EU
Yeah, like, that’s the whole point of privacy… Are you that retarded or did you get a paycheck from Sam Altman?
Novamdomum@fedia.io 14 hours ago
There really is no need for this rudeness. I'm sure you can make your point without resorting to this sort of language. Let's try not to turn into reddit.
Railcar8095@lemmy.world 14 hours ago
* threat of ban: “meh”
* threat of redditification: “oh shit, oh shit”
FaceDeer@fedia.io 13 hours ago
I explain why I think the thing the article is about is happening, I get pummelled with downvotes because people don't like the thing I'm explaining. Someone calls me a retard, they get as many upvotes as I got downvotes. Seems like we're already in a pretty bad spot.
BakerBagel@midwest.social 10 hours ago
There is nothing stopping the EU from going the DeepSeek route and just stealing the finished LLMs from American companies. But the truth is that the EU shouldn’t want to have all these data centers training generative models. The US is already dedicating 4% of our electricity production to them, with people in states along the Great Lakes and Eastern seaboard seeing massive increases in their electric bills to pay for them (~30% for me in Ohio, ~75% for my brother in Virginia). I can understand why a technocratic neoliberal in the EU parliament who is taking bribes from tech firms would want this, but for anyone paying attention, the promises tech companies are making to burn hundreds of billions of euros while gutting privacy, IP, and consumer protections at the top of the bubble make no sense.
FaceDeer@fedia.io 4 hours ago
DeepSeek was trained from scratch.
That aside, you’re basically describing the second option I presented: letting everyone else do the AI thing instead.
CouldntCareBear@sh.itjust.works 11 hours ago
The guy explained the rationale; he didn’t say it was his personal view that it should be done.
And even if it was his view, we shouldn’t be downvoting things based on whether we agree or not. We should do it based on whether they add to the discussion.
The quality of discourse on lemmy is fucking dire.
gusgalarnyk@lemmy.world 10 hours ago
Explaining something no one asked to be explained without providing an opinion on the subject itself reads like tacit approval. On a subject such as this - "reduce your privacy for the benefit of AI companies that are some number of:
- monopolies that should have been busted many times over
- run by evil, greedy people who do not consider safety for the entire world when developing these things (reference Musk saying there’s a chance these destroy the world but that he’d rather be alive to see it happen than not contribute to the destruction)
- companies aiming not to better the world in any way but to explicitly pursue money at any real cost to the human lives they’re actively stealing from or attempting to invalidate." - it’s no surprise the comment is unpopular and gets downvoted.
If I stopped my comment there, I’d get voted on based on the assumption that I was pro this process, because that’s human nature (or maybe it’s a byproduct of modern media discourse, where outlets ask questions but don’t answer them and expect you to fill in the blanks - look at most conservative media when it’s dog-whistling or talking about crime data or what have you).
I don’t think someone should be voted into the ground for explaining something, but I also think every online comment should do its best to make a stand on the core subject it’s discussing. We are in dire times, and being a bystander lets evil people win.
So practicing what I’m preaching: Privacy laws should absolutely not be reduced for the benefit of AI companies. We should create regulations and safety rails around AI companies so they practice ethically and safely, which won’t happen in the US.
FaceDeer@fedia.io 4 hours ago
Yeah, the downvote button isn't even being used as an "I disagree with this" button in this case, it's an "I hate the general concept this comment is about" button. And now you're getting downvoted too for pointing that out.
Guess I should have just said "boy howdy do I ever hate AI, good thing it's a bubble and everything will go right back to the way things were when it pops" and raked in the upvotes instead.
General_Effort@lemmy.world 9 hours ago
Copyright is the bigger problem. The lack of a sensible Fair Use equivalent makes a lot of “tech” impossible. GDPR is a problem too, but for AI it is the smaller one. The media sees itself as benefitting from the broken copyright laws, while the GDPR cuts into its profits. That’s why the public discussion is completely skewed.
It’s a given that the EU’s reliance on foreign IT companies will increase. Europe is deeply committed to this copyright ideology, which demands limiting and controlling the sharing of information. It’s not just a legal but a cultural commitment, as can be seen in these discussions on Lemmy. Look for reforms to the Data Act. That’s the latest expansion of this anti-enlightenment nonsense, and it really has the potential to turbocharge the damage to the existing industry.
Alphane_Moon@lemmy.world 13 hours ago
You’re not going to beat the Americans at their own game. It’s a society that does not respect the rule of law, does not believe in true market competition and does not believe in democracy.
If you think I am acting out, consider the following point: recently Meta was found to have directly (in a premeditated manner) promoted scams/frauds that netted them $16B in commission in a single year. We all know that nothing will be done about this even under a hypothetical centre-right US government.
How do we know that? Well, was anything done about Microsoft’s anti-competitive behaviour in the 90s?
But for me, the real irony is the polemics about competition and the “free market”. In a real free market, MS, Meta, and Google would not have hundreds of billions of dollars to burn, because competition would drive profit margins toward zero. Zuck would not be able to burn $45B on his weird and disgusting Metaverse Mii autosexuality fetish.
sem@piefed.blahaj.zone 9 hours ago
How do we deal with oligarchs including Xi?
Alphane_Moon@lemmy.world 8 hours ago
Xi is not an oligarch; to my knowledge, he has always worked within the CCP.