The problem with AI is that it pirates everyone’s work, repackages it as its own, and enriches people who did not create the copyrighted work.
Brian Eno: “The biggest problem about AI is not intrinsic to AI. It’s to do with the fact that it’s owned by the same few people”
Submitted 1 week ago by cm0002@lemmy.world to technology@lemmy.world
https://musictech.com/news/music/brian-eno-ai-problem/
Comments
MyOpinion@lemm.ee 1 week ago
lobut@lemmy.ca 1 week ago
I mean, it’s our work; the result should belong to the people.
piecat@lemmy.world 1 week ago
This is where “universal basic income” comes into play
Aceticon@lemmy.dbzer0.com 1 week ago
More broadly, I would expect UBI to trigger a golden age of invention and artistic creation because a lot of people would love to spend their time just creating new stuff without the need to monetise it but can’t under the current system.
Blackmist@feddit.uk 1 week ago
Unfortunately one will not lead to the other.
It will lead to the plot of Elysium.
Aux@feddit.uk 1 week ago
That’s what all artists have done since the dawn of time.
ElPussyKangaroo@lemmy.world 1 week ago
Truer words have never been said.
futatorius@lemm.ee 1 week ago
Two intrinsic problems with the current implementations of AI are that they are insanely resource-intensive and require huge training sets. Neither of those is directly a problem of ownership or control, though both favor larger players with more money.
finitebanjo@lemmy.world 1 week ago
And a third intrinsic problem is that current models have been shown to plateau short of human language capability even with unbounded training data, per papers from OpenAI in 2020 and DeepMind in 2023, plus a Stanford paper proposing that AI has no emergent behavior, only convergent behavior.
So yeah. Lots of problems.
andxz@lemmy.world 1 week ago
While I completely agree with you, that’s the one thing that could change with a single breakthrough from any of the groups working on exactly that problem.
It’s what happens after that that’s scary.
frezik@midwest.social 1 week ago
If gigantic amounts of capital weren’t available, then the focus would be on improving the models so they don’t need GPU farms running off nuclear reactors plus the sum total of all posts on the Internet ever.
Grimy@lemmy.world 1 week ago
AI has a vibrant open source scene and is definitely not owned by a few people.
A lot of the data to train it is only owned by a few people though. It is record companies and publishing houses winning their lawsuits that will lead to dystopia. It’s a shame to see so many actually cheering them on.
cyd@lemmy.world 1 week ago
So long as there are big players releasing open weights models, which is true for the foreseeable future, I don’t think this is a big problem. Once those weights are released, they’re free forever, and anyone can fine-tune based on them, or use them to bootstrap new models by distillation or synthetic RL data generation.
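Distillation, as mentioned above, means training a small student model to match a released teacher model’s output distribution instead of hard labels. The core of it is a soft cross-entropy loss; here is a minimal sketch in pure Python, with made-up logits and a temperature value chosen purely for illustration:

```python
# Minimal distillation loss: the student learns from the teacher's
# softened probability distribution rather than from one-hot labels.
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities; higher temperature = softer targets."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of student predictions against the teacher's soft targets."""
    p = softmax(teacher_logits, temperature)   # teacher's soft targets
    q = softmax(student_logits, temperature)   # student's predictions
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

# The loss is smallest when the student matches the teacher exactly.
same = distill_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
off = distill_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0])
assert same < off
```

In a real pipeline the student’s weights would be updated by gradient descent on this loss over many teacher outputs; the point here is only that nothing about the objective requires access to the teacher’s training data, just its weights or outputs.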
DarkCloud@lemmy.world 1 week ago
Like Sam Altman, who invests in Prospera, a private “Start-up City” in Honduras where the board of directors picks and chooses which laws apply to them!
nickwitha_k@lemmy.sdf.org 1 week ago
Techno-Feudalism
I’ll say it, yet again. It’s just feudalism. “Techno-Feudalism” has nothing different enough to it to differentiate it as even a sub-type of feudalism. It’s just the same thing all over again, using technological advances to improve the ability to monitor and impose control over the populace. Historical feudalists also leveraged technology to cement their rule (plate armor, cavalry, crossbows, cannon, mills, control of literacy, etc).
DarkCloud@lemmy.world 1 week ago
Techno-Feudalism is a specific idea from Yanis Varoufakis about places like Amazon, eBay, AliExpress, Steam, Facebook, even YouTube to some extent. It has to do with the marketplace controlling which prices are promoted to buyers and sellers, and is about price fixing and capturing the industries that the bulk of the population requires to do commerce.
This is a very important concept to note and understand because it relates to the end of two-party capitalism (where buyers and sellers negotiate prices with each other).
So no, the use of feudalism isn’t meant to indicate something about old-school mechanisms of crowd control, brutality and repression. It’s a reference to the serfdom and economic aspects.
conicalscientist@lemmy.world 1 week ago
Attaching “tech” to everything makes it more palatable. Desirable even. It masks the fact that feudal lords are reinventing everything but with “tech”.
AbsoluteChicagoDog@lemm.ee 1 week ago
Same as always. There is no technology capitalism can’t corrupt
RadicalEagle@lemmy.world 1 week ago
I’d say the biggest problem with AI is that it’s being treated as a tool to displace workers, but there is no system in place to make sure that the “value” (I’m not convinced commercial AI has done anything valuable) created by AI is redistributed to the workers it has displaced.
protist@mander.xyz 1 week ago
Welcome to every technological advancement ever applied to the workforce
pennomi@lemmy.world 1 week ago
The system in place is “open weights” models. These AI companies don’t have a huge head start on the publicly available software, and if the value is there for a corporation, most any savvy solo engineer can slap together something similar.
WrenFeathers@lemmy.world 1 week ago
The biggest problem with AI is the damage it’s doing to human culture.
rottingleaf@lemmy.world 1 week ago
Not solving any of the stated goals at the same time.
It’s a diversion. Its purpose is to divert resources and attention from any real progress in computing.
RememberTheApollo_@lemmy.world 1 week ago
And those people want to use AI to extract money and to lay off people in order to make more money.
That’s “guns don’t kill people” logic.
Yeah, the AI absolutely is a problem, for those reasons, along with it being wrong a lot of the time and the ridiculous energy consumption.
magic_smoke@lemmy.blahaj.zone 1 week ago
The real issues are capitalism and the lack of green energy.
If the arts were well funded, if people were given healthcare and UBI, if we had, at the very least, switched to nuclear like we should’ve decades ago, we wouldn’t be here.
The issue isn’t a piece of software.
gian@lemmy.grys.it 1 week ago
Yeah, the AI absolutely is a problem.
AI is not a problem by itself; the problem is that most of the people who make workplace decisions about these things do not understand what they are talking about, and even less what the technology is capable of.
My impression is that AI now is what blockchain was some years ago: the solution to every problem, which was of course false.
TheMightyCat@lemm.ee 1 week ago
No?
Anyone can run an AI, even on the weakest hardware; there are plenty of small open models for this.
Training an AI requires very strong hardware, but this is not an impossible hurdle, as the models on Hugging Face show.
CodeInvasion@sh.itjust.works 1 week ago
Yah, I’m an AI researcher, and with the weights released for DeepSeek anybody can run an enterprise-level AI assistant. To run the full model natively, it does require $100k in GPUs, but with that hardware it could easily be fine-tuned with something like LoRA for almost any application. Then that model can be distilled and quantized to run on gaming GPUs.
It’s really not that big of a barrier. Yes, $100k in hardware is, but from a non-profit entity’s perspective that is peanuts.
Also, adding a vision encoder for images to DeepSeek would not be theoretically that difficult, for the same reason. In fact, I’m working on research right now that finds GPT-4o and o1 have similar vision capabilities, implying it’s the same first-layer vision encoder, with the textual chain-of-thought tokens read by subsequent layers. (This is a very recent insight as of last week by my team, so if anyone can disprove it, I would be very interested to know!)
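The LoRA trick mentioned above is why fine-tuning is so much cheaper than training: instead of updating a full d_out × d_in weight matrix W, you freeze W and train two small matrices B (d_out × r) and A (r × d_in), using W + BA at inference. A minimal pure-Python sketch (the dimensions and rank here are illustrative, not DeepSeek’s actual sizes):

```python
# Minimal LoRA sketch: a frozen weight matrix plus a trainable low-rank update.

def lora_param_counts(d_in: int, d_out: int, rank: int) -> tuple[int, int]:
    """Return (full fine-tune params, LoRA params) for one linear layer."""
    full = d_in * d_out                 # updating W directly
    lora = rank * d_in + d_out * rank   # training A (r x d_in) and B (d_out x r)
    return full, lora

def lora_forward(x, W, A, B):
    """y = (W + B @ A) x, with W frozen and only A, B trained."""
    d_out, d_in = len(W), len(W[0])
    r = len(A)
    y = []
    for i in range(d_out):
        acc = 0.0
        for j in range(d_in):
            delta = sum(B[i][k] * A[k][j] for k in range(r))  # (BA)[i][j]
            acc += (W[i][j] + delta) * x[j]
        y.append(acc)
    return y

full, lora = lora_param_counts(4096, 4096, 8)
print(full, lora)  # a rank-8 adapter trains roughly 0.4% of the layer's parameters
```

Run over every attention and MLP layer, that two-orders-of-magnitude reduction in trainable parameters is what brings fine-tuning within reach of a single node.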
cyd@lemmy.world 1 week ago
It’s possible to run the big DeepSeek model locally for around $15k, not $100k. People have done it with 2x M4 Ultras, or the equivalent.
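The hardware figures being debated come down to simple memory arithmetic: the RAM or VRAM needed just to hold the weights is roughly parameter count × bits per weight / 8, before activation and KV-cache overhead. A back-of-envelope sketch (the parameter count is DeepSeek’s published approximate total; overhead is ignored):

```python
# Rough memory footprint of model weights at different quantization levels.
# Ignores activations and KV cache, so real requirements are somewhat higher.

def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
    return n_params * bits_per_weight / 8 / 1e9

N = 671e9  # approximate total parameter count of DeepSeek-V3/R1
for bits in (16, 8, 4):
    print(f"{bits}-bit: {weight_memory_gb(N, bits):.0f} GB")
```

At 16-bit the weights alone run well over a terabyte, while around 4-bit they drop to the ~350 GB range, which is roughly where a couple of high-memory workstations become plausible hosts.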
riskable@programming.dev 1 week ago
Would you say your research is evidence that the o1 model was built using data/algorithms taken from OpenAI via industrial espionage (as Sam Altman is alleging without evidence)? Or is it just likely that they arrived at the same logical solution?
Not that it matters, of course! Just curious.
nalinna@lemmy.world 1 week ago
But the people with the money for the hardware are the ones training it to put more money in their pockets. That’s mostly what it’s being trained to do: make rich people richer.
riskable@programming.dev 1 week ago
This completely ignores all the endless (open) academic work going on in the AI space. Loads of universities have AI data centers now and are doing great research that is being published out in the open for anyone to use and duplicate.
I’ve downloaded several academic models and all commercial models and AI tools are based on all that public research.
I run AI models locally on my PC and you can too.
TheMightyCat@lemm.ee 1 week ago
But you can make this argument for anything that is used to make rich people richer. Even something as basic as pen and paper is used every day to make rich people richer.
Why attack the technology if it’s the rich people you are against, and not the technology itself?
Melvin_Ferd@lemmy.world 1 week ago
We shouldn’t do anything ever because poors
umbraroze@lemmy.world 1 week ago
AI business is owned by a tiny group of technobros, who have no concern for what they have to do to get the results they want (“fuck the copyright, especially fuck the natural resources”) who want to be personally seen as the saviours of humanity (despite not being the ones who invented and implemented the actual tech) and, like all big wig biz boys, they want all the money.
I don’t have a problem with the AI tech in principle, but I hate the current business direction and what the AI business encourages people to do and use the tech for.
interdimensionalmeme@lemmy.ml 1 week ago
Well, I’m on board for “fuck intellectual property”. If OpenAI doesn’t publish the weights, then all their datacenters get visited by the killdozer.
captain_aggravated@sh.itjust.works 1 week ago
For some reason the megacorps have got LLMs on the brain, and they’re the worst “AI” I’ve seen. There are other types of AI that are actually impressive, but the “writes a thing that looks like it might be the answer” machine is way less useful than they think it is.
ameancow@lemmy.world 1 week ago
Most LLMs for chat, pictures and clips are magical and amazing, for about 4 to 8 hours of fiddling; then they lose all entertainment value.
As for practical use, the things can’t do math, so they’re useless at work. I write better emails on my own, so I can’t imagine being so lazy and socially inept that I need help writing an email asking for tech support or outlining an audit report. Sometimes the web summaries save me from clicking a result, but I usually click anyway because the things are so prone to very convincing hallucinations. So yeah, utterly useless in their current state.
I usually get some angsty reply when I say this by some techbro-AI-cultist-singularity-head who starts whinging how it’s reshaped their entire lives, but in some deep niche way that is completely irrelevant to the average working adult.
frezik@midwest.social 1 week ago
The delusional maniacs are going to be surprised when they ask the Super AI “how do we solve global warming?” and the answer is “build lots of solar, wind, and storage, and change infrastructure in cities to support walking, biking, and public transportation”.
max_dryzen@mander.xyz 1 week ago
The government likes concentrated ownership because then it has only a few phonecalls to make if it wants its bidding done (be it censorship, manipulation, partisan political chicanery, etc)
futatorius@lemm.ee 1 week ago
And it’s easier to manage and track a dozen bribe checks rather than several thousand.
Guns0rWeD13@lemmy.world 1 week ago
brian eno is cooler than most of you can ever hope to be.
rottingleaf@lemmy.world 1 week ago
Dunno. The generative-music side (not the LLM kind) I’ve tried, and I think if I spent a few more years of weekly migraines on that, I’d get better at it myself.
Guns0rWeD13@lemmy.world 1 week ago
you mean like in the same way that learning an instrument takes time and dedication?
Grandwolf319@sh.itjust.works 1 week ago
The biggest problem with AI is that it’s the brute-force solution to complex problems.
Instead of trying to figure out the most power-efficient algorithm for artificial analysis, they just threw more data and power at it.
Beyond how often it’s wrong, by definition it won’t ever be as accurate or efficient as actual thinking.
It’s the solution you come up with the last day before the project is due, because you know it will technically pass and you’ll get a C.
TheBrideWoreCrimson@sopuli.xyz 1 week ago
It’s moronic. Currently, decision makers don’t really understand what to do with AI and how it will realistically evolve in the coming 10-20 years. So it’s getting pushed even into environments with 0-error policies, leading to horrible results and any time savings are completely annihilated by the ensuing error corrections and general troubleshooting. But maybe the latter will just gradually be dropped and customers will be told to just “deal with it,” in the true spirit of enshittification.
canajac@lemmy.ca 1 week ago
AI will become one of the most important technologies humankind has ever invented. Apply it to healthcare, science, finance, and the world will become a better place, especially in healthcare. Hey artists, writers: you cannot stop intellectual evolution. AI is here to stay. All we need is a proven way to differentiate real art from AI art: an invisible watermark that can be scanned to see its true “raison d’être”. Sorry for going off topic, but I agree that AI should be more open to verification for using copyrighted material. Don’t expect compensation though.
Ledericas@lemm.ee 1 week ago
None of it is currently useful to those right now
jjjalljs@ttrpg.network 1 week ago
Apply it to healthcare, science, finances, and the world will become a better place, especially in healthcare.
That’s all kind of moot if we continue down the capitalist hellscape express. What good is an AI that can diagnose cancer if most people can’t afford access? What good is AI writing novels if our homes are destroyed by climate change induced disasters?
Those problems are mostly political, and AI isn’t going to fix them. The people that probably could be replaced with AI, the shitty “leaders” and such, are not going to voluntarily step down.
beto@lemmy.studio 1 week ago
And yet, he released his latest album exclusively on Apple Music.
iAvicenna@lemmy.world 1 week ago
like most of money
finitebanjo@lemmy.world 1 week ago
I don’t really agree that this is the biggest issue, for me the biggest issue is power consumption.
CitricBase@lemmy.world 1 week ago
That is a big issue, but excessive power consumption isn’t intrinsic to AI. You can run a reasonably good AI on your home computer.
The AI companies don’t seem concerned about the diminishing returns, though, and will happily spend 1000% more power to gain that last 10% of intelligence. In a competitive market, why wouldn’t they, when power is so cheap?
frezik@midwest.social 1 week ago
Large power consumption only happens because someone is willing to dump lots of capital into it so they can own it.
finitebanjo@lemmy.world 1 week ago
Oh you’re right, let me just tally up all the days where that isn’t the case…
carry the 2…
don’t forget weekends and holidays…
Oh! It’s every single day. It’s just an always and forever problem. Neat.
HANN@sh.itjust.works 1 week ago
Ollama and stable diffusion are free open source software. Nobody is forcing anybody to use chatGPT
afk_strats@lemmy.world 1 week ago
Ollama is FOSS; SD has a proprietary but permissive, source-available license, which is not what most people would associate with “open source”.
HANN@sh.itjust.works 1 week ago
Fair, it may not be strictly FOSS but I think my point still stands. If people are worried about AI being owned by “the elite” they can just run Ollama.
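For anyone who wants to try the “just run Ollama” route, the server exposes a small HTTP API on localhost. A minimal sketch using only the standard library (it assumes Ollama is installed and a model such as `llama3.2` has been pulled; the port is Ollama’s default):

```python
# Sketch of talking to a locally running Ollama server over its HTTP API.
# Nothing leaves your machine; no cloud account involved.
import json
import urllib.request

def build_chat_request(model: str, prompt: str) -> dict:
    """Payload for Ollama's /api/chat endpoint, with streaming disabled."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def chat(model: str, prompt: str, host: str = "http://localhost:11434") -> str:
    """Send one chat turn to the local server and return the reply text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

# Example (needs the local server running):
#   print(chat("llama3.2", "In one sentence, who is Brian Eno?"))
```

The same pattern works against any model Ollama can pull, which is the practical point here: the tooling for self-hosting is already commodity.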
KingThrillgore@lemmy.ml 1 week ago
He’s not wrong.
ininewcrow@lemmy.ca 1 week ago
Technological development and the future of our civilization is in control of a handful of idiots.
zapzap@lemmings.world 1 week ago
“Biggest” maybe. But it’s not the only relevant problem. I think AI is gonna pan out like social media did, which is to say it’s gonna be a shit show for society. And that would be the same no matter who owned it.
frezik@midwest.social 1 week ago
Both AI and social media are a shit show because it’s owned by a few people.
Unironically, the best social media is Fetlife. Not that it’s perfect by any means–not by far–but it is designed to facilitate bringing people together.
pyre@lemmy.world 1 week ago
wrong. it’s that it’s not intelligent. if it’s not intelligent, nothing it says is of value. and it has no thoughts, feelings or intent. therefore it can’t be artistic. nothing it “makes” is of value either.
SufferingSteve@feddit.nu 1 week ago
Reading the other comments, it seems there are more than one problem with AI. Probably even some perks as well.
Shucks, another one of these complex issues, huh. Weird how everything you learn something about turns out to have these nuances to them.
C45513@lemm.ee 1 week ago
most of the replies can be summarized as “the biggest problem with AI is that we live under capitalism”
nialv7@lemmy.world 1 week ago
That’s… just not true? Currently frontier AI models are actually surprisingly diverse, there are a dozen companies from America, Europe, and China releasing competitive models. Let alone the countless finetunes created by the community. And many of them you can run entirely on your own hardware so no one really has control over how they are used.
interdimensionalmeme@lemmy.ml 1 week ago
Why is this message not being drilled into everyone’s head? Sam Altman: go to prison or publish your stolen weights.
umbrella@lemmy.ml 1 week ago
ai excels at some specific tasks. the chatbots they push us to are a gimmick rn.
Heliumfart@sh.itjust.works 1 week ago
Reminds me of “biotech is Godzilla”. Sepultura version of course
Polderviking@feddit.nl 1 week ago
My biggest gripe with current AI is the same problem I have with anything crypto: its out-of-control power consumption relative to the problem it solves or the purpose it serves.
Knock_Knock_Lemmy_In@lemmy.world 1 week ago
Don’t throw all crypto under the bus. Only Bitcoin and other proof-of-work protocols are power hungry; second- and third-generation chains mostly use proof of stake and ZK-rollups for security, which are much more energy efficient.
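The energy gap comes from how proof of work operates: miners grind through nonces until a hash meets a difficulty target, and all that hashing is the electricity bill, while proof of stake replaces the race with a stake-weighted validator lottery. A toy illustration of the work side (real Bitcoin difficulty is astronomically higher than this):

```python
# Toy proof of work: find a nonce whose SHA-256 hash starts with N zero hex digits.
# Expected attempts grow 16x per extra digit -- that exponential grind is
# where the electricity goes; proof of stake skips it entirely.
import hashlib

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    """Return (nonce, hash) where the hash has `difficulty` leading zero digits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = mine("demo-block", difficulty=4)
print(nonce, digest)
```

Even this toy at difficulty 4 takes tens of thousands of hash attempts on average; Bitcoin’s network performs on the order of 10^20 such attempts per second, which is the entire source of its power draw.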
bob_lemon@feddit.org 1 week ago
Sure, but despite all the crypto bros’ assurances to the contrary, the only real-world applications for it are buying drugs, paying ransoms and getting scammed. Which means that any non-zero amount of energy is too much energy.
Polderviking@feddit.nl 1 week ago
I’m aware of this, but it’s still mostly just something for people to speculate on: something people buy, sit on, and then hopefully sell at a profit.
Bitcoin was supposed to be a decentralized alternative to money, but the number of people actually buying things with crypto is negligible.
And honestly, even if it was actually used for that, the power consumption would still be something to discuss.
rottingleaf@lemmy.world 1 week ago
That’s, #1, fashion rather than environmentalism, and #2, fashion promoted because it’s cheaper for the industry.
And yes, power saved somewhere will just be spent elsewhere, more cheaply, because savings mean reduced demand for power (or demand growing not as fast as it otherwise would).
Guns0rWeD13@lemmy.world 1 week ago
lol, sucker. none of that does shit and industry was already destroying the planet just fine before ai came along.
Polderviking@feddit.nl 1 week ago
Dare I assume you are aware we have “industry” because we consume?