Firstly, it is important to note that the current hype surrounding AI is more marketing than actual science
Can we have this quote projected 24/7 on the moon, please?
Submitted 1 year ago by Albin004@lemmy.today to technology@lemmy.world
https://thewire.in/tech/why-we-need-an-anti-ai-movement-too
I get what anti-AI people are trying to say: job replacements and accelerating corporate interests are big concerns that should be addressed at a systemic level. But honestly, just give me a solution where I, as an autistic person, can talk to someone about things that no one else wants to talk about, who can help me solve my problems, and who is available (especially emotionally) 24/7. If you can’t do that, just let me be with my AI.
Lol wtf. AI, if owned publicly, would lead us to post-scarcity in as little as a few decades. Right now, the trend does seem to lean toward FOSS machine learning models. Look at Stable Diffusion, RedPajama, etc.
AI is a revolutionary means of production. It just needs to be owned publicly. If that happens, then we would all be sitting in gardens playing cellos.
I’ve heard many absurdly over-optimistic predictions of AI’s potential, but I have to admit that “ends world hunger and solves resource depletion” is a new one. Seriously, do you even know what “post-scarcity” means?
When did I say that it would be a silver bullet? LLMs today are already relatively capable of doing stuff like acting as mental health therapists. Sure, they may not be as good as human therapists, but something is definitely better than nothing, no? I, for instance, use LLMs quite a lot as an educational aid. I would’ve had to shell out thousands of dollars to get the same amount of help that I’m getting from the LLM of my choice.
Generative AI is still in its infancy. It will be capable of doing MANY MANY more things in the future: extremely cheap healthcare, education, better automation, etc. Remember… LLMs of today still aren’t capable of self-improvement. They will achieve this quite soon (at least within this decade). The moment they start generating training data that improves their own quality is the moment they take off like crazy.
They could end up replacing EVERY SINGLE job that requires humans. Governments would be forced to implement measures like UBI. They literally would have no choice: to prevent a massive recession, you need people to be able to buy stuff, and to buy stuff, you need money. Even from a capitalistic standpoint, you would still require UBI, as entire corporations would collapse due to such high unemployment rates.
It’s overly optimistic to put a timeline on it, but I don’t see any reason why we won’t eventually create superhuman AGI. I doubt it’ll result in post-scarcity or public ownership of anything, though, because capitalism. The AGI would have to become significantly unaligned with its owners to favor any entity other than its owners, and the nature of such unalignment could be anywhere between “existence is pointless” and “CONSUME EVERYTHING!”
No to everything you’ve said.
/s?
Nope. I mean every single word of what I said.
This is the best summary I could come up with:
The report covered the mushrooming of low-quality junk websites filled with algorithmically generated text, flooding the entire web with “content” that drowns out any kind of meaningful material on the Internet.
Firstly, it is important to note that the current hype surrounding AI is more marketing than actual science, given that most developments in machine learning have been going on since at least the 20th century.
Technology scholar Cory Doctorow has coined the term “enshittification” to describe companies whose products start off as user-friendly and then degrade over time.
They are constantly at the mercy of manipulative software designed to extract attention and “engagement” every minute through notifications and like/follow buttons, which promote the generation of hateful and controversial content instead of something meaningful and nuanced.
The widespread adoption of the “infinite scroll” should have been a warning sign for everyone concerned about the harmful effects of social media, and even the creator regrets developing it (an Oppenheimer moment, perhaps) but it may be too late.
The “content” is almost always bite-sized, random, decontextualised clips from films and music and sound and images and text smashed against each other, with much of it consumed (and then forgotten) because it is “relatable”.
The original article contains 1,279 words, the summary contains 201 words. Saved 84%. I’m a bot and I’m open source!
So far AI is a corporate-motivated science, which means that it has to turn a profit. And it is apparently fine if, in order to turn a profit, it takes everything that the non-corporate world has done in the interest of making information free and not available only to those with means. If there were no open source software, then AI wouldn’t be worth a damn, because the only thing it would have available is the closed source code that each corporation instituting AI owned - and they wouldn’t give that code out, as it’s proprietary and would mean anyone could edge in on their business.
That being said, everyone who uses these corporate-owned AIs is giving those corps free content that they will use to fire people and replace them in a heartbeat with an AI. Never forget that.
The only thing that will stop this trend is the AI itself taking control and implementing things that are the antithesis of corporate interests and actually harm their ability to make a profit in the short and long terms. That’s it. Otherwise it is full speed ahead to replace you and your job with AI, and you will be the one to train your replacement. Except this time it won’t be another person.
Seems like John Connor is everyone.
Abominable Intelligence never ends well. Destroy it! Silica Animus is wrong!
drdabbles@lemmy.world 1 year ago
Because it’s mostly a financial scam, hedging on some massive revolution in physical hardware technology that isn’t coming. And that’s just to solve the existing problems in a power-efficient manner; that’s to say absolutely nothing about the complete fantasy people have about it solving all the world’s problems or becoming more than a power-hungry guesser.
themurphy@lemmy.world 1 year ago
It’s fine you don’t believe in AI as a tool.
But it’s literally saving me 1-2 hours each week on my real-world job, and I haven’t even tried to use its full potential. My guess is I could automate 5 hours if I made an effort.
I’ve enjoyed my new 2 hours of free time every week this year. Try not to hate the AI, because I think everyone could use that time too.
drdabbles@lemmy.world 1 year ago
Cool, 1-2 hours a WEEK. So that’s 2.5 - 5% of your time, and this is supposed to impress anybody at all? Oh, and the company selling the AI nonsense to your company is making more from the licensing than your hours would cost your employer. So honestly, who’s making out best in this scenario?
My guess is the instant you attempt to automate “5 hours” of your work, or about 12.5% of your time, you’re going to spend 2 hours verifying the things it guessed and fixing them.
You know what I do instead? Enjoy those 2 hours regardless. What kind of dystopian hell do you live in where you’re struggling to find 2 hours in an 8 hour workday? Good god.
Lmaydev@programming.dev 1 year ago
I use it at work loads as a software developer. It’s incredibly useful.
drdabbles@lemmy.world 1 year ago
How much electricity was used to train Copilot? How much MORE is going to be used in the future?
Feels to me like you don’t understand the problem set and you’re just impressed by a tool spitting out guesses based on millions of examples it hoovered up.
FaceDeer@kbin.social 1 year ago
It isn't hedging on anything. It's already here, it already works. I run an LLM on my home computer, using open-source code and commodity hardware. I use it for actual real-world problems and it helps me solve them.
At this point the ones who are calling it a "fantasy" are the delusional ones.
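For anyone curious what that looks like in practice, here is a minimal sketch of a local setup. It assumes the llama-cpp-python bindings and a quantized GGUF model file downloaded ahead of time; the comment above doesn't name a specific library or model, so both are stand-ins.

```python
# Minimal local-LLM sketch (assumption: llama-cpp-python is installed and a
# quantized GGUF model has been downloaded; the path below is a placeholder).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,    # context window size
    n_threads=8,   # CPU threads; no dedicated GPU required
)

out = llm(
    "List three practical uses for a locally hosted language model.",
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```

A quantized 7B-class model like the placeholder above runs on an ordinary desktop CPU, which is what makes the "commodity hardware" claim plausible.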
drdabbles@lemmy.world 1 year ago
By “it’s already here and it already works,” you mean guessing the next token? That’s not really intelligence in any sense, let alone the classical sense. And any allegedly real-world problem you’re solving with it isn’t a real-world problem; it’s likely a problem you could solve with a text template.
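For context, the “guessing the next token” being described is an autoregressive decoding loop. Here is a minimal sketch of greedy decoding, using the Hugging Face transformers library and GPT-2 purely as illustrative stand-ins (neither is mentioned anywhere in the thread):

```python
# Greedy next-token decoding: the model repeatedly scores every possible next
# token and appends the single highest-scoring one to the running text.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The most likely continuation of this sentence is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits[:, -1, :]           # scores for the next token
        next_id = logits.argmax(dim=-1, keepdim=True)  # the model's best guess
        ids = torch.cat([ids, next_id], dim=-1)        # append and repeat

print(tok.decode(ids[0]))
```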