Comment on AI Has Lost Its Magic
femboy_bird@lemmy.blahaj.zone 7 months ago
Idk man, curses are magic, and AI has been a curse on our existence since GPT-3 launched
tsonfeir@lemm.ee 7 months ago
Has it? I mean, all this “AI” news is pretty annoying, but how has it impacted our daily lives? ChatGPT has only made mine better.
FluffyPotato@lemm.ee 7 months ago
Most search engines are practically unusable due to the massive number of garbage AI-generated sites that clog up the search results while containing blatantly false information.
tsonfeir@lemm.ee 7 months ago
I haven’t used a search engine since I started using ChatGPT. It does all the heavy lifting for me.
hoot@lemmy.ca 7 months ago
I am concerned to think of all the terrible and just plain wrong information you have been given.
FluffyPotato@lemm.ee 7 months ago
The problem with using it as a search engine is that when it doesn’t know the answer, it commonly makes things up. I tried using it for work, but it got details wrong often enough to make it useless.
bionicjoey@lemmy.ca 7 months ago
I’m pretty sure the number of people that have lost their jobs over this shitty text generator has surpassed a million.
tsonfeir@lemm.ee 7 months ago
But that has nothing to do with ChatGPT. It’s what some people were blaming on all of the layoffs when it came out. That would have happened regardless. The news just loves a clickable headline.
Most of these companies will start rehiring again. They just did it to trim the fat, cut the high earners, and get people back in at a lower rate because they’re desperate for work.
Tale as old as time.
femboy_bird@lemmy.blahaj.zone 7 months ago
We get it you trust big tech and like sucking corporate dicks
JayDee@lemmy.ml 7 months ago
If we’re talking only about LLMs, then probably the biggest issues caused are threats to support line jobs, the enshittification of said help lines, blatant misinformation spread via those chat bots, and a variety of niche problems.
If we’re broadening this to mean AI more generally, we could talk about how facial recognition has now gotten good enough that it’s being used to identify and catalogue pretty much anyone who passes an FR-equipped security system. Israel has actually been picking civilian targets via AI. We could also talk about “self driving” cars and the completely avoidable deaths they’ve caused. We could talk about how most convolutional network AIs that identify graphic imagery and other horrific visuals rely on massive sweatshops where workers sort said graphic images for pennies. We could also talk about how mimicry AI has now been used both to create endless revenge porn of unwilling victims and to fake people’s voices to scam others or discourage them from voting. There’s plenty of damage AI as a whole has done, even if LLMs have caused the least of it.
tsonfeir@lemm.ee 7 months ago
A lot of what you’ve mentioned has existed for decades in some fashion. It’s just code.
Passerby6497@lemmy.world 7 months ago
And making these tools mass market instead of being something niche that requires actual talent to do is absolutely something to blame ChatGPT for.
RidcullyTheBrown@lemmy.world 7 months ago
I don’t think this is “AI more generally” as the public (and the current article) understands it. You’re lumping any slightly self-corrective algorithm under the AI umbrella. That might be technically correct, but it’s just operations; it’s not what the current hype is about.
The limiting factor for self driving cars is hardware, not software. There is no commercially viable video technology available to allow taking the self driving technology out of the lab and into the consumer space. Unless you’re talking about Tesla-like systems which, of course, are neither a “self-driving” system nor consumer ready.
This is not AI. The technology behind voice and image manipulation has existed for some time and has been used for fake porn and fake voice calls for a long while. We’re only discussing it now because such stories generate traffic when tied to a hype cycle like AI. Very few people would read a story about a student sticking his classmates’ faces onto naked bodies, but say the student used AI and suddenly everyone wants to find out what happened. It’s even worse: headlines are covering celebrities’ reactions to porn fakes in the context of AI, even though porn sites have had fake porn sections since the late 90s, available to anyone with the mental capacity to click “I’m over 18”. Maybe you’re too young to remember, but Google wasn’t always censoring search results. Before 2010 or so, fakes like these would routinely appear in Google searches of a celebrity’s name. I’m not really sure why AI makes this any different.
JayDee@lemmy.ml 7 months ago
You are pulling a no true Scotsman fallacy here. AI has always been a somewhat vague term, and it’s explicitly a buzzword in today’s systems.
This wave of AI has also been taking its current form for more than a decade, but it wasn’t a public topic until now, because it was terrible up until now.
The relevant thing is that AI is automating a normally human-centric practice via extensive training on a data model. All the systems I’ve mentioned utilize that machine learning practice at some point in their process.
Your statement about the deepfakes is just patently incorrect. A deepfake system is a trained model that takes an input and produces a manipulated output based on its training. That’s enough to meet the criteria. Before, this was fairly difficult and the results were almost immediately identifiable as AI-manipulated. It’s now popular because it has gotten good enough not to be immediately noticeable, can be done fairly easily, and is at the point where it can be mostly automated.