Comment on ChatGPT In Trouble: OpenAI may go bankrupt by 2024, AI bot costs company $700,000 every day
li10@feddit.uk 1 year ago
I don’t understand Lemmy’s hate boner over AI.
Yeah, it’s probably not going to take over like companies/investors want, but you’d think it’s absolutely useless based on the comments on any AI post.
Meanwhile, people are actively making use of ChatGPT and finding it to be a very useful tool. But because sometimes it gives an incorrect response that people screenshot and post to Twitter, it’s apparently absolute trash…
deadcream@kbin.social 1 year ago
It's just projection of the hate for techbros (especially celebrities like Musk). Everything that techbros love (crypto, AI, space, etc.) is hated automatically.
chaogomu@kbin.social 1 year ago
AI is not good. I want it to be good, but it's not.
I'll clarify: half of the shit it spits out is nonsense, and the rest is questionable. Even so, it's already being used to put people out of their jobs.
Techbros think AI will run rampant and kill all humans, when they're the ones killing people by replacing them with shitty AI. And the worst part is that it isn't even good at the jobs it's being used for. It makes shit up, it plagiarizes, it spits out nonsense. And a disturbing amount of the internet is starting to become AI generated. Which is also a problem. See, AI is trained on the wider internet, and now AI is being trained on the shitty output of AI. Which will lead to fun problems and the collapse of the AI. Sadly, the jobs taken by AI will not come back.
Aceticon@lemmy.world 1 year ago
It’s a tool that can be used to great effect in the right setting, for example to wrap dry, tersely stated knowledge in formats with much broader appeal, and to reverse that process.
However, it’s being sold by greedy fuckers who stand to gain from people jumping on the hype train as something else altogether: a shortcut to knowledge, and to the output of those who have it, because there’s a lot more money to be made from that than from something which can “write an article from a set of bullet points”.
_danny@lemmy.world 1 year ago
It’s definitely gone downhill recently, but at the launch of GPT-4 it was pretty incredible. It would make several logical jumps that a lot of actual people probably wouldn’t make. I remember my “wow moment” was asking how many M&M’s would fit in a typical glass milk jug; I then measured it myself (by weight) and its answer was about 8% off. It gave measurements and cited actual equations. I couldn’t find anything through Google that solved the same problem or had the same answer that it could have just copied. It was supposed to be bad at math, but GPT-4 got those types of problems pretty much spot on for me. (A rough back-of-envelope version of that estimate is sketched after this comment.)
I think most people who have tried the latest AI models have had a bad experience because the models’ capacity is now spread across many more users.
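For context, the milk-jug question is a classic Fermi estimate. Here is a minimal back-of-envelope sketch of it in Python; the gallon jug size, M&M volume, and packing fraction are all assumptions chosen for illustration, not numbers taken from the comment above:

```python
# Rough Fermi estimate: how many M&M's fit in a glass milk jug?
# Every constant below is an assumption for illustration only.

JUG_VOLUME_ML = 3785.0    # assume a 1-gallon (3.785 L) glass jug
MM_VOLUME_ML = 0.64       # approximate volume of one plain M&M
PACKING_FRACTION = 0.68   # oblate spheroids randomly pack a bit denser than spheres

count = JUG_VOLUME_ML * PACKING_FRACTION / MM_VOLUME_ML
print(f"roughly {count:,.0f} M&M's")  # on the order of 4,000
```

The point of the sketch is only that the whole estimate reduces to a volume ratio scaled by a packing fraction, which is the kind of reasoning the commenter is describing.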
chaogomu@kbin.social 1 year ago
There's also the issue of model collapse: when the AI is trained on data generated by AI, the errors and hallucinations start to compound until all you have left is gibberish. We're about halfway there.
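A toy way to see the compounding is to fit a trivial model (just a Gaussian) to its own synthetic output, generation after generation. This is only a sketch of the feedback-loop mechanism, not of an actual LLM, and the sample size and generation count are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" data drawn from a standard normal distribution.
samples = rng.normal(loc=0.0, scale=1.0, size=20)

for gen in range(1, 31):
    # "Train" a model on the current data by fitting its mean and std...
    mu, sigma = samples.mean(), samples.std()
    # ...then let the next generation train only on that model's output.
    samples = rng.normal(loc=mu, scale=sigma, size=20)
    if gen % 5 == 0:
        print(f"generation {gen:2d}: mean={mu:+.2f}, std={sigma:.2f}")

# Over generations the fitted spread tends to shrink and the mean wanders:
# each round keeps only what the previous model reproduced, so the
# distribution's diversity (its "tails") gets lost.
```

The model-collapse literature describes essentially this effect at LLM scale: the rare patterns in the training distribution disappear first, and quality degrades from there.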
aesthelete@lemmy.world 1 year ago
Not everyone that dislikes a thing or the promoters of that thing “have no idea what it is”…but sure, go off I guess. 🤷
notabird@lemmy.world 1 year ago
Lemmy, and Mastodon to an even larger extent, hate anything owned by a corporation. That voice is getting louder by the day.
Zeth0s@lemmy.world 1 year ago
AI is literally one of the most incredible creations of humanity, and people shit on it as if they know better. It’s genuinely an astonishing historical and cultural achievement, a peak of human ingenuity.
No idea why.
wizardbeard@lemmy.dbzer0.com 1 year ago
It’s shit on because it is not actually AI as the general public tends to use the term. This isn’t Data from Star Trek, or anything even approaching Asimov’s three laws.
The immediate defense against this statement is people going into mental gymnastics and hand-waving about “well, we don’t have a formal definition for intelligence, so you can’t say they aren’t”, which is just rhetorical nonsense because the inverse would be true as well: you can’t label something as intelligent without a formal definition either. Or they point at various arbitrary tests that ChatGPT has passed and claim that clearly something without intelligence could never have passed the bar exam, in complete and utter ignorance of how well suited LLMs are to those types of problem domains.
Also, I find that anyone bringing up the limitations and dangers is immediately lumped into this “AI haters” group, as if belief in AI were some sort of black-and-white religion or required some sort of ideological purity. Like having honest conversations about these systems’ problems intrinsically means you want them to fail. That’s BS.
Machine Learning and Large Language Models are amazing, they’re game changing, but they aren’t magical panaceas and they aren’t even an approximation of intelligence despite appearances. LLMs are especially dangerous because of how intelligent they appear to a layperson, which is why we see everyone rushing to apply them to entirely unsuitable use cases in a race to be the first to create the appearance of success and suck down those juicy VC bux.
Anyone trying to say different isn’t familiar with the field or is trying to sell you something. It’s the classic case of the difference between tech developers/workers and tech news outlets/enthusiasts.
The frustrating part is that people caught up in the hype train of AI will say the same thing: “You just don’t understand!” But then they’ll start citing the unproven potential future that is being bandied around by people who want to keep you reading their publication or who want to sell you something, not any technical details of how these (amazing) tools function.
At least in my opinion that’s where the negativity comes from.
Aceticon@lemmy.world 1 year ago
Personally, having been in tech for almost 3 decades, I am massively skeptical when the usual suspects put out yet another incredible claim backed up by nothing, worse still in an area I actually have quite a lot of knowledge in, and it gets picked up by mindless fanboys who don’t have the expertise to understand jack-shit about what they’re talking about, and by greedy fuckers using sales-speak because they stand to personally gain if enough useful idiots jump on the hype train.
You don’t even need to be old enough to remember that “revolution in human transportation” was how the Segway was announced: all it takes is looking at the claims made about Bitcoin and the blockchain and remembering the fraud-ridden shitshow the whole area became.
As I see it, anybody who is not skeptical towards “yet another ‘world changing’ claim from the usual types” is either dumb as a doorknob, young and naive, or a greedy fucker invested in it and trying to make money out of any “suckers” who jump on that hype train.
SirGolan@lemmy.sdf.org 1 year ago
I’ve been working on AI projects on and off for about 30 years now. Honestly, for most of that time I didn’t think neural nets were the way to go, so when LLMs and transformers got popular, I was super skeptical. After learning the architecture and using them myself, I’m convinced they’re part of, but not the whole, solution to AGI. As they are now, yes, they are world changing. They’re capable of improving productivity in a wide range of industries. That seems pretty world changing to me. There are already products out there proving this (GitHub Copilot, Jasper, even ChatGPT). You’re welcome to downplay it and be skeptical, but I’d highly recommend giving it an honest try. If you’re right, then you’ll have more to back up your opinion, and if you’re wrong, you’ll have learned to use the tech and won’t be left behind.
Aceticon@lemmy.world 1 year ago
Ah, yes.
Remind me again how that “revolution of human mobility”, the Segway, is doing now…
Or how wonderful every single one of the announcements of breakthroughs in fusion power generation has turned out to be…
Or how the safest Operating System ever, Windows 7, turned out in terms of security…
Or how Bitcoin has revolutionized how people pay each other for stuff…
Some of us have seen lots of hype trains go by over the years, always in the same format, and recognize the sales-speak from greedy fuckers, designed to excite the ignorant, naive fanboys of such bullshit choo-choo trains when they pull into the station.
Looking at your choice of words in your post, you’re very invested in it, either emotionally (as a fanboy) or monetarily (a greedy fucker hoping to make money from the hype), since rational people who are not using sales-speak will not refer to anything brand new as “the most incredible creation of humanity” (it’s way too early to tell), much less deem any and all criticism of it “shitting on it”.
FaceDeer@kbin.social 1 year ago
"Completely unrelated thing X didn't live up to its hype, therefore thing Y must also suck" is not particularly sound logic for shitting on something.
Aceticon@lemmy.world 1 year ago
Funny how, out of all the elements where it resonates with historical events, “people promoting it”, “bleeding-edge tech”, “style of messaging”, “extraordinary claims without extraordinary proof” and more, you ended up making the kind of simplistic conclusion that a young child might make.
Zeth0s@lemmy.world 1 year ago
AI, even in its current state, is one of the most incredible creations of humanity.
If there were a Nobel Prize for math and computer science, the whole field would deserve one next year. It would probably go to a number of different people who contributed to the current methodologies.
You cannot compare NFTs to AI. Open Nature or Science (the scientific journals) right now and you’ll see how big the impact of AI is.
You can start your research here: www.deepmind.com/research/…/alphafold. More Nobel Prize material.
Aceticon@lemmy.world 1 year ago
I actually have some domain expertise, so excuse me if I don’t just eat up that overexcited, ignorant fanboy pap and pamphlets from one of the very companies trying to profit from such things.
AGI (Artificial General Intelligence, i.e. a “thinking machine”) would indeed be that “incredible creation of humanity”, but that’s not this shit. This shit is a pattern-matching and pattern-reassembly engine: a technologically evolved parrot capable of producing outputs that mimic what was present in its training sets, to such a level that it even parrots associations that were present in those training sets (i.e. certain questions get certain answers, only the LLM doesn’t even understand them as “questions” and “answers”, just as textual combinations).
Insufficiently intelligent people with no training in the hard sciences often confuse such perfect parroting of what intelligent beings previously produced with actually having intelligence, which is half hilarious and half sad.
HellAwaits@lemm.ee 1 year ago
What I don’t understand is why so many people conflate “hating the Disney CEO for misusing AI” with “hating AI”. Maybe if people understood the difference, they would “understand the hate”.