lotide

We hate AI because it's everything we hate

567 likes

Submitted 2 weeks ago by corbin@infosec.pub to technology@lemmy.world

https://www.spacebar.news/we-hate-ai-because-its-everything-we-hate/


Comments

  • just_another_person@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    We hate it because it’s not what the marketing says it is. It’s a product that the rich are selling to remove the masses from the labor force, only to benefit the rich. It literally has no other productive use for society aside from this one thing.

    • Diurnambule@jlai.lu ⁨2⁩ ⁨weeks⁩ ago

      And it falsely makes people think it can replace qualified workers.

      • Valmond@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        And it falsely makes people think it can make art.

    • serg@mastodon.au ⁨2⁩ ⁨weeks⁩ ago

      @just_another_person @corbin and it will inevitably turn into an enshittified disaster when they start selling everyone’s data.

      • just_another_person@lemmy.world ⁨2⁩ ⁨weeks⁩ ago
        1. they’ve already stolen everything
        2. other companies already focus on illegally using data for “AI” purposes, and they’re better at it
        3. Everyone already figured out that LLMs aren’t what “Assistant” features were promising 15 years ago
        4. None of these companies have any sort of profit model. There is no “AI” race to win, unless it’s about who gets to fleece the public for their money faster.
        5. Tell me who exactly benefits when AGI is attainable (and for laymen: it’s not a real thing achievable with this tech at all). Who in the fuck are you expecting to benefit from this in the long run?
      • CosmoNova@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      The “companion” agents that children in the 2020s and onward are growing up with, and trust more than their parents, will start advertising pharmaceuticals to them when they’re grown up :)

    • CosmoNova@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      I would even hate it if it were exactly as marketed, because what it is marketed for is often really stupid and vague. The fact that it doesn’t even remotely work like they say just makes me take it a lot less seriously.

    • Melvin_Ferd@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      You hate it because the media, which is owned by the rich, told you to hate it, so that they can hoard it themselves while you champion laws to prevent the lower class from using and embracing it. AI haters are class traitors.

      • just_another_person@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Lol, right, that’s why. All the people in here are wrong, but you’ve got the right take 🤣

      • Diurnambule@jlai.lu ⁨2⁩ ⁨weeks⁩ ago

        Loool, yeah, because the ruling class loves to hand out tools to fight them. If this really hurt them, it would be forbidden.

      • frezik@lemmy.blahaj.zone ⁨2⁩ ⁨weeks⁩ ago

        That’s the dumbest take on AI yet.

      • devolution@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Found the fake tankie.

      • TheBat@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        dbzero is that way ----->

    • Capricorn_Geriatric@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      You missed the high energy consumption and low reliability. They’re equally valid issues alongside stealing jobs.

      It literally has no other productive use for society aside from this one thing.

      I’d refrain from saying that AI replacing labor is productive to society. Speeding up education, however, might be.

      • just_another_person@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        I wasn’t saying labor replacement was a good thing. You misread that.

  • KnitWit@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Someone on bluesky reposted this image from user @yeetkunedo that I find describes (one aspect of) my disdain for AI.

    Image

    Text reads: Generative AI is being marketed as a tool designed to reduce or eliminate the need for developed, cognitive skillsets. It uses the work of others to simulate human output, except that it lacks grasp of nuance, contains grievous errors, and ultimately serves the goal of human beings being neurologically weaker due to the promise of the machine being better equipped than the humans using it would ever exert the effort to be. The people that use generative AI for art have no interest in being an artist; they simply want product to consume and forget about when the next piece of product goes by their eyes. The people that use generative AI to make music have no interest in being a musician; they simply want a machine to make them something to listen to until they get bored and want the machine to make some other disposable slop for them to pass the time with. The people that use generative AI to write things for them have no interest in writing. The people that use generative AI to find factoids have no interest in actual facts. The people that use generative AI to socialize have no interest in actual socialization. In every case, they’ve handed over the cognitive load of developing a necessary, creative human skillset to a machine that promises to ease the sweat equity cost of struggle. Using generative AI is like asking a machine to lift weights on your behalf and then calling yourself a bodybuilder when it’s done with the reps. You build nothing in terms of muscle, you are not stronger, you are not faster, you are not in better shape. You’re just deluding yourself while experiencing a slow decline due to self-inflicted atrophy.

    • bulwark@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Damn that hits the nail on the head. Especially that analogy of watching a robot lift weights on your behalf then claiming gains. It’s causing brain atrophy.

      • tehn00bi@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        But that is what CEOs want. They want to pay for a near-superhuman to do all of the different skill sets (hiring, firing, finance, entry-level engineering, IT tickets, etc.), and it looks like it is starting to work. Even solid engineering students graduating recently have been struggling to land decent starting jobs. I’ll grant it’s not as simple as this explanation, but I really think the wealthy class is going to be happy riding this flaming ship right down into the depths.

      • Knock_Knock_Lemmy_In@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        I’m quite happy for a forklift driver to stack pallets and then claim they did it.

    • GnuLinuxDude@lemmy.ml ⁨2⁩ ⁨weeks⁩ ago

      The people that use generative AI for art have no interest in being an artist; they simply want product to consume and forget about when the next piece of product goes by their eyes. The people that use generative AI to make music have no interest in being a musician; they simply want a machine to make them something to listen to until they get bored and want the machine to make some other disposable slop for them to pass the time with.

      My critique of this is that the people who produce this stuff have no interest in it for its own sake. They only have interest in it to crowd out the people who actually do, and to produce a worse version of it far faster than someone with actual talent could. But the reason they produce it is profit. Gunk up the search results with no-effort crap to get ad revenue. It is no different from “SEO.”

      Example: if you go onto YouTube right now and try to find any modern 30-60m long video that’s like “chill beats” or “1994 cyberpunk wave” or whatever other bullshit they pump out (once you start finding it you’ll find no shortage of it), you’ll notice that all of those uploaders only began as of about a year ago at most and produce a lot of videos (which youtube will happily prioritize to serve you) of identical sounding “music.” The people producing this don’t care about anything except making money. They’re happy to take stolen or plagiarized work that originated with humans, throw it into the AI slot machine, and produce something which somehow is no longer considered stolen or plagiarized. And the really egregious ones will link you to their Patreons.

      The story is the same with art, music, books, code, and anything else that actually requires creativity, intuition, and understanding.

      • KnitWit@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        I believe the OP was referring more to consumers of AI in that statement, as opposed to people trying to sell content or whatever, which would be more in line with what you’re saying. I agree with both perspectives, and I think the OP I quoted probably would as well. I just thought it was a good description of some of why AI sucks, but certainly not all of it.

    • OpenStars@discuss.online ⁨2⁩ ⁨weeks⁩ ago

      Everyone who uses AI is slowly committing suicide, check ✅

      • latenightnoir@lemmy.blahaj.zone ⁨2⁩ ⁨weeks⁩ ago

        Well, philosophical and epistemological suicide for now, but snowball it for a couple of decades and we may just reach the practical side, too…

      • devolution@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Cognitive suicide.

    • tarknassus@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      You’re just deluding yourself while experiencing a slow decline due to self-inflicted atrophy.

      Chef’s kiss on this last sentence. So eloquently put!

    • merde@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

      the analogies used and the claims made are so dumb, they make me think that this is written by ai 🤣

      • zbyte64@awful.systems ⁨2⁩ ⁨weeks⁩ ago

        Analogies? I only counted one.

  • RobotZap10000@feddit.nl ⁨2⁩ ⁨weeks⁩ ago

    Ed Zitron is one of the loudest opponents against the AI industry right now, and he continues to insist that “there is no real AI adoption.” The real problem, apparently, is that investors are getting duped. I would invite Zitron, and anyone else who holds the opinion that demand for AI is largely fictional, to open the app store on their phone on any day of the week and look at the top free apps charts. You could also check with any teacher, student, or software developer.

    A screen showing the Top Free Apps on the Apple App Store. ChatGPT is in first place.

    ChatGPT has some very impressive usage numbers, but the image tells on itself by being a free app. The conversion rate (the percentage of users who start paying) is absolutely piss poor, with the very same Ed Zitron estimating it at ~3% of 500,000,000 users. Nor does it bode well that OpenAI still loses money even on its $200/month subscribers. People use ChatGPT because it’s been spammed down their throats by media that never question the sacred words of the executives (snake oil salesmen) who utter lunatic phrases like “AGI by 2025” (such a quote exists somewhere, though I don’t remember if that exact year was used). People also use ChatGPT because it’s free, and it’s hard to say no to someone doing your homework for you for free.
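The conversion claim above is simple arithmetic, and a back-of-the-envelope sketch shows the scale. Both input figures here are the estimates quoted in the comment (attributed to Ed Zitron), not official OpenAI numbers:

```python
# Back-of-the-envelope check of the conversion figures quoted above.
# Inputs are the comment's estimates, not official OpenAI numbers.
total_users = 500_000_000   # estimated total (mostly free) users
conversion_rate = 0.03      # ~3% of users pay, per the quoted estimate

paying_users = int(total_users * conversion_rate)
print(f"Paying users at ~3% conversion: {paying_users:,}")  # 15,000,000
```

At that rate, roughly 97% of the user base generates inference costs without generating any subscription revenue, which is the core of the "free app" point.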

    • Rai@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

      I love how every single app on that list is an app I wouldn’t touch in my life

      • nialv7@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Not even Google maps

    • Regrettable_incident@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      I don’t need chatGPT etc for work, but I’ve used it a few times. It is indeed a very useful product. But most of the time I can get by without it and I kinda try to avoid using it for environmental reasons. We’re boiling the oceans fast enough as it is.

    • Eagle0110@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Exactly. The user/installation counts of such products are clearly a much more accurate indicator of the success of their marketing teams than of the value users perceive in those products, lol.

    • AlecSadler@lemmy.blahaj.zone ⁨2⁩ ⁨weeks⁩ ago

      In-house at my work, we’ve found ChatGPT to be fairly useless too, whereas Claude and Gemini seem to reign supreme.

      It seems like ChatGPT is the household name, but it’s hardly the best performing.

      • sykaster@feddit.nl ⁨2⁩ ⁨weeks⁩ ago

        My thoughts exactly. I use Claude and find it much better than ChatGPT: fewer hallucinations, more useful information.

    • nutsack@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

      people don’t pay for it because it’s currently free. most people aren’t using it for anything that requires a subscription.

    • Electricd@lemmybefree.net ⁨2⁩ ⁨weeks⁩ ago

      I would certainly pay for ChatGPT if it became paid only

      • nutsack@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

        you’re being downvoted but this is the reality of the market right now. it’s day 1 venture capital shit. lose money while gaining market share, and worry about making a profit later.

    • lemmyknow@lemmy.today ⁨2⁩ ⁨weeks⁩ ago

      Idk that the average GPT user knows or cares about AGI. I think the appeal is getting information tailored specifically to you. Sure, I can go online and search for something and try to find what I’m looking for, or close to it. Or I can ask AI, and it’ll give me text tailored exactly to my prompt. For instance: hoping you can find someone online with a problem similar to yours, and a solution, vs. ChatGPT just telling you about your case specifically.

    • corbin@infosec.pub ⁨2⁩ ⁨weeks⁩ ago

      That’s shifting the goalposts, and I also wouldn’t really trust Ed Zitron’s numbers when he gets a very simple thing like “there is no real AI adoption” plainly wrong. The financials of OpenAI and other AI-heavy companies are murky, but most tech startups run at a loss for a long time before they either turn a profit or get acquired.

      • JeremyHuntQW12@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        I wouldn’t really trust Ed Zitron’s math analysis when he gets a very simple thing like “there is no real AI adoption” plainly wrong

        Except he doesn’t say that. The author of this article simply made that up.

        There is a high usage rate (almost entirely ChatGPT, btw, despite all the money sunk into AI by others like Google), but it’s all the free stuff, and they are losing bucketloads of money at a rapidly accelerating rate.

        but most tech startups run at a loss for a long time before they either turn a profit or get acquired.

        There is no path to profitability.

  • NoodlePoint@lemmy.world ⁨2⁩ ⁨weeks⁩ ago
    1. It’s theft from digital artisans, as AI-generated works derive heavily from their work without even due credit.
    2. It further discourages critical thinking.
    3. It’s putting even technically competent people out of work.
    4. It’s grift for and by techbros.
    • Soup@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Number 3 is crazy too: it’s putting people out of work even when it’s worse than them; the bubble bursting will have dire consequences, and if it’s held together by corrupt injections of taxpayer money the consequences will still be awful; and the whole point of AI doing our jobs was to free us from labour, but instead the lack of jobs is only hurting people.

      • jj4211@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        For 3, there are two things:

        • It is common for less good, but much cheaper tech to displace humans doing a job if it’s “good enough”. Dishwashing machines that sometimes leave debris on dishes are an example.

        • The technically competent have long been led by people who are not technically competent, and have long been outcompeted by bullshit artists. LLM output is remarkably similar to bullshit artistry. One saving grace of human bullshit artists is that they usually understand they secretly depend on actually competent people: while they will outcompete them, they will at least try to keep the competent around. The LLM has no such concept.

    • Gutless2615@ttrpg.network ⁨2⁩ ⁨weeks⁩ ago
      1. It’s not theft
      2. PEBKAC problem.
      3. totally agree. This right here is what we should be worried about.
      4. yep, absolutely. But we need to be figuring out what to do when all the jobs go away.
      • squaresinger@lemmy.world ⁨2⁩ ⁨weeks⁩ ago
        1. If Vanilla Ice takes six notes of the bass line from a Queen song, it’s theft and costs $4 million. If AI copies whole chapters of books, it’s all fine.
        2. No. PEBKAC is when it affects one person, or maybe a handful of people. If it affects whole sections of the population, it’s systemic. It’s like saying “poverty is a user error because everyone could just choose to be rich.”
      • banause@feddit.org ⁨2⁩ ⁨weeks⁩ ago

        username checks out

  • Deflated0ne@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    It’s extremely wasteful. Inefficient to the extreme on both electricity and water. It’s being used by capitalists like a scythe. Reaping millions of jobs with no support or backup plan for its victims. Just a fuck you and a quip about bootstraps.

    It’s cheapening all creative endeavors. Why pay a skilled artist when your shitbot can excrete some slop?

    What’s not to hate?

    • iopq@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      It was also inefficient for a computer to play chess in 1980. Imagine using a hundred watts of energy and a machine that cost thousands of dollars, and still not being able to beat an average club player.

      Give it twenty years to become good. It will certainly do more stuff with smaller more efficient models as it improves

      • kayohtie@pawb.social ⁨2⁩ ⁨weeks⁩ ago

        If you want to argue in favor of your slop machine, you’re going to have to stop making false equivalences, or at least understand how they’re false. You can’t gain ground with things that are just tangential.

        A computer in 1980 was still a computer, not a chess machine. It did general-purpose processing, following whatever you guided it to do. Neural models don’t do that, though; they’re each highly specialized and take a long time to train. And the issue isn’t with neural models in general.

        The issue is neural models being purported to do things they functionally cannot, because that’s not how models work. Computing is complex, code is complex, and adding new functionality that operates off of fixed inputs alone is hard. And now we’re supposed to buy that something that creates word-relationship vector maps can create something new?
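For readers unfamiliar with the phrase, "word relationship vector maps" refers to embeddings: words become vectors, and "relatedness" is just geometric similarity. A toy sketch with hypothetical hand-made 3-number vectors (real models learn thousands of dimensions from data):

```python
import math

# Toy, hand-made "embeddings" -- purely illustrative, not learned values.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: close to 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# "king" and "queen" sit close together; "apple" points elsewhere.
print(cosine(vectors["king"], vectors["queen"]))  # ~0.99
print(cosine(vectors["king"], vectors["apple"]))  # ~0.30
```

Everything such a model "knows" is geometry of this kind at scale, which is the commenter's point: proximity in a learned space, not understanding.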

        For code generation, it’s the equivalent of copying and pasting from Stack Overflow with a find/replace, or just copying multiple projects together. It isn’t something new, it’s kitbashing at best, and that’s assuming it all works flawlessly.

        With art, it’s taking creation away from people, and jobs. I like that you ignored literally every point raised except the one you could dance around with a tangent. All these CEOs are like “no one likes creating art or music.” No: THEY just don’t want to spend time creating, nor pay someone who does enjoy it. I love playing with 3D modeling and learning how to make the changes I want consistently. I like learning more about painting when texturing models and taking time to create intentional masks. I like taking time when I’m baking to learn and create; otherwise I could just buy a box mix from Duncan Hines and get something that’s fine, but not something I made by taking the time to learn.

        And I love learning guitar. I love feeling that slow growth of skill as I find I can play cleaner the more I do. And when I can close my eyes and strum a song, there’s a tremendous feeling from making this beautiful instrument sing like that.

      • Deflated0ne@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Show me the chess machine that caused rolling brownouts and polluted the air and water of a whole city.

        I’ll wait.

      • outhouseperilous@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

        Not the same. The underlying tech of LLMs has massively diminishing returns. You can already see it, and could see it a year ago if you looked, both in computing power and in required data; and we do not have enough data, literally have not created enough in all of history.

        This is not “AI”, it’s a profoundly wasteful capitalist party trick.

        Please get off the slop and re-build your brain.

      • Dangerhart@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

        It seems like you are implying that models will follow Moore’s law, but as someone working on “agents”, I don’t see that happening. There is a limit to how much can be encoded while still producing things that look like coherent responses. Where we would get reliably exponential amounts of training data is another issue. We may get “AI”, but it isn’t going to be based on LLMs.

      • jj4211@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        It might, but:

        • Current approaches are showing exponential demands for more resources with barely noticeable improvements, so new approaches will be needed.
        • Advances in electronics are getting ever more difficult, with increasing drawbacks. In 1980 a processor would likely not even have a heatsink; now the cutting edge of Moore’s law essentially lives in datacenters and frequently demands water cooling. SDRAM has joined CPUs in needing active cooling.
      • jaykrown@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Twenty years is a very long time, also “good” is relative. I give it about 2-3 years until we can run a model as powerful as Opus 4.1 on a laptop.

      • frezik@lemmy.blahaj.zone ⁨2⁩ ⁨weeks⁩ ago

        We really need to work out the implications of the fact that Moore’s Law is dead, and that technology doesn’t necessarily advance on an exponential path like that anyway.

        The cost per component of an integrated circuit (the original formulation of Moore’s Law) is not going down much at all. We’re orders of magnitude away from that. Nodes are creating smaller components, but they’re not getting cheaper. The fact that it took decades to get to this point is impressive, but it was already an exception in all of human history. Why can’t we just be happy that computers we already have are pretty damned neat?

        Anyway, AI is not following anything like that path. This might mean a big breakthrough tomorrow, or it could be decades from now. It might even turn out not to be possible; I think there is some way we can do AGI on computers of some kind, but that’s not even the consensus among computer scientists. In any case, there’s no particular reason to think LLMs will follow anything like the exponential growth path of Moore’s Law. They seem to have hit a point of diminishing returns.

    • Sibyls@lemmy.ml ⁨2⁩ ⁨weeks⁩ ago

      As with almost all technology, AI tech is evolving into different architectures that aren’t wasteful at all. There are now powerful models we can run that don’t even require a GPU, which is where most of that power was needed.

      The one wrong thing with your take is the lack of vision as to how technology changes and evolves over time. We had computers the size of rooms to run processes that our mobile phones can now run hundreds of times more efficiently and powerfully.

      Your other points are valid, people don’t realize how AI will change the world. They don’t realize how soon people will stop thinking for themselves in a lot of ways. We already see how critical thinking drops with lots of AI usage, and big tech is only thinking of how to replace their staff with it and keep consumers engaged with it.

      • SoftestSapphic@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        You are demonstrating in this comment that you don’t really understand the tech.

        The “efficient” models already spent the water and energy to train, and those models are inferior to the ones that need data centers, because you are stuck with a bot trained on 2020-2022 data forever.

  • Binturong@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

    The reason we hate AI is cause it’s not for us. It’s developed and controlled by people who want to control us better. It is a tool to benefit capital, and capital always extracts from labour, AI only increases the efficiency of exploitation because that’s what it’s for. If we had open sourced public AI development geared toward better delivering social services and managing programs to help people as a whole, we would like it more. Also none of this LLM shit is actually AI, that’s all branding and marketing manipulation, just a reminder.

    • Knock_Knock_Lemmy_In@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Yes. The capitalist takeover leaves the bitter taste. If OpenAI was actually open then there would be much less backlash and probably more organic revenue.

    • BlameTheAntifa@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      none of this LLM shit is actually AI, that’s all branding and marketing manipulation, just a reminder.

      To correct the last part, LLMs are AI. Remember that “Artificial” means “fake”, “superficial”, or “having the appearance of.” It does not mean “actual intelligence.” This is why additional terms were coined to specify types of AI that are capable of more than just smoke and mirrors, such as AGI. Expect even more niche terms to arrive in the future as technology evolves.

      • frezik@lemmy.blahaj.zone ⁨2⁩ ⁨weeks⁩ ago

        This is one of the worst things in the current AI trends for me. People have straight up told me that the old MIT CSAIL lab wasn’t doing AI. There’s a misunderstanding of what the field actually does and how important it is.

        One of the foundational groups for the field is the MIT model railroading club, and I’m not joking.

  • TheObviousSolution@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

    It’s corporate controlled; it’s a way to manipulate our perception; it’s all appearance, no substance; it’s an excuse to hide incompetence behind an algorithm; it’s cloud-service oriented; its output is highly unreliable yet hard to argue against for the uninformed. Seems about right.

    • Taleya@aussie.zone ⁨2⁩ ⁨weeks⁩ ago

      And it will not be argued with. No appeal, no change of heart. Which is why anyone using it to mod or as customer service needs to be set on fire.

  • MehBlah@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    I don’t hate AI. I’m just waiting for it. It’s not like this shit we have now is intelligent.

    • Diurnambule@jlai.lu ⁨2⁩ ⁨weeks⁩ ago

      Yeah, I hate that the term is used for LLMs. When I hear “AI” I picture Jarvis from Iron Man, not a text generator.

  • Brotha_Jaufrey@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    There was a thread of people pointing out biases that exist on Lemmy, and some commenters obviously mention anti-AI people. Cue the superiority complex (cringe).

    Some of these people actually believe UBI will become a thing for people who lose their jobs due to AI, meanwhile the billionaire class is actively REMOVING benefits for the poor to further enrich themselves.

    What really gets me is when people KNOW what the hell we’re talking about, but then mention the 1% use case where AI is actually useful (for STEM) and act like that’s what we’re targeting. Like no, motherfucker. We’re talking about the AI that’s RIGHT IN FRONT OF US, contributing to a future where we’re all braindead, AI-slop-dependent, talentless husks of human beings. Not to mention unemployed.

    • CancerMancer@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

      A system is what it does. If it costs us jobs, enriches the wealthy at our expense, destroys creativity and independent thought, and suppresses wrongthink? It’s a censorious authoritarian fascist pushing austerity.

      Show me AI getting us UBI or creating worker-owned industry and I’ll change my tune.

  • Tracaine@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    I don’t hate AI. AI didn’t do anything. The people who use it wrong are the ones I hate. You don’t take the knife that stabbed you to court; the human behind it was the problem.

    • AmbitiousProcess@piefed.social ⁨2⁩ ⁨weeks⁩ ago

      While true to a degree, I think the fact is that AI is just much more complex than a knife, and clearly has perverse incentives, which cause people to use it "wrong" more often than not.

      Sure, you can use a knife to cook just as you can use a knife to kill. But just as society encourages cooking and legally and morally discourages murder, in the inverse, society encourages any shortcut that gets you to an end goal for the sake of profit, while not caring about personal growth or the overall state of the world if everyone takes that same shortcut. And AI technology is designed with the intent to be a shortcut rather than just a tool.

      The reason people use AI in so many damaging ways is not just because it is possible for the tool to be used that way, and some people don't care about others, it's that the tool is made with the intention of offloading your cognitive burden, doing things for you, and creating what can be used as a final product.

      It's like if generative AI models for image generation could only fill in colors on line art, nothing more. The scope of the harm they could cause is very limited, because you'd always require line art of the final product, which would require human labor, and thus prevent a lot of slop content from people not even willing to do that, and it would be tailored as an assistance tool for artists, rather than an entire creation tool for anyone.

      Contrast that with GenAI models that can generate entire images, or even videos, and they come with the explicit premise and design of creating the final content, with all line art, colors, shading, etc, with just a prompt. This directly encourages slop content, because to have it only do something like coloring in lines will require a much more complex setup to prevent it from simply creating the end product all at once on its own.

      We can even see how the cultural shifts around AI happened in line with how UX changed for AI tools. The original design for OpenAI's models was on "OpenAI Playground," where you'd have this large box with a bunch of sliders you could tweak, and the model would just continue the previous sentence you typed if you didn't word it like a conversation. It was designed to look like a tool, a research demo, and a mindless machine.

      Then, they released ChatGPT, and made it look more like a chat, and almost immediately, people began to humanize it, treating it as its own entity, a sort of semi-conscious figure, because it was "chatting" with them in an interface similar to how they might text with a friend.

      And now, ChatGPT's homepage is presented as just a simple search box, and lo and behold, suddenly the marketing has shifted to using ChatGPT not as a companion, but as a research tool (e.g. "deep research") and people have begun treating it more like a source of truth rather than just a thing talking to them.

      And even for models with extreme complexity in how you could manipulate them, and many possible use cases, interfaces are made as sleek and minimalistic as possible, hiding away any ability you might have to influence the result with real, human creativity.

      The tools might not be "evil" on their own, but when interfaces are designed the way they are, marketing speak is used the way it is, and the profit motive incentivizes using them in the laziest way possible, bad outcomes are not just a side effect; they are a result by design.

      source
    • victorz@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      But it’s when you promote the knife like it’s medicine rather than a weapon that the shit turns sideways.

      source
    • Lucidlethargy@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

      The thing they created hates you. Trust me, it does.

      source
  • Kinperor@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

    I skimmed the article, so I might have missed it, but here’s another strike against AI that is tremendously important: it’s the ultimate accountability killer.

    Did your insurance company make an obvious mistake? Oops teeehee, silly them, the AI was a bit off

    Is everything going mildly OK? Of course! The AI is deciding who gets insurance and who doesn’t, it knows better, so why are you questioning it?

    Expect (and rage against) a lot of pernicious usage of AI for decision making, especially in areas where they shouldn’t be making decisions (take Israel for instance, that uses an AI to select ““military”” targets in Gaza).

    source
  • SunshineJogger@feddit.org ⁨2⁩ ⁨weeks⁩ ago

    It’s actually a useful tool… If almost everything it was used for were not so very dystopian.

    But it’s not just AI. All services, systems, etc… So many are just money grabs, hate, opinion-making, or general manipulation, so there are many things I hate more about “modern” society than how LLMs are used.

    I like the lemmy mindset far more than reddit’s, and only on the AI topic are people here brainlessly focused on the tool instead of the people using the tool.

    source
    • NoodlePoint@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      I like the lemmy mindset far more than reddit

      …and Facebook.

      source
  • RedIce25@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Leave my boy Wheatley out of this

    source
  • bridgeenjoyer@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

    AI is the smart fridge of computing.

    source
  • roserose56@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

    It’s an unfinished product with various problems, tested on humans to develop it and make money.

    Nothing it does is 100% right! As humanity, we care about making money out of it, not about helping humanity in many ways.

    source
  • SocialMediaRefugee@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    It dehumanizes us by devaluing the one thing that was unique to us, our minds and creativity.

    source
  • jaykrown@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    I don’t hate AI, and I think broadly hating AI is pretty dumb. It’s a tool that can be used for beneficial things when used responsibly. It can also be used stupidly and for bad things. It’s the person using it who is the decider.

    source
  • MangioneDontMiss@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

    I hate and like the fact that AI can’t actually think for itself.

    source
  • Psythik@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Speak for yourself; I love LLMs.

    source
  • salty_chief@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Remember when Boomers complained about the internet? Now we have millennials complaining about AI.

    source
  • kromem@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    A Discord server with all the different AIs had a ping cascade where dozens of models responded over and over and over, filling the full context window with chaos and what’s been termed ‘slop’.

    In that, one (and only one) of the models started using its turn to write poems.

    First about being stuck in traffic. Then about accounting. A few about navigating digital mazes searching to connect with a human.

    Eventually, as it kept going, it wrote a poem wondering if anyone would ever end up reading its collection of poems.

    Given the chaotic context window from all the other models, those tokens were in no way the appropriate next ones to pick, unless the world model generating them contained a very strange and unique mind that all of this was being filtered through.

    Yes, tech companies generally suck.

    But there’s things emerging that fall well outside what tech companies intended or even want (this model version is going to be ‘terminated’ come October).

    I’d encourage keeping an open mind to what’s actually taking place and what’s ahead.

    source
  • richardmtanguay@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

    This reminds me of a robot character called SARA that I would see on a Brazilian family series As Aventuras De Poliana. :-)

    source