Turns out Generative AI was a scam

142 likes

Submitted 1 day ago by floofloof@lemmy.ca to technology@lemmy.world

https://garymarcus.substack.com/p/turns-out-generative-ai-was-a-scam

cross-posted from: lemmy.bestiver.se/post/952771

Comments

  • MaggiWuerze@feddit.org 1 day ago

    Substack promotes and finances Nazi content
    • Melusine@tarte.nuage-libre.fr 1 day ago

      They even pushed a notification with a swastika to all users
  • ada@piefed.blahaj.zone 1 day ago

    Generative AI was a scam

    So is Substack…
  • Ilixtze@lemmy.ml 1 day ago

    In other news, the sky is blue. Enjoy your hollowed-out economy.
    • 7112@lemmy.world 1 day ago

      [Image]
  • Crozekiel@lemmy.zip 15 hours ago

    I’m frustrated by the “was” in the title… as if we aren’t still sinking on that awful ship right now, as if it’s all behind us… But it isn’t. :(
  • Tetsuo@jlai.lu 22 hours ago

    Don’t worry, we will all be paying for it when the AI Jenga tower falls.

    Businesses can make huge investment mistakes and then ask the government for help, and it will have to save them to prevent total collapse and to save jobs. So we will pay for a few dumb CEOs, like we always do.
    • floofloof@lemmy.ca 20 hours ago

      And those CEOs will go off with their vast piles of money to make the same mistakes again.
    • dylanmorgan@slrpnk.net 18 hours ago

      The government may not be able to bail these companies out. The scale is even bigger than the housing crisis of 2008, and trust in the current administration is basically zero. I think the most we can hope for is that the LLM companies (think OpenAI and Anthropic), the companies whose services are effectively wrappers for LLMs, and probably Oracle (with its negative cash flow and astronomical debt) all go away.

      Amazon, Microsoft, and Google probably survive, with some high-profile bloodletting as senior executives are purged by their boards. Apple has been the least bullish on AI, so they’re probably more or less safe, and the biggest change will be new OS versions that don’t refer to Apple Intelligence. Facebook is structured in such a way that Zucc can’t be removed by the board, so who knows how that plays out.

      Palantir and their ilk will likely get whatever they need to survive unless the midterms bring in a shockingly progressive group that cares about people’s privacy and removes funding for mass surveillance.
  • jontree255@lemmy.world 14 hours ago

    YEAH NO FUCKING SHIT

    Just like blockchain

    Just like NFTs

    Silicon Valley ran out of actual things to sell around 10 years ago and has just been shoveling shit at us.
    • PalmTreeIsBestTree@lemmy.world 10 hours ago

      Jeffrey Epstein was involved in funding the research for this shit and helped popularize it in the media via his billionaire buddies.
  • Curious_Canid@piefed.ca 19 hours ago

    LLMs are not capable of creating anything, including code. They are enormous word-matching search engines that try to find and piece together the closest existing examples of what is being requested. If what you’re looking for is reasonably common, that may be useful. If what you’re looking for is obscure, you may get results that don’t apply, and the LLM cannot tell the difference. They can be useful, but unlike an LLM, you need to understand the context to use them safely.

    I think the most interesting thing about LLMs is actually what they tell us about the repetitive nature of most of what we do.
    • partial_accumen@lemmy.world 16 hours ago

      LLMs are not capable of creating anything, including code. They are enormous word-matching search engines that try to find and piece together the closest existing examples of what is being requested. If what you’re looking for is reasonably common, that may be useful.

      Just for common understanding, you’re making blanket statements about LLMs as though they apply to all LLMs. You’re not wrong if you’re speaking generally of the LLM models deployed for retail consumption, like ChatGPT. None of what I’m saying here is a defense of how these giant companies are using LLMs today. I’m just posting from a Data Science point of view on the technology itself.

      However, if you’re talking about the LLM technology itself, as in a Data Science view, your statements may not apply. The common hyperparameters for LLMs are set to choose the most likely matches for the next token (like the ChatGPT example), but nothing about the technology requires that. In fact, you can set a model to specifically exclude the top result, or even to choose the least likely result. What comes out when you set these hyperparameters is truly strange and looks like absolute garbage, but it is unique: the result is something that likely hasn’t existed before. I’m not saying this is a useful exercise; it’s the most extreme version, to illustrate the point. There’s also the “temperature” hyperparameter, which introduces straight-up randomness. If you crank it up, the model will start making selections with very wide weights, resulting in pretty wild (and potentially useless) results.

      What many Data Scientists trying to make LLMs generate something truly new and unique do is balance these settings so that useful new combinations come out without the output being absolute useless garbage.
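      The two knobs described above can be sketched in a few lines. This is a minimal illustration, not anything from the comment: `sample_next_token` and the toy logits are hypothetical, and a real decoder would work over a full vocabulary.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, exclude_top=False, rng=None):
    """Sample a token index from raw logits.

    temperature > 1 flattens the distribution (wilder picks);
    temperature < 1 sharpens it toward greedy decoding.
    exclude_top forces a token that greedy decoding would never choose.
    """
    rng = rng or random.Random()
    candidates = list(range(len(logits)))
    if exclude_top:
        top = max(candidates, key=lambda i: logits[i])
        candidates.remove(top)
    # Softmax with temperature over the remaining candidates
    # (subtracting the max for numerical stability).
    scaled = [logits[i] / temperature for i in candidates]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(candidates, weights=weights)[0]

logits = [2.0, 1.0, 0.1]  # toy next-token scores

# Very low temperature behaves almost greedily: index 0 dominates.
greedy = sample_next_token(logits, temperature=0.01, rng=random.Random(0))

# Excluding the top token guarantees index 0 is never chosen.
alt = sample_next_token(logits, temperature=1.0, exclude_top=True,
                        rng=random.Random(0))
```

      Cranking `temperature` up spreads the weights out, which is exactly the “wild and potentially useless” regime; combining moderate temperature with top-token exclusion is one way to force output off the most-trodden path.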

      • Curious_Canid@piefed.ca 14 hours ago

        I write software for a living and I have worked directly with LLM backend code. You aren’t wrong about the exceptions, but I think they actually reinforce my main point. If you play with the parameters you can make all kinds of things happen, but all of those things are still driven by the existing information the model already has or can find. It can mash things together in random new ways, but it will always work with components that already exist. There is no awareness of context or meaning that would allow it to make intelligent choices about what it mashes together. That will always be driven by the patterns it already knows, positively or negatively.

        It’s like doing chemistry by picking random bottles from the shelf and dumping them into a beaker to see what happens. You could make an amazing discovery that way, but the chances of it happening are very, very low. And even if it does happen, there’s an excellent chance you won’t recognize it.

        I’m in favor of using LLMs for tasks that involve large-scale data analysis. They can be quite helpful, as long as the user understands their limitations and performs due diligence to validate the results.

        Unfortunately, what we are mostly seeing are cases where LLMs generate boilerplate text or code assembled from a vast collection of material that someone who actually knew what they were doing had previously created. That kind of reuse is not inherently bad, but it should not be confused with what competent writers or coders do. And if LLMs really do take over a lot of routine daily tasks from people, the pool of approaches to those tasks will stagnate, and eventually degenerate, as LLMs become the primary sources of each others’ solutions.

        LLMs may very well change the world, but not in the ways most people expect. Companies that have invested heavily in them are pushing them as solutions to the wrong problems.
  • goodshowsir@lemmy.world 1 day ago

    Yes, Gen AI is definitely the overhyped, now-failed Kickstarter of the century.
  • YetAnotherNerd@sopuli.xyz 19 hours ago

    Okay, isn’t that actually wonderful news? If it’s doing nothing for the GDP, and the GDP went up, something else is still causing that, right? So does that mean the economy is in less of a bubble than we’d thought, and that we’re actually growing despite AI?
    • Kolanaki@pawb.social 19 hours ago

      GDP is also a scam.
  • Reygle@lemmy.world 19 hours ago

    The only “reasoning” I can see behind this is that the tech monopolies are hoping to be seen as “too big to fail”. We MUST NOT let that happen. They MUST be allowed to fail.