Baidu CEO warns AI is just an inevitable bubble — 99% of AI companies are at risk of failing when the bubble bursts

0 likes

Submitted 1 year ago by misk@sopuli.xyz to technology@lemmy.world

https://www.tomshardware.com/tech-industry/artificial-intelligence/baidu-ceo-warns-ai-is-just-an-inevitable-bubble-99-percent-of-ai-companies-are-at-risk-of-failing-when-the-bubble-bursts


Comments

  • RangerJosie@lemmy.world ⁨1⁩ ⁨year⁩ ago

    Good.

  • utopiah@lemmy.world ⁨1⁩ ⁨year⁩ ago

    Eh… “Robin Li says increased accuracy is one of the largest improvements we’ve seen in Artificial Intelligence. “I think over the past 18 months, that problem has pretty much been solved—meaning when you talk to a chatbot, a frontier model-based chatbot, you can basically trust the answer,” the CEO added.”

    That’s plain wrong. Even SOTA black-box chatbots sometimes give wrong answers to the simplest of questions. That’s precisely what NOT being able to trust them means.

    How can one believe anything this person is saying?

    • Benaaasaaas@lemmy.world ⁨1⁩ ⁨year⁩ ago

      To trust a computer it has to be correct 100% of the time, because it can’t say “I don’t know”.

  • menemen@lemmy.world ⁨1⁩ ⁨year⁩ ago

    It will probably burst, but that does not mean that AI will go away completely.

    • Liz@midwest.social ⁨1⁩ ⁨year⁩ ago

      Same thing happened to the Dot Com bubble. The fundamental technology has valid uses, but we’re in the stage where some people are convinced it can be used for literally anything.

  • Rhoeri@lemmy.world ⁨1⁩ ⁨year⁩ ago

    Good.

    • Gradually_Adjusting@lemmy.world ⁨1⁩ ⁨year⁩ ago

      Image

  • LovableSidekick@lemmy.world ⁨1⁩ ⁨year⁩ ago

    The AI bubble might be the 2020s’ dotcom bubble.

    • D4MR0D@lemmy.world ⁨1⁩ ⁨year⁩ ago

      Crossing fingers it bursts soon
      • LovableSidekick@lemmy.world ⁨1⁩ ⁨year⁩ ago

        Same here. And speaking of bubbles I haven’t seen anything about NFTs in quite a while. I don’t think that bubble burst tho, it just sort of shriveled up and went away.

  • HawlSera@lemm.ee ⁨1⁩ ⁨year⁩ ago

    It’s a lead bubble

  • FlashMobOfOne@lemmy.world ⁨1⁩ ⁨year⁩ ago

    If you’re invested in these stocks, make sure you have your stop loss orders in place, 100%.

    I imagine the bubble bursting will be quick and deadly.

    • McDropout@lemmy.world ⁨1⁩ ⁨year⁩ ago

      What are the AI rising stocks?

    • turddle@lemmy.world ⁨1⁩ ⁨year⁩ ago

      Set stop loss at 100%, got it 👍

      • Evil_incarnate@lemm.ee ⁨1⁩ ⁨year⁩ ago

        Just to be sure, I’m going to set mine at 200%, to be double sure.
  • EmperorHenry@discuss.tchncs.de ⁨1⁩ ⁨year⁩ ago

    Yeah, AI is really more of a surveillance tool than anything else.

    When AI “creates” something, it’s just pulling up things related to words you typed in and making an amalgamation of what you typed in out of what it has.

    The real purpose is for corporations and governments to look through people’s devices and online storage at super speed.

    this is why you all need to be using end-to-end encrypted storage for everything and VPNs with perfect forward secrecy

    do your own research into the history of each provider of those things before you buy it

    • ClamDrinker@lemmy.world ⁨1⁩ ⁨year⁩ ago

      There is so much wrong with this…

      AI is a range of technologies. So yes, you can make surveillance with it, just like you can with a computer program like a virus. But obviously not all computer programs are viruses nor exist for surveillance. What a weird generalization. AI is used extensively in medical research, so your life might literally be saved by it one day.

      You’re most likely talking about “Chat Control”, a controversial EU proposal to scan for dangerous and illegal content like CSAM, either on people’s devices or on providers’ ends. This is obviously a dystopian way to achieve that, as it sacrifices literally everyone’s privacy, and there is plenty to be said about it without randomly dragging AI into the picture. You can do this scanning without AI as well, and that doesn’t change anything about how dystopian it would be.

      You should be using end-to-end encryption regardless, and a VPN is a good investment for making your traffic harder to discern, but if Chat Control is passed to operate on the device level you are kind of boned without circumventing the software, which could be outlawed or made very difficult. It’s clear on its own that Chat Control is a bad thing; you don’t need some kind of conspiracy theory about ‘the true purpose of AI’ to see that.

  • don@lemm.ee ⁨1⁩ ⁨year⁩ ago

    They couldn’t keep their heads on fucking straight during the .com bubble, and here they are doing it all over again.

  • Andromxda@lemmy.dbzer0.com ⁨1⁩ ⁨year⁩ ago

    No shit

  • Hackworth@lemmy.world ⁨1⁩ ⁨year⁩ ago

    To be clear, it’ll be 10-30 years before AI displaces all human jobs.

    • zbyte64@awful.systems ⁨1⁩ ⁨year⁩ ago

      Probably because we’ll all be dead, which also happens to be a solution to climate change.

  • Veneroso@lemmy.world ⁨1⁩ ⁨year⁩ ago

    Please please please please please please please please

  • MooseTheDog@lemmy.world ⁨1⁩ ⁨year⁩ ago

    People look at the advertising for this shit (and future tech-bro shit) and wonder, “who is this for”? Remember E.L.O.N. Exaggerated Lies Overlooked Narratives

    Think of every manager and boss you’ve ever had. They don’t think, they just do. Salesmen convince them using issues that don’t exist, to sell solutions that don’t really work, to people that don’t understand how to use them. Repeat over 70 years and you have the modern American education system.

    Now things are different. Money is scarce, things are getting tight. Tech-Bros have changed from a mildly infuriating strategy, to a downright abusive one. These simple minded managers think everything is under attack, and the only solution is what they already have, but heavily monetized and completely unusable.

  • Mubelotix@jlai.lu ⁨1⁩ ⁨year⁩ ago

    Chinese tech leader wants west to slow down their progress on AI

    • agelord@lemmy.world ⁨1⁩ ⁨year⁩ ago

      Copium.

  • weew@lemmy.ca ⁨1⁩ ⁨year⁩ ago

    Yeah, but the 1% remaining will take over the world.

    Does anyone remember the era when there were a million search engines? Google didn’t spawn alone.

    Same with Amazon. You think nobody else tried to make an online store in the 90s? Lol.

    • Furbag@lemmy.world ⁨1⁩ ⁨year⁩ ago

      .com websites didn’t disappear after the dotcom bubble burst either. AI is definitely in a massive bubble right now, but something being in a bubble doesn’t mean it’s going to vanish completely. The AI companies with some substance backing them will weather the upcoming storm.

      Full disclosure: I don’t hate AI, but I hate that management types are fellating themselves over the idea of it, or the things that it can potentially do, rather than something that provides them some kind of concrete benefit right now. I’m also mad at consumers for being stupid little sheep and paying a premium for anything that companies happen to slap an “AI-powered” sticker on. It’s like organic produce 2.0: you have to have it, but we can’t explain why, nor can we elaborate on what it does better than its contemporaries.

    • pup_atlas@pawb.social ⁨1⁩ ⁨year⁩ ago

      Sure, but the difference here was that all those companies were offering something different. Some had better results than others, a better UI, more accuracy in certain niches, etc. But 99% of AI companies now are effectively reselling the OpenAI API. They aren’t making an effort to differentiate themselves at all. It’s as if Google were the only shop in town, and everyone bought all their search data and algorithms to slap their logo on. That’s simply not sustainable at anywhere near the current scale. This won’t be a 3-5 year decline, it’ll be a 2 month crash.

    • figjam@midwest.social ⁨1⁩ ⁨year⁩ ago

      'Member nfts?

      • echodot@feddit.uk ⁨1⁩ ⁨year⁩ ago

        No one actually thought they were a good idea; it was just a bunch of con artists. It was a bubble for sure, but an entirely artificially created one. There was no real business behind any of it.

      • capital@lemmy.world ⁨1⁩ ⁨year⁩ ago

        I member

    • dan@upvote.au ⁨1⁩ ⁨year⁩ ago

      Same with Amazon. You think nobody else tried to make an online store in the 90s? Lol.

      Fun fact: the first online store still exists. It’s Pizza Hut. They launched online ordering in 1994.

      • wavebeam@lemmy.world ⁨1⁩ ⁨year⁩ ago

        Yum brands has always been at the forefront of using tech to sell fast food. This was true then and is true now. Taco Bell has pioneered kiosks and in-app ordering as well as KDS in QSR environments.

      • Cryophilia@lemmy.world ⁨1⁩ ⁨year⁩ ago

        That is a fun fact!

    • Eyck_of_denesle@lemmy.zip ⁨1⁩ ⁨year⁩ ago

      I doubt anyone is downplaying that. People are just discussing how all companies are pushing AI into products that don’t need it. Idk about you, but I’m tired of seeing AI advertised as a feature on every app/site when it’s just a GPT wrapper.

      • LavenderDay3544@lemmy.world ⁨1⁩ ⁨year⁩ ago

        The rot has even spread into hardware. No one wants die space wasted on a stupid NPU with less than 1/1000th the computing power of their GPU, which can’t be used for anything other than local LLMs, which frankly very few people use; those that do tend to have powerful Nvidia GPUs.

    • Tja@programming.dev ⁨1⁩ ⁨year⁩ ago

      Wrong audience for this message. Most on lemmy are still running with their fingers in their ears yelling la-la-la really loud.

  • sirico@feddit.uk ⁨1⁩ ⁨year⁩ ago

    Always invest in the spades never the gold mine

    • ByteOnBikes@slrpnk.net ⁨1⁩ ⁨year⁩ ago

      I went to an AI conference and you could just feel how bogus it all is. Like “Our patent-pending AI system references billions of crowd-sourced data points to identify what you are craving for breakfast! Never think about breakfast again!”

      And as an engineer speaking with other engineers, we all collectively shrug and just keep taking the money. I’ll AI your toaster for enough money, I don’t GAF.
    • weew@lemmy.ca ⁨1⁩ ⁨year⁩ ago

      That’s why Nvidia is making bank right now

      • whoisearth@lemmy.ca ⁨1⁩ ⁨year⁩ ago

        And AMD won the console wars

  • fluxion@lemmy.world ⁨1⁩ ⁨year⁩ ago

    AI companies specializing in spreading bullshit all across the internet have a bright future, however
  • TehWorld@lemmy.world ⁨1⁩ ⁨year⁩ ago

    So, I have clients that are actively using AI on a daily basis and LOVE it. It is however a very narrow subset. Also, I’m pretty sure that a LARGE amount of Dollars are currently being spent on AI generated political articles.

    • NikkiDimes@lemmy.world ⁨1⁩ ⁨year⁩ ago

      The web didn’t die after the dot com bubble burst. The AI bubble will burst, but a smaller niche of companies will continue to exist.

      • dan@upvote.au ⁨1⁩ ⁨year⁩ ago

        The dot com crash was because tech companies were massively overvalued and didn’t have a proper business plan. I definitely see some similarities with the AI bubble, especially with the large unprofitable companies like OpenAI. OpenAI isn’t estimated to become profitable until 2029, and there’s a lot of unknowns between now and then (e.g. maybe they’ll be forced to license content they use for training).

  • bluewing@lemm.ee ⁨1⁩ ⁨year⁩ ago

    No shit.

    Like all new technologies, there is a time when bunches of companies jump on the bandwagon to get in on the action. You can see it all throughout the history of the industrial revolution.

    They mostly know that there will come a great weeding out of those that can’t handle the technology or just fail from poor management. But they are betting they will be among the 1% that wins the race and remain to dominate the market.

    The rest will just bide their time until the next Big Thing comes along. And the process starts over again.

  • Blackmist@feddit.uk ⁨1⁩ ⁨year⁩ ago

    Yeah, but thanks to the glory of corporateworld, all the people involved in making these decisions will be in a higher position at a different company by the time the consequences come knocking.

    You definitely will not regret spending billions of dollars on GPUs and electricity bills.

    • xenoclast@lemmy.world ⁨1⁩ ⁨year⁩ ago

      I’ll be gone, you’ll be gone.

      www.urbandictionary.com/define.php?term=IBGYBG

      • ByteOnBikes@slrpnk.net ⁨1⁩ ⁨year⁩ ago

        This hurts so much. Being in the tech industry, I see it everywhere.

  • GeneralInterest@lemmy.world ⁨1⁩ ⁨year⁩ ago

    Maybe it’s like the dotcom bubble: there is genuinely useful tech that has recently emerged, but too many companies are trying to jump on the bandwagon.

    LLMs do seem genuinely useful to me, but of course they have limitations.

    • linearchaos@lemmy.world ⁨1⁩ ⁨year⁩ ago

      We need to stop viewing it as artificial intelligence. The parts that are worth money are just more advanced versions of machine learning.

      Being able to assimilate a few dozen textbooks and pass a bar exam is a neat parlor trick, but it is still just a parlor trick.

      Unfortunately probably the biggest thing to come out of it will be the marketing aspect. If they spend enough money to train small models on our wants and likes it will give them tremendous amounts of return.

      The key to using it in a financially successful manner is finding problems that fit the bill. Training costs are fairly high, and quality content generation is also rather expensive. There are sticky problems around training it on non-free data. Whatever you’re going to use it for needs to have a significant enough advantage to make the cost of training and data worth it.

      I still think we’re eventually going to see its use in education rise. The existing tools for small content generation, like Adobe’s use of it to fill in small areas, are leaps and bounds better than the old content-aware patches. We’ve been using it for ages for speech recognition and speech generation. From there it’s relatively good at helper roles: minor application development, copy editing, maybe some VFX generation eventually. Things where you still need a talented individual to oversee it, but it can help lessen the workload.

      There are lots of places where it’s being used that I think are a particularly poor fit: AI help desk chatbots, IVR scenarios. It’s as brain dead as the original phone trees and flow charts we’ve been following for decades.

      • SparrowRanjitScaur@lemmy.world ⁨1⁩ ⁨year⁩ ago

        Machine learning is AI. I think the term you’re looking for is artificial general intelligence, and no one is claiming LLMs fall under that label.

      • Eheran@lemmy.world ⁨1⁩ ⁨year⁩ ago

        If GPT-4o is still not what you would call AI, then what is? You can have conversations with it; the Turing test is completely irrelevant all of a sudden.

    • datelmd5sum@lemmy.world ⁨1⁩ ⁨year⁩ ago

      We’re hitting logarithmic scaling with model training. GPT-5 is going to cost 10x more than GPT-4 to train, but are people going to pay $200/month for a GPT-5 subscription?
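The back-of-envelope math behind claims like this: training compute for a dense transformer is commonly approximated as 6 × parameters × tokens, so scaling both by roughly 3x multiplies the training bill by about 10x. A minimal sketch; the model sizes, token counts, and the $100 per petaFLOP/s-day price below are illustrative assumptions, not published figures:

```python
# Rough training-cost scaling using the common ~6*N*D FLOPs estimate
# (N = parameter count, D = training tokens). All concrete numbers
# below are hypothetical, chosen only to illustrate the scaling.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

def training_cost_usd(flops: float, usd_per_pfs_day: float = 100.0) -> float:
    """Convert FLOPs to dollars at an assumed price per petaFLOP/s-day."""
    pfs_day = 1e15 * 86400  # FLOPs delivered by one petaFLOP/s over a day
    return flops / pfs_day * usd_per_pfs_day

# Hypothetical "current-generation" run: 1e12 params on 1e13 tokens.
base = training_cost_usd(training_flops(1e12, 1e13))
# Scale parameters 3x and tokens 3.3x: compute (and cost) grows ~10x.
bigger = training_cost_usd(training_flops(3e12, 3.3e13))
print(f"base ~ ${base:,.0f}, scaled ~ ${bigger:,.0f} ({bigger / base:.1f}x)")
```

At these assumed numbers the ratio comes out to about 9.9x, which is the shape of the argument above: each generation’s training cost grows multiplicatively, while subscription prices can only stretch so far.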

      • Skates@feddit.nl ⁨1⁩ ⁨year⁩ ago

        Is it necessary to pay more, or is it enough to just pay for more time? If the product is good, it will be used.

      • madis@lemm.ee ⁨1⁩ ⁨year⁩ ago

        But it would use less energy afterwards? At least that was claimed with the 4o model, for example.
      • GeneralInterest@lemmy.world ⁨1⁩ ⁨year⁩ ago

        Businesses might pay big money for LLMs to do specific tasks. And if chip makers invest more in NPUs then maybe LLMs will become cheaper to train. But I am just speculating because I don’t have any special knowledge of this area whatsoever.

  • Mwa@lemm.ee ⁨1⁩ ⁨year⁩ ago

    idk why baidu requires an account to download from it.

  • ulkesh@lemmy.world ⁨1⁩ ⁨year⁩ ago

    Wow, a CEO who doesn’t buy into the hype? That’s astonishing.

    I, for one, cannot wait for the bubble to burst so we can get back to some sense of sanity.

    • nyan@lemmy.cafe ⁨1⁩ ⁨year⁩ ago

      They may just have kept their AI investments responsible, that is, not put more money into it than they can afford to lose. Keep in mind, Baidu is the Chinese equivalent of Google. They have a large, diversified business with many income streams. I expect they’ll still be around after the bubble bursts.

      Edit: Though if Baidu is investing in AI like all the rest, then maybe they just think they’ll be immune, in which case I’m sad again that I haven’t yet come across a CEO who calls bullshit on this nonsense.

  • bamfic@lemmy.world ⁨1⁩ ⁨year⁩ ago

    I am old enough to remember when the CEO of Nortel Networks got crucified by Wall Street for saying in a press conference that the telecom/internet/carrier boom was a bubble, and the fundamentals weren’t there (who is going to pay for long distance anymore when calls are free over the internet? where are the carriers-- Nortel’s customers-- going to get their income from?). And 4 years later Nortel ceased to exist. Cisco crashed too, though had enough TCP/IP router biz and enterprise sales to keep them alive even until today.

    This all reminds me of the late 1990s internet bubble rather than the more recent crypto bubble. We’ll all still be using ML models for all kinds of things more or less forever from now on, but it won’t be this idiotic hype cycle and overvaluation anymore after the crash.

    • fine_sandy_bottom@lemmy.federate.cc ⁨1⁩ ⁨year⁩ ago

      Crypto is still just as awful as it ever was, IMO. Still plenty of assholes gambling (“investing”) in crypto.

      • ByteOnBikes@slrpnk.net ⁨1⁩ ⁨year⁩ ago

        This message has existed for 10 hours and a cryptobro hasn’t commented yet?

    • wrekone@lemmyf.uk ⁨1⁩ ⁨year⁩ ago

      Well put.

      Soon, it won’t be this idiotic hype cycle, but it’ll be some other idiotic hype cycle. Short term investors love hype cycles.

    • kameecoding@lemmy.world ⁨1⁩ ⁨year⁩ ago

      Crypto has been turned into gold by Wall Street; they bought up enough of it to not be completely exposed. Its supply is extremely limited and will run out. Putting your money into it is no different from putting it into gold: you might catch a good moment, buy in low, and get some return, but most won’t.

      • fine_sandy_bottom@lemmy.federate.cc ⁨1⁩ ⁨year⁩ ago

        Putting your money into it is no different than putting it into gold

        Sorry kiddo, putting your money into crypto is very, very different to putting it into gold.

      • Valmond@lemmy.world ⁨1⁩ ⁨year⁩ ago

        The supply is absolutely more like unlimited lol.

        Not enough btc? Make lite coin! Etc etc etc

    • kautau@lemmy.world ⁨1⁩ ⁨year⁩ ago

      We just don’t have to listen to the hype about it anymore.

      True, it’s now in most circles just been mixed in as a commodity to trade on. Though I wish everyone would get that. There’s still plenty of idiots with .eth usernames who think there’s some new boon to be made. The only “apps” built on crypto networks were and are purely for trading crypto, I’ve never seen any real tangible benefit to society come out of it. It’s still used plenty for money laundering, but regulators are (slowly) catching up. And it’s still by far the easiest way to demonstrate what happens to unregulated markets.

      www.web3isgoinggreat.com

  • peopleproblems@lemmy.world ⁨1⁩ ⁨year⁩ ago

    10 to 30? Yeah I think it might be a lot longer than that.

    Somehow everyone keeps glossing over the fact that you have to have enormous amounts of highly curated data to feed the trainer in order to develop a model.

    Curating data for general purposes is incredibly difficult. The big medical research universities have been working on it for at least a decade, and the tools they have developed, while cool, are only useful as tools to a doctor who has learned how to use them. They can speed diagnostics up and improve patient outcomes. But they cannot replace anything in the medical setting.

    The AI we have is like fancy signal processing at best

    • FatCrab@lemmy.one ⁨1⁩ ⁨year⁩ ago

      AI in health and medtech has been around and in the field for ages. However, two persistent challenges make rollout slow, and they’re not going anywhere because of the stakes at hand.

      The first is just straight regulatory. Regulators don’t have a very good or very consistent working framework to apply to these technologies, but that’s in part due to how vast the field is in terms of application. The second is somewhat related to the first but is also very market driven: the issue of explainability of outputs. Regulators generally want it, of course, but customers (i.e., doctors) also don’t just want predictions/detections; they want and need to understand why a model “thinks” what it does. Doing that in a way that does not itself require significant training in the data and computer science underlying the particular model and architecture is often pretty damned hard.

      I think it’s an enormous oversimplification to say modern AI is just “fancy signal processing” unless all inference, including that done by humans, is also just signal processing. Modern AI applies rules it is given, explicitly or by virtue of complex pattern identification, to inputs to produce outputs according to those “given” rules. Now, what no current AI can really do is synthesize new rules uncoupled from the act of pattern matching. Effectively, a priori reasoning is still out of scope for the most part, but the reality is that that simply is not necessary for an enormous portion of the value proposition of “AI” to be realized.

      • peopleproblems@lemmy.world ⁨1⁩ ⁨year⁩ ago

        The oversimplification was intended - you also caught my meaning of it being able to synthesize new rules.

    • RootBeerGuy@discuss.tchncs.de ⁨1⁩ ⁨year⁩ ago

      Not an expert, so I might be wrong, but as far as I understand it, those specialised tools you describe are not even AI; it’s all machine learning. Maybe to the end user it doesn’t matter, but people have this idea of an intelligent machine when it’s more like brute-force information feeding into a model system.

      • RecluseRamble@lemmy.dbzer0.com ⁨1⁩ ⁨year⁩ ago

        Don’t say AI when you mean AGI.

        By definition AI (artificial intelligence) is any algorithm by which a computer system automatically adapts to and learns from its input. That definition also covers conventional algorithms that aren’t even based on neural nets. Machine learning is a subset of that.

        AGI (artificial general intelligence) is the thing you see in movies, the thing people project onto their LLM responses, and what’s driving this bubble. It is the final goal, and means a system able to perform everything a human can, at at least human level. Pretty much all the actual experts agree we’re a far shot from such a system.

    • ContrarianTrail@lemm.ee ⁨1⁩ ⁨year⁩ ago

      LLMs are not the only type of AI out there. ChatGPT appeared seemingly out of nowhere. Who’s to say the next AI system won’t do that as well?

      • peopleproblems@lemmy.world ⁨1⁩ ⁨year⁩ ago

        ChatGPT did not appear out of nowhere.

        ChatGPT is an LLM: a generative pre-trained model built on a neural network.

        Aka: it’s a chatbot that creates its responses from an insane amount of text data. LLMs trace back to the 90s, and I learned about them in college in the late 2000s-2010s. Natural Language Processing was a big contributor, and Google introduced some powerful neural network tech in 2014-2017.

        The reason they “appeared out of nowhere” to the common man is merely marketing.

      • vritrahan@lemmy.zip ⁨1⁩ ⁨year⁩ ago

        Anything can happen. We can discover time travel tomorrow. The economy cannot run on wishful thinking.

  • Snapz@lemmy.world ⁨1⁩ ⁨year⁩ ago

    And they will ALL deserve it.

  • Pulptastic@midwest.social ⁨1⁩ ⁨year⁩ ago

    Aw, only 99%?

  • LemmyBe@lemmy.world ⁨1⁩ ⁨year⁩ ago

    Checks to see if Baidu is doing AI…yep.

    • tal@lemmy.today ⁨1⁩ ⁨year⁩ ago

      “probably 1% of the companies will stand out and become huge and will create a lot of value, or will create tremendous value for the people, for society. And I think we are just going through this kind of process.”

      Baidu is huge. Sounds like good news for Baidu!

  • DarkCloud@lemmy.world ⁨1⁩ ⁨year⁩ ago

    I think less restrictive AIs that are free, like Venice AI, will be around longer than the ones that went with restrictive subscription models, and that those others will eventually become niche.

    New technology always propagates further the freer it is to use and experiment with, and ChatGPT and OpenAI are quite restrictive and money-hungry.
