yeah, i really hate this. i have shares in multiple tech companies, like nvidia, intel, AMD, TSMC, etc. and because of the AI bubble idk how much they are really worth. the market is all warped, and one day a company is doing well, the next day it seems to be in peril. i would like to know how much they would be worth after the bubble bursts, but there is no way to know.
There is no market. This whole thing is a huge speculative bubble, a bit like crypto. The core idea of crypto makes some long-term sense, but the speculative value does not. The core idea of LLMs (we are nowhere near true AI) makes some sense, but it is half-baked technology. It hasn't even reached maturity and enshittification has already set in.
OpenAI doesn’t have a realistic business plan. It has a grifter who is riding a wave of nonsense in the tech markets.
lurch@sh.itjust.works 1 month ago
technocrit@lemmy.dbzer0.com 1 month ago
[deleted]
kameecoding@lemmy.world 1 month ago
Bro has just learned about inflation and thinks a deflationary currency is some magical fix, lmao.
makyo@lemmy.world 1 month ago
IMO it’s even worse than that, at least from what I gather from the AI/Singularity communities I follow. For them, AGI is the end goal - a creative, thinking AI capable of deduction far beyond humanity's. The company that owns that suddenly has the capability to solve all manner of problems that are slowing down technological advancement. Obviously owning that would be worth trillions.
However, it’s really hard to see through the smoke that the Altmans etc. are putting up - how much of it is genuine prediction and how much is fairy tales they’re telling to get more investment?
And I’d have a hard time believing it isn’t mostly the latter, because while LLMs have made some pretty impressive advancements, they still can’t have specialized discussions about pretty much anything without hallucinating answers. I have a test I use for each new generation of LLMs where I interview them about a book I’m relatively familiar with, and even with the newest ChatGPT model, it still makes up a ton of shit, often even contradicting its own answers in the same thread, all the while remaining absolutely confident that it’s familiar with the source material.
Honestly, I’ll believe they’re capable of advancing AI when we get an AI that can say ‘I actually am not sure about that, let me do a search…’ or something like that.
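To be concrete, the test is basically just this - a minimal sketch, not my actual script, assuming the official openai Python SDK and an API key in the environment; the book title and questions are placeholders:

```python
# Rough sketch of the "book interview" test: ask a model factual questions
# about a book you know well, keep the whole conversation, and check the
# answers yourself for invented details or self-contradiction.
# Assumes the official `openai` Python SDK and OPENAI_API_KEY set in the env.
from openai import OpenAI

client = OpenAI()

BOOK = "A book you know well"  # placeholder title
QUESTIONS = [
    f"Who are the main characters in '{BOOK}' and how are they related?",
    f"Summarize the final chapter of '{BOOK}'.",
    f"Does that summary match what you said earlier about how '{BOOK}' ends?",
]

# Keep a single running thread so later answers can contradict earlier ones.
messages = [{
    "role": "system",
    "content": "Answer only from what you actually know about the book. "
               "If you are not sure, say so instead of guessing.",
}]

for question in QUESTIONS:
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"Q: {question}\nA: {answer}\n")
```

There's nothing automated about the judging - you still have to know the book well enough to spot what's been invented.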
itslilith@lemmy.blahaj.zone 1 month ago
I follow a YouTube channel, AI Explained, that has some pretty grounded analysis of the latest models and capabilities. He compared LLMs to the creative writing center of the brain: they're really nice to interact with and output things that sound correct, but ultimately they're missing the reasoning and factuality needed for AGI.
DogWater@lemmy.world 1 month ago
He’s the best for unbiased info when new models and news drop. Love that channel.
itslilith@lemmy.blahaj.zone 1 month ago
He’s really good. I’m torn on the subject because the current AI hype is most certainly a bubble and a grift, but I find the technology fascinating. I do think there’s potential for great things there, but the technology is almost exclusively in the hands of 3 companies and will have a terrible impact on everyone else. I enjoy just focusing on the technical details every once in a while.
msage@programming.dev 1 month ago
Every LLM answer is a hallucination
Halcyon@discuss.tchncs.de 1 month ago
The hallucination is in the mind of the user – people fall for the illusion of talking to a creature.