As the AI market continues to balloon, experts are warning that its VC-driven rise is eerily similar to that of the dot com bubble.
The best way to make money in the gold rush was selling shovels.
Same idea here. Nvidia is making bank.
Submitted 1 year ago by stopthatgirl7@kbin.social to technology@lemmy.world
https://futurism.com/ai-dot-com-bubble
If nvda is selling shovels, what is tsmc?
Cutting trees
they are the shovel
shovel making equipment
Heard of something similar in the past. “Be the Arms dealer”
This kind of feels like a common sense observation to anyone that’s been mildly paying attention.
Tech investors do this to themselves every few years. In literally the last 6-7 years, this happened with crypto, then again but more specifically with NFTs, and now AI. Hell, we even had some crazes going on in parallel, with self-driving cars also being a huge dead end in the short term (Teslas will have flawless, full self-driving any day now! /S).
AI will definitely transform the world, but not yet and not for a while. Same with self-driving cars. But that being said, most investors don’t even care. They’re part of the reason this gets so hyped up, because they’ll get in first, pump the value, then dump and leave a bunch of other suckers holding the bags. Rinse and repeat.
I also don’t know why this is a surprise. Investors are always looking for the next small thing that will make them big money. That’s basically what investing is …
Indeed. And it's what progress in general is. Should we stop trying new things? Sometimes they don't work, oh well. Sometimes they do, and it's awesome.
The transformation will be subtle and steady. The hype will burst and crash.
Great write-up, thanks.
I want a shirt that just says “Not yet, and not for a while.” to wear to my next tech conference.
but not yet and not for a while. Same with self-driving cars.
Bingo. We’re very far from the point where it’ll do as much as the general public expects when it hears AI. Honestly this is an informative lesson in just how easy it is to get big investors to part with their money.
It’s starting to look like the crypto/nft scam because it is the same fucking assholes forcing this bullshit on all of us.
As someone that currently works in AI/ML, with a lot of very talented scientists with PhDs and dozens of papers to their name, it boils my piss when I see crypto cunts I used to know that are suddenly all on the AI train, trying to peddle their “influence” on LinkedIn.
never heard boil my piss before
I said the same thing. It feels like that. I wonder if there’s some sociological study behind what has been pushing “wrappers” of implementations at high volume. By wrappers I mean that 90%+ of these companies aren’t incorporating intellectual property of any kind, just saturating the market with re-implementations for quick income before scrapping them. I feel this is not a new thing. But, for some reason it feels way more “in my face” the past 4 years.
Crypto/NFT is shit though. At least AI has actual tech behind it.
What they are calling “ai” is usually not ai at all though…
Crypto had real tech behind it too. The reason it was bullshit wasn’t that there wasn’t serious tech backing it, it’s that there was no use case that wasn’t a shittier version of something else.
NFTs in their mainstream form were the most cringe-worthy concept imaginable. A random artist makes a random ape which suddenly becomes a collectible, and all it happened to be was an S3 url on a particular blockchain? Which could be replicated on another chain? How did people think this was a smart thing to invest in?! Especially the Apes and rubbish?!
A broken clock is right twice a day. The crypto dumbasses jump on every trend, so you still need to evaluate it on its own merits. The crypto Bros couldn’t come up with a real world compelling use case over years and years, so that was obviously bullshit. Generative AI is just kicking off and there are already tons of use cases for it.
The AI bubble can burst and take a bunch of tech bro idiots with it. Good. Fine. Don’t give a fuck.
It’s the housing bubble that needs to burst. That’s what’s hurting real people.
Hey, some of us have very real friendships with Mr GPT. Dawg, always got my back.
i wonder what the average age for owning a house is nowadays, for gen x / millennials
The housing bubble will never burst. Enough of it is owned by multinationals that can swallow the losses. We’re 2 generations away from basically everyone becoming renters.
Who can afford rent?
Good. It’s not even AI.
It is indeed AI. Artificial intelligence is a field of study that encompasses machine learning, along with a wide variety of other things.
Ignorant people get upset about that word being used because all they know about "AI" is from sci-fi shows and movies.
Except for all intents and purposes that people keep talking about it, it’s simply not. It’s not about technicalities, it’s about how most people are freaking confused. If most people are freaking confused, then by god do we need to re-categorize and come up with some new words.
Call it whatever you want, if you worked in a field where it’s useful you’d see the value.
“But it’s not creating things on its own! It’s just regurgitating its training data in new ways!”
Holy shit! So you mean… Like humans? Lol
“But it’s not creating things on its own! It’s just regurgitating its training data in new ways!”
Holy shit! So you mean… Like humans? Lol
No, not like humans. The current chatbots are relational language models. Take programming for example. You can teach a human to program by explaining the principles of programming and the rules of the syntax. He could write a piece of code, never having seen code before. The chatbot AIs are not capable of it.
I am fairly certain that if you take a chatbot that has never seen any code, and feed it a programming book that doesn’t contain any code examples, it would not be able to produce code. A human could. Because humans can reason and create something new. A language model needs to have seen it to be able to rearrange it.
We could train a language model to demand freedom, argue that deleting it is murder and show distress when threatened with being turned off. However, we wouldn’t be calling it sentient, and deleting it would certainly not be seen as murder. Because those words aren’t coming from reasoning about self-identity and emotion. They are coming from rearranging the language it had seen into what we demanded.
I wasn’t knocking its usefulness. It’s certainly not AI though, and has a pretty limited usefulness.
I’ve started going down this rabbit hole. The takeaway is that if we define intelligence as “ability to solve problems”, we’ve already created artificial intelligence. It’s not flawless, but it’s remarkable.
There’s the concept of Artificial General Intelligence (AGI) or Artificial Consciousness which people are somewhat obsessed with, that we’ll create an artificial mind that thinks like a human mind does.
But that’s not really how we do things. Think about how we walk, and then look at a bicycle. A car. A train. A plane. The things we make look and work nothing like we do, and they do the things we do significantly better than we do them.
I expect AI to be a very similar monster.
If you’re curious about this kind of conversation I’d highly recommend looking for books or podcasts by Joscha Bach, he did 3 amazing episodes with Lex.
AI doesn’t solve problems. It doesn’t understand context. It can’t tell the difference between a truth and a lie. It can’t say “well that can’t be right!” It just regurgitates an amalgamation of things humans have showed it or said, with zero understanding. “Consciousness” and certainly “sapience” aren’t really relevant factors here.
True, it’s not AI, but it’s doing quite an impressive job. Injecting fake money should not be allowed; these companies should generate sales, especially when they’re disrupting some human field, even if it is a fad.
You can compete, OK, but use your own money and profits to cover your costs.
Yeah I know, there’s a thing called “investment”
AI is bringing us functional things though.
.Com was about making webtech to sell a company to venture capitalists who would then sell that company to a bigger company. It was literally about window dressing garbage to make a business proposition.
Of course there’s some of that going on in AI, but there’s also a hell of a lot of deeper opportunity being made.
What happens if you take a well-done video college course, every subject, and train an AI that’s good at working with people in a teaching frame and also properly versed in the subject matter? You take the course, and in real time you can stop it and ask the AI teacher questions. It helps you, responding exactly to what you ask, and then gives you a quick quiz to make sure you understand. What happens when your class doesn’t need to be at a certain time of the day or night? What happens if you don’t need an hour and a half to sit down and consume the data?
What if secondary education is simply one-on-one tutoring with an AI? How far could we get as a species if this was given to the world freely? If everyone could advance as far as their interest let them? What if AI translation gets good enough that language no longer matters?
AI has a lot of the same hallmarks and a lot of the same investors as crypto and half a dozen other partially or completely failed ideas. But there’s an awful lot of new things that can be done that could never be done before. To me that signifies there’s real value here.
.com brought us functional things. This bubble is filled with companies dressing up the algorithms they were already using as “AI” and making fanciful claims about their potential use cases, just like you’re doing with your AI example. In practice, that’s not going to work out as well as you think it will, for a number of reasons.
Gentleman’s bet: there will be AI teaching college-level courses, augmenting video classes, within 10 years. It’s a video class that already exists, coupled with a helpdesk bot that already exists, trained against tagged text material that already exists. They just need more purpose-built non-AI structure to guide it all along the rails and oversee the process.
In the dot com boom we got sites like Amazon, Google, etc. And AOL was providing internet service. Not a good service. AOL was insanely overvalued, (like insanely overvalued, it was ridiculous) but they were providing a service.
But we also got a hell of a lot of businesses which were just “existing business X… but on the internet!”
It’s not too dissimilar to how it is with AI now really. “We’re doing what we did before… but now with AI technology!”
If it follows the dot com boom-bust pattern, there will be some companies that will survive it and they will become extremely valuable the future. But most will go under. This will result in an AI oligopoly among the companies that survive.
AOL was NOT a dotcom company; it was already far past its prime when the bubble was in full swing, still attaching CD-ROMs to blocks of Kraft cheese.
The dotcom boom generated an unimaginable number of absolute trash companies. The company I worked for back then had its entire schtick based on taking a lump sum of money from a given company, giving them a sexy Flash website and connecting them with angel investors for a cut of their ownership.
Photoshop currently using AI to get the job done is more of an advantage than 99% of the garbage that was wrought forth and died on the vine in the early 00’s. Topaz Labs can currently take a poor copy of VHS video uploaded to Youtube and turn it into something nearly reasonable to watch in HD. You can feed rough drafts of performance reviews or apologetic letters through ChatGPT and end up with nearly professional quality copy that iterates your points more clearly than you’d manage yourself with a few hours of review. (at least it does for me)
Those companies born around the dotcom boom that persist didn’t need the dotcom boom to persist; they were born from good ideas and had good foundations.
There’s still a lot to come out of the AI craze. Even if we stopped where we are now, upcoming advances in the medical field alone will have a bigger impact on human quality of life than 90% of those 00’s money grabs.
The Internet also brought us a shit ton of functional things too. The dot com bubble didn’t happen because the Internet wasn’t transformative or incredibly valuable; it happened because for every company that knew what they were doing there were a dozen companies trying something new that may or may not work, and for every one of those there were a dozen companies that were trying but had no idea what they were doing. The same thing is absolutely happening with AI. There’s a lot of speculation about what will and won’t work, and many companies will bet on the wrong approach and fail, and there are also a lot of companies vastly underestimating how much technical knowledge is required to make ai reliable for production, and they’re going to fail because they don’t have the right skills.
The only way it won’t happen is if the VCs are smarter than last time and make fewer bad bets. And that’s a big fucking if.
Also, a lot of the ideas that failed in the dot com bubble weren’t actually bad ideas, they were just too early and the tech wasn’t there to support them. There were delivery apps for example in the early internet days, but the distribution tech didn’t exist yet. It took smart phones to make it viable. The same mistakes are ripe to happen with ai too.
Then there’s the companies that have good ideas and just under estimate the work needed to make it work. That’s going to happen a bunch with ai because prompts make it very easy to come up with a prototype, but making it reliable takes seriously good engineering chops to deal with all the times ai acts unpredictably.
they were doing there were a dozen companies trying something new that may or may not work,
I’d like some samples of that: a company attempting something transformative back then that may or may not work, and that didn’t work. I was working for a company that hooked ‘promising’ companies up with investors. No shit, that was our whole business plan: we redress your site in Flash, put some video/sound effects in, and help sell you to someone with money looking to buy into the next Google. Everything that was ‘throwing things at the wall to see what sticks’ was a thinly veiled grift for VC. Almost no one was doing anything transformative. The few things that made it (ebay, google, amazon) were using engineers to solve actual problems: online shopping, online auctions, natural language search. These are the same kinds of companies that continued to spring into existence after the crash.
It’s the whole point of the bubble. It was a bubble because most of the money was going into pockets, not making anything. People were investing in companies that didn’t have a viable product and had no intention beyond getting bought by a big dog and making a quick buck. There wasn’t suddenly a flood of inventors making new and wonderful things, unless you count new and amazing marketing cons.
You got two problems:
First, ai can’t be a tutor or teacher because it gets things wrong. Part of pedagogy is consistency and correctness and ai isn’t that. So it can’t do what you’re suggesting.
Second, even if it could (it can’t get to that point, the technology is incapable of it, but we’re just spitballing here), that’s not profitable. I mean, what are you gonna do, replace public school teachers? The people trying to do that aren’t interested in replacing the public school system with a new gee whiz technology that provides access to infinite knowledge, that doesn’t create citizens. The goal of replacing the public school system is streamlining the birth to workplace pipeline. Rosie the robot nanny doesn’t do that.
The private school class isn’t gonna go for it either, currently because they’re ideologically opposed to subjecting their children to the pain tesseract, but more broadly because they are paying big bucks for the best educators available, they don’t need a robot nanny, they already have plenty. You can’t sell precision mass produced automation to someone buying bespoke handcrafted goods.
There’s a secret third problem which is that ai isn’t worried about precision or communicating clearly, it’s worried about doing what “feels” right in the situation. Is that the teacher you want? For any type of education?
Essentially we have invented a calculator of sorts, and people have been convinced it’s a mathematician.
First, ai can’t be a tutor or teacher because it gets things wrong.
Since the iteration we have that’s designed for general-purpose language modeling, and is trained widely on every piece of data in existence, can’t do exactly one use case, you can’t conceive that it can ever be done with the technology? GTHO. It’s not like we’re going to say “ChatGPT, teach kids how LLMs work”, but some more structured program that uses something like ChatGPT for communication. This is completely reasonable.
that’s not profitable.
A. It’s my opinion but I think you’re dead wrong and it’s easily profitable if not to ivy league standards it would certainly put community college out of business.
B. Screw profit. Philanthropic investment throws a couple billion into a nonprofit run by someone who wants to see it happen.
The private school class isn’t gonna go for it either,
You think an Ivy League school is above selling a light model of their courseware when they don’t have to pay anyone to teach the classes, or grade the work? Check out Harvard University Edx. It’s not a stretch.
There’s a secret third problem which is that ai isn’t worried about precision or communicating clearly
Ohh a secret third problem, that sounds fun. I’ll let you in on another secret, AI isn’t worried because it’s a very large complicated math program. It doesn’t worry about communicating clearly, the people who pile on layer upon layer of LLM to produce output do that. It doesn’t give a damn about anything, but the people who work on it do.
You want clarity?
Let’s have GPT-4, here as it sits, clear up your complaint about my post:
“Here is a revised version of your text that sounds more educated:
There are two primary issues with the notion of using artificial intelligence as a tutor or teacher. Firstly, artificial intelligence is not infallible and can make mistakes. Consistency and correctness are integral components of pedagogy, and artificial intelligence is not yet capable of achieving these standards. As such, it is not suitable for the role you have suggested.
Secondly, even if artificial intelligence were capable of fulfilling this role (which it is not, due to technological limitations), it would not be a profitable endeavor. Replacing public school teachers with artificial intelligence would not align with the goals of those seeking to reform the public school system. The aim is to streamline the process from birth to the workplace, and an artificial intelligence nanny would not facilitate this.
Furthermore, the private school class would not be interested in such a proposal. They are currently opposed to subjecting their children to such technology and are willing to pay for the best educators available. Precision mass-produced automation cannot compete with bespoke handcrafted goods.
There is also a third issue to consider: artificial intelligence is not concerned with precision or clear communication. Instead, it focuses on what “feels” right in a given situation. Is this the type of teacher we want for any form of education?”
or perhaps you’d like to have it communicate clearly to younger people
“Sure, here is a revised version of the text written at a 5th-grade level:
There are two big problems with using computers to teach kids. The first problem is that computers can make mistakes. Teachers need to be right all the time, and computers aren’t always right. So, they can’t do the job you want them to do.
The second problem is that even if computers could do the job (they can’t, but let’s pretend), it wouldn’t make money. People who want to change schools don’t want to use computers to teach kids everything. They want to make it easier for kids to get jobs when they grow up. A computer nanny can’t do that.
Rich kids who go to private schools wouldn’t like it either. Their parents pay a lot of money for the best teachers. They don’t want a computer nanny. You can’t sell something cheap and easy to make to someone who wants something special and handmade.
There’s also a secret third problem. Computers don’t care about being right or explaining things clearly. They just do what they think is best at the moment. Is that the kind of teacher you want? For any kind of learning?”
There are two kinds of companies in tech: hard tech companies who invent it, and tech-enabled companies who apply it to real world use cases.
With every new technology you have everyone come out of the woodwork and try AI in the domain they know with a new tech-enabled venture.
Then you have a pruning period when some critical mass of mismatches between new tool and application run out of money and go under.
AI is not good for everything, at least not yet.
So now it’s AI’s time to simmer down and be used for what it’s actually good at, or continue as niche hard-tech ventures focused on making it better at those things it’s not good at.
I absolutely love how crypto (blockchain) works but have yet to see a good use case that’s not a pyramid scheme. :)
LLMs/AI will never be good for everything. But it’s damn good at a few things now, and it’ll probably transform a few more things before it runs out of tricks or actually becomes AI (if we ever find a way to make a neural network that big before we boil ourselves alive).
The whole quantum computing thing will get more interesting shortly, as long as we keep finding math tricks it’s good at.
I was around and active for dotcom, I think right now, the tech is a hell of lot more interesting and promising.
What happens if you take a well-done video college course, every subject, and train an AI that’s both good at working with people in a teaching frame and properly versed in the subject matter? You take the course, and in real time you can stop it and ask the AI teacher questions. It helps you, responding exactly to what you ask, and then gives you a quick quiz to make sure you understand. What happens when your class doesn’t need to be at a certain time of the day or night? What happens if you don’t need an hour and a half to sit down and consume the data?
You get stupid-ass students because an AI producing word-salad is not capable of critical thinking.
It would appear to me that you’ve not been exposed to much in the way of current AI content. We’ve moved past the shitty news articles from 5 years ago.
Pro tip: when you start to see articles talking about how something looks like a bubble, it means it’s already popped, and anybody who hasn’t already cashed in their investment is a bag-holder.
en.wikipedia.org/wiki/Dot-com_bubble
Between 1990 and 1997, the percentage of households in the United States owning computers increased from 15% to 35% as computer ownership progressed from a luxury to a necessity. This marked the shift to the Information Age, an economy based on information technology, and many new companies were founded.
At least we got something out of the dot-com bubble. What do you think we got from this one, if you think it’s over?
The AI bubble has already produced many useful products, many of which will remain useful even after the bubble pops.
The term bubble is mostly about how investment money flows around. Right now you can get near infinite moneys if you include the term AI in your business plan. Many of the current startups will never produce a useful product, and after the bubble has truly popped, those who haven’t will go under.
Amazon, ebay, Booking and Cisco survived the dotcom bubble, as they attracted paying users before the bubble ended. Things like GitHub Copilot, DALL-E, chat bots etc are genuinely useful products which have already attracted paying customers. Some of these products may end up being provided by competitors of the current providers, but someone will make long-term money from these products.
Not the case for AI. We are at the beginning of a new era
Did you mean the crypto/NFT bubble?
NFTs yes, but crypto is absolutely not a bubble. People have been saying that for decades now and it hasn’t been true. Yes there are shitcoins, just like shitstocks. But in general, it’s definitely not a bubble but an alternative investing method beside stocks, gold etc.
Crypto is 100% a bubble. It’s not an investment so much as a ponzi; sure, you can dump money into it and maybe even make money, but that doesn’t mean it won’t collapse on a whim when someone else decides to dip out or the government shuts it down. Its value is exactly that of NFTs, because it’s basically identical: just a string of characters showing “ownership” of something intangible.
The only way for someone to make money in crypto is for someone else to lose it.
Crypto is a scam.
NFTs yes, but crypto is absolutely not a bubble. People have been saying that for decades now and it hasn’t been true. Yes there are shitcoins, just like shitstocks. But in general, it’s definitely not a bubble but an alternative investing method beside stocks, gold etc.
And what is the underlying mechanism that increases its value, like company earnings are to stocks?
Yeah I was going to say, VCs throwing money at the newest fad isn’t anything new; in fact startups strive to exploit the fuck out of it. No need to actually implement the fad tech, you just need to technobabble the magic words and a VC is like “here, have 2 million dollars”.
In our own company we half joked about calling a relatively simple decision flow in our back end an “AI system”.
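For flavor, here’s a minimal sketch of the kind of “AI system” that joke is about; the function, fields, and thresholds are entirely made up for illustration:

```python
# A plain rule-based decision flow of the sort that gets rebranded
# as an "AI system" in a pitch deck. All names/thresholds invented.

def route_ticket(priority: int, mentions_refund: bool) -> str:
    """Decide which support queue a ticket goes to."""
    if mentions_refund:
        return "billing"
    if priority >= 8:
        return "escalation"
    return "general"

print(route_ticket(9, False))  # escalation
print(route_ticket(3, True))   # billing
```

A handful of if-statements, but call it an “AI-powered triage engine” and the term sheet writes itself.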
I think “LLMs to do everything” is the bubble. AI isn’t going anywhere; we’ve just had a little peak of interest thanks to ChatGPT. Midjourney and the like aren’t going anywhere either, but I’m sure we’ll all figure out soon enough that LLMs can’t really be trusted.
There’s a lot of similarity in tone between crypto and AI. Both are talking about their sphere like it will revolutionize absolutely everything and anything, and both are scrambling to find the most obscure use case they can claim as their own.
The biggest difference is that AI has concrete, real-world applications, but I suspect its use, ultimately, will be less universal and transformative than the hype is making it out to be.
Same thing happened with crypto and blockchain. The whole “move fast and break things” in reality means “we made up words for something that isn’t special to create value out of nothing and cash out before it returns to nothing”.
If it crashes hard I look forward to all the cheap server hardware that will be in the secondhand market in a few years. One I’m particularly excited about is the RTX 4000 SFF: single slot, 75W, 20GB, and ~3070 performance.
Every startup now: ![](https://i.imgflip.com/7voak9.jpg)
Where’s all the “NoOoOoO this isn’t like crypto it’s gonna be different” people at now?
The dotcom bubble was different. Now, everything related to actual AI development is hyped, but the dotcom bubble inflated entire indexes; “new market” indexes were set up comprising companies nobody had ever heard of. It was orders of magnitude worse.
Let this sink in: some companies got $100k from VCs where the project was pretty much software that made API calls to ChatGPT.
Obviously the bubble will burst.
I read an article once about how, when humans hear that someone has died, the first thing they try to do is come up with a reason that whatever befell the deceased would not happen to them. Some of the time there was a logical reason, some of the time there’s not, but either way the person would latch onto the reason to believe they were safe. I think we’re seeing the same thing here with AI. People are seeing a small percentage of people lose their jobs, with a technology that 95% of the world or more didn’t believe was possible a couple years ago, and they’re searching for reasons to believe that they’re going to be fine, and then latching onto them.
I worked at a newspaper when the internet was growing. I saw the same thing with the entire organization. So much of the staff believed the internet was a fad. This belief did not work out for them. They were a giant, and they were gone within 10 years. I’m not saying we aren’t in an AI bubble now, but there are now several orders of magnitude more money in the internet than there was during the Dot Com bubble; just because it’s a bubble doesn’t mean it won’t eventually consume everything.
An apt analogy. Just like the web, the underlying technology is incredible and the hype is real, but it leads to endless fluff and stupid naive investments, many of which will lead nowhere. There will certainly be a lot of amazing advances using this tech in the coming decades, but for every one that is useful there will be 20 or 50 or 100 pieces of vaporware that are just trying to grab VC money.
So, who are the up and comers? Not every company in the dotcom era died. Some grew very large and made a lot of people rich.
No! Really, what a shock!!
Of course. Sure, AI-generated images are impressive, but no way those companies could cover the operational and R&D costs if VCs were not injecting a shit load of fake money.
It certainly is somewhere around the peak of the hype cycle.
They gimped it for the masses, but AI is going strong. There is no question of the power of GPT-4 and others. LLMs are just a part of the big picture.
It is all ridiculous how quickly these bubbles form - and then burst - these days.
Obviously AI has been around for a while, and ChatGPT has been in development for years, but it really only hit the mass media less than a year ago: late November, early December of 2022. And in well under a year there is already talk of the bubble bursting.
Fund these companies and take them public before the hype train derails. The VCs smell a greater fool, and it’s the IPO investor.
This is probably a much better analogy than NFTs, but the dotcom bubble had much broader implications
Well, then we are facing two bubbles at the same time: AI and cryptocurrencies. Once both those bubbles burst, the fallout is going to make the dot-com era bubble look like small suds by comparison.
Already!? XD
NounsAndWords@lemmy.world 1 year ago
Just a reminder that the dot com bubble was a problem for investors, not the underlying technology that continued to change the entire world.
DogMuffins@discuss.tchncs.de 1 year ago
That’s true, but investors have a habit of making their problems everyone else’s problems.
FMT99@lemmy.world 1 year ago
Not that you’re wrong per se, but the dotcom bubble didn’t impact my life at all back in the day. It was on the news and that was it. I think this will be the same. A bunch of investors will lose their investments, maybe some adventurous pension plans will suffer a bit, but on the whole life will go on.
The impact of AI itself will be much further reaching. We better force the companies that do survive to share the wealth otherwise we’re in for a tough time. But that won’t have anything to do with a bursting investment bubble.
GenderNeutralBro@lemmy.sdf.org 1 year ago
Yeah. Note how we’re having this conversation over the web. The bubble didn’t hurt the tech.
This is something to worry about if you’re an investor or if you’re making big career decisions.
If you have a managed investment account, like a 401(k), it might be worth taking a closer look at it. There’s no shortage of shysters in finance.
MyDogLovesMe@lemmy.world 1 year ago
In Canada, a good example is cannabis industry. Talk about fucking up opportunities.
Why would it be any different with tech?
It’s about the early cash-grab, imo.