If A.I. dies out because of capitalism, I will wheeze.
So Far, AI Is a Money Pit That Isn't Paying Off
Submitted 1 year ago by FlyingSquid@lemmy.world to technology@lemmy.world
https://gizmodo.com/github-copilot-ai-microsoft-openai-chatgpt-1850915549
Comments
bappity@lemmy.world 1 year ago
WrittenWeird@lemmy.world 1 year ago
The current breed of generative “AI” won’t ‘die out’. It’s here to stay. We are just in the early Wild-West days of it, where everyone’s rushing to grab a piece of the pie, but the shine is starting to wear off and the hype is juuuuust past its peak.
What you’ll see soon is the “enshittification” of services like ChatGPT as the financial reckoning comes, startup variants shut down by the truckload, and the big names put more and more features behind paywalls. We’ve gone past the “just make it work” phase, now we are moving into the “just make it sustainable/profitable” phase.
GenderNeutralBro@lemmy.sdf.org 1 year ago
This is why I, as a user, am far more interested in open-source projects that can be run locally on pro/consumer hardware. All of these cloud services are headed down the crapper.
My prediction is that in the next couple years we’ll see a move away from monolithic LLMs like ChatGPT and toward programs that integrate smaller, more specialized models. Apple and even Google are pushing for more locally-run AI, and designing their own silicon to run it. It’s faster, cheaper, and private. We will not be able to run something as big as ChatGPT on consumer hardware for decades (it takes hundreds of gigabytes of memory at minimum), but we can get a lot of the functionality with smaller, faster, cheaper models.
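The "hundreds of gigabytes" claim checks out with back-of-the-envelope math (the parameter counts and precisions below are illustrative assumptions, not figures from the thread):

```python
def model_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Approximate memory needed just to hold a model's weights."""
    return n_params * bytes_per_param / 1024**3

# A GPT-3-class model: ~175 billion parameters at fp16 (2 bytes each)
cloud_scale = model_memory_gb(175e9, 2)   # ~326 GB -- far beyond consumer hardware
# A 7B local model quantized to 4 bits (0.5 bytes per parameter)
local_scale = model_memory_gb(7e9, 0.5)   # ~3.3 GB -- fits on a laptop GPU
print(f"{cloud_scale:.0f} GB vs {local_scale:.1f} GB")
```

That two-orders-of-magnitude gap is why the smaller, specialized models described above can run locally while ChatGPT-scale models cannot.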
_number8_@lemmy.world 1 year ago
GPT already got way shittier, going from the version we all saw when it first came out to the heavily curated, walled-garden version now in use.
MargotRobbie@lemmy.world 1 year ago
Oh surprise surprise, looks like generative AI isn’t going to fulfill Silicon Valley and Hollywood studios’ dream of replacing artists, writers, and programmers with computers to maximize value for the poor, poor shareholders. Oh no!
As I said here before, generative AIs are not the universal solution to everything that has ever existed, like they are hyped up to be, but neither are they useless. At the end of the day, they are ultimately tools. Complex, powerful, useful tools, but tools nonetheless. A good artist can create better work faster with the help of a diffusion model, the same way LLM code generation can help a good programmer finish their project faster and better (I think). All of these AI models are trained on data from everyone on the Internet, which is why I think it’s reasonable that everyone should have access to these generative AI models for the benefit of humanity and not profit, and not just those who took other people’s work for free to train the models. In other words, these generative AI models should belong to everyone.
And here lies my distaste for Sam Altman: OpenAI was founded as a nonprofit for the benefit of humanity, but at the first chance of money he immediately started venture capitalisting and put everything from GPT-2 onwards under lock and key for money. Now it looks like they are being crushed under the weight of their own operating costs while groups like Facebook and Stability catch up with actual open models. I will not be sad if "Open"AI fails.
(For as much crap as I give Zuck for the other awful things they do, I do admire their commitment to open source though.)
I have to admit, playing with these generative models is pretty fun though.
atetulo@lemm.ee 1 year ago
Hm. I think you should zoom out a bit and try to recognize that AI isn’t stagnant.
Voice recognition and translation programs took years before they were appropriate for real-world applications. AI is also going to require years before it’s ready. But that time is coming. We haven’t reached a ‘ceiling’ for AI’s capabilities.
MargotRobbie@lemmy.world 1 year ago
Breakthrough technological development can usually be described as a sigmoid function (S-shaped curve): there is exponential progress in the beginning, but it usually hits an inflection point, then slows down and plateaus until the next breakthrough.
There are certain problems that are not possible to resolve with the current level of technology, for which development progress has slowed to a crawl, such as level 5 autonomous driving (by the way, better public transport is a way less complex solution). I think we are hitting the limit of what transformer-based generative AI can do, since training has become more and more expensive for smaller and smaller gains, whereas hallucination seems to be an inherent problem that is ultimately unfixable with the current level of technology.
nickwitha_k@lemmy.sdf.org 1 year ago
Oh surprise surprise, looks like generative AI isn’t going to fulfill Silicon Valley and Hollywood studios’ dream of replacing artists, writers, and programmers with computers to maximize value for the poor, poor shareholders. Oh no!
It really is incredible how much this rhymes with the crypto hype. To be fair, the technology does actually have uses but, as someone in the latter category, after I saw it in action, I quickly felt less worried about my job prospects.
Fortunately, enough people in charge of staffing seem to have listened to people with technical knowledge, so my earlier prediction (mass layoffs directly due to LLMs, followed by mass, panicked re-hirings when said LLMs ruined the business) didn’t come true. But the worry itself, along with the RTO pushes (not to mention exploitation of contractors and H1B holders), really underscores how desperately the industry needs to get organized. Hopefully, what’s going on in the games industry with IATSE gets more traction and more of my colleagues on the same page, but that’s one area where I’m not as optimistic as I’d like to be - I’ll just have to cheer on SAG, WGA, and UAW for the time being.
(For as much crap as I give Zuck for the other awful things they do, I do admire their commitment to open source.)
Absolutely agreed. There’s a surprising amount of good in the open source world that has come from otherwise ethically devoid companies. Even Intuit donated the Argo project, which has evolved from a cool workflow tool to a toolkit with far more. There is always the danger of EEE, however, so, we’ve got to stay vigilant.
batmangrundies@lemmy.world 1 year ago
There was a smallish VFX group here that was attached to a volume screen company. They employed something like 20 people I think? So pretty small.
But the volume screen company instead employed a guy who could do an adequate enough job with generative tools, and the VFX group folded. The larger VFX company they partnered with had 200 employees; they recently cut to 50.
In my field, a team leader in 2018 could earn about 180,000 AUD P/A. Now those jobs are advertised for 130,000 AUD, because new models can do ~80% of the analysis with human accuracy.
AI is already folding companies and cutting jobs. It’s not in the news maybe, but as industries shift to compete with smaller firms leveraging AI it will cascade.
MargotRobbie@lemmy.world 1 year ago
Generative AI can make each individual artist/writer/programmer much more efficient at their job, but if the shareholders and executives get their way and only big companies have access to this technology, this increased productivity will instead be used to reduce headcount and make the remaining people do more work on a tighter deadline, instead of helping everyone work less, do better work, and be happier.
This is the reason I think democratizing generative AI via local models is important, because as your example shows, it levels the playing field between small and big players, and helps people work less while making more cool stuff.
FLX@lemmy.world 1 year ago
A powerful tool maybe, but useless
If your drill needs a nuclear plant and a monthly subscription to drill a hole, it’s a shitty tool.
warbond@lemmy.world 1 year ago
Going to have to disagree with you there. I’ve gotten plenty of use out of ChatGPT in multiple scenarios. I find it difficult to imagine what exactly you think is useless about it, because it seems so indispensable to me at this point.
mojo@lemm.ee 1 year ago
Silicon Valley, as usual, thinking these things are as big an invention as the internet, and trying to get their money in there first. AI was and still is a massive game changer, but nothing can live up to the hype they throw a stupid amount of money at. They didn’t learn their lesson after crypto or the “metaverse” either, lol. I see AI being a tool, an incredibly useful one. That also means there are a lot of jobs it simply can’t do. It can’t replace artists, but artists can use it as a tool to build off of.
snek@lemmy.world 1 year ago
So far I’ve only seen AI being used to fire employees that a company totally absolutely still needs but just doesn’t want to pay wages to. Companies are dumb as fuck, that’s my conclusion, but what else can you expect from organizations run by ladder-climbing CEO figures?
lloram239@feddit.de 1 year ago
Things will live up to the hype and easily surpass it. That’s not the issue. The issue is that people take the world of today and imagine how much better/faster/richer they could become if they had AI. The crux is by the time they have AI, everybody else has it too. Thus it loses its competitive advantage. It just raises the baseline.
If I had to create the thousands of images I have generated with AI three years ago it would have costs thousands if not millions of dollar, a gigantic almost insurmountable task. But that doesn’t mean they have any value today. Everybody can produce similar images with a few clicks.
The whole point of AI is after all that it makes work that used to be difficult and expensive, cheap and easy, and nobody is going to pay huge amounts of money for a task that has become trivial.
guacupado@lemmy.world 1 year ago
What I’m curious about is what’s going to happen to all these companies that went all-in on building data centers when they weren’t doing it previously. Places like Meta and Amazon are huge enough that it’s always been a sound investment, but with this hype there are other companies trying to set up server farms with no real prize in sight.
barsoap@lemm.ee 1 year ago
I mean A100s don’t exactly break that quickly and they’re specialised enough hardware so that they will continue to be able to rent them out. They’re also overpriced AF though which might cut into the bottom line but they’re probably not going to end up with a giant loss, I don’t really doubt they will break even. That opportunity cost, though…
Aceticon@lemmy.world 1 year ago
Ever since the Internet Bubble crashed around 2000, the business community in the Valley has been repeatedly trying to pump up a new bubble, starting with what they called Web 2.0, which started being hyped maybe even before the dust had settled on the crash of that first Tech bubble.
And if you think about it, it makes sense: the biggest fortunes ever made in Tech are still from companies which had their initial growth back then, such as Google, Amazon and even Paypal (Microsoft and Apple being maybe the most notable exceptions, both predating it).
iwenthometobeafamilyman@lemmy.world 1 year ago
FTFY: “Business models that merely provide an API wrapper around ChatGPT aren’t paying off.”
IRL, people are doing some amazing things with generative AI, esp in 2D graphic art.
Potatos_are_not_friends@lemmy.world 1 year ago
So much fucking this.
Every cash grab right now around AI is just a frontend for the ChatGPT API. And every investor who throws money at them is the mark.
trashgirlfriend@lemmy.world 1 year ago
IRL, people are doing some amazing things with generative AI, esp in 2D graphic art.
Woah, shiny bland images that are a regurgitation of stolen artwork!!!
Nougat@kbin.social 1 year ago
Never mind that LLMs are a far cry from AI.
Lmaydev@programming.dev 1 year ago
They are literally AI. As are path finding algorithms for games.
People just don’t get what AI is. Any program that simulates intelligence is AI.
You’re likely thinking of general AI.
QuaternionsRock@lemmy.world 1 year ago
the capability of computer systems or algorithms to imitate intelligent human behavior
I don’t know about you, but I would consider writing papers/books/essays/etc. (even bad ones) and code (even with mistakes) intelligent human behavior, and they’re pretty good at imitating it.
ComradeBunnie@aussie.zone 1 year ago
It’s also helped me find the names of several books and films that have been rattling around in my mind, some for decades, which actually made me very happy because not remembering that sort of thing drives me a little mad.
I’m stuck on two books that it can’t work out - both absolute trash pulp fiction, one that I stopped reading because it was so terrible and the other that was so bad but I actually wouldn’t mind reading again.
Oh well, can’t have it all.
FLX@lemmy.world 1 year ago
people are doing
No they ain’t doing shit, they just prompt
macallik@kbin.social 1 year ago
What I don't like about the article is that the phrasing 'paying off' can apply to making investors money OR having worthwhile use cases. AI has created plenty of use cases from language learning to code correction to companionship to brainstorming, etc.
It seems ironic that a consumer-facing website is framing things from a skeptical "But is it making rich people richer?" perspective
xantoxis@lemmy.world 1 year ago
In my case, I still want to know if it’s not making rich people richer, because a) fuck rich people, and b) I don’t want to buy into things that will disappear in a year when the hype dies down. As a “consumer” my purchasing decisions impact my life, and the actions of the wealthy affect that more than you’d like.
Smacks@lemmy.world 1 year ago
AI is a tool to assist creators, not a full on replacement. Won’t be long until they start shoving ads into Bard and ChatGPT.
BeautifulMind@lemmy.world 1 year ago
AI is a tool to ~~assist~~ plagiarize the work of creators
Fixed it
LOL OK, it’s a super-powerful technology that will one day generate tons of labor very quickly, but none of that changes that in order to train it to do that, you have to feed it the work of actual creators - and for any of that to be cost-feasible, the creators can’t be paid for their inputs.
The whole thing is predicated on unpaid labor, stolen property.
2ncs@lemmy.world 1 year ago
At what line does it become stolen property? There are plenty of tools which artists use today that use AI. Those AI tools they are using are more than likely trained on some creation without payment. It seems the data it’s using isn’t deemed important enough for that to be an issue. Google has likely scraped billions of images from the Internet for training on Google Lens and there was not as much of an uproar.
Honestly, I’m just curious if there is an ethical line and where people think it should be.
Kanda@reddthat.com 1 year ago
Wait, R&D doesn’t research and develop dollar bills into existence?
RanchOnPancakes@lemmy.world 1 year ago
That’s how this works. Blow through VC money trying to “strike gold,” fail, change the model to become profitable, move on to the next scam.
jimbo@lemmy.world 1 year ago
Have they not tried simply asking the AI how to make it profitable?
alienanimals@lemmy.world 1 year ago
AI isn’t paying off if you’re too dumb to figure out how to use the many amazing tools that have come about.
BolexForSoup@kbin.social 1 year ago
I was going to say...I use AI-transcription tools for video editing, AI-upscaling, and Resolve dropped an incredible AI green screen tool that makes it effortless.
NegativeLookBehind@kbin.social 1 year ago
I wonder if “AI not paying off” in the context of this article actually means “Companies haven’t been able to lay off a bunch of their staff yet, like they’re hoping to do”
Semi-Hemi-Demigod@kbin.social 1 year ago
AI is a lot more like the Internet than it is like Facebook. It's a set of techniques you can use to create tools. These are incredibly useful tools, but you're not going to make Facebook money off of them because the techniques are pretty easy to replicate and the genie is out of the bottle.
What the tech bros are looking for is a way to control access to AI so they can be a chokepoint. Like if Craftsman could charge for every single time you used their tool to make something. For one very recent example, see what happened to Unity. Creating chokepoints and then collecting rent is the modern corporate feudal strategy, but that won't work if everybody with an AWS account and enough money can spin up an LLM and start training it.
_number8_@lemmy.world 1 year ago
AI stem splitting for songs is magical as well
mPony@kbin.social 1 year ago
@BolexForSoup can you recommend a good quality Upscaler ?
stealth_cookies@lemmy.ca 1 year ago
The problem here is that AI in the media has become synonymous with generalized LLMs, while other “AI” applications have been in place for many years doing more specific things that have more obvious use cases that can be more easily commercialised.
eltrain123@lemmy.world 1 year ago
Do people really not understand that we are in the early stages of AI development? The first time most people were made aware of LLMs was, like, 6 months ago. What ChatGPT can do is impressive for a self-contained application, but it is far from mature enough to do the things people are complaining it can’t do.
The point the industry is trying to warn about is that this technology is past its infancy and moving into, from a human-comparison standpoint, childhood or adolescence. But it iterates significantly faster than humans, so the point where it can do the type of things people are bitching about is years, not decades, away.
If you think businesses have sunk this much money and effort into AI and didn’t do a cost-benefit analysis that stretched out decades, you are being naive or disingenuous.
flumph@programming.dev 1 year ago
If you think businesses have sunk this much money and effort into AI and didn’t do a cost-benefit analysis that stretched out decades, you are being naive or disingenuous.
Are you kidding? We literally just watched the same bubble and burst in companies that rushed to get their piece of the Metaverse and NFT cash grab. I worked at a SaaS company that decided to add AI features because it was in the news and Azure offered it as a service. There was zero financial analysis done, just like for every other feature they added.
I’m sure Microsoft has a plan since they invested heavily. But even Google is playing catch-up like they did with GCP.
atetulo@lemm.ee 1 year ago
AI is actually useful.
The metaverse and NFTs aren’t.
Your analogy is not a 1:1 representation of the situation and only serves to distract from the topic at hand.
atetulo@lemm.ee 1 year ago
Do people really not understand that we are in the early stages of ai development?
Yes. Top post in this thread is someone cheering that AI won’t replace people in hollywood.
Just give it time. Remember how poor voice recognition and translation software was at first?
MargotRobbie@lemmy.world 1 year ago
Top post in this thread is someone cheering that AI won’t replace people in hollywood.
I really like how I’m just “someone” here now.
vrighter@discuss.tchncs.de 1 year ago
pretty much all improvements aren’t “better tech”, but just “bigger tech”. Reducing their footprint is an unsolved problem (just like it has always been with neural networks, for decades)
WhiteHawk@lemmy.world 1 year ago
Optimization is a problem that cannot be “solved” by definition, but a lot of work is being done on it with some degree of success
aesthelete@lemmy.world 1 year ago
You’d think at this point that investors would wait for a thing to fill out the question mark second step in their business plan before investing in it, but you’d be way, way wrong.
Every new tech company comes to the investor panel with:
- build expensive-to-run new tool and give it away to end users for free
- ???
- profit!
And after more than two decades somehow people keep falling for it (even when the last few “big ones” didn’t pan out at all).
punkwalrus@lemmy.world 1 year ago
Because people assume all these investors know what they are doing. They don’t. Now, some investors are good, but they usually don’t go for shit like this. A lot of investors are VCs, rich upper-class twits who can afford to lose money, pure and simple. It’s like a bunch of lotto winners telling people they know how to pick numbers, making outside bets once in a while, getting lucky, and having selective bias.
Plus, they have enough money to hedge their bets. For example, say you invest $1mil each in companies A, B, C, D, E, and F. All lose everything except A and B, which earn you $3mil each. You put in $6mil, got back $6mil. You broke even, tell people you knew what you were doing because you picked A and B, and conveniently never mention the rest. Then rich twits invest in what YOU invest in. So you invest in H, others invest in H because you did, which drives up the value. Now magnify this by a lot of investors, hundreds of letters, and it’s all like some weird game of luck and timing.
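The break-even arithmetic in that example is easy to verify (a minimal sketch of the hypothetical six-startup portfolio from the comment):

```python
# Six hypothetical startups, $1M invested in each
investments = {name: 1_000_000 for name in "ABCDEF"}
# Only A and B pay off, returning $3M apiece; the other four go to zero
payouts = {"A": 3_000_000, "B": 3_000_000}

total_in = sum(investments.values())
total_out = sum(payouts.get(name, 0) for name in investments)
print(total_in, total_out)  # 6000000 6000000 -- the fund merely broke even
```

A 2-in-6 hit rate at 3x sounds impressive in isolation, which is exactly the selective-bias point: brag about A and B, never mention the rest.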
But a snapshot in time leads to your “???” point. Many know this is a confidence game, based on luck, charm, and timing. Some just stumble through it, and others are fleeced, but who cares? Daddy’s got money.
Money works different for rich people. It’s truly puzzling.
quackers@lemmy.blahaj.zone 1 year ago
They sure as hell are doing a good job of making me reliant on AI though. Soon I’ll probably be paying $200 a month because I can’t remember how to do things without AI. I think that’s the plan anyway.
aesthelete@lemmy.world 1 year ago
Soon I’ll probably be paying $200 a month because I can’t remember how to do things without AI.
Sounds like a problem TBH, I’d get that checked out by a professional.
Potatos_are_not_friends@lemmy.world 1 year ago
These are the same kind of people who go, “We spent money on Timmy’s clothes for over two years and it’s not paying off.”
Bro, AI is an investment.
bluGill@kbin.social 1 year ago
It is a risky investment. Taking care of your kid is something we have done enough times that we understand the risks and the payoff, and most parents can make a reasonable prediction. (A few kids will "turn 21 in prison doing life without parole" - but most turn out okay, return love to their parents, and attempt to improve society - though you may not agree with their definition of improving society.)
I have no idea if the current faults with AI will be solved or not. That is a risk you are taking. It is useful for some things, but we don't know how useful.
iopq@lemmy.world 1 year ago
There’s also the “not in prison, but mostly just lives at home and smokes weed” money pit of children
My childhood friend ended up this way and I’ve given up on him
InternetTubes@lemmy.world 1 year ago
Well, they don’t want to do the one thing needed to make it successful: transparency. Maybe it can’t be.
btaf45@lemmy.world 1 year ago
So far what I’ve seen from AI is that it lies and lies and lies. It lies about history. It lies about science. It lies about politics. It lies about case law. It lies about programming libraries. Maybe this will be fixed some day, or maybe it will just get worse. Until then, the only thing I would trust it on is something for which there is no wrong answer.
RagingRobot@lemmy.world 1 year ago
I never ask it things I don’t know. I don’t think that’s really what’s it’s useful for. It’s really good at combining words though. So it can write a better sentence than I could. Better in a sense that it’s easier for others to understand what my thoughts are if I feed them in as input. Since they were my thoughts originally I can spot the bullshit pretty fast.
Lugh@futurology.today 1 year ago
It should also worry investors that open-source AI is only months behind the big tech leaders. I looked into AI voice cloning lately. There are a few really pricey options, like $25 a month for a couple of hours of voice cloning.
However, there’s already an open-source version of what they’re selling.
OldWoodFrame@lemm.ee 1 year ago
Yeah, so far. It’s super early in the modern incarnation of AI that actually has the chance to pay off, LLMs.
This isn’t like Bitcoin where there’s huge hype for a pretty small market opportunity. We all realize the promise, we are just still figuring out how to get rid of hallucinations and making it consistent and tuned to a certain business usage.
iwenthometobeafamilyman@lemmy.world 1 year ago
Image recognition AIs are already paying off, specifically in the medical technology industry. 90% of radiologists are gonna be out of a job within 5 years (just my personal bro take).
deranger@sh.itjust.works 1 year ago
It’s not “paying off” as this isn’t implemented anywhere, thus not making money.
I think you’re way off the mark and buying into the hype. That’s my opinion as an electronic medical record software analyst.
Kbin_space_program@kbin.social 1 year ago
Well, there’s also navigating the minefield that the LLMs absolutely contain copyrighted material that wasn’t paid for or licensed. E.g., Dall-E can produce a full image of Fresh Cut Grass, a character owned by Critical Role.
And that the stuff they produce isn’t copyright-able.
FaceDeer@kbin.social 1 year ago
And that the stuff they produce isn't copyright-able.
Even if that were true, is there no value in public domain art resources?
treadful@lemmy.zip 1 year ago
People are literally paying monthly subscriptions for access to a bunch of these things.
ripcord@kbin.social 1 year ago
Did you read the article? The problem hasn't been getting some people to pay, it's that the things that are available so far are losing loads of money. Or at least, that's the premise.
xantoxis@lemmy.world 1 year ago
Goood. Gooooooooooood.
AtmaJnana@lemmy.world 1 year ago
Tough shit. It’s the Next Big Thing, so everyone has to have it.
kromem@lemmy.world 1 year ago
Great, now factor in what data collection would cost if they weren’t subsidizing usage that effectively gets them free RLHF…
The one thing that’s been pretty much a guarantee over the last 6 months is that if there’s a mainstream article with ‘AI’ in the title, there’s going to be idiocy abound in the text of it.
Nobody@lemmy.world 1 year ago
Who could have predicted writing bullshit-y papers for kids in school wasn’t a billion dollar business?
autotldr@lemmings.world [bot] 1 year ago
This is the best summary I could come up with:
A new report from the Wall Street Journal claims that, despite the endless hype around large language models and the automated platforms they power, tech companies are struggling to turn a profit when it comes to AI.
Github Copilot, which launched in 2021, was designed to automate some parts of a coder’s workflow and, while immensely popular with its user base, has been a huge “money loser,” the Journal reports.
OpenAI’s ChatGPT, for instance, has seen an ever declining user base while its operating costs remain incredibly high.
A report from the Washington Post in June claimed that chatbots like ChatGPT lose money pretty much every time a customer uses them.
Platforms like ChatGPT and DALL-E burn through an enormous amount of computing power and companies are struggling to figure out how to reduce that footprint.
To get around the fact that they’re hemorrhaging money, many tech platforms are experimenting with different strategies to cut down on costs and computing power while still delivering the kinds of services they’ve promised to customers.
The original article contains 432 words, the summary contains 172 words. Saved 60%. I’m a bot and I’m open source!
usualsuspect191@lemmy.ca 1 year ago
Can something be a money pit and pay off? I feel like not paying off is part of the definition of a money pit… Or was the headline written by AI
art@lemmy.world 1 year ago
I think they mean that it’s costing companies a lot of money to operate but their returns aren’t high enough to justify the costs.
Heresy_generator@kbin.social 1 year ago
The problem is that users pay $10 a month subscription fee for Copilot but, according to a source interviewed by the Journal, Microsoft lost an average of $20 per user during the first few months of this year. Some users cost the company an average loss of over $80 per month, the source told the paper.
If you accept the premise that Copilot is a finished software product ready for sale that seems really bad. When you realize that it's beta software still in testing and they convinced a large pool of people to pay them $10 a month for the privilege of testing it that worst case scenario of them having to pay $80 a month for the infrastructure the most prolific of these testers use seems like an extraordinary bargain.
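The implied per-user infrastructure cost follows directly from the two figures quoted above (using only the $10 subscription and the reported losses; everything else about Microsoft's actual cost structure is unknown):

```python
def implied_monthly_cost(subscription: float, net_loss: float) -> float:
    """If a subscriber's fee still leaves the company with a net loss,
    the underlying cost must be at least their sum."""
    return subscription + net_loss

average_user = implied_monthly_cost(10, 20)  # ~$30/month of compute for $10 of revenue
heavy_user = implied_monthly_cost(10, 80)    # ~$90/month for the most prolific users
print(average_user, heavy_user)
```

On those numbers, the heaviest users consume roughly nine dollars of infrastructure for every dollar they pay.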
TWeaK@lemm.ee 1 year ago
Sounds like the internet in the 90s.
1bluepixel@lemmy.world 1 year ago
It also reminds me of crypto. Lots of people made money from it, but the reason why the technology persists has more to do with the perceived potential of it rather than its actual usefulness today.
There are a lot of challenges with AI (or, more accurately, LLMs) that may or may not be inherent to the technology. And if issues cannot be solved, we may end up with a flawed technology that, we are told, is just about to finally mature enough for mainstream use. Just like crypto.
p03locke@lemmy.dbzer0.com 1 year ago
No, this isn’t crypto. Crypto and NFTs were trying to solve problems that already had solutions, with worse solutions, and hidden in the messaging was that rich people wanted to get poor people to freely gamble away their money in an unregulated market.
AI has real, tangible benefits that are already being realized by people who aren’t part of the emotion-driven ragebait engine. Stock images are going to become extinct in several years. People can make at least a baseline image of what they want, no matter the artistic ability. Musicians are starting to use AI tools. ChatGPT makes it easy to generate low-effort, high-time-consuming letters and responses like item descriptions, or HR responses, or other common draft responses. Code AI engines allow programmers to present reviewable solutions in real-time, or at least something to generate and tweak. None of this is perfect, but it’s good enough for 80% of the work that can be modified over time.
Things like chess AI have existed for decades, and LLMs are just extensions of the existing generative AI technology. I dare you to tell Chess.com that “AI is a money pit that isn’t paying off,” because they would laugh their fucking asses off, as they are actively pouring even more money and resources into Torch.
The author here is a fucking idiot. And he didn’t even bother to change the HTML title from its original focus of just Github Copilot. Clickbait bullshit.
thecrotch@sh.itjust.works 1 year ago
Let’s combine AI and crypto, and migrate it to the cloud. Imagine the PowerPoints middle managers will make about that!
iopq@lemmy.world 1 year ago
I’m still trying to transfer $100 from Kazakhstan to me here. By far the lowest fee option is actually crypto since the biggest difference is the currency conversion. If you have to convert anyway, might as well only pay 0.30% on both ends
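At 0.30% on each end, the arithmetic of that transfer looks like this (a sketch; real exchange spreads and network fees vary and are not included):

```python
def after_fees(amount: float, fee_rate: float = 0.003) -> float:
    """Amount remaining after a percentage fee is charged on both
    the sending and the receiving end of a transfer."""
    return amount * (1 - fee_rate) ** 2

received = after_fees(100)
print(f"${received:.2f}")  # roughly $99.40 of the original $100 arrives
```

Compare that ~$0.60 round trip with typical international wire fees, which are often flat charges of $15 or more regardless of the amount sent.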
demesisx@infosec.pub 1 year ago
Crypto found a problem to fix. The problem is: everything is run by that problem, so it was astroturfed to death by the parties that run the current financial system and by the enemy of their enemy (who’s a friend): opportunistic scammers like SBF and Do Kwon.
Lmaydev@programming.dev 1 year ago
Or computers decades before that.
Many of these advances are incredibly recent.
And also many of the things we use in our day to day are ai powered without people even realising.
elbarto777@lemmy.world 1 year ago
AI powered? Like what?