AI as a general concept probably will at some point. But LLMs have all but reached the end of the line and they’re not nearly smart enough.
CarbonatedPastaSauce@lemmy.world 2 months ago
The only people who would say this are people that don’t know programming.
LLMs are not going to replace software devs.
tias@discuss.tchncs.de 2 months ago
li10@feddit.uk 2 months ago
LLMs have already reached the end of the line 🤔
I don’t believe that. At least from an implementation perspective we’re extremely early on, and I don’t see why the tech itself can’t be improved either.
Maybe its current iteration has hit a wall, but I don’t think anyone can really say what the future holds for it.
jacksilver@lemmy.world 2 months ago
LLMs have been around since roughly 2016. While scaling them up has improved their performance and capabilities, there are fundamental limitations to the approach itself. Behind the scenes, LLMs (even multimodal ones like GPT-4) are trying to predict what is most expected. While that can be powerful, it means they can never truly innovate or act as systems of truth.
For years we used things like tf-idf to vectorize words, then embeddings, and now transformers (souped-up embeddings). Each approach has its limits, and LLMs are no different. The results we see now are surprisingly good, but they don’t overcome the baseline limitations of the underlying model.
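To make the tf-idf step in that progression concrete, here’s a minimal pure-Python sketch on a hypothetical toy corpus (real pipelines would use a library like scikit-learn, and typically a smoothed idf variant):

```python
import math

# Hypothetical toy corpus: each document is a list of tokens
docs = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
    "cats and dogs are pets".split(),
]

def tf_idf(term, doc, corpus):
    # Term frequency: how often the term appears in this document
    tf = doc.count(term) / len(doc)
    # Inverse document frequency: terms that are rare across the
    # corpus get a higher weight than terms that appear everywhere
    df = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / df)
    return tf * idf

# "cat" appears in only one document, "the" in two,
# so "cat" is weighted more heavily within docs[0]
print(tf_idf("cat", docs[0], docs))
print(tf_idf("the", docs[0], docs))
```

The point of the comparison: this whole representation is a bag of per-word weights with no notion of context, which is exactly the limitation embeddings and then transformers were introduced to address.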
todd_bonzalez@lemm.ee 2 months ago
The “Attention Is All You Need” paper that birthed modern AI came out in 2017. Before Transformers, “LLMs” were pretty much just Markov chains and statistical language models.
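For contrast with transformers, a first-order Markov-chain language model of the kind mentioned here fits in a few lines (hypothetical toy corpus; each word is predicted only from the single word before it):

```python
import random
from collections import defaultdict

# "Train" on a toy corpus: record which words follow each word
text = "the cat sat on the mat and the cat slept".split()
chain = defaultdict(list)
for prev, nxt in zip(text, text[1:]):
    chain[prev].append(nxt)

def generate(start, n, seed=0):
    # Sample up to n words; stop early if we reach a word with no successor
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        options = chain.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the", 6))
```

The one-word window is the whole model; attention-based transformers were a leap precisely because they condition on the entire preceding context instead.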
mashbooq@infosec.pub 2 months ago
I’m not trained in formal computer science, so I’m unable to evaluate the quality of this paper’s argument, but there’s a preprint out that claims to prove that current computing architectures will never be able to advance to AGI. Rather than accelerating, improvements are only going to slow down due to the exponential increase in resources necessary for any incremental advancement (because it’s an NP-hard problem). That doesn’t prove LLMs are the end of the line, but it does suggest that additional improvements are likely to be marginal.
Wooki@lemmy.world 2 months ago
we’re extremely early on
Oh really! The analysis has been established since the ’70s. It’s so far from early on that the statement is comical.
todd_bonzalez@lemm.ee 2 months ago
Transformers, the foundation of modern “AI”, were proposed in 2017. Whatever we called “AI” and “machine learning” before that was mostly convolutional networks inspired by the ’80s “Neocognitron”, which is nowhere near as impressive.
The most advanced thing a convolutional network ever accomplished was DeepDream, and visual generative AI has skyrocketed in the 10 years since then. Anyone looking at this situation who believes we have hit bedrock is delusional.
From DeepDream to Midjourney in 10 years is incredible. The next 10 years are going to be very weird.
homesweethomeMrL@lemmy.world 2 months ago
“at some point” being like 400 years in the future? Sure.
Ok that’s probably a little bit of an exaggeration. 250 years.
Blue_Morpho@lemmy.world 2 months ago
I can see the statement in the same way word processing displaced secretaries.
There used to be two tiers in business. Those who wrote ideas/solutions and those who typed out those ideas into documents to be photocopied and faxed. Now the people who work on problems type their own words and email/slack/teams the information.
In the same way, there are programmers who design and solve the problems, and then the coders who take those outlines and make them actually compile.
LLMs will disrupt the coders, leaving the problem solvers.
IsThisAnAI@lemmy.world 2 months ago
It’ll have to improve by an order of magnitude for that effect. Right now it’s basically an improved Stack Overflow.
ripcord@lemmy.world 2 months ago
…and only sometimes improved. And it’ll stop improving if people stop using Stack Overflow, since that’s one of the main places it’s mined for data.
IsThisAnAI@lemmy.world 2 months ago
Nah, it’s built into the editors and repos these days.
michaelmrose@lemmy.world 2 months ago
There is no reason to believe that LLMs will disrupt anyone any time soon. As it stands now, the level of workmanship is absolutely terrible, and there are more things to be done than anyone has enough labor to do. Making it so skilled professionals can do more just means more companies can produce work that isn’t complete garbage.
Juniors produce progressively more directly usable work, with reasoning and autonomy, and are the only way you develop seniors. As it stands, LLMs do nothing autonomously and do much of their work wrong. Even with improvements, they won’t in the near term actually be a coworker. They remain a tool that a skilled person uses, like a wrench. In the hands of someone who knows nothing, they are worth nothing. Thinking this will replace a segment of workers of any stripe is just wrong.
felbane@lemmy.world 2 months ago
The problem with this take is the assertion that LLMs are going to take the place of secretaries in your analogy. The reality is that replacing junior devs with LLMs is like replacing secretaries with a network of typewriter monkeys who throw sheets of paper at a drunk MBA who decides what gets faxed.
Blue_Morpho@lemmy.world 2 months ago
I’m saying that devs will use LLMs the same way they currently use word processors to send their own emails, instead of handing handwritten notes to a secretary to format, spell-check, and type.
homesweethomeMrL@lemmy.world 2 months ago
I thought by this point everyone would know how computers work.
That, uh, did not happen.
VubDapple@lemmy.world 2 months ago
Good take
Angry_Autist@lemmy.world 2 months ago
No
Angry_Autist@lemmy.world 2 months ago
I don’t know if you noticed but most of the people making decisions in the industry aren’t programmers, they’re MBAs.
CarbonatedPastaSauce@lemmy.world 2 months ago
Irrelevant, anyone who tries to replace their devs with LLMs will crash and burn. The lessons will be learned. But yes, many executives will make stupid ass decisions around this tech.
Angry_Autist@lemmy.world 2 months ago
It’s really sad how even tech-heads ignore how far LLM coding has come in the last 3 years, and what that means in the long run.
Just look how rapidly voice recognition developed once Google started exploiting all of its users’ voice-to-text data. At one point, industry experts stated, “There will never be a general voice recognition system that is 90%+ accurate across all languages and dialects.” Google made one within 4 years.
The natural bounty of a no-salary programmer in a box is too great for this to ever stop being developed. The people with the money only want more money, and not paying devs is something they’ve wanted since the coding industry started.
Yes, it’s terrible now, but it is also in its infancy. Like voice recognition in the late ’90s, it is a novelty with many hiccups. That won’t be the case for long, and anyone who confidently thinks it can’t ever happen will be left without recourse when it does.
And that’s not even the worst part about all of this. I’m not going into black-box code, because you all just argue stupid points when I do. But just so you know: human programming will be a thing of the past, outside of hobbyists and ultra-secure systems, within 20 years.
Maybe sooner
CarbonatedPastaSauce@lemmy.world 2 months ago
Maybe in 20 years. Maybe. But this article is quoting CEOs saying 2 years, which is bullshit.
I think it’s just as likely that in 20 years they’ll be crying because they scared enough people away from the career that there aren’t enough developers, while the magic GenAI that can write all the code still doesn’t exist.
assembly@lemmy.world 2 months ago
The one thing that LLMs have done for me is to make summarizing and correlating data in documents really easy. Take 20 docs of notes about a project and have it summarize where they are at so I can get up to speed quickly. Works surprisingly well. I haven’t had luck with code requests.
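A hedged sketch of that workflow in map-reduce style: summarize each document, then summarize the summaries. `call_llm` is a hypothetical stand-in for whatever model API is actually used, stubbed here so the example is self-contained:

```python
# Hypothetical sketch of "summarize 20 docs of project notes".
def call_llm(prompt: str) -> str:
    # Stub: a real implementation would send `prompt` to a model endpoint
    # and return its completion. Truncation keeps the example runnable.
    return prompt[:200]

def summarize_notes(docs: list[str]) -> str:
    # Map: summarize each document individually to stay within context limits
    partial = [call_llm("Summarize these project notes:\n" + d) for d in docs]
    # Reduce: combine the per-document summaries into one status update
    return call_llm("Combine the summaries into one status update:\n"
                    + "\n".join(partial))
```

The per-document pass is what makes this work for large note collections: each call stays small, and only the condensed summaries compete for space in the final prompt.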
Zexks@lemmy.world 2 months ago
That’s not what was said. He specifically said coding.
lemmyuser100002@lemmy.world 2 months ago
Wrong. This is also exactly what people selling LLMs to those who can’t code would say.
APassenger@lemmy.world 2 months ago
It’s this. When boards and non-tech-savvy managers start making decisions based on a slick slide deck and a few visuals, enough will bite that people will be laid off. It’s already happening.
There may be a reckoning after, but Wall Street likes it when you cut too deep and then bounce back to the “right” (lower) headcount, even if you’ve broken the company and they just don’t see the glide path.
It’s gonna happen. I hope it’s rare. I’d argue it’s already happening, but I doubt enough people see it underpinning recent layoffs (yet).