Comment on Why LLMs can't really build software

Aceticon@lemmy.dbzer0.com ⁨1⁩ ⁨week⁩ ago

Like the guy whose baby doubled in weight in 3 months and who therefore extrapolated that by the age of 10 the child would weigh many tons, you’re assuming that this technology improves in “intelligence” at a linear rate.

This is not at all what’s happening. The improvement in LLMs over the last year or so (say, between GPT-4 and GPT-5) is far smaller than it was earlier in that technology’s life, and we keep seeing more news about problems with training and improving them further. The big one is that training LLMs on the output of other LLMs makes them worse, and the more LLM output there is on the internet, the harder it gets to find clean data to train them on.

With this specific path taken in implementing AI, the question is not “when will it get there” but rather “can it get there at all, or is it a technological dead-end”. For LLMs, at least, the answer increasingly seems to be that it is a dead-end for the purpose of creating reasoning intelligence and doing work that requires it.

(As for your preemptive defense of implying that critics are “AI haters”: no hate is required to do this analysis, just analytical ability and skepticism, untainted by fanboyism.)
