AI in general yes. LLMs in particular, I very much doubt it.
The difference here is that you’re never going to reach New Zealand that way, but incremental improvements in AI will eventually get you to AGI*
*Unless intelligence is substrate-dependent and cannot be replicated in silico, or we destroy ourselves before we get there.
jmcs@discuss.tchncs.de 2 days ago
fine_sandy_bottom@discuss.tchncs.de 1 day ago
That assumes that whatever we have now is a precursor to AGI. There’s no evidence of that.
Free_Opinions@feddit.uk 1 day ago
No, it doesn’t assume that at all. This statement would’ve been true even before electricity was invented and AI was just an idea.
MidWestKhagan@lemmygrad.ml 1 day ago
What do you mean there’s no evidence? This seems like a difference in personal definitions of what AGI is, where you can move the goalposts as much as you want: “it’s not really AGI until it can ___; ok, just because it can do that doesn’t mean it’s AGI, AGI needs to be able to _____”.
SkunkWorkz@lemmy.world 1 day ago
Yeah not with LLMs though.
Free_Opinions@feddit.uk 1 day ago
You can’t know that.
underscore_@sopuli.xyz 2 days ago
It is a common misconception that incremental improvements must equate to eventually achieving the goal. It is perfectly possible that progress is asymptotic, and that we never reach AGI even with constant “advancements”.
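To make the asymptote concrete, here is a toy sketch (the numbers and the notion of a “capability” score are purely hypothetical, not a model of real AI progress): every step is a strict improvement, yet the sequence converges below the goal.

```python
# Toy model: each step improves "capability", but progress is asymptotic.
# The sequence converges to a ceiling below the goal, so no number of
# incremental improvements ever reaches it. (Illustrative numbers only.)

GOAL = 1.0      # hypothetical "AGI" level
CEILING = 0.8   # hypothetical limit of the current approach

def improve(capability: float) -> float:
    """Close half the remaining gap to the ceiling -- always an improvement."""
    return capability + 0.5 * (CEILING - capability)

capability = 0.0
for step in range(40):
    new = improve(capability)
    assert new > capability  # every single step is a strict improvement
    capability = new

print(capability)         # approaches 0.8, still short of GOAL
print(capability < GOAL)  # True: "constant advancements", goal never reached
```

Forty consecutive improvements, and the gap to the goal never closes.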
Free_Opinions@feddit.uk 2 days ago
Incremental improvements by definition mean that you’re moving towards something. It might take a long time, but my comment made no claims about the timescale. There are only two plausible scenarios I can think of in which we don’t reach AGI, and they’re both mentioned in my comment.
then_three_more@lemmy.world 2 days ago
That relies on the increments staying the same size. It’s much easier to accelerate from 0 to 60 mph than it is from 670,999,940 mph to c.
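The physics behind that analogy can be sketched numerically (using the standard Lorentz factor; the specific speeds are just for illustration): the same 60 mph increment costs astronomically more energy near the speed of light.

```python
import math

C = 670_616_629.0  # speed of light in mph (approximate)

def gamma(v_mph: float) -> float:
    """Lorentz factor; relativistic kinetic energy per unit mass is (gamma - 1) * c^2."""
    return 1.0 / math.sqrt(1.0 - (v_mph / C) ** 2)

# Energy cost (per unit mass, in units of c^2) of the same 60 mph increment:
low = gamma(60.0) - gamma(0.0)             # 0 -> 60 mph: tiny
high = gamma(C - 60.0) - gamma(C - 120.0)  # a 60 mph step just below c: huge
print(high / low)  # the identical increment costs vastly more energy near c
```

Same-sized increments in speed, wildly different-sized increments in effort, and reaching c itself would take infinite energy.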
Thorry84@feddit.nl 2 days ago
It’s very easy with an incremental improvement tactic to get stuck in a local maximum. You’ve then hit a dead end: every available option leads to a degradation and thus isn’t viable. It isn’t a sure thing that incremental improvements lead to the desired outcome.
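This is the classic failure mode of greedy hill climbing, which can be sketched in a few lines (the fitness landscape here is made up purely for illustration: a low peak at x=2 and a higher one at x=8):

```python
# Greedy hill climbing on a two-peak landscape: from the lower peak,
# every available move makes things worse, so the search stops there
# even though a better optimum exists elsewhere.

def fitness(x: float) -> float:
    # Hypothetical landscape: local maximum at x=2 (height 4),
    # global maximum at x=8 (height 10).
    return max(4 - (x - 2) ** 2, 10 - (x - 8) ** 2)

def hill_climb(x: float, step: float = 0.1) -> float:
    while True:
        best = max((x - step, x, x + step), key=fitness)
        if best == x:   # both neighbours are worse: stuck, dead end
            return x
        x = best

print(hill_climb(2.0))  # stays at the local maximum, x = 2.0
print(hill_climb(7.0))  # climbs to the global maximum, x ~ 8.0
```

Where you end up depends entirely on where you start, which is why “every step was an improvement” doesn’t guarantee the best outcome.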
Free_Opinions@feddit.uk 2 days ago
I simply cannot imagine a situation where we reach a local maximum and stay stuck in it for the rest of human history. There’s always someone trying a new approach; we will not stop trying to improve our technology. Even simply knowing what doesn’t work is a step in the right direction.
davidgro@lemmy.world 1 day ago
I can imagine it really easily for the foreseeable future: all that would need to happen is for the big corporations and well-funded researchers to stick to optimizing LLMs.
Yeah, that’s not the rest of human history (unless the rest of it isn’t very long), but it’s enough to make concerns about AGI someone else’s problem.
Free_Opinions@feddit.uk 1 day ago
Like I said, I’ve made no claims about the timeline. All I’ve said is that incremental improvements will lead to us getting there eventually.
chonglibloodsport@lemmy.world 1 day ago
By saying this, aren’t you assuming that human civilization will last long enough to get there?
Look at the timeline of other species on this planet. Vast numbers of them are long extinct. They never evolved intelligence to our level. Only we did. Yet we know our intelligence is quite limited.
What took biology billions of years we’re attempting to do in a few generations (the project for AI began in the 1950s). Meanwhile the amount of non-renewable energy resources we’re consuming has hit exponential takeoff. Our political systems are straining and stretching to the breaking point.
And of course progress towards AI has not been steady over the project’s history. There was an initial burst of success in the ‘50s, followed by a long AI winter when researchers got stuck in a local maximum. It’s not at all clear to me that we haven’t entered a new local maximum with LLMs.
Do we even have a few more generations left to work on this?
Free_Opinions@feddit.uk 1 day ago
I’m talking about AI development broadly, not just LLMs.
I also listed human extinction as one of the two possible scenarios in which we never reach AGI, the other being that there’s something unique about biological brains that cannot be replicated artificially.