BrikoX@lemmy.zip 19 hours ago
Make an archive copy of the site so that you can mock these morons in 2027.
TootSweet@lemmy.world 18 hours ago
web.archive.org/web/…/ai-2027.com/
archive.is/n6pie
Multiplexer@discuss.tchncs.de 18 hours ago
I think the point is not that it is really going to happen at that pace, but to show that it very well might happen within our lifetime. Also, the authors have adjusted the earliest possible point of a hard-to-stop runaway scenario to 2028, afaik.
Kind of like the atomic Doomsday Clock, which has been oscillating between a quarter to twelve and a minute before twelve over the last decades, depending on active nukes and current politics. It helps to illustrate an abstract but nonetheless real risk with maximum possible impact (annihilation of mankind - not fond of the idea…)
Even if it looks like AI has been hitting some walls for now (which I am glad about) and is overhyped, it might not stay that way. So although AGI seems unlikely at the moment, it is still a good idea to take the possibility into account, perhaps slow down, and make sure we are not recklessly risking our own destruction, which is exactly the authors’ point.
Kind of like how scanning the sky with telescopes and running DART-style asteroid research missions is still a good idea, even though the probability of an extinction-level impact event is low.
BrikoX@lemmy.zip 18 hours ago
Their whole prediction is based on exponential growth continuing indefinitely, which is just impossible. The growth of new models has already stagnated, and all the new improvements are just optimizations and better interface layers. They are basically hard-capped in what they can do, and more powerful hardware can’t solve that.
Something groundbreaking might happen in the future that changes the whole landscape, but it won’t be exponential growth.