
Multiplexer@discuss.tchncs.de ⁨17⁩ ⁨hours⁩ ago

You are probably quite right, which is a good thing, but the authors take that into account themselves:

“Our team’s median timelines range from 2028 to 2032. AI progress may slow down in the 2030s if we don’t have AGI by then.”

They cite an essay on this topic that elaborates on the points you just mentioned:
lesswrong.com/…/slowdown-after-2028-compute-rlvr-…

I will open a champagne bottle if there is no breakthrough in the next few years, because then the pace will slow down significantly.
But it still won't stop, and that is the point.
I myself might not be around any more if AGI arrives in 2077 instead of 2027, but my children will, so I take the possibility seriously.

And pre-2030 is also not completely out of the question. Everyone has been quite surprised at how well LLMs work.
There might be similar surprises in store for the other missing components, like world models and continuous learning, which is a somewhat scary prospect.

And alignment is already a major concern even now: think "Mecha-Hitler", convincing fake videos, and bot armies pushing someone questionable's agenda…
So it seems like a good idea to press for control and regulation, even if the more extreme scenarios are likely decades away, if they happen at all…
