And a third intrinsic problem is that current models seem unable to approach human language capability even with effectively unlimited training data. That's the argument of scaling-law papers from OpenAI in 2020 and DeepMind in 2023, and also of a Stanford paper proposing that these models have no truly emergent behavior, only convergent behavior.
So yeah. Lots of problems.
frezik@midwest.social 1 year ago
If gigantic amounts of capital weren’t available, then the focus would be on improving the models so they don’t need GPU farms powered by nuclear reactors, plus the sum total of every post ever made on the Internet.