Comment on AGI achieved 🤖
jsomae@lemmy.ml 2 days ago
Machine learning algorithm from 2017, scaled up a few orders of magnitude so that it finally more or less works, then repackaged and sold by marketing teams.
SoftestSapphic@lemmy.world 2 days ago
Adding weights doesn’t make it a fundamentally different algorithm.
We have hit a wall where these programs have combed over the totality of the internet and all available datasets and texts in existence.
We’re done here until there’s a fundamentally new approach that isn’t repetitive training.
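To make the "adding weights" point concrete: scaling a transformer up mostly means bigger numbers in a config, not different equations. A rough back-of-envelope sketch using the standard ~12 · n_layers · d_model² approximation for weight count; the configs below are illustrative GPT-2-small-class and GPT-3-class figures, not any model's exact spec:

```python
# "Scaling up" in practice: same transformer equations, bigger config numbers.
small = dict(n_layers=12, d_model=768,   n_heads=12)   # ~100M-parameter class
large = dict(n_layers=96, d_model=12288, n_heads=96)   # ~100B-parameter class

def approx_params(cfg):
    # rough transformer weight count: ~12 * n_layers * d_model^2
    # (attention + MLP matrices; ignores embeddings, biases, layernorms)
    return 12 * cfg["n_layers"] * cfg["d_model"] ** 2

for name, cfg in (("small", small), ("large", large)):
    print(name, f"{approx_params(cfg):.2e}")
# small 8.49e+07, large 1.74e+11: three orders of magnitude apart, same algorithm
```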
jsomae@lemmy.ml 2 days ago
Transformers were pretty novel in 2017; I don’t know if they were really around before that.
Anyway, I’m doubtful that a larger corpus is what’s needed at this point. (Though that said, there’s a lot more text remaining in instant messenger chat logs like Discord that probably has yet to be integrated into LLMs. Not sure.) I’m also doubtful that scaling up is going to keep working, but it wouldn’t surprise me that much if it does keep working for a long while. My guess is that there are some small tweaks to be discovered that really improve things a lot but still basically look like repetitive training, as you put it.
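For context, the 2017 algorithm being referenced is the transformer (Vaswani et al., "Attention Is All You Need"), whose core operation is scaled dot-product attention: softmax(QKᵀ / √d_k) · V. A minimal single-head numpy sketch, with toy sizes and no masking or learned projections (all illustrative assumptions):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention, the core of the 2017 transformer.
    Q, K, V: (seq_len, d_k) arrays; toy single-head sketch, no masking."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # token-pair similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted mix of values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))        # 4 tokens, 8 dims (illustrative sizes)
print(attention(x, x, x).shape)    # self-attention -> (4, 8)
```

Scaling up, in the sense discussed above, mostly means stacking more of these layers with wider matrices and training on more text.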
outhouseperilous@lemmy.dbzer0.com 1 day ago
Okay but have you considered that if we just reduce human intelligence enough, we can still maybe get these things equivalent to human level intelligence, or slightly above?
We have the technology.