Comment on: AGI achieved
SoftestSapphic@lemmy.world 2 weeks ago
Adding weights doesn't make it a fundamentally different algorithm.
We have hit a wall where these programs have combed over the totality of the internet and all available datasets and texts in existence.
We're done here until there's a fundamentally new approach that isn't repetitive training.
Okay, but have you considered that if we just reduce human intelligence enough, we can still maybe get these things equivalent to human-level intelligence, or slightly above?
We have the technology.
jsomae@lemmy.ml 2 weeks ago
Transformers were pretty novel in 2017; I don't know if they were really around before that.
Anyway, I'm doubtful that a larger corpus is what's needed at this point. (Though that said, there's a lot more text remaining in instant messenger chat logs, like Discord, that probably has yet to be integrated into LLMs. Not sure.) I'm also doubtful that scaling up is going to keep working, but it wouldn't surprise me that much if it does keep working for a long while. My guess is that there are some small tweaks to be discovered that really improve things a lot but still basically look like repetitive training, as you put it.