
evranch@lemmy.ca 5 weeks ago

We may not even “need” AGI. The future of machine learning and robotics may well involve multiple wildly varying models working together.

LLMs are already very good at what they do (generating and parsing text and making a passable imitation of understanding it).

We already use them alongside other models. For example, Whisper is a model that recognizes speech: you feed its output to an LLM to interpret it, pass the LLM’s JSON output through a traditional parser to drive a motion control system, then go back to the LLM to generate text for one of the many TTS models so the robot can “tell you what it’s going to do”.
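
A rough sketch of that pipeline in Python, assuming the openai-whisper package for speech-to-text; the LLM, motion controller, and TTS calls are placeholder stubs with made-up names, since those parts depend entirely on your setup:

```python
import json
import whisper  # openai-whisper: pip install openai-whisper

# Hypothetical stubs for the setup-specific pieces; not a specific library's API.
def call_llm(prompt: str) -> str:
    """Send a prompt to whatever LLM you run (local or API); returns its text reply."""
    raise NotImplementedError

def send_to_motion_controller(action: dict) -> None:
    """Hand the parsed action to the robot's motion control system."""
    raise NotImplementedError

def speak(text: str) -> None:
    """Feed text to any TTS model (Piper, Coqui, etc.)."""
    raise NotImplementedError

# 1. Speech -> text with Whisper
stt = whisper.load_model("base")
transcript = stt.transcribe("command.wav")["text"]

# 2. Text -> structured intent: ask the LLM to answer only in JSON
action_json = call_llm(
    'Convert this request into JSON like {"action": "...", "target": "..."}: '
    + transcript
)

# 3. JSON -> a traditional parser feeds the motion control system
action = json.loads(action_json)
send_to_motion_controller(action)

# 4. Back through the LLM so the robot can say what it's about to do
speak(call_llm(f"In one sentence, tell the user you will now {action['action']}."))
```

Each stage is a separate, swappable model; the only “glue” is plain text and JSON.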

Put it in a humanoid shell or a Spot dog and you have a helpful robot that looks a lot like AGI to the user. Nobody needs to know that it’s just 4 different machine learning algorithms in a trenchcoat.
