For “impressive” general reasoning and conversation, these LLMs currently require pretty beefy hardware. You’re either lugging a GPU around or calling out to an API.
elfin8er@lemm.ee 1 year ago
Aren’t these current personal assistants already relying on API calls for their responses?
GBU_28@lemm.ee 1 year ago
Like Siri? Yes, though my point was specifically about the hardware needed to run LLMs.