There’s absolutely a push for specialized hardware — look up the company called Groq!
Comment on Proton’s Lumo AI chatbot: not end-to-end encrypted, not open source
cley_faye@lemmy.world 1 day ago
It’s probably different. The crypto bubble never actually produced much that was useful.
Now, I say that with a HUGE grain of salt, but there are decent applications for LLMs (let’s not call them AI). Unfortunately, those uses aren’t really on the radar of any of the businesses pouring tons of money into their “AI” offerings.
I kinda hope we’ll get better LLM hardware for running models privately, using ethically sourced ones, because some of this stuff is really neat. But that’s not the push they’re making right now. Fortunately, we can already sort of do that, although the provenance of many publicly available models is currently… not that great.
Zos_Kia@lemmynsfw.com 1 day ago
KingRandomGuy@lemmy.world 1 day ago
Yes, but at this point most specialized hardware only really works for inference. Most players train on NVIDIA GPUs, with the primary exception of Google, who has their own TPUs; but even those have limitations compared to GPUs (certain kinds of memory accesses are intractably slow, making them a poor fit for methods like Instant NGP).
GPUs are already quite good, especially with things like tensor cores.
KumaSudosa@feddit.dk 1 day ago
LLMs are absolutely amazing for a lot of things. I use them at work all the time to check code blocks or to remember syntax. They are NOT and should NOT be your main source of general information, and we collectively have to realise how problematic and energy-hungry they are.