They’re Ryzen processors with “AI” accelerators, so an LLM can definitely run locally on one of those. Other options are available, like lower-powered ARM chipsets (RK3588-based boards) with accelerators that might have half the performance but are far cheaper to run; that should be enough for a basic LLM.
StrawberryPigtails@lemmy.sdf.org 4 months ago
It’s doable. Stick to the 7B models and it should work for the most part, but don’t expect anything remotely approaching reasonable performance. It’s going to be slow. But it can work.
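To see why 7B is about the practical ceiling on modest hardware, here’s a rough back-of-the-envelope sketch (my own illustration, not from the thread) of the memory needed just for the weights at common quantization levels; the KV cache and runtime overhead add more on top:

```python
def model_size_gb(params_billion, bits_per_weight):
    """Approximate memory for the weights alone, in GB."""
    return params_billion * bits_per_weight / 8

# A 7B model at common quantization levels:
for name, bits in [("fp16", 16), ("Q8_0", 8), ("Q4_0", 4)]:
    print(f"{name}: ~{model_size_gb(7, bits):.1f} GB")
# fp16: ~14.0 GB
# Q8_0: ~7.0 GB
# Q4_0: ~3.5 GB
```

So a 4-bit 7B model squeezes into 8 GB of RAM, which is why that size keeps coming up as the realistic limit.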
To get a somewhat usable experience you kinda need an Nvidia graphics card or an AI accelerator.
1rre@discuss.tchncs.de 4 months ago
Intel Arc also works surprisingly well and consistently for ML if you use llama.cpp for LLMs or AUTOMATIC1111 for Stable Diffusion. In terms of usability it’s definitely much closer to Nvidia than it is to AMD.
TheBigBrother@lemmy.world 4 months ago
Would you suggest the K9 instead of the K8?
TheBigBrother@lemmy.world 4 months ago
I need it to make academic papers pass anti-AI detection systems; what do you recommend for that task? It’s for business, so I need reasonably good performance but nothing extravagant…
entropicdrift@lemmy.sdf.org 4 months ago
That’s not how it works, sorry.
hperrin@lemmy.world 4 months ago
Maybe just write the academic works yourself, then they should pass.
MangoPenguin@lemmy.blahaj.zone 4 months ago
Something with a GPU that’s good for LLMs would be best.
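As a rough rule of thumb (my own heuristic, not something stated in the thread), a “good for LLMs” card is one whose VRAM fits the quantized weights plus a cushion for the KV cache and runtime buffers:

```python
def fits_in_vram(params_billion, bits_per_weight, vram_gb, overhead_gb=1.5):
    """Rough check: quantized weights plus a fixed overhead allowance
    (KV cache, runtime buffers) must fit within the card's VRAM."""
    weights_gb = params_billion * bits_per_weight / 8
    return weights_gb + overhead_gb <= vram_gb

print(fits_in_vram(7, 4, 8))    # 7B at 4-bit on an 8 GB card  -> True
print(fits_in_vram(13, 8, 12))  # 13B at 8-bit on a 12 GB card -> False
```

The `overhead_gb` figure is a guess; real overhead depends on context length and the inference runtime.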