Comment on "Consumer hardware is no longer a priority for manufacturers"

WhyJiffie@sh.itjust.works · 1 day ago

> You can’t just do “ollama run” and expect good performance, as the local LLM scene is finicky and highly experimental. You have to compile forks and PRs, learn about sampling and chat formatting, perplexity and KL divergence, about quantization and MoEs and benchmarking. Everything is moving too fast, and is too performance sensitive, to make it that easy, unfortunately.
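
(For context on one of the metrics named above: KL divergence is a common way to measure how far a quantized model's next-token distribution drifts from the full-precision model's. Below is a minimal Python sketch of the computation itself; the two probability vectors are made-up placeholders, not real model outputs.)

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(P || Q) in nats: how much is lost when the quantized
    distribution q stands in for the full-precision distribution p."""
    return sum(pi * math.log(pi / max(qi, eps))
               for pi, qi in zip(p, q) if pi > 0)

# Hypothetical next-token probabilities over a tiny 4-token vocabulary:
full_precision = [0.70, 0.20, 0.07, 0.03]   # e.g. the fp16 model
quantized      = [0.66, 0.22, 0.08, 0.04]   # e.g. a 4-bit quant of it

print(f"KL(P || Q) = {kl_divergence(full_precision, quantized):.6f} nats")
# Averaged over many tokens of a test corpus, this is the kind of number
# people quote when comparing quantization schemes.
```
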

How do you find the time to figure all this out and stay up to date? Do you do this at work?
