Comment on [deleted]

Ziggurat@sh.itjust.works 11 months ago

Have you tried GPT4All gpt4all.io/index.html ? It runs on CPU, so it’s a bit slow, but it’s a plug-and-play, easy-to-use way to run various LLMs locally. That said, LLMs are huge and perform much better on a GPU, provided the GPU has enough memory. Here is the trap: how much do you want to spend on a GPU?
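
For what it’s worth, here’s a minimal sketch of what local inference looks like with GPT4All’s Python bindings (the gpt4all pip package); the model filename and prompt are just examples, swap in whichever model you download:

```python
# pip install gpt4all
# Minimal sketch: the model file is fetched on first use; the
# filename below is an example -- any GGUF model GPT4All offers works.
from gpt4all import GPT4All

model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")  # CPU by default
# If you have a big enough GPU, you can try device="gpu" in the
# constructor to offload inference instead of running on CPU.

with model.chat_session():
    reply = model.generate("Why run an LLM locally?", max_tokens=256)
    print(reply)
```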
