Comment on Hello GPT-4o

Dyf_Tfh@lemmy.sdf.org ⁨4⁩ ⁨months⁩ ago

If you didn’t already know, you can run some small models locally with an entry-level GPU.

For example, I can run Llama 3 8B or Mistral 7B on a GTX 1060 3GB with Ollama. It’s about as bad as GPT-3.5 Turbo, so overall mildly useful.

Although there is quite a bit of controversy over what counts as an “open source” model; most are only “open weight”.
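For anyone curious, getting one of these running is just a couple of commands once Ollama is installed. A minimal sketch (the model tags below are the ones Ollama publishes; on a 3 GB card the default 4-bit quantized builds will partially offload to CPU, so expect slow generation):

```shell
# Pull a quantized build of Llama 3 8B, then chat with it.
ollama pull llama3:8b
ollama run llama3:8b "Explain what a mutex is in one sentence."

# Mistral 7B works the same way:
ollama pull mistral:7b
ollama run mistral:7b
```

`ollama run` starts an interactive session if you omit the prompt, and the daemon also exposes a local HTTP API on port 11434 if you’d rather script it.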
