Comment on Hello GPT-4o
abhibeckert@lemmy.world 6 months ago

> you can run locally some small models
Emphasis on “small” models. The large ones need about $80,000 in RAM.
bamboo@lemm.ee 6 months ago
Llama 2 70B can run on a specced-out current-gen MacBook Pro. Not cheap hardware in any sense, but it isn’t a large data center cluster.
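
For the curious, here’s a minimal sketch of what that looks like in practice, assuming the llama-cpp-python bindings and a 4-bit GGUF quantization of Llama 2 70B (the file name below is hypothetical). At ~4 bits per weight, the 70B parameters alone take roughly 40 GB, which is why they fit in a maxed-out MacBook Pro’s unified memory but not on a typical consumer machine:

```python
# Minimal sketch: local inference with a quantized Llama 2 70B via
# llama-cpp-python. Model file name is a hypothetical example; you would
# supply your own GGUF quantization (~40 GB at Q4 for the 70B model).
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-70b-chat.Q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=-1,  # offload all layers to the GPU (Metal on Apple Silicon)
    n_ctx=4096,       # context window size
)

out = llm("Q: Name three uses for a local LLM. A:", max_tokens=128)
print(out["choices"][0]["text"])
```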