Comment on The first GPT-4-class AI model anyone can download has arrived: Llama 405B
abcdqfr@lemmy.world 3 months ago
Wake me up when it works offline “The Llama 3.1 models are available for download through Meta’s own website and on Hugging Face. They both require providing contact information and agreeing to a license and an acceptable use policy, which means that Meta can technically legally pull the rug out from under your use of Llama 3.1 or its outputs at any time.”
It’s available through ollama already. I am running the 8B model on my little server with its 3070 as of right now.
It’s really impressive for an 8B model.
Intriguing. Is that an 8gb card? Might have to try this after all
Yup, 8GB card
It’s my old one from the gaming PC after switching to AMD.
It now serves as my little AI hub and whisper server for Home Assistant.
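For anyone curious what that kind of setup looks like: a minimal sketch of a local whisper server for Home Assistant, assuming the `rhasspy/wyoming-whisper` Docker image and Home Assistant’s Wyoming integration (the image name, port, and flags here are assumptions — check the current docs for your version):

```shell
# Hypothetical example: run a Whisper speech-to-text server that speaks
# the Wyoming protocol, which Home Assistant can use as an STT backend.
docker run -d --name whisper \
  -p 10300:10300 \
  rhasspy/wyoming-whisper \
  --model tiny-int8 --language en

# Then add the "Wyoming Protocol" integration in Home Assistant and
# point it at <server-ip>:10300 to use this as speech-to-text.
```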
What the heck is whisper? I’ve been fooling around with hass for ages, haven’t heard of it even after at least two minutes of searching. Is it OpenAI-affiliated hardware?
I’m running 3.1 8B as we speak via ollama, totally offline, and gave my info to nobody.
Through meta…
That’s where I stop caring
admin@lemmy.my-box.dev 3 months ago
It works offline. When you use it with ollama, you don’t have to register or agree to anything.
Once you have downloaded it, it will keep on working; Meta can’t shut it down.
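The offline workflow really is just a couple of commands — a sketch assuming ollama is installed and that `llama3.1:8b` is the current tag for the 8B model (tags can change between releases):

```shell
# One-time download of the quantized 8B model; no account or
# license click-through required on the ollama side.
ollama pull llama3.1:8b

# Chat with it locally; after the pull, no network access is needed.
ollama run llama3.1:8b
```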
MonkderVierte@lemmy.ml 3 months ago
Well, yes and no. See the other comment: 64 GB of RAM at the lowest setting, and the 70B runs slowly on a modern CPU with 32 GB of RAM.
admin@lemmy.my-box.dev 3 months ago
Oh, sure. For the 405B model it’s absolutely infeasible to host it yourself. But for the smaller models (70B and 8B), it can work.
I was mostly replying to the part where they claimed Meta can take it away from you at any point - which is simply not true.