The first GPT-4-class AI model anyone can download has arrived: Llama 405B
Submitted 3 months ago by Wilshire@lemmy.world to technology@lemmy.world
Comments
abcdqfr@lemmy.world 3 months ago
Wake me up when it works offline. “The Llama 3.1 models are available for download through Meta’s own website and on Hugging Face. They both require providing contact information and agreeing to a license and an acceptable use policy, which means that Meta can technically legally pull the rug out from under your use of Llama 3.1 or its outputs at any time.”
admin@lemmy.my-box.dev 3 months ago
It works offline. When you use it with ollama, you don’t have to register or agree to anything.
Once you have downloaded it, it will keep on working. Meta can’t shut it down.
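For the curious, here’s a minimal sketch of that fully local workflow using ollama’s Python client. The model tag and prompt are just example placeholders, and it assumes the local ollama daemon is running and the model was already pulled (e.g. with `ollama pull llama3.1:8b`):

```python
# Runs entirely against the local ollama daemon; no account needed, and no
# network calls to Meta once the weights are on disk.
import ollama

response = ollama.chat(
    model="llama3.1:8b",  # example tag; 70b/405b work the same if you have the hardware
    messages=[{"role": "user", "content": "Say hi in one sentence."}],
)
print(response["message"]["content"])
```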
MonkderVierte@lemmy.ml 3 months ago
Well, yes and no. See the other comment: 64 GB of RAM at the lowest setting, and even the 70B runs slowly on a modern CPU with 32 GB of RAM.
RandomLegend@lemmy.dbzer0.com 3 months ago
It’s available through ollama already. I am running the 8B model on my little server with its 3070 as of right now.
It’s really impressive for an 8B model.
abcdqfr@lemmy.world 3 months ago
Intriguing. Is that an 8 GB card? Might have to try this after all.
Kuvwert@lemm.ee 3 months ago
I’m running 3.1 8B as we speak via ollama, totally offline, and gave my info to nobody.
Fiivemacs@lemmy.ca 3 months ago
Through meta…
That’s where I stop caring
hperrin@lemmy.world 3 months ago
Yo, this is big. Both in that it’s momentous, and in that, holy shit, that’s a lot of parameters. How many GB is this model? I’d be able to run it if I had a few extra $10k bills lying around to buy the required hardware.
Ripper@lemmy.world 3 months ago
It’s around 800 GB.
2001zhaozhao@sh.itjust.works 3 months ago
Time to buy a Threadripper and 800 GB of RAM so that I can run this model at 1 token per hour.
i_am_a_cardboard_box@lemmy.world 3 months ago
Kind of petty of Zuck not to roll it out in Europe due to the Digital Services Act… But also kind of weird, since it’s open source? What’s stopping anyone from downloading the model and creating a web UI for European users?
obbeel@lemmy.eco.br 3 months ago
That looks good on paper, but while I find ChatGPT good for encouraging critical thinking, I’ve found Meta’s products (Facebook and Instagram) to be sources of disinformation. That makes me have reservations about Meta’s intentions with LLMs. As the article says, the model comes pre-trained, so it’s mostly made up of information gathered by Meta.
BreadstickNinja@lemmy.world 3 months ago
Neither Meta nor anyone else is hand-curating their dataset. The fact that Facebook is full of grandparents sharing disinformation doesn’t impact what’s in their model.
But all LLMs are going to have accuracy issues because they’re 1) trained on text written by humans who themselves are inaccurate and 2) designed to choose tokens based on probability rather than any internal logic as to whether an answer is factual.
All LLMs are full of shit. That doesn’t mean they’re not fun or even useful in some applications, but you shouldn’t trust anything they write.
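To make that second point concrete, here’s a toy sketch of probability-based token selection (the logits are invented numbers, not any real model’s output):

```python
# An LLM assigns a score (logit) to every candidate next token, softmaxes
# the scores into probabilities, and samples one. Nothing in this step
# checks whether the resulting text is factually true.
import math
import random

logits = {"Paris": 5.1, "Lyon": 2.3, "Berlin": 1.7}  # made-up scores

total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}  # softmax

next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", next_token)
```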
admin@lemmy.my-box.dev 3 months ago
Technically correct ™
Before you get your hopes up: Anyone can download it, but very few will be able to actually run it.
chiisana@lemmy.chiisana.net 3 months ago
What are the resource requirements for the 405B model? I did some digging but couldn’t find any documentation during my cursory search.
modeler@lemmy.world 3 months ago
Typically you need about 1 GB of graphics RAM for each billion parameters (i.e. one byte per parameter, which assumes 8-bit precision). This is a 405B-parameter model. Ouch.
Blaster_M@lemmy.world 3 months ago
As a general rule of thumb, you need about 1 GB per 1B parameters (one byte per parameter), so you’re looking at about 405 GB for the model at 8-bit precision.
Quantization can compress it down to 1/2 or 1/4 of that, but it “makes it stupider” as a result.
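As a back-of-the-envelope check on those numbers (this counts the weights only; the KV cache and runtime overhead push real requirements higher):

```python
# Rough memory needed just to hold the weights of a 405B-parameter model.
PARAMS_BILLION = 405

for precision, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    gigabytes = PARAMS_BILLION * bytes_per_param  # 1e9 params x N bytes ~= N GB per billion
    print(f"{precision}: ~{gigabytes:.0f} GB")
# fp16: ~810 GB, int8: ~405 GB, int4: ~202 GB
```

Which is why both figures show up in this thread: ~800 GB is the unquantized fp16 weights, ~405 GB is the 8-bit version.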
coffee_with_cream@sh.itjust.works 3 months ago
This would probably run on an A6000, right?
5redie8@sh.itjust.works 3 months ago
“an order of magnitude” still feels like an understatement LOL
My 35B models come out at like Morse code speed on my 7800 XT, but at least it does work?
LavenderDay3544@lemmy.world 3 months ago
When the RTX 9090 Ti comes out, anyone who can afford it will be able to run it.
Contravariant@lemmy.world 3 months ago
That doesn’t sound like much of a change from the situation right now.
bitfucker@programming.dev 3 months ago
Same with OSM data. Everyone can download the whole earth, but serving it and providing routing/path planning at scale takes a whole other skill set and set of resources. It’s a good thing that they’re willing to open source their model in the first place.