Comment on (A)lbert e(I)nstein
lmuel@sopuli.xyz 6 days ago
Tbf I'm not sure how much it helps them if you're using the LLM without an account
Whelks_chance@lemmy.world 6 days ago
Market share. They can show the usage figures to investors and ask for more cash
GeneralDingus@lemmy.cafe 6 days ago
Not if you run it locally!
ptu@sopuli.xyz 6 days ago
So what’s the ideal setup then?
ArsonButCute@lemmy.dbzer0.com 5 days ago
A relatively recent gaming-type setup with local-ai or llama.cpp is what I'd recommend.
I do most of my AI stuff on an RTX 3070, but I also have a Ryzen 7 3800X with 64 GB of RAM for heavy models where I don't much care how long it takes but need the high parameter count for whatever reason, for example MoE and agentic behavior.
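For reference, a minimal sketch of that kind of setup using llama-cpp-python (the Python bindings for llama.cpp). The model path, layer count, and context size here are placeholders, not a recommended config; tune n_gpu_layers to whatever fits in your card's VRAM (an RTX 3070 has 8 GB), or set it to 0 to run entirely from CPU RAM for the heavy models.

```python
# Minimal local-inference sketch with llama-cpp-python.
# Assumes you've downloaded a quantized GGUF model; the path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/your-model-Q4_K_M.gguf",  # placeholder GGUF file
    n_gpu_layers=30,  # layers offloaded to the GPU; 0 = CPU-only (uses system RAM)
    n_ctx=4096,       # context window size
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain MoE models in one sentence."}],
    max_tokens=128,
)
print(output["choices"][0]["message"]["content"])
```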
GeneralDingus@lemmy.cafe 5 days ago
I’m not sure what you mean by ideal. Like, run any model you ever wanted? Probably the latest ai nvidia chips.
But you can get away with a lot less for smaller models. I have the amd mid range card from 4 years ago (i forget the model at the top of my head) and can run text, 8B sized, models without issue.
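A rough back-of-the-envelope shows why an 8B model fits on an older mid-range card. This assumes roughly 4.5 bits per weight for a Q4_K_M-style quant; the exact figure varies by quantization scheme, and the overhead allowance is a guess:

```python
# Rough estimate of memory needed for a quantized 8B model.
params = 8e9            # 8B parameters
bits_per_weight = 4.5   # ~Q4_K_M average; varies by quant scheme
overhead_gb = 1.0       # rough allowance for KV cache and buffers

weights_gb = params * bits_per_weight / 8 / 1e9
print(f"~{weights_gb:.1f} GB weights + ~{overhead_gb} GB overhead")
# ~4.5 GB weights + ~1.0 GB overhead -> fits in 8 GB of VRAM
```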