Go self-hosted,
So yours and another comment I saw today got me to dust off an old Docker container I was playing with a few months ago to run deepseek-r1:8b on my server's Intel A750 GPU with 8 GB of VRAM. Not exactly top-of-the-line, but not bad.
I knew it would be slow and not as good as ChatGPT or whatever, which I guess I can live with. I did ask it to write some example Rust code today, which I hadn't even thought to try, and it worked.
But I also asked it to describe the characters in a popular TV show, and it got a ton of details wrong.
8B is about the largest parameter count I can fit on my card. How do you propose someone in my situation run an LLM locally? Can you suggest some better models?
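For context, here's roughly how I'm poking at it from a script. This is just a minimal sketch assuming the container is running Ollama on its default port 11434 and that deepseek-r1:8b is already pulled; the prompt is a placeholder.

```python
import requests

# Minimal sketch: send one prompt to a locally hosted model via the
# Ollama HTTP API. Assumes Ollama is the runtime behind the container,
# listening on the default port 11434, with deepseek-r1:8b already pulled.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-r1:8b",
        "prompt": "Write a small example Rust function that reverses a string.",
        "stream": False,  # return the full completion at once instead of streaming
    },
    timeout=600,  # an 8B model on an 8 GB card can take a while
)
resp.raise_for_status()
print(resp.json()["response"])
```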
mushroommunk@lemmy.today 5 days ago
LLMs are already shit. Going local is still burning the world just to run a glorified text production machine
suspicious_hyperlink@lemmy.today 5 days ago
Having just finished getting an entire front end built for my website, I disagree. A few years ago I would have offshored this job to devs in a third-world country. Now AI can do the same thing for cents, without having to wait a few days for the initial results and another day or two for each revision
mushroommunk@lemmy.today 5 days ago
The fact that you see nothing wrong with anything you said really speaks volumes about the inhumanity inherent in using “AI”.
suspicious_hyperlink@lemmy.today 5 days ago
Please enlighten me. I am working on systems solving real-world issues, and now I can ship my solutions faster, at lower cost. Sounds like a win-win for everyone involved except the offshore employees who have to look for new gigs now
TherapyGary@lemmy.dbzer0.com 5 days ago
[image]
angstylittlecatboy@reddthat.com 4 days ago
Do local LLMs really consume that much more energy than a task like playing a video game?