PumpkinEscobar
@PumpkinEscobar@lemmy.world
- Comment on Worst amusement park ever. 4 weeks ago:
If you go, definitely stay at Four Seasons Total Landscaping next door, best accommodations around and their convention spaces are great for any press conferences you might need to hastily put together.
- Comment on OpenAI to remove non-profit control and give Sam Altman equity 1 month ago:
First, a caveat/warning: you’ll need a beefy GPU to run the larger models, though there are some smaller models that perform pretty well.
Adding a medium amount of extra information for you or anyone else who might want to get into running models locally.
Tools
- Ollama - great app for downloading/managing/running models locally (quick API sketch after this list)
- OpenWebUI - A web app that provides a UI like the ChatGPT web app, but can use local models
- continue.dev - A VS Code extension that can use Ollama to give you a GitHub Copilot-like AI assistant running against a local model (it can also connect to Anthropic’s Claude, etc…)
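If it helps, here’s a rough sketch of what talking to Ollama’s local HTTP API looks like; this is roughly what tools like OpenWebUI and continue.dev do under the hood. It assumes the default port (11434) and that you’ve already pulled the model, e.g. with `ollama pull llama3.1`:

```python
# Minimal sketch: send a prompt to a locally running Ollama server over HTTP.
# Assumes the default port and that the model has already been pulled.
import json
import urllib.request

def ask_local_model(prompt: str, model: str = "llama3.1") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,   # one JSON response instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_model("Explain quantization in one paragraph."))
```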
Models
If you look at ollama.com/library?sort=featured you can browse the featured models.
Model size is measured by parameter count. Generally, higher-parameter models are better (more “smart”, more accurate), but it’s very challenging/slow to run anything over 25b parameters on consumer GPUs. I tend to find 8-13b parameter models are a sort of sweet spot; the 1-4b parameter models are meant more for really low-power devices. They’ll give you OK results for simple requests and summarizing, but they’re not going to wow you.
If you look at the ‘tags’ for the models listed below, you’ll see things like 8b-instruct-q8_0 or 8b-instruct-q4_0. The q part refers to quantization, i.e. shrinking/compressing a model, and the number after it is roughly how aggressively it was compressed. Note the size of each tag and how it shrinks as the quantization gets more aggressive (smaller numbers). You can roughly think of that size as “how much video RAM do I need to run this model” (rough math in the sketch after the model list). Models can run partially or even fully on a CPU, but that’s much slower. Ollama doesn’t yet support the new NPUs found in recent laptops/processors, but work is happening there.
- Llama 3.1 - The 8b instruct model is pretty good, with decent speed and good quality. This is a good “default” model to use
- Llama 3.2 - This model was just released yesterday. I’m only seeing the 1b and 3b models right now. They’ve changed the 8b model to 11b, and I’m assuming the 11b model is going to be my new go-to when it’s available.
- Deepseek Coder v2 - A great coding assistant model
- Command-r - This is a more niche model, mainly useful for RAG. It’s only available in a 35b parameter model, so not all that feasible to run locally
- Mistral small - A really good model, in the ballpark of Llama. I haven’t had quite as much luck with it as with Llama, but it is good, and I just saw that a new version was released 8 days ago; I’ll need to check it out again
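To make the quantization/VRAM point above concrete, here’s a back-of-the-envelope sketch. The overhead factor is my rough guess to cover context/KV cache, not an exact figure:

```python
# Rough VRAM estimate: parameter count * bits per weight, plus some headroom
# for context/KV cache. The overhead factor is a guess, not an exact number.
def approx_vram_gb(params_billions: float, quant_bits: int, overhead: float = 1.2) -> float:
    weight_bytes = params_billions * 1e9 * quant_bits / 8
    return weight_bytes * overhead / 1e9

for bits in (16, 8, 4):
    print(f"8b model at {bits}-bit: ~{approx_vram_gb(8, bits):.1f} GB")
# Roughly: q8 lands around 9-10 GB and q4 around 5 GB, which is why q4 tags of
# 8b models fit on a lot of consumer GPUs while q8 is a tighter squeeze.
```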
- Comment on OpenAI to remove non-profit control and give Sam Altman equity 1 month ago:
It’s a good thing that real open source models are getting good enough to compete with or exceed OpenAI.
- Comment on Zelda-Inspired Plucky Squire Shows What Happens When A Game Doesn't Trust Its Players 1 month ago:
I like the game, but I agree with the over-tutorialed complaints. They have two difficulty modes; I wish only story mode got all the handholding. I think there are enough obvious indicators to get you through all the game mechanics.
- Comment on Trump Airpods 4 months ago:
MAWP - Archer
- Comment on [deleted] 4 months ago:
Taking Ollama for instance, either the whole model runs in VRAM and compute is done on the GPU, or it runs in system RAM and compute is done on the CPU. Running models on the CPU is horribly slow; you won’t want to do it for large models.
LM Studio and others allow you to run part of the model on the GPU and part on the CPU, splitting the memory requirements, but that’s still pretty slow.
Even the smaller 7b parameter models run pretty slow on the CPU, and the huge models are orders of magnitude slower.
So technically more system RAM will let you run some larger models, but you’ll quickly figure out you just don’t want to do it.
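For what it’s worth, recent Ollama versions will show you how a loaded model is split (`ollama ps` on the command line). Here’s a rough sketch of pulling the same information from its local API; I’m assuming the default port and the field names I’ve seen in the /api/ps response, so treat it as illustrative:

```python
# Sketch: ask a running Ollama server how its loaded models are split between
# GPU VRAM and system RAM. Assumes the default port and that /api/ps returns
# "size" and "size_vram" fields for each model (treat as illustrative).
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/ps") as resp:
    models = json.loads(resp.read()).get("models", [])

for m in models:
    total = m.get("size", 0)
    vram = m.get("size_vram", 0)
    pct_gpu = 100 * vram / total if total else 0
    print(f"{m.get('name')}: {pct_gpu:.0f}% in VRAM ({vram / 1e9:.1f} of {total / 1e9:.1f} GB)")
```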
- Comment on don't use ladybird browser lol 4 months ago:
FWIW they didn’t merge it; they closed the PR without merging. Link to the line that still exists on master.
The recent comments are from the announcement of the Ladybird browser project, which is forked from some browser code from SerenityOS. I guess people are digging into who wrote the code.
Not arguing that the new comments on the PR are good/bad or anything, just a bit of context.
- Comment on Microsoft has gone too far: including a Game Pass ad in the Settings app ushers in a whole new age of ridiculous over-advertising 4 months ago:
Been 100% linux for like 6-9 months now, these stories make me thankful for finally making the switch.
I’ve tried to make the switch 3-4 times in the past and was stopped by 2 main things:
- Drivers / Laptops were tough to get set up
- Gaming
The experience was so much better this time and I really have no regrets. I don’t imagine I’ll ever run Windows again outside of a VM
- Comment on Elon Musk has another secret child with exec at his brain implant company 4 months ago:
Elon “Nick Cannon” Musk
- Comment on Star dates – is one day equal to 0,07 SD in TNG? 6 months ago:
Yeah, some shows did have their own consistent-ish systems, but I think some shows used a system that seemed to be relative to the center of the solar system, others from the perspective of the ship (which makes more sense to me, like naval bearings) - memory-alpha.fandom.com/wiki/Heading.
It was a quick lookup from a long time ago, I was working on a 3d space game and was curious if ST had a consistent model I could just use.
- Comment on Star dates – is one day equal to 0,07 SD in TNG? 6 months ago:
The headings/bearings they use are all over the place too. I remember looking it up, and it feels like the writers just picked whatever numbers best fit the flow/cadence of dialogue they were looking for.
- Comment on cute little devils 7 months ago:
Went to one of the sanctuaries around Hobart, their odor is pretty rough. I saw it described as smelling like a smelly wet dog, and it’s definitely like that, but combined with like a “skunk, but one that evolved on the other side of the planet” smell.
- Comment on Mini PC with Intel N100 and 6 x 2 5 GbE LAN ports 7 months ago:
Yeah, I meant the website title, but in truth it’s tough to tell what’s astroturfing bots vs. people here. And honestly, these things with six 2.5GbE ports are plenty impressive; not sure why the website felt the need to goose it like they did.
- Comment on Mini PC with Intel N100 and 6 x 2 5 GbE LAN ports 7 months ago:
2.5 GbE NOT 25. That’s the funkiest clickbait bullshit I’ve seen in a while.
- Comment on Roku explores taking over HDMI feeds with ads 7 months ago:
I like that. If there were a site that did something like The Razzies for movies, but for technology enshittification, I would definitely watch, and probably follow a blog if it was done well.
- Comment on Security footage of Boeing repair before door-plug blowout was overwritten 8 months ago:
Woopsie!
South Park “we’re sorry” image goes here
- Comment on Delicious. 8 months ago:
If that’s true, that dude had the worst case of the munchies ever
- Comment on Delicious. 8 months ago:
TIL - en.wikipedia.org/wiki/Bath_salts_(drug).
I just assumed there was some dose of bath salts you could take that would get you high (and hungry for faces) but not kill you, like don’t people use tractor starter fluid to roofie people…
But happy I now know
- Comment on Delicious. 8 months ago:
I can’t see bath salts now without immediately thinking about the Florida man eats faces while high on bath salts story. Now I’m imagining tiny children eating faces while high on bath salts. Thanks a lot, internet.
- Comment on The New Audi A3 Is A Mess With In-Car Subscriptions 8 months ago:
dumbest fucking timeline. A subscription for the physical thing you just paid $40k for.
- Comment on Average website visit in 2024 9 months ago:
Yeah, on too many sites I’ve done 3+ captchas and they still won’t let me in, and not even the ones where one cell has either a shadow or a sliver of a bike tire. And there are reports that bots are now better at passing these than people. I won’t use a site with a pick-the-squares captcha anymore.
Clicking a slider is the most I’ll do. If anyone needs me, I’ll be over here hanging out with the bots that are too shitty to pass a captcha.
- Comment on Advice on encrypted storage 9 months ago:
TPM & sbctl. Look into sbctl for secure boot if you’re not on something that uses the signed shim like ubuntu. I know some hate secure boot but storing the unlock key in tpm is at least much more secure than having the key sitting on a usb drive
Tang - network based unlock. If you have a separate raspberry pi or something you can set it up as a tang server. You’ll want that thing encrypted too, can set that up to require manual unlock so if someone boosts your servers the tang server never comes up, storage server won’t either
Or just manually unlock the server with a password every boot?
That’s roughly my prioritized/preferred list
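If it helps, here’s a rough sketch of how those first two options might be wired up. The clevis tool isn’t named above, but it’s the usual front-end for both the TPM2 and tang approaches; the device path, tang URL, and PCR choice are placeholders for your own setup:

```python
# Sketch: bind a LUKS volume for automatic unlock via clevis. The device path,
# tang URL, and PCR selection are placeholders; clevis will prompt for an
# existing LUKS passphrase when binding.
import subprocess

LUKS_DEVICE = "/dev/nvme0n1p2"  # placeholder: your encrypted partition

def bind_tpm2(device: str) -> None:
    """Seal an unlock key into the TPM, tied to Secure Boot state (PCR 7)."""
    subprocess.run(
        ["clevis", "luks", "bind", "-d", device, "tpm2", '{"pcr_ids":"7"}'],
        check=True,
    )

def bind_tang(device: str, url: str = "http://tang.example.lan:7500") -> None:
    """Bind unlock to a tang server on the LAN (placeholder URL)."""
    subprocess.run(
        ["clevis", "luks", "bind", "-d", device, "tang", f'{{"url":"{url}"}}'],
        check=True,
    )
```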
- Comment on HP raising Instant Ink subscription pricing significantly 11 months ago:
Holy shit, .15 euros per page? Why not just run to der Kinkos? I haven’t checked but I imagine it’s cheaper there. I get the convenience of having a printer at home but this is like if every cup of coffee you make at home cost you the Starbucks $8.25.
- Comment on Banana Pi BPI-M7 - More Reasons to Avoid the Raspberry Pi 11 months ago:
It’s the same. I picked up an Orange Pi 5 Plus on sale and didn’t even think about the kernel and module driver situation. It’s rough. Joshua-Riek/ubuntu-rockchip and the other contributors do great work to un-fuck the situation and get a non-screwy Ubuntu install cobbled together, but in the comments on issues even he gives off a “well, the situation is shit” sort of vibe.
I won’t buy another Rockchip SBC.
- Comment on Why you shouldn't use Brave Browser 1 year ago:
What’s Mozilla doing?
- Comment on What issue is this? Something with the z-seam? 1 year ago:
I can’t quite tell, but it looks to me like it’s doing a turn-around there? Like instead of doing a complete circle for the inner wall, it’s turning to do a continuous path for the rest of the print.
What slicer are you using? I only know Cura and have never had this problem, but I’d try doing less than 100% full rate, which seems to have the indirect effect of causing it to prefer cleaner paths for walls. I’m sure there are some other wall-related settings to play with.