afk_strats
@afk_strats@lemmy.world
- Comment on ROCm on older generation AMD gpu 19 hours ago:
ROCm on my 7900xt is solid. ROCm on my MI50s (Vega) is a NIGHTMARE
- Comment on Researchers figured out how to run a 120-billion parameter model across four regular desktop PCs 1 week ago:
I still think AI is mostly a toy and a corporate inflation device. There are valid use cases but I don’t think that’s the majority of the bubble
- For my personal use, I used it to learn how models work from a compute perspective. I’ve been interested and involved with natural language processing and sentiment analysis since before LLMs became a thing. Modern models are an evolution of that.
- A small, consumer-grade model like GPT-OSS-20B is around 13GB and can run on a single mid-grade consumer GPU and maybe some RAM. It’s capable of parsing and summarizing text, troubleshooting computer issues, and some basic coding or code review for personal use. I built some bash and Home Assistant automations for myself using these models as crutches. Also, there is software that can index text locally to help you have conversations with large documents. I use this with the documentation for my music keyboard, which is a nightmare to program, and with complex APIs.
- A mid-size model like Nemotron3 30B is around 20GB and can run on a larger consumer card (like my 7900xtx with 24GB of VRAM, or two 5060tis with 16GB of VRAM each) and will have vaguely the same usability as the small commercial models, like Gemini Flash or Claude Haiku. These can write better, more complex code. I also use these to help me organize personal notes. I dump everything in my brain to text and have the model give it structure.
- A large model like GLM4.7 is around 150GB and can do all the things ChatGPT or Gemini Pro can do, given web access and a pretty wrapper. This requires big RAM and some patience, or a lot of VRAM. There is software designed to run these larger models in RAM faster, namely ik_llama, but at this scale you’re throwing money at AI.
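The size tiers above follow from simple arithmetic: a quantized model’s footprint is roughly parameter count times bits per weight. A minimal sketch, where the bits-per-weight figures are my assumed ballpark quantization levels, not exact quant sizes:

```python
def model_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Ballpark model size in GB: params * bits / 8.
    Ignores KV cache and runtime overhead, which add a few more GB."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# ~20B parameters at an assumed ~5-bit quant lands near the 13GB small tier
small = model_size_gb(20, 5)   # 12.5 GB
# ~30B at ~5 bits is near the 20GB mid tier
mid = model_size_gb(30, 5)     # 18.75 GB
```

This is why quantization (fewer bits per weight) is what makes these models fit on consumer cards at all.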
I played around with image creation and there isn’t anything there other than a toy for me. I take pictures with a camera.
- Comment on Researchers figured out how to run a 120-billion parameter model across four regular desktop PCs 1 week ago:
I think you’re missing the point or not understanding.
Let me see if I can clarify
What you’re talking about is just running a model on consumer hardware with a GUI
The article talks about running models on consumer hardware. I am making the point that this is not a new concept. The GUI is optional but, as I mentioned, llama.cpp and other open source tools provide an OpenAI-compatible api just like the product described in the article.
We’ve been running models for a decade like that.
No. LLMs, as we know them, aren’t that old, were harder to run, and required some coding knowledge and environment setup until 3ish years ago, give or take, when these more polished tools started coming out.
Llama is just a simplified framework for end users using LLMs.
Ollama matches that description. Llama is a model family from Facebook. Llama.cpp, which is what I was talking about, is an inference and quantization tool suite made for efficient deployment on a variety of hardware including consumer hardware.
The article is essentially describing a map reduce system over a number of machines for model workloads, meaning it’s batching the token work, distributing it up amongst a cluster, then combining the results into a coherent response.
Map reduce, in very simplified terms, means spreading out compute work to highly parallelized compute workers. This is, conceptually, how all LLMs are run at scale. You can’t map reduce or parallelize LLMs any more than they already are. The article doesn’t imply map reduce other than talking about using multiple computers.
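For anyone who hasn’t met the term, map reduce in miniature looks like this. The word-count job is a made-up illustration, not anything from the article:

```python
from collections import Counter
from functools import reduce

# Map: each "worker" independently counts words in its own chunk of text.
# In a real cluster, each call would run on a different machine.
def map_chunk(chunk: str) -> Counter:
    return Counter(chunk.split())

# Reduce: combine the partial results into one coherent answer
def reduce_counts(a: Counter, b: Counter) -> Counter:
    return a + b

chunks = ["the cat sat", "the dog sat", "the cat ran"]
partials = [map_chunk(c) for c in chunks]  # embarrassingly parallel step
total = reduce(reduce_counts, partials)    # combine step
# total["the"] == 3, total["cat"] == 2
```

The pattern only pays off when the map step dominates and the workers rarely need to talk to each other, which is exactly what token generation in an LLM is not.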
They aren’t talking about just running models as you’re describing.
They don’t talk about how the models are run in the article. But I know a tiny bit about how they’re run. LLMs require very simple and consistent math computations on extremely large matrices of numbers. The bottleneck is almost always data transfer, not compute. Basically, every LLM deployment tool already tries to use as much parallelism as possible while reducing data transfer as much as possible.
The article talks about gpt-oss120, so we aren’t talking about novel approaches to how the data is laid out or how the models are used. We’re talking about transformer models and how they’re huge and require a lot of data transfer. So the preference is to try to keep your model on the fastest-transfer part of your machine. On consumer hardware, which was the key point of the article, you are best off keeping your model in your GPU’s memory. If you can’t, you’ll run into bottlenecks with PCIe, RAM, and network transfer speeds. But consumers don’t have GPUs with 63+ GB of VRAM, which is how big GPT-OSS 120b is, so they MUST contend with these speed bottlenecks. This article doesn’t address that. That’s what I’m talking about.
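To put rough numbers on that bottleneck: for a memory-bound model, every generated token has to stream the active weights past the compute units, so tokens per second is capped by bandwidth divided by bytes read per token. The bandwidth figures below are assumed ballpark values for typical hardware, not measurements, and treating the model as dense is pessimistic (mixture-of-experts models only read a fraction of the weights per token):

```python
# Upper bound on tokens/sec for a memory-bandwidth-bound dense model:
#   tokens_per_sec <= bandwidth / bytes_read_per_token
def max_tokens_per_sec(model_gb: float, bandwidth_gb_s: float) -> float:
    return bandwidth_gb_s / model_gb

MODEL_GB = 63.0  # approx size of GPT-OSS 120b, per the figure above

# Assumed ballpark bandwidths (GB/s) for different places the weights can live
links = {"GPU VRAM": 1000.0, "dual-channel DDR5": 80.0,
         "PCIe 4.0 x16": 32.0, "10GbE network": 1.25}
for name, bw in links.items():
    print(f"{name}: <= {max_tokens_per_sec(MODEL_GB, bw):.2f} tokens/sec")
```

The orders-of-magnitude gap between VRAM and everything else is the whole reason placement matters more than raw compute.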
- Comment on Researchers figured out how to run a 120-billion parameter model across four regular desktop PCs 1 week ago:
This is basically meaningless. You can already run GPT-OSS 120b across consumer-grade machines. In fact, I’ve done it with open source software with a proper open source license, offline, at my house. It’s called llama.cpp and it is one of the most popular projects on GitHub. It’s the basis of ollama, which Facebook co-opted, and is the engine for LMStudio, a popular LLM app.
The only thing you need is around 64 gigs of free RAM and you can serve gpt-oss120 as an OpenAI-like api endpoint. VRAM is preferred but llama.cpp can run in system RAM or on top of multiple different GPU addressing technologies. It has a built-in server which allows it to pool resources from multiple machines…
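To sketch what “OpenAI-like api endpoint” means in practice: a client just POSTs JSON like the payload below to the server’s /v1/chat/completions route. The model name and localhost URL here are my assumptions for a default local llama.cpp server, not values from the article:

```python
import json

def build_chat_request(prompt: str, model: str = "gpt-oss-120b") -> dict:
    """Build an OpenAI-style chat completion payload.
    You would POST this as JSON to something like
    http://localhost:8080/v1/chat/completions on a local llama.cpp
    server -- no cloud account involved."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

payload = build_chat_request("Summarize this document in three bullets.")
body = json.dumps(payload)  # the wire format sent to the local server
```

Because the request shape matches OpenAI’s, any tool written against the commercial API can usually be pointed at the local endpoint unchanged.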
I bet you could even do it over a series of high-ram phones in a network.
So I ask: is this novel, or is it an advertisement packaged as a press release?
- Comment on Help is needed 2 weeks ago:
- Cream Theater
- System of a Town
- Go:jira
- Comment on 2 weeks ago:
Source?
- Comment on using a binder clip as a spring instead of 3d printing one 2 weeks ago:
Some of those transitions were 🔥🔥🔥
- Comment on The crossover you've been waiting for 3 weeks ago:
ILLUSIONS, MICHAEL!
- Comment on You’ll never say, “Emmanuel” until you feel that stable overflow. 4 weeks ago:
It’s KLog!
- Comment on What's the best way to answer someone who accuses you of being a bot because they don't like what you have to say? 1 month ago:
🌟✨ Absolutely! 🌈 I’m so thrilled you found that answer 🌼 amazing! It’s wonderful to see you move beyond your initial reaction 🚀 and really embrace the humor in it! 😂💖 Keep shining bright! 🌟🌻
- Comment on GPU prices are coming to earth just as RAM costs shoot into the stratosphere - Ars Technica 1 month ago:
- Comment on Introducing SlopStop: Community-driven AI slop detection in Kagi Search 2 months ago:
Can we make an extension for Firefox and call it Sloppy-Stoppy?
- Comment on Taking a photo to remember a moment is actually outsourcing that memory to an image, so your brain does less work and remembers it worse. 2 months ago:
The brain is incredibly malleable and, for a lot of people, memory is a vague image or a concept of something which happened. For a smaller subset, visual memory and visual imagination is not possible. Pictures are a more permanent visual representation, which can be additive to an experience. That’s not to say you shouldn’t live in the moment or that you should take pictures in lieu of making memories. You do you. I’m biased because I’m a photographer though.
- Comment on Stop cramming everything onto one Pi: treat your home lab like a tiny ISP - hardware, stack, backups and an update plan 2 months ago:
I’ve been on the internet a long time and this made me say “what the fuck” out loud
- Comment on What budget friendly GPU for local AI workloads should I aim for? 2 months ago:
- 3090 24GB ($800 USD)
- 3060 12GB x 2 if you have 2 PCIe slots (<$400 USD)
- Radeon MI50 32GB with Vulkan (<$300) if you have more time, space, and will to tinker
- Comment on Bewildered enthusiasts decry memory price increases of 100% or more — the AI RAM squeeze is finally starting to hit PC builders where it hurts 2 months ago:
I have a MI50/7900xtx gaming/AI setup at home which I use for learning and to test out different models. Happy to answer questions.
- Comment on Nvidia reveals Vera Rubin Superchip for the first time — incredibly compact board features 88-core Vera CPU, two Rubin GPUs, and 8 SOCAMM modules 2 months ago:
14 GB of vRAM?
- Comment on 2 months ago:
no multiplayer paywall
Until Microsoft changes the deal. Or you have to scan your retinas to verify watching an ad before you queue for a round of Halo CE Re-Campaign remake HD remaster Master Chief Cortana Limited Edition.
- Comment on Meet Mico, Microsoft’s AI version of Clippy 2 months ago:
Rover back on XP
- Comment on New Study: Global Fertility Rate Decline Now Linked Directly to the Commodification of Housing 2 months ago:
This is such an important finding if true. Does anyone have an idea about how reliable this is, or know of other news outlets reporting this as definitive?
I see sources which corroborate the thesis here and I’m asking if there are other news or policy outlets which agree with this.
The reason I think this is important is that falling birthrates are blamed on all kinds of typical societal scapegoats. I’ve heard everything from immigration, to women having jobs, to porn, and even videogames. I’d love it if we could focus on things that actually matter.
- Comment on What's your greatest "gaming high" you've been chasing ever since? Please take care not to spoil anything, if you are going to be story-specific. 3 months ago:
The awe and grandeur of Ocarina of Time… at the time.
Disco Elysium is the best literature I’ve ever played.
I still feel like used to live in Skyrim. It was a place where I wanted to be and explore.
TF2/Halo CE multiplayer’s mix of competitive adrenaline and funny shenanigans.
Those are the game experiences which stuck with me.
- Comment on The Great Software Quality Collapse: How We Normalized Catastrophe 3 months ago:
Accept that quality matters more than velocity. Ship slower, ship working. The cost of fixing production disasters dwarfs the cost of proper development.
This has been a struggle my entire career. Sometimes, the company listens. Sometimes they don’t. It’s a worthwhile fight but it is a systemic problem caused by management and short-term profit-seeking over healthy business growth
- Comment on [deleted] 3 months ago:
Cetus-Lupeedus!
- Comment on Big Surprise—Nobody Wants 8K TVs 4 months ago:
I haven’t seen this mentioned, but apart from 8K being expensive, requiring new production pipelines, unwieldy for storage and bandwidth, unneeded, and not fixing existing problems with 4K, it requires MASSIVE screens to reap benefits.
There are several similar posts, but suffice to say, 8K content is only perceived by average eyesight at living room distances when screens are OVER 100 inches in diagonal at the bare minimum. That’s over 7 feet wide.
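The claim can be sanity-checked with simple geometry. Assuming roughly 20/20 acuity of about 60 pixels per degree and a typical 9-foot couch distance (both numbers are my assumptions for the estimate, not from the linked source), 8K only helps once the viewer can out-resolve a 4K panel:

```python
import math

ACUITY_PPD = 60      # assumed pixels-per-degree limit of 20/20 vision
VIEW_DIST_IN = 108   # assumed 9 ft couch-to-screen distance, in inches

# One just-resolvable pixel subtends 1/60 of a degree at the eye
pixel_pitch = VIEW_DIST_IN * math.tan(math.radians(1 / ACUITY_PPD))

# 8K beats 4K only if 4K's pixels are coarse enough to resolve,
# i.e. screen_width / 3840 > pixel_pitch
min_width = pixel_pitch * 3840
min_diagonal = min_width * math.hypot(16, 9) / 16  # 16:9 geometry
print(f"Minimum 16:9 diagonal for any 8K benefit: {min_diagonal:.0f} in")
```

Under these assumptions the threshold lands around a 140-inch diagonal, consistent with the “over 100 inches” figure above.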
Source: https://www.rtings.com/tv/reviews/by-size/size-to-distance-relationship
- Comment on Google will require developer verification for Android apps outside the Play Store 4 months ago:
He’s very good!
- Comment on Kirkland strong 4 months ago:
- Comment on Germans will see that this map still has DDR in it. 4 months ago:
Oh look. A life savings worth of DDR4
- Comment on This website is for humans 5 months ago:
The op site is hosted on Neocities. They aim to foster that 2000s vibe. Check them out here
- Comment on Consumer debt hit a record high in the second quarter 5 months ago:
Off topic: Why does this picture look like AI? It’s weirdly warm and lacking contrast. I’m not talking about telltale details; I could tell scrolling past.
- Comment on One Angry Man 5 months ago:
American Puss