Comment on ChatGPT's new browser has potential, if you're willing to pay
cerebralhawks@lemmy.dbzer0.com 23 hours ago
Here’s the thing: I’m not willing to pay for AI. I liked Siri back when its “Cookie Monster” joke about dividing zero by zero wasn’t considered offensive, and before it had to Google or ask ChatGPT for everything. Now I just don’t care about it at all. And that’s Siri — I’m intentionally on the platform with the crappiest, deadest, most useless AI because I really don’t want AI in my life. And it’s great.
As long as I can use Firefox on the Mac and not worry about AI — Firefox did add some chatbot thing, but it was very easy to disable — I’m just going to keep doing that.
My only worry is that, at some point, the Net might get to where you need AI. Hopefully by then they’ll have figured out a way to make it free. I hope I can just ride that wave. If not, who knows. I worry for younger users, though many of them seem to be embracing the changes, kind of like how we embraced Web 2.0 before social media went to shit (and that was before fascists started taking over or spinning up their own).
brucethemoose@lemmy.world 23 hours ago
I hate to say it, but we’re basically there, and AI doesn’t help a ton. If the net is trash, there’s not a lot it can do.
Self-hosting is 100% taking off. Getting a local agent to sift through the net’s sludge will be about as easy as tweaking Firefox before long.
MagicShel@lemmy.zip 22 hours ago
Local is also slower and… less robust in capability. But it’s getting there. I run local AI and I’m really impressed with the gains in both speed and capability. There’s just still a big gap.
We’re headed in a good direction here, but I’m afraid local may be gated by the ability to afford expensive hardware.
brucethemoose@lemmy.world 22 hours ago
Not anymore. I can run GLM 4.6 on a Ryzen with a single RTX 3090 at 7 tokens/s, and it runs rings around most API models. For more utilitarian cases, I can run 14-49B models that do just fine.
But again, it’s all ‘special interest tinkerer’ tier. You can’t just do ollama run; you have to mess with exotic libraries and setups to squeeze out that kind of performance.
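For anyone curious what the payoff looks like once a local server is actually running: here’s a rough sketch of querying it from a script, assuming something like llama.cpp’s llama-server exposing an OpenAI-compatible endpoint on localhost. The port, model name, and prompt are placeholders, not my actual setup.

```python
# Minimal sketch: talk to a locally hosted model through an
# OpenAI-compatible endpoint (e.g. llama.cpp's llama-server).
# The base_url, model name, and prompt below are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="local-model",  # whatever checkpoint the server has loaded
    messages=[{"role": "user", "content": "Summarize this page for me: ..."}],
)
print(resp.choices[0].message.content)
```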
MagicShel@lemmy.zip 22 hours ago
I’ll look into it. OAI’s 30B model is the most I can run on my MacBook, and it’s decent. I don’t think I can even run that on my desktop with a 3060 GPU. I have access to GLM 4.6 through a service, but that’s the ~350B-parameter model, and I’m pretty sure that’s not what you’re running at home.
It’s pretty reasonable in capability. I want to play around with setting up RAG pipelines for specific domain knowledge, but I’m just getting started.
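If it helps anyone else just getting started, here’s roughly what the retrieval half of a RAG pipeline can look like before it’s wired up to a local model. The embedding library (sentence-transformers) and the example documents are only illustrative assumptions, not something from the thread.

```python
# Minimal RAG sketch: embed a few domain documents, retrieve the closest
# match for a question, and stuff it into a prompt for a local model.
# The library choice and document strings are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Our internal build system caches artifacts under /var/cache/build.",
    "Deployments are triggered by tagging a release in the main repo.",
]
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

question = "Where does the build system keep its cache?"
q_vec = embedder.encode(question, normalize_embeddings=True)

# Cosine similarity; vectors are already normalized, so a dot product works.
best = docs[int(np.argmax(doc_vecs @ q_vec))]

prompt = f"Answer using only this context:\n{best}\n\nQuestion: {question}"
print(prompt)  # this prompt would then go to the local model
```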