brucethemoose
@brucethemoose@lemmy.world
- Comment on Google Keeps Making Smartphones Worse 3 hours ago:
The irony is that Android felt way more intuitive, including to my non-techy family.
- Comment on Apple sues YouTuber who leaked iOS 26’s new “Liquid Glass” software redesign 4 hours ago:
Winterboard, that’s it!
- Comment on Apple sues YouTuber who leaked iOS 26’s new “Liquid Glass” software redesign 5 hours ago:
Ten years later, they finally have my iPhone 5 Cydia theme!
- Comment on Google Keeps Making Smartphones Worse 6 hours ago:
Sorry, I meant the KA1 or KA3, got them mixed up. My KA3 was like $50.
I use it on my PC, too.
Considering the cost relative to the hardware, and that I can use it basically forever? It’s not bad.
- Comment on poaceae 8 hours ago:
True.
- Comment on poaceae 9 hours ago:
Touché. Seems the common clade is commelinids:
- Comment on Google Keeps Making Smartphones Worse 9 hours ago:
Eh, even if they got it right and more popular, it would have enshittified quick.
- Comment on Google Keeps Making Smartphones Worse 9 hours ago:
TBH getting a nice dongle like a Fiio KA5 is not so bad. It’s small enough to just hang off the cord, and sounds better anyway.
- Comment on Google Keeps Making Smartphones Worse 9 hours ago:
Fast refresh rates are amazing. I cherished my old Razer Phone 2 before it was hip.
- Comment on Google Keeps Making Smartphones Worse 9 hours ago:
My last iPhone was an iPhone 5. Or 6, maybe?
Fast forward, and I’ve been on Android until right now, when I got an iPhone 16 in a loss-leader sale.
…And I am astounded by how much worse it is. My old jailbroken iPhone’s UI was both simpler and 100x more customizable and useful than all the bizarre required gestures. I had basically every feature they have now, like the action button, and more. And it somehow feels slower in browsing than my SD845 Android 9 phone.
It wasn’t perfect back then, but the App Store is flooded with garbage now.
WTF has Apple been doing?
- Comment on Google Keeps Making Smartphones Worse 9 hours ago:
I feel like the standard should be two phones. A disposable ‘banking’ phone: tiny, no camera, no speakers, small SoC, just the absolute bare minimum to live.
…And then a ‘media’ phone without all the enshittification.
- Comment on Google Keeps Making Smartphones Worse 9 hours ago:
The iOS store needs the ability to report fraud, which it doesn’t offer until you install an app.
That’s probably to reduce brigading? Android is infested with all sorts of fraudulent marketing techniques like fake reviews, and mass fraud-reporting of competitors sounds like another.
- Comment on Google Keeps Making Smartphones Worse 9 hours ago:
Honestly I don’t think many people would care? Until the security holes became intractable, I guess.
It’s proven that Android phones are doing awful stuff, even client side, and has that slowed them down?
- Comment on Google Keeps Making Smartphones Worse 9 hours ago:
“this time around all it would allow was ‘disable’.”
This has been par for other OEM-flavored Android phones for years, unfortunately.
“Disable” is alright, not that the phone itself isn’t a privacy nightmare.
- Comment on poaceae 9 hours ago:
Palm trees are technically grass, AFAIK.
We figured this out in Florida because targeted yard “weed spray” kills them, too.
- Comment on Delta moves toward eliminating set prices in favor of AI that determines how much you personally will pay for a ticket 1 day ago:
This could really suck for us because customers without a good advertising ‘paper trail’ (like many on Lemmy, I imagine) could get slapped with high default pricing.
…Otherwise (if they default to low pricing), people would try to game it, and they’re probably aware of that.
- Comment on Pedophiles celebrate US government policy 1 day ago:
The video is way worse.
- Comment on Pedophiles celebrate US government policy 1 day ago:
Does the endchan post predate the archive upload? Do you happen to have a link?
- Comment on Pedophiles celebrate US government policy 1 day ago:
Seems this was also posted a while ago?
Search for the archive URL on Google, and you’ll find this post has been spammed around a few imageboards.
Seems… sketch?
- Comment on Pedophiles celebrate US government policy 1 day ago:
“Christian Mission US Border crossings and family flipping”
“Instructional Video: Her first time with a 7yo model”
“How to counter the pro-consent advocacy?”
WTF
“HR899 Terminate Department of Education. Lifelong goal achieved”
"Rural mass proverty and children that are cheaper than Vodka (2025 planning)
(Mod Pinned) “Mass social manipulation has worked in America. The Second American Revolution has commenced. THE COUNTRY IS OURS”
The claim:
Video was uploaded to the internet by a Jane Doe who said she had found it on her father’s computer and recorded the screen.
…If it’s a fake, it’s an elaborate one. Jesus. How is this not news?
- Comment on Very large amounts of gaming gpus vs AI gpus 2 days ago:
Eh, there’s not as much attention paid to them working across hardware because AMD prices their hardware uncompetitively (hence devs don’t test them much), and AMD itself focuses on the MI300X and above.
Also, I’m not sure what layer one needs to get ROCm working.
- Comment on I totally missed the point when PeerTube got so good 2 days ago:
Even the small local AI niche hates ChatGPT, heh.
- Comment on The Media's Pivot to AI Is Not Real and Not Going to Work 2 days ago:
“…especially since something like a Mixture of Experts model could be split down to base models and loaded/unloaded as necessary.”
It doesn’t work that way. All MoE experts are ‘interleaved’ and you need all of them loaded at once, for every token. Some API servers can hot-swap whole models, but it’s not fast, and it’s rarely done, since LLMs are pretty ‘generalized’ and tend to serve requests in parallel on API servers.
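To make that concrete, here’s a toy numpy sketch of MoE routing (illustrative only, not any real model’s code): the router picks different experts for every token, so all of the expert weights have to stay resident.

```python
# Toy MoE layer: the router selects top-k experts *per token*,
# so any expert may be needed at any step -- all must stay loaded.
import numpy as np

rng = np.random.default_rng(0)
n_experts, d = 8, 16
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]  # all resident
router = rng.standard_normal((d, n_experts))

def moe_layer(token_vec, k=2):
    scores = token_vec @ router
    topk = np.argsort(scores)[-k:]   # chosen experts vary token to token
    w = np.exp(scores[topk])
    w /= w.sum()                     # softmax over the chosen experts
    return sum(wi * (token_vec @ experts[i]) for wi, i in zip(w, topk))

out = moe_layer(rng.standard_normal(d))
```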
The closest to what you’re thinking of is LoRAX, which basically hot-swaps LoRAs efficiently. But it needs an extremely specialized runtime derived from its associated paper, and it doesn’t support quantization and some other features, so people tend not to use it: github.com/predibase/lorax
There is a good case for pure data processing, yeah… But it has little integration with LLMs themselves, especially with the API servers generally handling tokenizers/prompt formatting.
“But, all of its components need to be localized”
They already are! Local LLM tooling and engines are great and super powerful compared to ChatGPT (which offers no caching, no raw completion, primitive sampling, hidden thinking, and so on).
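For instance, ‘raw completion’ against a local llama.cpp server is just an HTTP call (a minimal sketch, assuming llama-server is running on its default port 8080):

```python
# Sketch: raw (non-chat) completion against a local llama.cpp server,
# with sampling knobs that hosted chat APIs rarely expose.
import requests

resp = requests.post(
    "http://localhost:8080/completion",
    json={
        "prompt": "The quick brown fox",  # raw text, no chat template
        "n_predict": 32,
        "temperature": 0.8,
        "min_p": 0.05,
    },
)
print(resp.json()["content"])
```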
- Comment on Leading AI Models Are Completely Flunking the Three Laws of Robotics 2 days ago:
Clickbait.
- Comment on Leading AI Models Are Completely Flunking the Three Laws of Robotics 2 days ago:
There may be thought in a sense.
A good analogy might be a static biological “brain” custom grown to predict a list of possible next words in a block of text. It’s thinking, in a sense. Maybe it could acknowledge itself in a mirror. That doesn’t mean it’s self-aware, though: it’s an unchanging organ.
And if one wants to go down the rabbit hole of “well there are different types of sentience, lines blur,” yada yada, with the end point of that being to treat things like they are…
All ML models are tools.
For now.
- Comment on The Media's Pivot to AI Is Not Real and Not Going to Work 2 days ago:
SGLang is partially a scripting language for prompt building that leverages its caching/logprobs output, for doing stuff like filling in fields or branching choices, so it’s probably best done in that. It also requires pretty beefy hardware for the model size (as opposed to backends like exllama or llama.cpp that focus more on tight quantization and unbatched performance), so I suppose there’s not a lot of interest from more local tinkerers?
It would be cool, I guess, but ComfyUI does feel more geared for diffusion. Image/video generation is more multi-model and benefits from dynamically loading/unloading/swapping all sorts of little submodels, LoRAs and masks, applying them, and piping them into each other.
LLM running is more monolithic: you have one big model, maybe a text-embeddings model as part of the same server, and everything else is just processing strings to build the prompts, which one does linearly in Python or whatever. Stuff like CFG and LoRAs does exist, but isn’t used much.
- Comment on The Media's Pivot to AI Is Not Real and Not Going to Work 3 days ago:
Not specifically. Ultimately, ComfyUI would build prompts/API calls, which I tend to do in Python scripts.
I tend to use Mikupad or Open Web UI for more general testing.
There are some neat tools with ‘lower level’ integration into LLM engines, like SGLang (which leverages caching and constrained decoding), to do things one can’t do over standard APIs: docs.sglang.ai/frontend/frontend.html
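As a rough sketch of what that frontend looks like (the prompt and field names here are made up; the calls follow the linked docs, assuming a local SGLang server on port 30000):

```python
# Sketch of SGLang's frontend: constrained decoding with a named capture.
import sglang as sgl

@sgl.function
def triage(s, ticket):
    s += sgl.user("Classify this support ticket: " + ticket)
    # Constrained decoding: the output must be one of these strings.
    s += sgl.assistant(sgl.gen("label", choices=["bug", "feature", "question"]))

sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))
state = triage.run(ticket="App crashes when I rotate the screen.")
print(state["label"])
```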
- Comment on Very large amounts of gaming gpus vs AI gpus 3 days ago:
It depends!
Exllamav2 was pretty fast on AMD, and exllamav3 is getting support soon. vLLM is also fast on AMD. But it’s not easy to set up; you basically have to be a Python dev on Linux and wrestle with pip.
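The Python side itself is short once the install battle is won; a minimal vLLM sketch (the model id is just illustrative):

```python
# Minimal vLLM offline inference. On AMD, the hard part is installing
# a ROCm-compatible build, not this code.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")  # illustrative model id
params = SamplingParams(temperature=0.7, max_tokens=64)
for out in llm.generate(["The best local LLM backend is"], params):
    print(out.outputs[0].text)
```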
Base llama.cpp is fine, as are forks like kobold.cpp rocm. This is more doable, with much less hassle.
The AMD Framework Desktop is a pretty good machine for large MoE models. The 7900 XTX is the next best hardware, but unfortunately AMD is not really interested in competing with Nvidia in terms of high-VRAM offerings :'/.
And there are… quirks, depending on the model.
I dunno about Intel Arc these days, but AFAIK you are stuck with their docker container or llama.cpp. And again, they don’t offer a lot of VRAM for the $ either.
Llama.cpp Vulkan (for use on anything) is improving but still behind in terms of support.
A lot of people do offload MoE models to Threadripper or EPYC CPUs. That’s the homelab way to run big models like Qwen 235B or DeepSeek these days. An Nvidia GPU is still standard, but you can use a 3090 or 4090.
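As a sketch of what hybrid CPU/GPU offload looks like with llama-cpp-python (a real API, though the GGUF filename here is a placeholder):

```python
# Sketch: hybrid CPU/GPU inference with llama-cpp-python. Layers that
# don't fit in VRAM (e.g. the big MoE expert weights) run from system RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen3-235b-a22b-Q4_K_M.gguf",  # placeholder filename
    n_gpu_layers=20,   # offload what fits on the GPU; 0 = pure CPU
    n_ctx=8192,
)
out = llm("Q: Why offload MoE experts to the CPU?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```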
- Comment on The Media's Pivot to AI Is Not Real and Not Going to Work 3 days ago:
I mean, I run Nemotron and Qwen every day, you are preaching to the choir here :P
- Comment on Very large amounts of gaming gpus vs AI gpus 3 days ago:
Depends. You’re in luck, as someone made a DWQ (which is the optimal way to run it, and should work in LM Studio): huggingface.co/mlx-community/…/main
It’s chonky, though. The weights alone are like 40GB, so assume 50GB of VRAM allocation for some context. I’m not sure what Macs that equates to… 96GB? Can the 64GB one allocate enough?
Otherwise, the requirement is basically a 5090. You can stuff it into 32GB as an exl3.
Note that it is going to be slow on Macs, being a dense 72B model.
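For reference, running an MLX quant via mlx-lm is only a few lines (a real API; the repo id below is a placeholder, since the link above is truncated):

```python
# Sketch: loading and running an MLX DWQ quant with mlx-lm.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/SOME-72B-DWQ")  # placeholder repo id
print(generate(model, tokenizer, prompt="Hello", max_tokens=64))
```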