Comment on China scientists develop flash memory 10,000× faster than current tech
boonhet@lemm.ee 6 hours ago
What price point are you trying to hit?
With regards to AI? None, tbh.
With this super fast storage I have other cool ideas, but I don’t think I can get enough bandwidth to saturate it.
You’re willing to pay $none to have hardware ML support for local training and inference?
Well, I’ll just say that you’re gonna get what you pay for.
No, I think they’re saying they’re not interested in ML/AI. They want this super fast memory available for regular servers for other use cases.
barsoap@lemm.ee 4 hours ago
TBH, that might be enough. Stuff like SDXL runs on 4 GB cards (the trick is using ComfyUI; roughly 5-10 s/it), and reportedly smaller LLMs do too (haven’t tried, not interested). And the reason I’m eyeing a 9070 XT isn’t AI, it’s finally upgrading my GPU; still, it would be a massive fucking boost for AI workloads.
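For reference, here is a minimal sketch of the low-VRAM trick being described, using the Hugging Face diffusers library instead of ComfyUI (which the commenter actually mentions). The model ID, offload calls, and step count are illustrative assumptions, not anything stated in the thread:

```python
# Hypothetical sketch: running SDXL on a low-VRAM GPU by keeping weights in
# half precision and streaming them to the GPU as needed (requires the
# `accelerate` package for the offload hooks). ComfyUI applies similar
# memory-saving strategies internally.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,        # fp16 weights roughly halve VRAM use
)
pipe.enable_sequential_cpu_offload()  # move layers to the GPU one at a time
pipe.enable_vae_slicing()             # decode the final image in slices

image = pipe(
    "a photo of a cat sitting on a motherboard",
    num_inference_steps=25,           # step count is an arbitrary example
).images[0]
image.save("out.png")
```

Sequential offload trades speed for memory, which is why generation on a 4 GB card lands in the several-seconds-per-iteration range rather than being GPU-resident and fast.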