I just use pre-made AIs and write some detailed instructions for them, then watch them churn out basic documents over hours… I need a better laptop.
Comment on "China scientists develop flash memory 10,000× faster than current tech"
gravitas_deficiency@sh.itjust.works 3 weeks ago
You can get a Coral TPU for 40 bucks or so.
You can get an AMD APU with an NN-inference-optimized tile for under $200.
Training can be done with any relatively modern GPU, with varying efficiency and capacity depending on how much you want to spend.
What price point are you trying to hit?
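
For reference, the $40 route in code: a minimal classification sketch using Google's PyCoral library. It assumes the Edge TPU runtime is installed, a Coral accelerator is attached, and an Edge-TPU-compiled model is on disk; the file names are placeholders.

```python
# Minimal Edge TPU image classification with PyCoral.
# Assumes: Edge TPU runtime installed, a Coral USB/PCIe accelerator attached,
# and an Edge-TPU-compiled model; file names here are placeholders.
from pycoral.utils.edgetpu import make_interpreter
from pycoral.adapters import common, classify
from PIL import Image

interpreter = make_interpreter("mobilenet_v2_1.0_224_quant_edgetpu.tflite")
interpreter.allocate_tensors()

# Resize the input image to whatever the model expects (224x224 here).
image = Image.open("input.jpg").convert("RGB").resize(
    common.input_size(interpreter), Image.LANCZOS
)
common.set_input(interpreter, image)

interpreter.invoke()  # the actual inference runs on the TPU

for c in classify.get_classes(interpreter, top_k=3):
    print(f"class {c.id}: score {c.score:.3f}")
```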
WorldsDumbestMan@lemmy.today 3 weeks ago
boonhet@lemm.ee 3 weeks ago
With regards to AI? None, tbh.
With this super-fast storage I have other cool ideas, but I don’t think I can get enough bandwidth to saturate it.
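
Rough numbers to make the saturation point concrete (a back-of-envelope sketch; every figure below is an assumption, none are from the article):

```python
# Back-of-envelope: why the bus, not the cell, becomes the bottleneck.
# All numbers are rough assumptions, not from the article.
pcie5_x4_gb_s = 15.75    # usable bandwidth of a PCIe 5.0 x4 link, GB/s
nand_program_us = 200.0  # typical TLC NAND page-program latency, microseconds
page_bytes = 16 * 1024   # typical NAND page size

speedup = 10_000
fast_program_s = nand_program_us * 1e-6 / speedup  # ~20 ns per page

# One plane writing pages back to back:
plane_gb_s = page_bytes / fast_program_s / 1e9
print(f"single plane: ~{plane_gb_s:.0f} GB/s vs ~{pcie5_x4_gb_s} GB/s link")
# -> roughly 800 GB/s of media throughput behind a ~16 GB/s interface
```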
barsoap@lemm.ee 3 weeks ago
TBH, that might be enough. Stuff like SDXL runs on 4 GB cards (the trick is using ComfyUI, like 5–10 s/it), and reportedly smaller LLMs too (haven’t tried, not interested). And the reason I’m eyeing a 9070 XT isn’t AI, it’s finally upgrading my GPU; it would still be a massive fucking boost for AI workloads, though.
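
ComfyUI does this through its node graph, but for a sense of what the same low-VRAM tricks look like in code, here is a sketch using Hugging Face diffusers with model offloading and tiled VAE decode. Whether it actually fits a given 4 GB card depends on the card and settings; the prompt and output name are made up.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()  # keep only the active submodule on the GPU
pipe.enable_vae_tiling()         # decode latents in tiles to cap VRAM use

image = pipe("a lighthouse at dusk, oil painting",
             num_inference_steps=30).images[0]
image.save("out.png")
```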
gravitas_deficiency@sh.itjust.works 3 weeks ago
You’re willing to pay $none to have hardware ML support for local training and inference?
Well, I’ll just say that you’re gonna get what you pay for.
bassomitron@lemmy.world 3 weeks ago
No, I think they’re saying they’re not interested in ML/AI. They want this super fast memory available for regular servers for other use cases.
boonhet@lemm.ee 3 weeks ago
Precisely.