I’m not opposed to A"I"; far from it, I actually use text generators a fair bit, and sometimes image gens. It’s simply a technology and I use it as such. And I still bloody hate how corporations handle it:
- Always a double standard. If you violate their IP, you’re a filthy criminal; if they violate yours, you’re overreacting, a luddite, and harming progress. I want to see copyright gone, but if it stays, then apply it consistently to all sides. (By the way, fuck “Open"A"I” and their Bob Dylan defence.)
- Always nagging you to use it. If you’re nagging me to use something, it’s because it’s in your best interests that I use it, not mine. No means “no”, dammit.
- Always implicitly lying about its abilities. No, I’m not going to ask it anything where a bullshit answer might ruin my day, stop misleading me to do so.
- Always downplaying issues. Yeah, nah, I’m not blind to the environmental concerns around training those huge models. Or to the fact that corporations, which don’t understand what “consent” means, basically DDoS sites to train their models.
chicken@lemmy.dbzer0.com 11 months ago
I don’t know if I’m understanding this argument right, but the idea that integrating locally run AI is inherently privacy destroying in the same way as live service AI doesn’t make a lot of sense to me.
Umbrias@beehaw.org 11 months ago
building and centralizing PII is indeed a privacy point of failure. what’s not to understand?
chicken@lemmy.dbzer0.com 11 months ago
The use of local AI does not imply doing that, especially not the centralizing part.
lime@feddit.nu 11 months ago
think of apple’s on-device image scanner ai that flagged people as perverts after they had taken photos of sand dunes.
knightly@pawb.social 11 months ago
Microsoft Recall