tisktisk@piefed.social 1 week ago
Humbly requesting an explanation for the polarized takes on this. How are we so specifically split 50/50, rejoicing in the utility and cursing the 'slop' at the same time? Someone sway me to a side? I'm addicted to reserving judgement.
lime@feddit.nu 1 week ago
on the one hand, this is an ai horde-based bot. the ai horde is just a bunch of users who are letting you run models on their personal machines, which means this is not “big ai” and doesn’t use up massive amounts of resources. it’s basically the “best” way of running stable diffusion at small to medium scale.
on the other, this is still using “mainstream” models like flux, which was trained on copyrighted works without consent and took shitloads of energy to train. unfortunately, models trained only on freely available data just can’t compete.
lemmy is majority anti-ai, but db0 is a big pro-local-ai hub. i don’t think they’re pro-big-ai. so what we’re getting here is a clash between people who feel like any use of ai is immoral due to the inherent infringement and the energy cost, and people who feel like copyright is a broken system anyway and are trying to tackle the energy thing themselves.
it’s a pretty thorny issue with both sides making valid points, and depending on your background you may very well hold all the viewpoints of both sides at the same time.
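for the curious, the “letting you run models on their personal machines” part boils down to a small public REST api: you post a prompt, some volunteer’s machine picks the job up, and you poll until it’s done. a rough python sketch below, assuming the documented v2 async endpoints at https://aihorde.net/api — the field names and defaults here are illustrative, not gospel, so check the live docs before relying on them.

```python
# rough sketch of a minimal AI Horde client, assuming the documented v2 async
# endpoints (https://aihorde.net/api); parameters and defaults here are
# illustrative assumptions, not an authoritative client.
import time
import requests

API_BASE = "https://aihorde.net/api/v2"   # public horde endpoint
API_KEY = "0000000000"                    # anonymous key (lowest queue priority)

def generate(prompt: str) -> list[str]:
    """Submit a prompt to the horde and poll until a volunteer worker finishes it."""
    headers = {"apikey": API_KEY, "Client-Agent": "example-sketch:1.0"}
    payload = {
        "prompt": prompt,
        "params": {"width": 512, "height": 512, "steps": 20},
        # leaving "models" unset lets any available worker/model pick the job up
    }
    job = requests.post(f"{API_BASE}/generate/async", json=payload, headers=headers).json()
    job_id = job["id"]

    # poll until some volunteer machine has produced the image
    while True:
        check = requests.get(f"{API_BASE}/generate/check/{job_id}", headers=headers).json()
        if check.get("done"):
            break
        time.sleep(5)

    status = requests.get(f"{API_BASE}/generate/status/{job_id}", headers=headers).json()
    return [gen["img"] for gen in status.get("generations", [])]

if __name__ == "__main__":
    print(generate("a watercolor fediverse logo"))
```

the point being: the client side is trivial, and the heavy lifting happens on whichever volunteer gpu happens to be idle.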
tisktisk@piefed.social 1 week ago
Both sides having valid points is almost always the case with issues of any complexity. I'm very curious to know why there isn't a sweeping trump card that ultimately deems one side significantly more ethical than the other.
Great analysis tho -- very thankful for the excellent breakdown, unless you used AI to do it, or unless that AI ultimately doesn't justify the means adequately. No, actually, I'm thankful regardless, but I'm still internally conflicted by the unknown.
lime@feddit.nu 1 week ago
no matter your stance on the morality of language models, it’s just plain rude to use a machine to generate text meant for people. i would never do that. if i didn’t take the time to write it, why would you take the time to read it?
daniskarma@lemmy.dbzer0.com 1 week ago
I think there may be two exceptions to that rule.
Accessibility. People who may have issues writing long coherent text due to the need to use a different input method (think of tetraplegic people, for instance). LLM-generated text could be a great aid there.
Translation. I do hate forced translation. But it’s true that for some people it may be needed. And I think LLM translation models have already surpassed other forms of automatic software translation.
0xD@infosec.pub 1 week ago
But these are neither problems of the technology, nor of how it's hosted. It's an issue of the person using it, the situation, and the person receiving it, as well as all their values.
Not sure why people direct their hate at the tools instead of at the actual politics and the governments that aren't taking the current and potential future problems seriously. Technology and progress are never the problem.