on the one hand, this is an ai horde-based bot. the ai horde is just a bunch of users who are letting you run models on their personal machines, which means this is not “big ai” and doesn’t use up massive amounts of resources. it’s basically the “best” way of running stable diffusion at small to medium scale.
on the other, this is still using “mainstream” models like flux, which have been trained on copyrighted works without consent and used shitloads of energy to train. unfortunately, models trained only on freely available data just can’t compete.
lemmy is majority anti-ai, but db0 is a big pro-local-ai hub. i don’t think they’re pro-big-ai. so what we’re getting here is a clash between people who feel like any use of ai is immoral due to the inherent infringement and the energy cost, and people who feel like copyright is a broken system anyway and are trying to tackle the energy thing themselves.
it’s a pretty thorny issue with both sides making valid points, and depending on your background you may very well hold all the viewpoints of both sides at the same time.
tisktisk@piefed.social 1 week ago
Both sides having valid points is almost always the case with issues of any complexity. I'm very curious to know why there isn't a sweeping trump card that ultimately deems one side as significantly more ethical than the other
Great analysis tho--very thankful for the excellent breakdown, unless you used ai to do it, or that ai ultimately doesn't justify the means adequately. No, actually, I'm thankful regardless, but I'm still internally conflicted by the unknown
lime@feddit.nu 1 week ago
no matter your stance on the morality of language models, it’s just plain rude to use a machine to generate text meant for people. i would never do that. if i didn’t take the time to write it, why would you take the time to read it?
daniskarma@lemmy.dbzer0.com 1 week ago
I think there may be two exceptions to that rule.
Accessibility. People who have trouble writing long, coherent text because they need to use a different input method (think of tetraplegic people, for instance). LLM-generated text could be a great aid there.
Translation. I do hate forced translation. But it’s true that for some people it may be needed. And I think LLM translation models have already surpassed other forms of automatic software translation.
dil@lemmy.zip 1 week ago
There are always exceptions/outliers to any rule; bringing them up is basically playing devil's advocate, and I never care for it. It's like a conversation about someone murdering someone for funsies, and saying "but there are cases where people should be murdered, like the Joker from Batman".
0xD@infosec.pub 1 week ago
But these are neither problems of the technology nor of its being hosted. It's an issue of the person using it, the situation, and the person receiving it, as well as all of their values.
Not sure why people direct their hate at the tools instead of at the actual politics, and at governments not taking the current problems (and potential future ones) seriously. Technology and progress are never the problem.
lime@feddit.nu 1 week ago
the problem with entirely separating the two is that progress and technology can be made with an ideology in mind.
the current wave of language model development is spearheaded by what basically amounts to a cult of tech-priests, going all-in on reaching AGI as fast as possible because they’re fully bought into roko’s basilisk. if your product, built to collect and present information in context, is created by people who want that information to cater to their world view, do you really think the result is going to be an unbiased view of the world? sure, the blueprint for how to make an llm or diffusion model is (probably) unbiased, but what about when you combine it with data?
as an example, did you know that all the big diffusion models (stable, flux, illustrious, etc.) use the same version of CLIP, the part responsible for mapping text to features? and that that CLIP was trained on medical information? how might that affect the output? sure, you can train your own CLIP, but will you? will anyone?
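to make the shared-encoder point concrete, here’s a toy sketch (mine, not real diffusion code; the class names and the “embedding” are made up): two nominally different image models that reuse one frozen text encoder, so whatever biases were baked into that encoder at training time show up identically in both models’ conditioning.

```python
class TextEncoder:
    """Stands in for CLIP's text tower: maps a prompt to a feature vector."""

    def __init__(self, vocab_bias):
        # A stand-in for whatever the encoder absorbed from its training data.
        self.vocab_bias = vocab_bias

    def encode(self, prompt):
        # Trivial fake "embedding": word lengths shifted by the baked-in bias.
        return [len(word) + self.vocab_bias for word in prompt.split()]


class DiffusionModel:
    """Stands in for a full text-to-image pipeline built around an encoder."""

    def __init__(self, name, text_encoder):
        self.name = name
        self.text_encoder = text_encoder  # shared, frozen component

    def conditioning(self, prompt):
        # The image-generation half only ever sees the encoder's output.
        return self.text_encoder.encode(prompt)


# One encoder, reused by two "different" models, as with the big pipelines.
shared_clip = TextEncoder(vocab_bias=1)
model_a = DiffusionModel("model-a", shared_clip)
model_b = DiffusionModel("model-b", shared_clip)

# Same prompt, identical conditioning in both models: any quirk of the
# shared encoder is inherited by everything downstream of it.
assert model_a.conditioning("a cat") == model_b.conditioning("a cat")
```

swapping in a retrained encoder (a different `vocab_bias` here) would change every model built on it at once, which is exactly why “just train your own CLIP” is the only real escape hatch, and why almost nobody takes it.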