Comment on YSK: db0@lemmy.dbzer0.com set up an AI image generator bot on the Threadiverse for anyone here to freely use

lime@feddit.nu 1 week ago

the problem with entirely separating the two is that progress and technology can be made with an ideology in mind.

the current wave of language model development is spearheaded by what basically amounts to a cult of tech-priests, going all-in on reaching AGI as fast as possible because they’re fully bought into Roko’s basilisk. if your product, built to collect and present information in context, is created by people who want that information to cater to their worldview, do you really think the result is going to be an unbiased view of the world? sure, the blueprint for how to make an llm or diffusion model is (probably) unbiased, but what about when you combine it with data?

as an example, did you know that all the big diffusion models (Stable Diffusion, Flux, Illustrious, etc.) use the same version of CLIP, the component responsible for mapping text to features? and that that CLIP component is trained on medical information? how might that affect the output? sure, you can train your own CLIP, but will you? will anyone?
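to make the CLIP point concrete, here’s a minimal sketch (python, using the huggingface transformers library) of the text-to-feature step every prompt passes through before the diffusion model ever sees it. the checkpoint id is just the public CLIP ViT-L/14 release as an example; whether a given model ships those exact weights is an assumption you’d have to check.

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModel

# Example checkpoint: OpenAI's publicly released CLIP ViT-L/14 text encoder.
# This is the kind of component a diffusion pipeline uses to turn a prompt
# into the features its UNet is conditioned on.
MODEL_ID = "openai/clip-vit-large-patch14"

tokenizer = CLIPTokenizer.from_pretrained(MODEL_ID)
text_encoder = CLIPTextModel.from_pretrained(MODEL_ID)

tokens = tokenizer(
    "a watercolor painting of a lighthouse",
    padding="max_length",
    max_length=tokenizer.model_max_length,  # 77 tokens for CLIP
    return_tensors="pt",
)

with torch.no_grad():
    # One 768-dim feature vector per token position; this embedding is the
    # conditioning signal the image generator cross-attends to.
    features = text_encoder(tokens.input_ids).last_hidden_state

print(features.shape)  # torch.Size([1, 77, 768])
```

whatever biases live in that encoder’s training data ride along into every image, no matter which diffusion model sits on top of it.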

source