I think it’s just words & images on a screen that we could easily ignore like people used to, and people are indulging a grandiose conceit by thinking moderation is that important or serves any cause greater than the interests of the moderators. On social media that cause seems to be serving the actual consumers, by which I mean the advertisers & commercial interests who pay for the attention of users. The old internet approach to toxic & hateful shit, ignoring it, gawking at the freakshow, or ridiculing/flaming it, worked fine back then: plenty of people disengaged, ragequit, or went outside to do something better. But that’s not great for advertisers protecting their brand, who want people pliant & unchallenged, staying engaged in their uncritical filter bubbles & echo chambers.
With the old internet, safety wasn’t internet-nanny, thought-police, stop-burning-my-virgin-eyes-&-ears shit. It was using an anonymous handle, not revealing personally identifying information (a/s/l?), and not falling for scams & giving out payment information (unless you’re into that kinky shit). Glad to see newer social media returning to some of that.
Excrubulent@slrpnk.net 1 month ago
I think the rise of hate speech on centralised platforms relies very heavily on their centralised moderation and curation via algorithms.
They have all known for a long time that their algorithms promote hate speech, but they also know that curbing that behaviour would hurt their revenue, so they don’t do it. They chase the fast buck and appease advertisers, who have a naturally conservative bent, and that means rage bait and conventional values.
That’s quite apart from when platform owners explicitly support that hate speech and actively suppress left-leaning voices.
I think what we have on decentralised systems, where we curate/moderate for ourselves, works well because most of that open hate speech ends up siloed, which is the best thing you can do with it.