I don’t know exactly what angle you’re looking to clarify in that regard, but to ELI5 it:
There are two factors: targeted ads and algorithm manipulation.
Mainstream social media sites earn money from ads they deliver. The more people stay on the site and view posts, the more ads they see. The algorithm is designed to promote content that users are likelier to view, not necessarily content that they would like more. In practice, this tends to be content that provides some sort of shock value. That combination of targeted ads with clickbait creates “doomscrolling”.
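To make that concrete, here is a minimal sketch (in Python, with invented posts and numbers, not any real platform's code) of the difference between ranking for predicted engagement and ranking for what users say they like:

```python
# Toy illustration: a feed ranker that optimizes predicted time-on-post,
# not how highly users actually rate the content. All data is made up.
posts = [
    {"title": "Cute dog photo",        "predicted_seconds": 8,  "user_rating": 4.5},
    {"title": "Outrageous hot take",   "predicted_seconds": 95, "user_rating": 2.1},
    {"title": "Friend's vacation pic", "predicted_seconds": 12, "user_rating": 4.0},
]

def rank_feed(posts):
    # The platform's objective is time on site, so it sorts by predicted
    # viewing time -- the shock-value post wins despite its low rating.
    return sorted(posts, key=lambda p: p["predicted_seconds"], reverse=True)

for post in rank_feed(posts):
    print(post["title"])
```

The lowest-rated post lands at the top of the feed, because the objective function never asks whether anyone enjoyed it.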
Longer explanation below:
The value that social media sites give to advertisers is that they know everything about their users. They collect data based on posts and viewing habits to learn things like income, hobbies, location, sexual orientation, political affiliation, etc. When advertisers buy ads to show on social media sites, they get to target these ads at specific people that they are likely to leave the biggest impact on.
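As a hypothetical sketch (made-up users and field names, not any real ad platform's API), the targeting step boils down to filtering the profiles the site has inferred against an advertiser's criteria:

```python
# Hypothetical ad targeting: the platform matches an advertiser's criteria
# against profiles it inferred from each user's posts and viewing habits.
users = [
    {"id": 1, "location": "UK", "interests": {"politics", "fishing"}, "age": 58},
    {"id": 2, "location": "UK", "interests": {"gaming"},              "age": 21},
    {"id": 3, "location": "FR", "interests": {"politics"},            "age": 45},
]

def target_ad(users, location, interest, min_age):
    # Return the users who match every targeting criterion.
    return [u["id"] for u in users
            if u["location"] == location
            and interest in u["interests"]
            and u["age"] >= min_age]

# The advertiser never sees the raw data; they just buy access to the match.
print(target_ad(users, "UK", "politics", 40))  # -> [1]
```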
But what happens if you want to increase the visibility of your (not ad) content on social media? A lot of companies use social media to bring people to their own sites/channels where they make money. In some cases, they can pay to be promoted, giving them an advantage in the algorithm. In other cases, they can manipulate the algorithm using clickbait (to engage users using the doomscrolling trend) or even using bots to give a false sense of engagement.
In recent major elections/referendums, there were a lot of ads and promoted content intended to sway opinions. People would intentionally be shown content to upset them, increasing doomscrolling and increasing their chances of getting out to vote against these things. However, in many cases, the content that people would see would be half-truths or outright lies. Because they were earning money, social media sites did not care about verifying the content of the ads they were showing.
Brexit, for example, is a well-documented case: investigations found that voters were manipulated via targeted ads and clickbait delivered by social media into believing falsehoods that swayed their vote. And in many cases, these lies weren’t just spread by specific political campaigns, but by external state actors with a vested interest in the outcome. Namely Russia, which had a lot to gain from a weaker EU.
Lemmy is not immune to doomscrolling and bot manipulation, but it doesn’t have ads and, as far as we know, does not sell user data. It’s harder to be targeted here because the only lever available is gaming the vote system to make content more visible (which is sadly easier than it should be). Even then, all you can reach are people subscribed to specific communities or registered on specific instances. It’s harder to target people en masse, and you have only a single data point to work with: people who like [community topic].
ArbiterXero@lemmy.world 1 year ago
Sooooo, there’s a lot of truth to it.
Once a site is big enough that its owners want to cash in on it, they develop tools and AI and make design choices intended to keep you on the site longer.
These tools and AI quickly discover that the way to keep you engaged is to keep you enraged. Content that angers you holds your attention longer and keeps you coming back.
This is well researched and I’ll cite sources if you need it.
So what happens is that the AI, while it isn’t designed explicitly to show right-wing content, ends up learning that showing that content accomplishes its actual goal. Its original goal being “keep people on the site longer”.
Right-wing content fits a nice niche where it engages a lot of people. Donald Trump claiming that he lost the election enrages the right because they believe his horse shit about the election being stolen, and it enrages the left because it causes unnecessary violence like Jan 6th. The AI loves that because it’s fairly universally enraging, and engaging to most people.
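That feedback loop can be simulated in a few lines. Below is a toy epsilon-greedy bandit (all numbers invented), rewarded only with simulated session length; it drifts toward the most enraging category without any political objective being coded in:

```python
import random

# Toy simulation, purely illustrative: a recommender rewarded only with
# "seconds on site" learns to favor whichever category holds attention.
random.seed(0)

# Hypothetical average watch times per category; in this made-up example,
# outrage content holds attention longest.
mean_seconds = {"cats": 10, "news": 20, "outrage": 60}

counts = {c: 0 for c in mean_seconds}   # times each category was shown
totals = {c: 0.0 for c in mean_seconds} # accumulated watch time per category

def choose(epsilon=0.1):
    # Epsilon-greedy: explore occasionally, otherwise show the category
    # with the best average engagement observed so far.
    if random.random() < epsilon or not any(counts.values()):
        return random.choice(list(mean_seconds))
    return max(counts, key=lambda c: totals[c] / counts[c] if counts[c] else 0)

for _ in range(2000):
    cat = choose()
    reward = random.gauss(mean_seconds[cat], 5)  # simulated session length
    counts[cat] += 1
    totals[cat] += reward

# The learner ends up showing "outrage" most often, despite never being
# told to -- it only ever saw the engagement signal.
print(max(counts, key=counts.get))  # prints "outrage"
```

Swap in any other category with the longest simulated watch time and the learner will converge on that instead; the point is that the objective, not the content, drives the outcome.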
dmmeyournudes@lemmy.world 1 year ago
There is no truth to it. The vast majority of negative interactions and aberrations on a social media site are brought about by the users, not by the operators of the site. The tools they have are not as powerful as you think. The only reason they have any power at all is that users give them that power, because it’s what they want. You don’t build a successful site by manipulating the user base into doing what you want; they would just leave. You simply give them what they want and they never leave. “The algorithm” is there to give users what they want, and it’s actually really bad at doing that.
ArbiterXero@lemmy.world 1 year ago
The users create the content, the background ai decides which content to prioritize and promote to the front page, etc…
Which part of that is wrong?
dmmeyournudes@lemmy.world 1 year ago
The fact that the user is the one inputting the data that determines the content they receive. You’re selecting the content you interact with; it’s not a black box trying to take over the population. They just want you to stay on the site, look at the ads, and never leave. They don’t care about your political allegiance or what movies you like; they will feed you whatever you want.