Comment on [deleted]
finitebanjo@lemmy.world 2 days ago
I think it’s plenty feasible. Look at things like post and comment content, reports, frequency, upvote and downvote behavior, site access duration, and IP addresses, and you start to see certain patterns emerge from bots and bad actors. What isn’t feasible is getting enough people in on the effort to do the work.
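To make that concrete, here is a rough sketch of how those signals could be combined into a review score. Everything in it (field names, weights, thresholds) is invented for illustration; it isn’t an existing tool, and a high score only means an account deserves a human look.

```python
# Hypothetical signal-based scoring; names and thresholds are made up.
from dataclasses import dataclass

@dataclass
class AccountStats:
    posts_per_hour: float       # posting frequency
    upvote_ratio: float         # share of votes cast that are upvotes
    median_session_secs: float  # site access duration
    shared_ip_accounts: int     # other accounts seen on the same IP
    report_count: int           # reports received

def suspicion_score(a: AccountStats) -> float:
    """Sum weak signals; a high score flags an account for review, it proves nothing."""
    score = 0.0
    if a.posts_per_hour > 10:                           # inhumanly fast posting
        score += 2.0
    if a.upvote_ratio > 0.98 or a.upvote_ratio < 0.02:  # completely one-sided voting
        score += 1.0
    if a.median_session_secs < 5:                       # in-and-out sessions typical of scripts
        score += 1.0
    score += 0.5 * min(a.shared_ip_accounts, 6)         # many accounts behind one IP
    score += 0.25 * a.report_count
    return score

# Example: an account posting 14 times an hour, only upvoting, with 3-second
# sessions, 8 IP-mates, and 4 reports scores 8.0, well above a review cutoff of ~4.
print(suspicion_score(AccountStats(14, 1.0, 3, 8, 4)))
```

None of those signals is conclusive on its own; the point is that they are cheap to collect and correlate.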
M1ch431@slrpnk.net 2 days ago
None of that data proves anything; at best it identifies potential bots, and that’s the heart of the issue with your idea.
It’s just an exercise in group paranoia, and it would likely be abused by actual state/professional actors attempting to silence users or create bubbles.
Myself? I trust that the mods and admins will sort the actual bots out. As for the bad actors, we can counter their propaganda with our good faith participation and our efforts to lead others to the truth.
finitebanjo@lemmy.world 2 days ago
Whether people agree with the list, or with its various forks and methods, is up to the instance administrators. State and professional actors are already using tools like this to silence users and create bubbles; it’s just not available to actual open-source projects and self-hosters.
M1ch431@slrpnk.net 2 days ago
I disagree with the notion that we need to centralize such a list or that such a list is desirable. Again, I trust instances to sort this out, but people are free to scrutinize and investigate users to the best of their ability.
As LLMs and deepfaking technology advance, the likely result will be completely undetectable bots that effectively mimic human behavior, even with advanced and potentially automated defenses.
See: theintercept.com/…/pentagon-ai-deepfake-internet-…
I don’t know about you, but I wouldn’t want to have to, e.g., take a selfie in a specific way just to use the fediverse and reach a broad audience because I’m suspected of being a bot. Such a list would also likely chill participation as an unintended effect.
finitebanjo@lemmy.world 2 days ago
LLMs will never reach human accuracy; OpenAI and DeepMind showed as much in 2022 research papers that have never been refuted. They have a lot of obvious tells and are also largely incompetent due to a lack of reasoning skills and memory, and they require updated training sets in order to “learn” from past mistakes. They also become less capable when overconstrained, as would be necessary to make them useful for a specific task.
The reasons the Pentagon and defence contractors like Microsoft want it to seem like we have this capability are 1) we want our enemies to think we have it, and 2) they need to justify the exorbitant expense of trying to make it real.
But 1 in 100 bots getting past automated detection is not a reason not to use it.