M1ch431@slrpnk.net 2 days ago
I disagree with the notion that we need to centralize such a list or that such a list is desirable. Again, I trust instances to sort this out, but people are free to scrutinize and investigate users to the best of their ability.
As LLMs and deepfake technology advance, the likely result will be effectively undetectable bots that convincingly mimic human behavior, even against advanced and potentially automated defenses.
See: theintercept.com/…/pentagon-ai-deepfake-internet-…
I don’t know about you, but I wouldn’t want to have to, e.g., take a selfie in a specific way just to use the fediverse and reach a broad audience because I’m suspected of being a bot. Such a list would also likely chill participation as an unintended side effect.
finitebanjo@lemmy.world 2 days ago
LLMs will never reach human accuracy; OpenAI and DeepMind showed as much in 2022 research papers that have never been refuted. They have a lot of obvious tells and are largely incompetent due to a lack of reasoning skills and memory, and they require updated training sets in order to “learn” from past mistakes. They also become less capable when overconstrained, as would be necessary to make them useful for a specific task.
The reason the Pentagon and defense contractors like Microsoft want it to seem like we have this capability is twofold: 1) we want our enemies to think we have it, and 2) they need to justify the exorbitant expense of trying to make these capabilities real.
But 1 in 100 bots getting past automated detection is not a reason not to use it.
M1ch431@slrpnk.net 2 days ago
Consider an actor behind a handful of accounts that are mostly run by LLMs mimicking human input and interaction, or that are manually operated in some form to evade detection. It would be easily viable for state-level or professional actors to pull such an operation off and successfully manipulate a small platform like the fediverse. Even taking believable selfies of real people that fit the profile is possible and can be anticipated.
I’m not entirely against instance-level detection that attempts to understand user patterns and flag potential abuse to mods and admins, but I believe humanized input and interaction can already be emulated effectively and will only improve as time passes.
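To make that concrete, here’s a minimal, hypothetical sketch of what such instance-level flagging could look like (the `Account` fields, signals, and thresholds are all invented for illustration, not taken from any real Lemmy tooling):

```python
# Hypothetical sketch of instance-level bot flagging: score accounts on
# crude behavioral signals and surface suspicious ones to mods/admins.
# Field names and thresholds are invented for illustration.
from dataclasses import dataclass
from statistics import pstdev

@dataclass
class Account:
    name: str
    post_times: list[float]   # unix timestamps of recent posts
    age_days: float

def suspicion_score(acct: Account) -> float:
    """Return a rough 0..1 score; higher means more bot-like."""
    score = 0.0
    if len(acct.post_times) >= 3:
        gaps = [b - a for a, b in zip(acct.post_times, acct.post_times[1:])]
        mean_gap = sum(gaps) / len(gaps)
        # Humans post irregularly; near-constant intervals are a red flag.
        if mean_gap > 0 and pstdev(gaps) / mean_gap < 0.1:
            score += 0.5
    # A very young account with heavy output is another weak signal.
    if acct.age_days < 7 and len(acct.post_times) > 50:
        score += 0.5
    return min(score, 1.0)

def flag_for_mods(accounts: list[Account], threshold: float = 0.5) -> list[str]:
    """Names of accounts worth a human moderator's review."""
    return [a.name for a in accounts if suspicion_score(a) >= threshold]
```

The property I’d want preserved is right there in the design: everything stays on the instance, and nothing about the user is handed to a third party.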
I believe that increased scrutiny of users in a centralized manner is a privacy violation. I use my instance and I give some level of trust to the instance owners, but I wouldn’t consent to them (or the software they choose to use) handing over my PII or usage patterns to a third-party group that suspects me. I would discontinue using the service in such a scenario.
To support my point that bot detection is mostly futile on the fediverse, I’d like to draw your attention to a parallel situation in gaming: humanized aimbots, which are already incredibly viable and implemented in a variety of ways. There is usually an actual human actor guiding input to some degree, but the aimbot is designed to mimic human input to achieve believable results. I believe this could be advanced quite a bit, and new methods pop up every day.
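For illustration only, here’s a rough, hypothetical sketch of the general “humanization” technique (all parameters invented): eased pacing plus jitter, a slight overshoot, and a reaction delay, so the generated input is statistically messy in the same ways human input is.

```python
# Hypothetical sketch of "humanized" input generation: a path toward a
# target built from an eased curve plus noise, overshoot, and a reaction
# delay, resembling recorded human motion. Parameters are invented.
import math
import random

def humanized_path(start, target, steps=60, reaction_ms=180):
    """Generate (x, y, delay_ms) samples from start to target."""
    sx, sy = start
    tx, ty = target
    # Small deliberate overshoot that the path then corrects, like a human.
    otx = tx + random.uniform(-4, 4)
    oty = ty + random.uniform(-4, 4)
    samples = [(sx, sy, reaction_ms + random.uniform(-40, 40))]
    for i in range(1, steps + 1):
        t = i / steps
        ease = 0.5 - 0.5 * math.cos(math.pi * t)            # slow-fast-slow pacing
        x = sx + (otx - sx) * ease + random.gauss(0, 1.2)   # hand-tremor jitter
        y = sy + (oty - sy) * ease + random.gauss(0, 1.2)
        samples.append((x, y, random.uniform(6, 14)))       # irregular sample timing
    samples.append((tx, ty, random.uniform(20, 60)))        # final correction
    return samples

if __name__ == "__main__":
    for x, y, dt in humanized_path((100, 100), (640, 360))[:5]:
        print(f"move to ({x:7.1f}, {y:7.1f}) after {dt:5.1f} ms")
```

That messiness is exactly why naive detection struggles, and the same trick transfers to posting cadence and writing style.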
Ultimately, I feel it boils down to blocking instances whose operation you disagree with in order to curate your experience.
finitebanjo@lemmy.world 2 days ago
You have a real hard-on for letting bots operate freely.
M1ch431@slrpnk.net 2 days ago
I trust instance owners to sort this out - until I don’t. I don’t support violations of privacy and I appreciate some level of pseudonymity and anonymity in social media.