finitebanjo@lemmy.world 2 days ago
LLMs will never reach human accuracy; OpenAI and DeepMind showed as much in 2022 research papers that have never been refuted. Their output carries a lot of obvious tells, they are largely incompetent at anything requiring reasoning or memory, and they need their training sets updated in order to "learn" from past mistakes. They also become less capable when overconstrained, as would be necessary to make them useful for a specific task.
The reasons the Pentagon and defence contractors like Microsoft want it to seem like we have this capability are 1) we want our enemies to think we have it, and 2) they need to justify the exorbitant sums they spend trying to make it real.
But 1 in 100 bots getting past automated detection is not a reason not to use it.
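To put that 1-in-100 figure in perspective, here's a quick back-of-envelope sketch (all numbers invented for illustration):

```python
# Rough arithmetic: what a 99%-effective filter buys you.
# The bot population and detection rate are assumptions, not measurements.

detection_rate = 0.99   # assumed: the filter catches 99 of every 100 bots
bot_accounts = 5_000    # hypothetical bot population on an instance

caught = bot_accounts * detection_rate
slipped = bot_accounts * (1 - detection_rate)

print(f"Bots caught automatically: {caught:,.0f}")   # 4,950
print(f"Bots slipping past:        {slipped:,.0f}")  # 50
```

Leaving mods 50 accounts to deal with by hand is a very different workload than leaving them 5,000, which is the whole argument for running the filter even when it's imperfect.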
M1ch431@slrpnk.net 2 days ago
Picture an actor behind a handful of accounts that are mostly run by LLMs mimicking human input and interaction, or that are manually operated in some form to avoid detection. It would be easily viable for state-level/professional actors to pull such an operation off and successfully manipulate a small platform like the fediverse. Even generating believable selfies of real people that fit the profile is possible and should be anticipated.
I'm not entirely against instance-level detection that attempts to understand user patterns and flag suspected abuse to mods and admins, or prevent it outright, but I do believe that humanized input and interaction can already be effectively emulated and will only advance as time passes.
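To be clear about what I mean by user-pattern detection, here's a toy example; the metric and the threshold are invented for illustration, and a serious operator would defeat it by simply randomizing the posting schedule:

```python
# Minimal sketch of an instance-level heuristic: flag accounts whose
# inter-post intervals are suspiciously uniform. Illustrative only.
from statistics import mean, stdev

def flag_for_review(post_timestamps: list[float], cv_threshold: float = 0.1) -> bool:
    """Flag an account if its posting rhythm looks machine-scheduled.

    Humans post in irregular bursts; a coefficient of variation
    (stdev / mean) of the intervals near zero suggests automation.
    """
    if len(post_timestamps) < 10:
        return False  # not enough history to judge fairly
    intervals = [b - a for a, b in zip(post_timestamps, post_timestamps[1:])]
    cv = stdev(intervals) / mean(intervals)
    return cv < cv_threshold  # flag for a human mod, never auto-ban
```

Anything along these lines should only ever surface accounts to a human, which is part of why I don't think the heavier, centralized versions are worth the privacy cost.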
I believe that increased scrutiny of users in a centralized manner is a privacy violation. I use my instance and I give some level of trust to the instance owners, but I wouldn’t consent to them (or the software they choose to use) handing over my PII or usage patterns to a third-party group that suspects me. I would discontinue using the service in such a scenario.
To support my point that bot detection is mostly futile on the fediverse, I'd like to draw your attention to a parallel in gaming: humanized aimbots, which are already incredibly viable and implemented in a variety of ways. There is usually an actual human guiding input to some degree, but the aimbot is designed to mimic human input to achieve believable results. I believe this could be advanced considerably, and new methods pop up every day.
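In miniature, the trick is just easing plus noise: instead of snapping to a target, the input follows a jittered, decelerating path that looks hand-driven. This is a toy sketch of the concept only, not any actual cheat's code:

```python
# Toy illustration of "humanized" input: the path toward a target is
# eased and jittered so statistical detectors see something hand-shaped.
import random

def humanized_path(start: tuple[float, float], target: tuple[float, float],
                   steps: int = 40) -> list[tuple[float, float]]:
    path = []
    for i in range(1, steps + 1):
        t = i / steps
        ease = t * t * (3 - 2 * t)   # smoothstep: slow start, slow stop
        x = start[0] + (target[0] - start[0]) * ease
        y = start[1] + (target[1] - start[1]) * ease
        wobble = (1 - t) * 3         # jitter fades as the "hand" settles
        path.append((x + random.gauss(0, wobble), y + random.gauss(0, wobble)))
    return path
```

The same recipe, applied to typing cadence, scroll timing, or posting schedules, is exactly what makes per-pattern bot detection an arms race.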
Ultimately, I feel it boils down to curating your own experience by blocking instances whose operation you disagree with.
finitebanjo@lemmy.world 2 days ago
You have a real hard-on for letting bots operate freely.
M1ch431@slrpnk.net 2 days ago
I trust instance owners to sort this out - until I don’t. I don’t support violations of privacy and I appreciate some level of pseudonymity and anonymity in social media.
finitebanjo@lemmy.world 2 days ago
I am also trusting instance owners and software developers, specifically to implement an open source automated bot-detection and filtering algorithm. Forcing users to manually wade through and filter out threat actors we can identify with 99% certainty is nothing but a waste of their valuable time and a disruption of the community and conversation we support and enable here, especially when false positives are easily rectified.
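Concretely, the kind of pipeline I'm imagining looks something like this; the score source and thresholds are placeholder assumptions, and the point is that every automated action lands in a queue a mod can reverse:

```python
# Sketch of the workflow being argued for: act automatically only at very
# high confidence, and make every automated action reversible by a mod.
from dataclasses import dataclass, field

@dataclass
class BotFilter:
    auto_hide_threshold: float = 0.99  # assumed cutoff for automatic action
    review_threshold: float = 0.80     # below this, do nothing at all
    review_queue: list[str] = field(default_factory=list)

    def handle(self, account: str, bot_score: float) -> str:
        if bot_score >= self.auto_hide_threshold:
            self.review_queue.append(account)  # hidden, but a mod can undo it
            return "hidden pending review"
        if bot_score >= self.review_threshold:
            self.review_queue.append(account)  # still visible, just flagged
            return "flagged for mods"
        return "no action"
```

Nothing in that design hands user data to a third party; it runs on signals the instance already has, and a false positive is one click away from being restored.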