How can you tell?
Comment on A Flood of Green Tech From China Is Upending Global Climate Politics
Alphane_Moon@lemmy.world 12 hours ago
What is the point of such accounts?
Why do this? I understand the point of setting up such an account on Reddit (gain karma and then start low key spamming or joining a bot-net), but on Threadi?
blubfisch@discuss.tchncs.de 7 hours ago
slothrop@lemmy.ca 7 hours ago
Initially they had 80 posts in under an hour after signing up. Same format, multiple paragraphs, same words… thousands of words, hundreds of paragraphs.
Now they’re fluent in German… They have a human handler…
Alphane_Moon@lemmy.world 7 hours ago
Identical post structure, tone and argumentation style across multiple posts.
queerlilhayseed@piefed.blahaj.zone 9 hours ago
If I were to hazard a guess, it’s for training. Make a bot, make a bunch of posts and comments, get organic interactions, see what gets you flagged as a bot account, incorporate that data into your next version, rinse, repeat. The goal is probably to make a bot account that can blend in and interact without being flagged, presumably while also nudging conversations in a particular direction. Something I noticed on reddit is that the first comment can steer the entire thread, as long as it hews close enough to the general group consensus, and that kind of steering is really useful for the kinds of groups that like to influence public thinking.
I don’t think galacticwaffle is necessarily trying to steer here, I think they’re just trying to make a bot that flies under the radar. But I imagine that kind of steering is what someone who would pay for this kind of bot would use it for.
Alphane_Moon@lemmy.world 8 hours ago
Interesting theory.
Although I do wonder if the approach is sufficiently scalable/has the right level of throughput (if this is indeed what’s going on).
queerlilhayseed@piefed.blahaj.zone 8 hours ago
Who knows what scale they’re operating at. The problem with this kind of bot is that you only really notice it if it’s doing a bad job (theoretically). This might be someone who wrote an LLM bot for a lark, a small-time social media botter testing a variant for fedi deployment, or an established bot trainer with dozens or hundreds of accounts who’s field-testing a more aggressive new model. I doubt you could get away with hundreds of bots like this on lemmy; I think the actual user pool is small enough that we’d notice hundreds of bots posting at this volume. But again, I don’t really know how I’d detect it if it were less “obviously smells like LLM slop” than this one. In bot detection, as in so many fields, false negatives are a real bitch to account for.
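For what it’s worth, the crudest detector for the “80 posts in under an hour” pattern mentioned above is just a posting-rate check. A minimal sketch (the `Account` structure, function name, and threshold are all hypothetical illustrations, not any real Lemmy or fediverse API):

```python
# Hypothetical rate-based heuristic for flagging suspicious new accounts.
# All names and thresholds here are illustrative, not a real moderation API.
from dataclasses import dataclass


@dataclass
class Account:
    age_hours: float  # time since signup
    post_count: int   # posts made so far


def looks_suspicious(acct: Account, max_posts_per_hour: float = 10.0) -> bool:
    """Flag accounts posting far faster than a human plausibly would."""
    if acct.age_hours <= 0:
        # Any posts before the account nominally exists is suspect.
        return acct.post_count > 0
    return acct.post_count / acct.age_hours > max_posts_per_hour


# The account described upthread: ~80 posts within the first hour.
print(looks_suspicious(Account(age_hours=1.0, post_count=80)))  # True
```

This also illustrates the false-negative problem: a bot that simply paces itself under whatever threshold you pick sails straight through, which is exactly why the well-behaved ones are so hard to count.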
Alphane_Moon@lemmy.world 8 hours ago
I don’t doubt such approaches are used. They almost certainly are. I am just wondering if the Threadiverse is large enough for anyone to bother (be it oligarch-backed groups or independent conmen).