Comment on ‘The Worst Internet-Research Ethics Violation I Have Ever Seen’ | The most persuasive “people” on a popular subreddit turned out to be a front for a secret AI experiment.

FauxLiving@lemmy.world 11 hours ago

Over the years I’ve noticed replies that are far too on the nose, probing just the right pressure points, as if they dropped exactly the right breadcrumbs for me to respond to. I’ve learned to disengage at that point. Either they scrolled through my profile, or, as we now know, it’s a literal psy-op bot. Even in the first case, it’s not worth engaging with someone more invested than I am.

You put it better than I could. I’ve noticed this too.

I used to just disengage. Now, when I find myself talking to someone like this, I use my own local LLM to generate replies just to waste their time. I do this by prompting the LLM to take a chastising tone, point out their fallacies, and lecture them on good-faith participation in online conversations.
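As a rough sketch of what that setup might look like: the commenter doesn't name their tooling, so everything below is an assumption. It imagines a local Ollama server on its default port (localhost:11434), a hypothetical model name, and a system prompt paraphrasing the tone they describe. Only the payload construction is general; the endpoint and response shape follow Ollama's chat API.

```python
import json
import urllib.request

# Hypothetical system prompt reflecting the commenter's description:
# a chastising tone, fallacy-spotting, and a lecture on good faith.
SYSTEM_PROMPT = (
    "You are replying to a suspected bot in an online argument. "
    "Adopt a chastising tone, point out any logical fallacies in its "
    "last message, and lecture it on good-faith participation in "
    "online conversations."
)

def build_request(last_message: str, model: str = "llama3") -> dict:
    """Build a chat payload in the shape Ollama's /api/chat expects."""
    return {
        "model": model,          # assumed model name
        "stream": False,         # ask for a single complete response
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": last_message},
        ],
    }

def generate_reply(last_message: str) -> str:
    """POST to a locally running Ollama instance and return the reply text."""
    payload = json.dumps(build_request(last_message)).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",  # Ollama's default local endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

Keeping the model local is the point: nothing costs per-token, so the other side's time is the only thing being spent.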

It is horrifying to see how many bots you catch like this. It has to be bots; otherwise, a lot more people have suddenly become willing to go 10-20 multi-paragraph replies deep into a conversation with something that is obviously (to a trained human) just generated comments.
