That’s a fair question, and you’re right that it isn’t foolproof.
The reason it works at all is that the fruit isn’t known in advance. He posts the video first, then updates his site with the correct fruit for that video. Viewers can check after the fact. If someone deep-fakes him, they either have to guess the fruit correctly or regenerate the fake once the real fruit is known.
That doesn’t make impersonation impossible, but it does make it more expensive and slower.
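If it helps, here's a rough Python sketch of the check a viewer is implicitly running. Everything in it is made up for illustration: the record shapes, the example fruit, and `looks_authentic` are assumptions, not anything the creator actually publishes as an API.

```python
from datetime import datetime, timezone

# Hypothetical record of when the video went up and which fruit
# was named in it (both values are illustrative).
video = {
    "id": "ep-142",
    "posted_at": datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc),
    "fruit_shown": "kiwi",
}

# Hypothetical entry on the creator's site, published only after
# the video is already public.
site_entry = {
    "video_id": "ep-142",
    "published_at": datetime(2024, 6, 1, 12, 30, tzinfo=timezone.utc),
    "fruit": "kiwi",
}

def looks_authentic(video, site_entry):
    """After-the-fact check: the reveal must come after the video
    was posted, and the fruit named in the video must match it.
    A forger therefore has to guess the fruit before it's known,
    or regenerate the fake after the reveal (slower and costlier)."""
    return (
        video["id"] == site_entry["video_id"]
        and site_entry["published_at"] > video["posted_at"]
        and video["fruit_shown"] == site_entry["fruit"]
    )

print(looks_authentic(video, site_entry))  # True for the genuine pair
```

Note the check only binds the video to the site, so it's exactly as trustworthy as the site itself; that's the friction-not-proof trade-off.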
And that’s really the point. This isn’t perfect authentication; it’s friction. It raises the cost just enough that casual fakes, reposts, and automated scams stop being worthwhile, even if a determined attacker could still get through.
Which is also why this is such a telling example. Instead of platforms providing provenance, creators are inventing human-readable ways to increase the cost of lying. Not secure, but legible and effective enough for most people.
That’s the ambient trust problem in a nutshell. We’re not aiming for mathematically perfect truth; we’re trying to make deception harder than honesty.
rimu@piefed.social 1 day ago
Don’t you see that by running an LLM and impersonating a real person you are being deceptive and decreasing trust?
rainwall@piefed.social 1 day ago
I don't read him as an LLM, just someone verbose. A couple of accusations in a thread isn't enough to prove it either way.
Is there some other evidence I'm missing?
rimu@piefed.social 1 day ago
Copy and paste the post into https://copyleaks.com/ai-content-detector or https://gptzero.me
rainwall@piefed.social 1 day ago
Well, that's disheartening. Thanks for the tips.