Comment on "AI chatbots can infer an alarming amount of info about you from your responses"

GenderNeutralBro@lemmy.sdf.org 1 year ago

“It’s not even clear how you fix this problem,” says Martin Vechev, a computer science professor at ETH Zürich in Switzerland who led the research.

You fix this problem with locally run models that don't send your conversations to a cloud provider. That is the only real technical solution.

Unfortunately, the larger models are way too big to run client-side. You could launder your prompts through a smaller, locally run LLM that standardizes the phrasing (e.g. stripping out idiosyncrasies or local dialect) before anything reaches the cloud, but there's only so far you can go with that, because language is deeply personal, and the things people use chatbots for are deeply personal.
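Strictly for illustration, that laundering step could look something like the sketch below. This is a minimal example that assumes a small model is already being served locally by Ollama at its default endpoint; the model name and the rewrite instruction are placeholders, not anyone's actual pipeline.

```python
# Sketch: "launder" a prompt through a small local model before it ever
# leaves the machine. Assumes a local model served by Ollama at its
# default endpoint (http://localhost:11434) -- adjust for your own setup.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # local only; nothing leaves this machine
LOCAL_MODEL = "llama3.2:3b"  # any small local model; this name is an assumption

REWRITE_INSTRUCTION = (
    "Rewrite the following text in plain, neutral English. "
    "Remove slang, regional spellings, and personal details, "
    "but keep the meaning and the question intact.\n\nText:\n"
)

def sanitize_prompt(user_prompt: str) -> str:
    """Ask the local model to strip identifying phrasing from a prompt."""
    resp = requests.post(
        OLLAMA_URL,
        json={
            "model": LOCAL_MODEL,
            "prompt": REWRITE_INSTRUCTION + user_prompt,
            "stream": False,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()

if __name__ == "__main__":
    raw = "g'day, me knee's been crook since footy on the weekend, what do i do?"
    # Only the sanitized version would ever be forwarded to a cloud chatbot.
    print(sanitize_prompt(raw))
```

Even then, the sanitized text still reveals the topic and intent of the question, which is exactly the "only so far you can go" problem.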

This is by no means exclusive to LLMs, of course. Google has your lifetime search history and they can glean all kinds of information from that alone. If you’re older than ~30 or so, you might remember these same conversations from when Gmail first launched. You’d have to be crazy to let Google store all your personal emails for all eternity! And yet everybody does it (myself included, though I’m somewhat ashamed to admit it).

This same problem exists with pretty much any cloud service. When you send data to a third party, they’re going to have that data. And I guarantee you are leaking more information about yourself than you realize. You can even tell someone’s age and gender with fairly high accuracy from a small sample of their mouse movements.
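For what it's worth, that kind of mouse-movement inference works on surprisingly crude signals. The sketch below is hypothetical (made-up features, no real dataset, not any specific paper's method); it's only meant to show how little raw data that sort of profiling needs.

```python
# Sketch of demographic inference from pointer traces: a few crude
# kinematic features fed into an off-the-shelf classifier. The features
# and the labelled training data are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(trace: np.ndarray) -> np.ndarray:
    """trace: (n, 3) array of (x, y, timestamp_ms) pointer samples."""
    dxy = np.diff(trace[:, :2], axis=0)
    dt = np.diff(trace[:, 2]) + 1e-6          # avoid division by zero
    speed = np.linalg.norm(dxy, axis=1) / dt
    angles = np.arctan2(dxy[:, 1], dxy[:, 0])
    return np.array([
        speed.mean(), speed.std(),            # how fast / how jittery
        np.abs(np.diff(angles)).mean(),       # curvature of the path
        dt.mean(),                            # sampling / pause rhythm
    ])

def train(traces: list[np.ndarray], labels: list[str]) -> RandomForestClassifier:
    """traces: one pointer trace per session; labels: self-reported demographics
    (a real study would collect thousands of labelled sessions)."""
    X = np.stack([features(t) for t in traces])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)
    return clf
```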

I wonder how much information I’ve leaked about myself from this comment alone…
