Submitted 2 days ago by heyWhatsay@slrpnk.net to aboringdystopia@lemmy.world
https://futurism.com/ai-polling-inaccuracy
It makes sense if you believe an LLM has actual intelligence and works like an oracle.
It makes no sense at all if you understand, even superficially, how they work.
It would be interesting if they knew how to use LLMs. If I wanted output based on a middle-class old white woman, I would put effort into capturing that person in tone and style, not just prompt with "respond like an old white woman please." It's like they don't yet grasp how to structure the landscape around their target output for the LLM to latch onto.
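For example, a rough sketch of the difference, using the OpenAI chat API as a stand-in (the model name, question, and persona details here are placeholders I made up, not anything from the article):

```python
# Sketch: naive persona prompt vs. one with concrete detail to latch onto.
# Assumes OPENAI_API_KEY is set; model and persona are illustrative only.
from openai import OpenAI

client = OpenAI()

# Naive: a one-line instruction gives the model almost nothing to work with.
naive = [
    {"role": "system", "content": "Respond like an old white woman please."},
    {"role": "user", "content": "Do you approve or disapprove of the president?"},
]

# Structured: ground the persona in specifics before asking the question.
structured = [
    {"role": "system", "content": (
        "You are Carol, 68, a retired schoolteacher in suburban Ohio. "
        "Middle class, church on Sundays, gets her news from local TV and "
        "Facebook. Answer in her own voice: hedging, anecdotes about "
        "neighbors, reluctant to sound too political to a stranger."
    )},
    {"role": "user", "content": (
        "A pollster calls: do you approve or disapprove of the president?"
    )},
]

for messages in (naive, structured):
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(reply.choices[0].message.content, "\n---")
```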
They should also consider that there's a difference between a human saying something for another human to hear and a human actually holding a belief. There's a fundamental gap between "what does this white woman think?" and a white woman telling a pollster that she approves/disapproves of Trump.
Asking an LLM to respond that way, in such a sterile tone… it just completely fails to capture the nuance I (naively) expected them to account for.
I mean that’s basically as accurate as their conventional methods, anyway
Polling is a science and is as reliable as the person doing it.
kennedy@lemmy.dbzer0.com 2 days ago
Yeah, no shit. Why even do this? Isn't the point of a survey to get the opinions of actual people and how they're being affected? Why would a language model stringing words together be comparable?
ToastedRavioli@midwest.social 1 day ago
One inherently flawed aspect of the entire concept, even if LLMs were way better, is that polling is supposed to be a present snapshot of opinion. LLMs are built on training data that is inherently older than the present moment; they can't gauge current opinion.