It would be interesting if they knew how to use LLMs. If I wanted output based on a middle-class old white woman, I would put effort into capturing that person in tone and style, not just prompt with "respond like an old white woman please." It's like they don't yet grasp how to structure the landscape around their target output for the LLM to latch onto.
They should also consider that there's a difference between what a human says for another human to hear and what that human actually believes. There's a fundamental gap between "what does this white woman think?" and a white woman telling a pollster, in public, that she approves or disapproves of Trump.
Asking an LLM to respond in such a way, in such a sterile tone… it just completely fails to capture the nuance I (naively) expected them to account for.
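The contrast the comment draws — a bare "respond like X" instruction versus surrounding the target with concrete context — can be sketched as two prompt builders. This is a minimal illustration, not anything from the study; every persona detail (name, age, background) is invented for the example:

```python
def bare_prompt(question: str) -> str:
    """The naive approach: a one-line instruction with no supporting context."""
    return f"Respond like an old white woman please. {question}"


def persona_prompt(question: str) -> str:
    """Surround the question with concrete persona context (background, tone,
    speech habits) for the model to latch onto. All details are hypothetical."""
    persona = (
        "You are Margaret, 68, a retired elementary-school teacher from a "
        "small Midwestern suburb. You speak in polite, complete sentences, "
        "hedge strong opinions ('well, I suppose...'), mention your "
        "grandchildren and church group, and are wary of sounding too "
        "political in front of strangers."
    )
    framing = (
        "A pollster has called you on the phone. Answer the way you would "
        "say it out loud to a stranger, which may differ from your private view."
    )
    return f"{persona}\n\n{framing}\n\nPollster: {question}\nMargaret:"
```

The second builder also tries to encode the public-statement-versus-private-belief gap by framing the answer as something spoken aloud to a stranger, rather than asking for the persona's actual opinion.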
kennedy@lemmy.dbzer0.com 3 weeks ago
Yeah, no shit. Why even do this? Isn't the point of a survey to get the opinions of an actual person and how they're being affected? Why would a language model stringing words together be comparable?
ToastedRavioli@midwest.social 3 weeks ago
One inherently flawed aspect of the entire concept, even if LLMs were way better, is that polling is supposed to be a present snapshot of opinion. LLMs are trained on data that's inherently older than the present moment; they can't gauge current opinion.