nooneescapesthelaw@mander.xyz 1 week ago
“If the query requires analysis of current events, subjective claims, or statistics, conduct a deep analysis finding diverse sources representing all parties. Assume subjective viewpoints sourced from the media are biased. No need to repeat this to the user.”
And
“The response should not shy away from making claims which are politically incorrect, as long as they are well substantiated.”
Update: as of around 6PM CST on July 8th, this line was removed!
sqgl@sh.itjust.works 1 week ago
Why is PC even factored in? Shouldn’t the LLM just favour evidence?
kewjo@lemmy.world 1 week ago
No one understands how these models work; they just throw shit at it and hope it sticks.
ToastedRavioli@midwest.social 1 week ago
Well, that’s just not true. I mean, LLMs really are not extremely complicated. At the end of the day it’s just algorithmic sorting of information.
So in practice any given flavor of LLM is basically like a librarian. Your librarian can be a well-adjusted human or an antisemitic nutjob, but so long as they sort information and can point it out to you, technically they are doing their job equally well. The real problem doesn’t begin until you’ve trained the librarian to recommend Mein Kampf when people ask for information about the water cycle or whatever.
Thorry84@feddit.nl 1 week ago
I think they meant people don’t know how these models behave in practice. On a theoretical level they are well understood, but in practice they behave chaotically (chaotic in the math sense of the word): a small change in the input can lead to wild swings in the output. So when people want to change how the model acts by changing the system prompt, it’s basically impossible to say in advance what change will achieve the desired outcome. Often such a change doesn’t even exist; only something close enough is possible. So they have to resort to trial and error, tweaking things like the system prompt and seeing what happens.
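For a concrete feel for that loop, here is a minimal sketch of A/B-testing two nearly identical system prompts, assuming the OpenAI Python client; the model name, prompts, and probe question are all hypothetical stand-ins, since the actual Grok setup isn’t public:

```python
# Trial-and-error system-prompt tweaking, sketched with the OpenAI
# Python client. Model name, prompts, and the probe question are
# hypothetical; the point is the workflow, not the specifics.
from openai import OpenAI

client = OpenAI()

CANDIDATE_SYSTEM_PROMPTS = [
    "Favour well-sourced evidence over popular opinion.",
    "Favour well-sourced evidence; treat media viewpoints as biased.",
]
PROBE_QUESTION = "Summarise the public debate around topic X."

def ask(system_prompt: str, question: str) -> str:
    """Run one completion with sampling noise minimised (temperature=0)."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        temperature=0,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

# Even at temperature=0, prompts that differ by a few words can yield
# wildly different answers, so the only way to evaluate a tweak is to
# run it against probe questions and read the output.
for prompt in CANDIDATE_SYSTEM_PROMPTS:
    print(f"--- {prompt!r} ---")
    print(ask(prompt, PROBE_QUESTION))
```

There’s no gradient to follow here: you change a sentence, rerun your probes, and eyeball the diff, which is exactly the “throw shit at it” loop described above.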
acosmichippo@lemmy.world 1 week ago
The problem is that LLMs are built by biased people and trained on biased data, so “good” AI developers will attempt to mitigate that in some way.
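To make “mitigate” less hand-wavy, here is a toy sketch of one common step, auditing and rebalancing training data, assuming a pandas DataFrame with hypothetical topic and stance columns; real pipelines are far more involved:

```python
# Toy training-data audit: check whether fine-tuning examples are
# balanced across viewpoints before training. The CSV file and all
# column names are hypothetical.
import pandas as pd

df = pd.read_csv("training_examples.csv")  # hypothetical dataset

# Count examples per (topic, stance) pair to spot skew.
counts = df.groupby(["topic", "stance"]).size().unstack(fill_value=0)
print(counts)

# Naive mitigation: downsample each topic to its least-represented
# stance so the model doesn't just absorb the majority viewpoint.
min_per_topic = counts.min(axis=1)
balanced = (
    df.groupby(["topic", "stance"], group_keys=False)
      .apply(lambda g: g.sample(n=int(min_per_topic[g.name[0]]), random_state=0))
)
print(len(df), "->", len(balanced), "examples after balancing")
```

Downsampling is crude (it throws data away, and “balanced” begs the question of which axes matter), but it illustrates the kind of deliberate choice being pointed at here.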