xAI publishes system prompts for Grok on GitHub, including telling Grok to be “extremely skeptical” and not to “blindly defer to mainstream authority or media”
Submitted 3 weeks ago by Pro@programming.dev to technology@lemmy.world
https://github.com/xai-org/grok-prompts
Comments
HubertManne@piefed.social 3 weeks ago
More and more AI is going to be tailored to tell people what they want to hear.
sundray@lemmus.org 3 weeks ago
Ulrich@feddit.org 3 weeks ago
Written by someone who does not understand AI prompts. Chatbots do not have any core beliefs.
Robin@lemmy.world 3 weeks ago
It doesn’t have core beliefs, but it will try to imitate the average person who boldly states on the internet that they stick to their core beliefs. Not sure what sort of group it would end up imitating tho.
SpikesOtherDog@ani.social 3 weeks ago
It is entirely possible that the training process etched “core beliefs” into the model.
besselj@lemmy.ca 3 weeks ago
LLMs have no more beliefs than a parrot does. They just repeat whatever opinions/biases exist in their training data.
SoftestSapphic@lemmy.world 3 weeks ago
Humans can be held accountable.
tal@lemmy.today 3 weeks ago
Less. A parrot can believe that it’s going to get a cracker.
You could make an AI that had that belief too, and an LLM might be a component of such a system, but our existing systems don’t do anything like that.
echodot@feddit.uk 2 weeks ago
I know someone with a parrot; he definitely has core beliefs, mostly about food and how much attention you should pay to him.
scratchee@feddit.uk 2 weeks ago
Or maybe this prompt will make it pretend that it has core beliefs, which is perhaps good enough for their purposes. Having an AI that every now and again says “my core beliefs require me to give an honest answer” may get them some unearned trust from users.