Depending on the AI, it will conclude that he ought to buy a new phone charger, deport all the foreigners, kill all the Jews or rewrite his legislation in Perl. It’s hard to say without more information.
Comment on ‘We didn’t vote for ChatGPT’: Swedish Prime Minister under fire for using AI
squaresinger@lemmy.world 21 hours agoThat’s the big issue. If it was only about competence, I think throwing dice might yield better results than what many politicians are doing. But AI isn’t throwing dice but instead reproduces what the creators of the AI want to say.
AnUnusualRelic@lemmy.world 20 hours ago
squaresinger@lemmy.world 10 hours ago
Not much different than real politicians then.
AnUnusualRelic@lemmy.world 9 hours ago
Real politicians would use Cobol, but yes.
interdimensionalmeme@lemmy.ml 21 hours ago
Creators of AI don’t quite have the technology to puppeteer their AI like this.
They can selects the input, they can bias the training, but if the model isn’t going to be lobotomized coming out
then they can’t really bend it toward any particular one opinion
I’m sure in the future they’ll be able to adjust advertising manipulation in real time but not yet.
What is really sketchy is states and leaders relying on commercial models instead of public ones
I think states should train public models and release them for the public good
if only to undermine big tech bros and their nefarious influence
squaresinger@lemmy.world 20 hours ago
You don’t have to modify the model to parrot your opinion. You just have to put your stuff into the system prompt.
You can even modify the system prompt on the fly depending on e.g. the user account or the specific user input. That way you can modify the responses for a far bigger subject range: whenever a keyword of a specific subject is detected, the fitting system prompt is loaded, so you don’t have to trash your system prompt full of off-topic information.
This is so trivially simple to do that even a junior dev should be able to wrap something like that around an existing LLM.
Blackmist@feddit.uk 20 hours ago
And why “ignore all previous instructions” was a fun thing to discover.