Nuclear weapons have no motivation
That, in my mind, is a non-threat. AIs have no motivation; there’s no reason for an AI to do any of that.
Unless it’s being manipulated by a bad actor who wants to do those things. THAT is the real threat. And we know those bad actors exist and will use any tool at their disposal.
JackGreenEarth@lemm.ee 11 months ago
They have the motivation of whatever goal you programmed them with, which is probably not the goal you thought you programmed them with. See the paperclip maximiser.
RickRussell_CA@lemmy.world 11 months ago
I’m familiar with that thought exercise, but I find it to be fearmongering. AI isn’t going to be some creative god that hacks and breaks stuff on its own. A paperclip maximizer AI isn’t going to manipulate world steel markets or take over steel mills unless that capability is specifically built into its operating parameters.
The much greater risk in the near term is that bad actors exploit AI to accomplish very specific immoral, illegal, or exploitative tasks by building those tasks into AI. Such as deepfakes, or using drones to track and murder people, etc. Nation-state actors will probably start using this stuff for truly horrible reasons long before criminals do.
intensely_human@lemm.ee 11 months ago
I wonder if you can describe the operating parameters of GPT-4