artyom@piefed.social 19 hours ago
They absolutely do not.
thebestaquaman@lemmy.world 19 hours ago
I mean, there’s a good reason the first rules of firearm safety are to always treat a weapon as loaded, and to never direct the weapon at something you aren’t prepared to destroy. The key point being that you never know when some freak accident can happen with a loose pin, bad ammo, a broken spring, or just a person tripping and shaking the gun a bit too hard.
A gun should never go off by itself. You still treat it as if it can, because in the real world freak accidents happen.
artyom@piefed.social 19 hours ago
Sure. The point is it’s entirely possible to use a firearm safely. There is no safe use for LLMs because they “make decisions” for themselves, for lack of a better phrase, without any user input.
etchinghillside@reddthat.com 19 hours ago
That is not at all how LLMs work. It’s the software written around LLMs that aids them in constructing and running commands and “making decisions”. That same software can also prompt the user to confirm whether it should do something, or sandbox the actions in some way.
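To make that concrete, here’s a minimal sketch of that kind of wrapper, not any vendor’s actual code; `propose_command` is a hypothetical stand-in for a real model API call:

```python
import subprocess

def propose_command(prompt: str) -> str:
    # Hypothetical stand-in: a real agent would send `prompt` to an LLM API
    # and get back a suggested shell command as plain text.
    return "ls -l"

def run_with_confirmation(prompt: str) -> None:
    command = propose_command(prompt)
    print(f"Model suggests running: {command}")
    # The confirmation gate lives here, in ordinary wrapper code;
    # the model itself never touches the shell.
    if input("Allow? [y/N] ").strip().lower() != "y":
        print("Skipped.")
        return
    subprocess.run(command, shell=True, check=False)

if __name__ == "__main__":
    run_with_confirmation("tidy up my project directory")
```

The point is that the gate is plain application code: the model only ever suggests text, and nothing executes unless the human types “y”. Whether a given tool includes that gate is a choice its developers make.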
4am@lemmy.zip 18 hours ago
“Guns are foolproof”
You should have yours taken away.
artyom@piefed.social 18 hours ago
They are not foolproof. They will absolutely cause problems in the hands of a fool. But they will not cause problems all on their lonesome. They’re inanimate objects. They can do absolutely nothing without interaction from the user. If you can’t understand this, you should never be allowed to own one.
Bluescluestoothpaste@sh.itjust.works 16 hours ago
And neither can Anthropic. Anthropic isn’t randomly deleting people’s websites; the kid gave it bad instructions. It didn’t spontaneously decide anything.