XLE@piefed.social 1 week ago

AI companies are definitely aware of the real risks. It’s the imaginary ones ("what happens if AI becomes sentient and takes over the world?") that I imagine they’ll put that money towards.

Meanwhile they (intentionally) fail to implement even a simple cutoff switch for a child who's expressing suicidal ideation. Most people with any programming knowledge could build a decent interception tool (a rough sketch below). All this talk about guardrails seems almost as fanciful.
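To be concrete about what I mean by "interception": a minimal sketch of a cutoff layer that sits in front of the model. The pattern list, response text, and `handle_user_message` hook are all placeholders I made up for illustration; a real system would need a trained classifier and clinical input, not a hard-coded keyword list.

```python
import re
from typing import Callable, Optional

# Illustrative phrase list only; a production system would use a trained
# classifier reviewed by clinicians, not hard-coded keywords.
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bsuicid(e|al)\b",
    r"\bwant to die\b",
]

CRISIS_RESPONSE = (
    "It sounds like you may be going through something serious. "
    "Please reach out to a crisis line or a trusted adult right now."
)


def intercept(message: str) -> Optional[str]:
    """Return a crisis response if the message matches a pattern, else None."""
    lowered = message.lower()
    for pattern in CRISIS_PATTERNS:
        if re.search(pattern, lowered):
            return CRISIS_RESPONSE
    return None


def handle_user_message(message: str, model_reply_fn: Callable[[str], str]) -> str:
    """Hard cutoff: if a crisis pattern matches, the model never sees the message."""
    crisis = intercept(message)
    if crisis is not None:
        return crisis
    return model_reply_fn(message)


if __name__ == "__main__":
    # Toy stand-in for the actual chat model.
    echo_model = lambda m: f"model reply to: {m}"
    print(handle_user_message("what's the weather like", echo_model))
    print(handle_user_message("I want to end my life", echo_model))
```

That's an afternoon of work, and it errs on the side of cutting the conversation off rather than letting the model keep talking, which is the whole point of a cutoff switch.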
