ClosedAI
Submitted 3 days ago by mesamunefire@piefed.social to technology@lemmy.world
Comments
nomadman@piefed.social 2 days ago
BrianTheeBiscuiteer@lemmy.world 2 days ago
OpenYerFuckinWalletAI
cecilkorik@lemmy.ca 2 days ago
More financial engineering to try to obscure the fact that they’re all caught in a rapidly expanding bubble that they’ve lost any hope of controlling.
Cryan24@lemmy.world 2 days ago
To the surprise of absolutely no-one.
EnsignWashout@startrek.website 2 days ago
It’s often the ones we most suspected.
mesamunefire@piefed.social 3 days ago
I can't find too many good articles with in-depth coverage, but there's a bunch confirming the move.
ryper@lemmy.ca 3 days ago
mesamunefire@piefed.social 3 days ago
Thanks!
XLE@piefed.social 3 days ago
Paying yourself to promote your own product. Promising to fix vague “risks” that make the product sound more powerful than it is, with “fixes” that won’t be measurable.
In other words, Sam is cutting a $25 billion check to himself.
etherphon@piefed.world 3 days ago
So they're already aware of the risks; AI companies are being run with the same big oil/big tobacco playbook lol. You can have all the fancy new technology, but if the money is still coming from the same group of rich inbred douchebags it doesn't matter, because it will turn to shit.
XLE@piefed.social 2 days ago
AI companies are definitely aware of the real risks. It’s the imaginary ones ("what happens if AI becomes sentient and takes over the world?") that I imagine they’ll put that money towards.
Meanwhile they (intentionally) fail to implement even a simple cutoff switch for a child who's expressing suicidal ideation. Most people with any programming knowledge could build a decent interception tool. All this talk about guardrails seems almost as fanciful.