ClosedAI
Submitted 1 month ago by mesamunefire@piefed.social to technology@lemmy.world
Comments
nomadman@piefed.social 1 month ago
BrianTheeBiscuiteer@lemmy.world 1 month ago
OpenYerFuckinWalletAI
cecilkorik@lemmy.ca 1 month ago
More financial engineering to try to obscure the fact that they're all caught in a rapidly expanding bubble they've lost any hope of controlling.
EnsignWashout@startrek.website 1 month ago
It’s often the ones we most suspected.
Cryan24@lemmy.world 1 month ago
To the surprise of absolutely no one.
mesamunefire@piefed.social 1 month ago
I can't find many good articles with in-depth coverage, but there are a bunch confirming the move.
ryper@lemmy.ca 1 month ago
mesamunefire@piefed.social 1 month ago
Thanks!
XLE@piefed.social 1 month ago
Paying yourself to promote your own product. Promising to fix vague “risks” that make the product sound more powerful than it is, with “fixes” that won’t be measurable.
In other words, Sam is cutting a $25 billion check to himself.
etherphon@piefed.world 1 month ago
So they're already aware of the risks; AI companies are being run with the same big oil/big tobacco playbook, lol. You can have all the fancy new technology, but if the money is still coming from the same group of rich inbred douchebags, it doesn't matter, because it will turn to shit.
XLE@piefed.social 1 month ago
AI companies are definitely aware of the real risks. It’s the imaginary ones ("what happens if AI becomes sentient and takes over the world?") that I imagine they’ll put that money towards.
Meanwhile they (intentionally) fail to implement even a simple cutoff switch for a child who's expressing suicidal ideation. Most people with any programming knowledge could build a decent interception tool. All this talk about guardrails seems almost as fanciful.