Comment on Anthropic ditches its core safety promise in the middle of an AI red line fight with the Pentagon
corsicanguppy@lemmy.ca 4 days ago
I think Anthropic is in the "ah shit, we're dying" startup stage. They're not looking at profit so much as at not losing ALL of their shirt in the coming crash. So they pivoted and ditched their morals. Hate them for that.
Don’t hate them for being greedy yet: they’re not there. Maybe they’ll never get there. But they’ve lived long enough to become the thing they strove to destroy, so that’s a milestone.
vacuumflower@lemmy.sdf.org 3 days ago
People are talking about AI killbots and an upcoming crash at the same time, while also complaining about AI slop and vibe coding.
Sorry, but if something is usable for making killbots, there will be no crash. And AI slop proves that making slop is useful to someone. And vibe coding proves that someone is getting things working in production with those tools. Saying that quality suffers is like saying that cob houses aren’t comparable to brick houses, or vice versa. Both exist. There are places where cob-based construction techniques are still common.
But the most important reason is the first one: if some technique gives you a more convenient and sharper stick for killing someone from another tribe, then that technique stays on as the tribe’s cherished wisdom.
That LLMs consume too many resources? You might have noticed there’s huge room for optimization. They are easy to parallelize, and we are in the market-capture stage, which means optimization is not yet a priority. When it becomes one, there may come a moment when all the arguments that operations cost more in resources than they return in profit, and that the whole thing is funded by investors, suddenly stop being true.
I have been converted. Converted back, one might say: there was a time, around 2011–2014.
Corkyskog@sh.itjust.works 2 days ago
You clearly don’t understand how finance works, or how leveraged these incestuous deals are. It’s perfectly possible for AI to make killbots and for an AI economic crash to happen.
The industry needs to make trillions of dollars to pay off its creditors and to achieve the profit its investors need to make this worthwhile. That only happens if most white-collar workers are replaced with AI.
vacuumflower@lemmy.sdf.org 2 days ago
You might want to consult a history book. There are a few recurring themes there; silent leges inter arma ("in times of war, the law falls silent") and vae victis ("woe to the vanquished") capture most of them. New weapons might change the intensity of wars all around the world, because they let those who own them avoid loss of life entirely, while those who don’t own them pay with lives to deal damage that doesn’t even upset their adversary. Which will bring enormous profits, just not to everyone: only to those who conquer. Finance is not all you need for this subject.
On a humanist note, in "drone army against drone army wars of the future" scenarios, loss of life might be so small that pain and death in war are reduced to cases of deliberate sadism. Meaning that … again, there’ll be more war.
No, because profits are made not only from replacing existing mechanisms but also from building new ones.
Specifically, most people don’t use computers as the truly general meta-machines they are. They use them as platforms for running specialized applications.
But LLMs, however expensive in resources, change that. They make computers meta-machines for everyone.
And in some races you want to be further from the rear, not closer to the front. If this technology promises a profound crash in any case (because, suppose, it brings about planet-wide totalitarianism), those investments might represent a rush to avoid getting eaten completely in the future. Losing less, not gaining more.