Comment on Anthropic ditches its core safety promise in the middle of an AI red line fight with the Pentagon

vacuumflower@lemmy.sdf.org 3 days ago

People talk about AI killbots and an upcoming crash at the same time, while also complaining about AI slop and vibe coding.

Sorry, but if something is usable for making killbots, there will be no crash. AI slop proves that making slop is useful to someone. And vibe coding proves that someone is shipping working things to production with those tools. Saying that quality suffers is like saying that cob houses don’t compare to brick houses, or vice versa. Both exist. There are places where cob construction is still common.

But the most important reason is the first one: if some technique gives you a more convenient, sharper stick for killing someone from another tribe, then that technique becomes the tribe’s cherished wisdom.

As for LLMs consuming too many resources … you might have noticed there’s huge room for optimization. They are easy to parallelize, and we are in the market-capture stage, which means optimization is not yet a priority. When it becomes one, there may come a moment when all the arguments, that operations cost more in resources than they earn in profit, and that it’s all being funded by investors, suddenly stop being true.

I have been converted. Converted back, one might say; there was a time around 2011-2014.
