If we get killed by autocomplete we deserve to die.
Comment on AI's Future Hangs in the Balance With California Law
Sgagvefey@lemmynsfw.com 4 months ago
Though it sounds extreme, there are a lot of smart people in the AI community who truly believe AI could end humanity.
No. There are not.
Believing that anything resembling current tools has the capacity to end humanity is incontrovertible proof that you are not smart.
DudeImMacGyver@sh.itjust.works 4 months ago
AI takes a crazy amount of power, which is largely fueled by the same fossil fuels that are indeed killing us off and destroying our habitat, which will kill even more of us, so AI could definitely indirectly kill off humanity.
5C5C5C@programming.dev 4 months ago
AI person reporting in. Without saying whether or not I personally believe that the current tools will lead to the end of humanity, I’ll point out a few possibilities that I find concerning about what’s going on:
The hype around AI is being used to justify mass layoffs, where humans are being replaced by tools that do a questionable job and can’t really understand the things those humans could understand. Whether or not the AI can do as good of a job according to some statistical measurement is less relevant than the fact that a human is less likely to make an extremely grave mistake and more likely to be able to recognize when that does happen. I’m concerned this will lead to cross-industry enshittification on an unprecedented scale.
The foundation models consume a huge amount of energy. The more impressive you want them to be, the more energy they need. As long as the data centers that run them are dependent on fossil fuels, they’ll be pumping a huge amount of carbon into the air just to replace jobs that we didn’t need to have replaced.
As these tools are used more and more, they’re going to end up “learning” from content created by themselves instead of something that’s closer to a ground truth. It’s hard to predict what kind of degradation of service will come from this, but the more we create systems that rely on these tools, the more harm it will do to us.
Given the cost and nature of these tools, they’re likely to yield the most benefit to moneyed interests that want to automate the systems that maintain their power and wealth. E.g. generating large amounts of convincing disinformation to manipulate the public into supporting politicians or policies that benefit a small number of wealthy people in the short term while locking humanity into a path towards destruction.
And none of this accounts for possible future iterations of AI tools that may be far more capable than what exists today. That future technology will most likely be controlled by powerful people who are primarily interested in using it to bolster the systems that keep them in power, to the detriment of humanity as a whole.
Personally I’m far less concerned about a malicious AI intentionally doing harm to humanity than AI being used as a weapon by unscrupulous people.
unconfirmedsourcesDOTgov@lemmy.sdf.org 4 months ago
I agree with everything you said and wanted to point out that you offered quite a compelling argument that even current AI tools are capable of significant amounts of damage without even touching on the autonomous weapons systems that are starting to be deployed.
Not even just talking about the military intelligence systems that may or may not have been deployed (Israel: Lavender et al), but we’re starting to show off weapons platforms that may someday be empowered to perform their own threat analysis and take real world actions accordingly. That shit is terrifying in more of a Terminator/Matrix way than anything else imo.
afraid_of_zombies@lemmy.world 4 months ago
Ban stock buybacks, abolish non-competes, fine the CEO and major stockholders personally for layoffs.
Nuclear power, renewables, carbon tax
Not really our problem; it is their problem.
Restore the fairness doctrine; limit the ability of groups like Sinclair.
Got any other impossible to solve issues let me know.
5C5C5C@programming.dev 4 months ago
I never suggested these problems are impossible to solve, but you haven’t solved them in your post because you haven’t laid out how to overcome the political and economic resistance to implementing any of this, and that’s where the biggest challenge is.
That said, I think it’s naive to believe that nuclear power and renewable energy can allow us to keep consuming energy recklessly. Renewable energy technology still puts a significant strain on the environment, in terms of mining rare-earth elements, pollution produced during manufacturing, and material waste from devices that have reached end of life. Nuclear energy is rife with controversy… I used to be firmly in support of it, but I’ve grown skeptical, largely because of the ecological damage from the mining and construction processes, and we don’t have a clear story of what end of life looks like for a nuclear power plant. A plant can only be expected to operate for 40-60 years, at which point it needs to be demolished and rebuilt, repeating the massive costs of material waste and construction all over again.
At the end of the day the only way for humanity to survive is for everyone to reduce their consumption, but I honestly think the vast majority of people today would rather die and take everyone else down with them than accept more responsible consumption habits.
afraid_of_zombies@lemmy.world 4 months ago
That isn’t my job.
Mind showing me where I said that? Cause I am pretty freaken sure I mentioned a carbon tax and incentives for companies to generate their own power.
More tech. Let me know when you have an actual challenge.
So are vaccines.
Are you being serious right now? I am in infrastructure and 40 years is well beyond the scope of anything I build. Get me a freaken Bible and I will swear on it: the waste system in your area can never ever ever last 4 decades. They are constantly having to rip it all apart and rebuild. 11 years is what I typically hope for. Find me a wet well that is 4 decades old, find me a pump, find me a screw conveyor, find me a metering pump, find me a shredder, find me the UPS/generator, find me a DCS, that lasts 40 years. I am pretty tempted to share your comment with the office tomorrow, so we can have a good laugh at it.
Now compare that to nuclear, where everything is overbuilt and everything is accounted for. No one improvises. Stuff in nuclear plants outlasts everything else. I have worked on very non-critical systems for nuclear plants and had to follow the strictest rules of my career. It takes a certain level of insanity to specify what type of tape should be used on a bundle of wires.
Guys at nuclear plants are freaken artisans, unionized, paid the highest in the industry for a reason.
Got to love this site sometimes. Nowhere else can I hear people argue that highly trained people getting paid very well is somehow evil, while being told that everything was so freaken perfect during the dark ages.
Sgagvefey@lemmynsfw.com 4 months ago
Zero of these things are impacted by this legislation in any way.
This is exclusively the mentally unstable “killer AI” nonsense. We’re not even 1% of 1% of the way to anything resembling agency.
ShittyBeatlesFCPres@lemmy.world 4 months ago
It’s good for marketing, though. “Ah, our software is so powerful, it could destroy humanity! Please pass a bill saying so while we market friendly chatbots to the public while actually making money by selling our products to despots and warmongers that might actually end humanity.”
Sgagvefey@lemmynsfw.com 4 months ago
It’s regulatory capture. Add barriers to entry built on deluded fears to make it difficult for competition and community projects to develop, and you have a monopoly.
5C5C5C@programming.dev 4 months ago
Sure, but this outcome is not at all surprising. There are plenty of smart AI people that have nuanced views of what kind of threat could be posed by recklessly unleashing tools that we don’t fully understand into the hands of people who are likely to do harmful things with them.
It’s not surprising that those valid nuanced concerns get translated into overly simplistic misrepresentations entangled with pop sci fi panic around rogue AI as they try to move into public discourse.
Sgagvefey@lemmynsfw.com 4 months ago
We do fully understand them. Not knowing the exact model they come to doesn’t mean the algorithm has a shred of mystery involved.
It’s autocomplete with a really big training set and a really big model. It cannot possibly develop agency. It’s hundreds of orders of magnitude of complexity short of a human.