Comment on AI-focused tech firms locked in ‘race to the bottom’, warns MIT professor
lily33@lemm.ee 1 year ago
competition too intense
dangerous technology should not be open source
So, the actionable suggestions here are: reduce competition, ban open source. I guess what this article is really about is using fear to make sure AI remains in the hands of a few…
Heresy_generator@kbin.social 1 year ago
It's also about distraction. The whole point of the letter and the campaign behind it is sleight of hand: get the media obsessing over hypothetical concerns about hypothetical future AIs rather than talking about the actual concerns around current LLMs. They don't want the media talking about the danger of deepfaked videos, floods of generated disinformation, floods of generated scams, deepfaked audio scams, and on and on, so they dangle Skynet in front of them instead. The media gladly obsesses over our Terminator-themed future because that's more exciting and generates more clicks than the flood of fake news that is going to dominate every democratic election in the world from now on.
photonic_sorcerer@lemmy.dbzer0.com 1 year ago
But… shouldn’t it? I mean, if everyone had a nuke, the world would look a whole lot different.
lily33@lemm.ee 1 year ago
Since I don’t think this analogy works, you’re going to have to explain what the world would look like if everyone had access to AI technology advanced enough to be comparable to a nuke, versus what it would look like if only a small elite had access to it.
photonic_sorcerer@lemmy.dbzer0.com 1 year ago
Okay, well, if everyone had access to an AGI, anyone could design and distribute a pathogen that could wipe out a significant portion of the population. Then again, you’d have the collective force of everyone else’s AI countering that plot.
I think that putting that kind of power into the hands of everyone shouldn’t be done lightly.
Hanabie@sh.itjust.works 1 year ago
There are papers online on how to design viruses. Now you just need funding for a lab and staff, because this is nothing like Breaking Bad.
Rayspekt@kbin.social 1 year ago
You still can't manufacture it. Your comparison with nukes is actually a good example: the basic knowledge of how a nuke works is out there, yet most people would struggle to refine weapons-grade plutonium.
Knowledge is only one part of doing something.
lily33@lemm.ee 1 year ago
I would say the risk of having AI be limited to the ruling elite is worse, though - because there wouldn’t be everyone else’s AI to counter them.
And if AI is limited to a few, those few WILL become the new ruling elite.
Honytawk@lemmy.zip 1 year ago
Since when does AI translate to being able to create bacteria and stuff?
If having the information on how to do so was enough to create pathogens, we should already have been wiped out because of books and libraries.
serratur@lemmy.wtf 1 year ago
You’re just gonna print the pathogens with the pathogen printer? You understand that getting the information doesn’t mean you’re able to produce it.
Touching_Grass@lemmy.world 1 year ago
Are we back to freaking out about the Anarchist Cookbook?
bioemerl@kbin.social 1 year ago
Your brain is an (NA)GI
Kichae@kbin.social 1 year ago
Let's assume your hypothetical here isn't bonkers: how, exactly, do you propose limiting people's access to linear algebra?
Touching_Grass@lemmy.world 1 year ago
We could all do our taxes for free. Fix grammatical errors. Have pocket legal and medical advice, a niche hobby advisor, a pocket professor, all in one. Or we could ban it because I fear maybe someone will use it to make memes.
Hanabie@sh.itjust.works 1 year ago
You can google how to make a nuke. Of course, you’re gonna have to get your hands on the plutonium, which is something even countries struggle with.
Rayspekt@kbin.social 1 year ago
Then I'll ask AI how to obtain plutonium, checkmate.
But by that point I might just ask the all-knowing AI how I can achieve what I want to with the nuke and cut out the radioactive middle man. Unless the AI tells me to build a nuke, then it's nuke time anyway.
Hanabie@sh.itjust.works 1 year ago
The point I was trying to make is, all the information about viruses and nuclear bombs is already readily available. AI doing the googling for you will not have an actual impact, especially considering what else you’d need to make it all work.
I would assume you get the fear of AI from the news media. Understandable; they have a vested interest in keeping you afraid. AI is gonna steal their ad revenue when you won’t have to visit their shitty websites anymore.
Hanabie@sh.itjust.works 1 year ago
That’s exactly what it is.
thehatfox@lemmy.world 1 year ago
Yes, this is the setup for regulatory capture before regulation has even been conceived. The likes of OpenAI would like nothing more than to be legally declared the only stewards of this “dangerous” technology. The constant doom-laden hype that people keep falling for is all part of the plan.
lily33@lemm.ee 1 year ago
I think calling it “dangerous” in quotes is a bit disingenuous - because there is real potential for danger in the future - but what this article wants is totally not the way to manage that.
foggy@lemmy.world 1 year ago
It would be an obvious attempt at pulling up the ladder if we were to see regulation on AI before we saw regulation on data collection by social media companies. We have already seen the latter weaponized. Why would we regulate something before it gets weaponized when we have other recent tech, unregulated, already being weaponized?
Touching_Grass@lemmy.world 1 year ago
I saw a post the other day about how people crowd-sourced scraping of grocery store prices. They could present a good case for price fixing and collusion. Web scraping is already pretty taboo, and this AI fear-mongering will be the thing that’s used to make it illegal.