Comment on Scientists Need a Positive Vision for AI
givesomefucks@lemmy.world 1 week ago
You know what else would make life better for people?
Accessible healthcare…
You know why that’s better than AI? We don’t need to burn the planet down to use it, after spending billions to get it going.
Artisian@lemmy.world 1 week ago
I strongly agree. But I also see the pragmatics: we have already spent the billions, there is (anti-labor, anti-equality) demand for AI, and bad actors will spam any system that takes novel text generation as proof of humanity.
So yes, we need a positive vision for AI so we can deal with these problems. For the record, AI has applications in healthcare accessibility. Translation, and navigation of bureaucracy (including automating the absurd hoops insurance companies insist on; make the insurance companies deal with the slop), come immediately to mind.
givesomefucks@lemmy.world 1 week ago
Artisian@lemmy.world 1 week ago
I think the argument is that, like with climate, it’s really hard to get people to just stop. They must be redirected with a new goal. “Don’t burn the rainforests” didn’t change oil company behavior.
givesomefucks@lemmy.world 1 week ago
The problem is that instead of finding better ways to stop it (regulations), you’re looking for “productive” ways to use it…
Apparently because you’ve pre-emptively given up.
But if you succeed, it would lead to more AI and more damage to our planet.
I fully understand you believe you have good intentions, I’m just struggling to find a way to explain to you that intentions don’t matter. And I don’t think I’m going to come up with a way you’ll be able to understand.
It’s like if someone were stuck in a hole in the ground and, instead of wanting to climb out, you yanked everyone else back into the hole when they tried to leave, and kept trying to get them to help you redecorate the hole.
I truly hope someone can present that in a way that gets through to you, because you are doing real damage.
Alphane_Moon@lemmy.world 1 week ago
I am genuinely curious why you think we need a positive vision for AI.
I say this as someone who regularly uses LLMs for work (more as a supplement to web searching) and uses “AI” in other areas as well (low resolution video upscaling). There are also many other very interesting use cases (often specialized) that tend to be less publicized than LLM related stuff.
I still don’t see why we need a positive vision for AI.
From my perspective, “AI” is a tool; it’s not inherently positive or negative. But as things stand right now, the industry is dominated by oligarchs and conmen types (although they of course don’t have a monopoly in this area). And since we don’t really have a way to rein in the oligarchs (i.e. make them take responsibility for their actions), the discussion around a positive vision seems almost irrelevant. Let’s say we do have a positive vision for AI (I am not even necessarily opposed to such a vision); my question would be, so what?
Perhaps we are just talking about different things. :)
Artisian@lemmy.world 1 week ago
I am primarily trying to restate or interpret Schneier’s argument and bring the link into the comments. I’m not sure I’m very good at it.
He points out a problem which is more or less exactly as you describe it. AI is on a fast track to be exploited by oligarchs and tyrants. He then makes an appeal: we should not let this technology, which is a tool just as you say, be defined by the evil it does. His fear is: “that those with the potential to guide the development of AI and steer its influence on society will view it as a lost cause and sit out that process.”
That’s the argument, afaict. I think the “so what” is something like: scientists will do experiments and analysis and write papers that inform policy, inspire subversive use, and otherwise use the advantages of the quick to make gains against the strong. See the four action items they call for.
Alphane_Moon@lemmy.world 1 week ago
Thanks.
Can’t say I agree, though. I can’t think of any historical examples where a positive agenda in and of itself made a difference.
One example would be industrialization at the end of the 19th century and the first part of the 20th century. One could argue it was far more disruptive of pre-industrial society (railroads, telegraph, radio, mass production) than the information age is now.
Clearly industrialization enabled mass benefits in society, but it took WW1/WW2 and the rise of uncompromising, brutal revolutionary regimes for societies to come to terms with the pros and cons of industrial society and find a middle path of sorts (until the next disruption).
Let’s hope it doesn’t get to that point in our times. That being said, the current oligarch regime comes off as even more self-assured than the beneficiaries of early industrial society (Gilded Age oligarchs in the US, the Romanov dynasty in Tsarist Russia).
The current batch of oligarchs has the benefit of hindsight, and yet there is no end to their hubris, with Bezos talking about millions living in space and comically stupid projects like data centres in orbit and Simpsons-style “block the sun” schemes to address climate change.
demonsword@lemmy.world 1 week ago
sunk cost fallacy
Artisian@lemmy.world 1 week ago
I think of it more like the genie being out of the lamp. It’s now very cheap to fine-tune a huge model and deploy it. Policy and regulation need to deal with that fact.