… in the United States, public investment in science seems to be redirected and concentrated on AI at the expense of other disciplines. And Big Tech companies are consolidating their control over the AI ecosystem. In these ways and others, AI seems to be making everything worse.
This is not the whole story. We should not resign ourselves to AI being harmful to humanity. None of us should accept this as inevitable, especially those in a position to influence science, government, and society. Scientists and engineers can push AI towards a beneficial path. Here’s how.
The essential point is that, as with the climate crisis, a vision of what positive future outcomes look like is necessary to actually get things done: things the technology could do that would make life better. The authors give a handful of examples and broad categories of activities that can help steer what gets done.
givesomefucks@lemmy.world 1 week ago
You know what else would make life better for people?
Accessible healthcare…
You know why that’s better than AI? We don’t need to burn the planet down to use it after spending billions to get it going
Artisian@lemmy.world 1 week ago
I strongly agree. But I also see the pragmatics: we have already spent the billions, there is (anti-labor, anti-equality) demand for AI, and bad actors will spam any system that takes novel text generation as proof of humanity.
So yes, we need a positive vision for AI so we can deal with these problems. For the record, AI has applications in healthcare accessibility. Translation and navigation of bureaucracy (including automating the absurd hoops insurance companies insist on; make the insurance companies deal with the slop) come immediately to mind.
Alphane_Moon@lemmy.world 1 week ago
I am genuinely curious why you think we need a positive vision for AI.
I say this as someone who regularly uses LLMs for work (more as a supplement to web searching) and uses “AI” in other areas as well (low-resolution video upscaling). There are also many other very interesting use cases (often specialized) that tend to be less publicized than LLM-related stuff.
I still don’t see why we need a positive vision for AI.
From my perspective, “AI” is a tool; it’s not inherently positive or negative. But as things stand right now, the industry is dominated by oligarchs and conmen types (although they of course don’t have a monopoly in this area). But since we don’t really have a way to rein in the oligarchs (i.e. make them take responsibility for their actions), the discussion around a positive vision almost seems irrelevant. Let’s say we do have a positive vision for AI (I am not even necessarily opposed to such a vision), but my question would be, so what?
Perhaps we are just talking about different things. :)
demonsword@lemmy.world 1 week ago
sunk cost fallacy