Success would lead to AI use that properly accounted for its environmental impact and had to justify its costs. That likely means much AI use stopping, and broader reuse of the models we've already invested in (less competition in the space, please).
The main suggestion in the article is regulation, so I don’t feel particularly understood atm. The practical problem is that, like oil, LLMs can be run locally at a variety of scales. They also provide something that some people want a lot:
- Additional (poorly done) labor. Sometimes that’s all you need for a project
- Emulation of proof-of-work signals for existing infrastructure (e.g., job applications)
- Translation and communication customization
It’s thus extremely difficult to regulate into non-existence globally (and would probably be bad if we did). So effective regulation must include persuasion and support for the folks who would most benefit from using it (or you need a huge enforcement effort, which I think has its own downsides).
The problem is that even if everyone else leaves the hole, there will still be these users. Just like with drug use, piracy, or gambling, it’s easier to regulate when we provide a central, easy-to-access service and do harm reduction. To do this you need a product that meets the needs and mitigates the harms.
Persuading me I’m directionally wrong would require such evidence as:
- Everyone does want to leave the hole (hard; I know people who don’t, and anti-AI messaging thus far has been more about signaling than persuasion)
- That LLMs really can’t be run locally, or can be made prohibitively difficult to run locally (hard; the Internet provides too much training data, and making computing time expensive has a lot of downsides)
- Proposed regulation that would actually be enforceable at reasonable cost (haven’t thought hard about it, maybe this is easy?)
frongt@lemmy.zip 1 week ago
I don’t think we should stop it, exactly. Just tax it appropriately based on environmental impact.
If that makes it prohibitively expensive, no great loss.