The seal looks like this: [seal image]
Code completion is probably a gray area.
Those models generally have much smaller context windows, so the energy concern isn’t quite as extreme.
You could also reasonably claim that the model is legally in the clear as far as licensing goes, if the training data was entirely code under permissive open source licenses (non-attribution, non-share-alike, commercial use allowed).
That said, I think the general sentiment is less “what the technology does” and more “who it does it to”. Code completion, for the most part, isn’t deskilling labor, or turning experts into accountability sinks.
Like, I don’t think the Luddites would’ve had a problem with an artisan using a knitting frame in their own home. They were too busy fighting against factories locking children inside for 18-hour shifts, getting maimed by the machines or dying trapped in a fire.
fonix232@fedia.io 3 weeks ago
Even yesteryear's code completion systems (the ones that didn't rely on LLMs) are, technically speaking, AI systems.
While the term "AI" has become the next "crypto" or "blockchain", in reality we've been using various AI products for the better part of the past 30 years.
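For a sense of what "AI" meant in those systems, here's a minimal sketch of prefix-based completion over previously seen identifiers (the names and structure are invented for illustration, not any real IDE's implementation):

```python
# Toy prefix-based completion: rank previously seen identifiers
# that start with whatever the user has typed. No learning at all,
# yet systems of this shape shipped as "intelligent" completion.
import re
from collections import Counter

class Completer:
    def __init__(self):
        self.seen = Counter()  # identifier -> occurrence count

    def index(self, source: str):
        # Crude tokenizer: alphanumeric/underscore runs count as identifiers.
        for token in re.findall(r"[A-Za-z_][A-Za-z0-9_]*", source):
            self.seen[token] += 1

    def complete(self, prefix: str, k: int = 5):
        # Most frequent identifiers extending the prefix, best first.
        matches = [t for t in self.seen if t.startswith(prefix)]
        return sorted(matches, key=lambda t: -self.seen[t])[:k]

c = Completer()
c.index("def parse_config(path): return load_config(path)")
print(c.complete("pa"))  # ['path', 'parse_config']
```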
nogooduser@lemmy.world 3 weeks ago
We used to call the code that determined NPC behaviour AI.
It wasn’t AI as we know it now, but it was intended to give vaguely realistic behaviour (such as taking a sensible route from A to B).
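The "sensible route" part was usually plain graph search rather than anything learned. A minimal sketch, assuming a grid world with walls (BFS here for brevity; real games typically used A*):

```python
# Classic "game AI" pathfinding: breadth-first search on a grid.
# 0 = walkable, 1 = wall. Returns a shortest path from start to goal.
from collections import deque

def find_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}  # cell -> predecessor on shortest path
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            # Walk predecessors back to the start, then reverse.
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cur
                frontier.append((nr, nc))
    return None  # no route exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(find_path(grid, (0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```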
4am@lemmy.zip 3 weeks ago
Used to?
yermaw@sh.itjust.works 2 weeks ago
Lol gramps here thinks bots are AI 💀💀 bro
pennomi@lemmy.world 3 weeks ago
And honestly lightweight neural nets can make for some interesting enemy behavior as well. I’ve seen a couple games using that and wouldn’t be surprised if it caught on in the future.
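As a rough illustration of the idea (the weights, features, and actions here are made up, not from any actual game), a tiny hand-rolled net scoring enemy actions could look like:

```python
# Toy "lightweight neural net" enemy brain: one dense layer mapping
# a few game-state features to action scores. Weights are invented;
# in a real game they'd be trained or evolved.
import math

def enemy_action(dist_to_player, own_health, player_health):
    x = [dist_to_player, own_health, player_health]
    # One weight row per action (hypothetical values).
    weights = [[-1.5,  0.8, -0.4],   # attack: favored when close and healthy
               [ 0.3, -0.9,  0.6],   # flee: favored when badly hurt
               [ 0.9,  0.0,  0.0]]   # patrol: favored when far away
    scores = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
    # Softmax just to present the scores as probabilities.
    exps = [math.exp(s) for s in scores]
    probs = [e / sum(exps) for e in exps]
    actions = ["attack", "flee", "patrol"]
    return actions[probs.index(max(probs))], probs

# Close to the player and healthy -> picks "attack".
print(enemy_action(dist_to_player=0.2, own_health=0.9, player_health=0.4))
```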
oatscoop@midwest.social 3 weeks ago
“AI” has become synonymous with “Generative AI”
ulterno@programming.dev 2 weeks ago
They were technically Expert Systems.
AI was the marketing term even then.
Now they are LLMs, and AI is still the marketing term.
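For contrast with the neural-net approach above, a toy controller in that rule-based expert-system style (rules invented purely for illustration) might look like:

```python
# Toy expert system: an ordered list of hand-written if-then rules.
# This is the shape of most pre-LLM "game AI" and business "AI".
RULES = [
    (lambda s: s["health"] < 0.25,                    "flee"),
    (lambda s: s["player_visible"] and s["ammo"] > 0, "attack"),
    (lambda s: s["player_visible"],                   "chase"),
    (lambda s: True,                                  "patrol"),  # default rule
]

def decide(state):
    # First matching rule wins.
    for condition, action in RULES:
        if condition(state):
            return action

print(decide({"health": 0.8, "player_visible": True, "ammo": 3}))  # attack
print(decide({"health": 0.1, "player_visible": True, "ammo": 3}))  # flee
```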