Comment on Indie game developers have a new sales pitch: being ‘AI free’
Maestro@fedia.io 4 months ago
Does this specify the kinds of AI? Are none of these devs using code completion in their IDEs? Or refactoring tools? Because the bulk of them use AI these days.
I’m sure plenty of people have already explained this to you, given the number of downvotes, but algorithms aren’t equal to AI.
Ever since the rise of AI, people seem to have lost the ability to recall things prior to 2019.
I mean, doesn’t it heavily depend on what you refer to as AI?
ML algorithms come very close to LLMs and were, back in the day, referred to as AI. They are also used in code completion.
Also, both of these are algorithms.
If something uses a lot of if/else statements to do stuff like play as the “COM” player in a game, it is called an expert system.
That is essentially what in-game “AI” used to be. It was not an LLM.
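That kind of hard-coded if/else opponent fits in a few lines. Here is a minimal sketch; the state names and thresholds are invented purely for illustration:

```python
# Minimal expert-system-style game "AI": fixed, human-written rules,
# no learning. All states and thresholds here are made up.

def com_player_action(health: int, enemy_distance: float, has_ammo: bool) -> str:
    """Pick an action from hard-coded rules, checked in priority order."""
    if health < 20:
        return "retreat"            # survival rule always wins
    if enemy_distance < 5 and not has_ammo:
        return "melee"
    if has_ammo:
        return "shoot"
    return "search_for_ammo"

print(com_player_action(health=80, enemy_distance=3.0, has_ammo=False))  # melee
```

The behaviour is fully determined by the rules a programmer wrote down, which is exactly why these systems were predictable and cheap to run.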
Stuff like clazy and clang-tidy are neither ML nor LLMs.
They don’t rely on curve fitting or mindless grouping of data points.
Their parameters are decided based on the programming language specification, and tokenisation is done directly using the features of the language. How the tokens are used is also determined by hard logic rather than fuzzy logic, which is why the resulting options in the completion list end up being valid syntax for said language.
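As a toy illustration of that kind of rule-driven completion: candidates come from the language grammar itself, so every suggestion is valid by construction. The keyword list below is a hypothetical stand-in for a real language specification:

```python
# Toy non-ML code completion: suggestions are drawn from a fixed list
# standing in for the language spec, matched by a deterministic rule.
# No training data, no statistics involved.

PYTHON_KEYWORDS = ["def", "del", "elif", "else", "except", "for", "from",
                   "import", "in", "is", "lambda", "not", "return", "while"]

def complete(prefix: str) -> list[str]:
    """Return every keyword starting with the typed prefix."""
    return [kw for kw in PYTHON_KEYWORDS if kw.startswith(prefix)]

print(complete("el"))  # ['elif', 'else']
```

Real completion engines walk a parse tree instead of a flat list, but the principle is the same: the output is correct because the rules encode the language, not because a model was fitted to examples.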
Now if you are using Cursor for code completion, of course that is AI.
It is not programmed using the features of the language, but iterated on until it produces output that happens to match the features of the language.
It is like putting a billion monkeys in front of typewriters, selecting the one that makes something Shakespeare-ish, and killing off all the others. Then cloning the selected one, and rinse and repeat.
And that is why it takes a stupendously disproportionate amount of energy, time and money to train something whose output could often be produced better by a simple bash script.
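That select-clone-mutate loop can be sketched directly. This is a hedged toy version: the target string, population size and mutation rate are arbitrary choices, and real LLM training uses gradient descent rather than this scheme:

```python
import random

# Toy "monkeys at typewriters" selection loop: a tiny genetic algorithm.
# Everything here (target, rates, sizes) is an illustration choice.
random.seed(0)                      # make the run repeatable
TARGET = "to be or not to be"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(s: str) -> int:
    """Count characters matching the target ('Shakespeare-ishness')."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str, rate: float = 0.05) -> str:
    """Clone a string with occasional random typos."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

# A hundred monkeys typing at random.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(100)]

for generation in range(1000):
    best = max(population, key=fitness)
    if best == TARGET:
        break
    # "Kill" everyone else, clone the winner with small mutations
    # (keeping one exact copy so progress is never lost).
    population = [best] + [mutate(best) for _ in range(99)]

print(best)
```

Even this eighteen-character target takes thousands of candidate strings to hit, which gives a feel for why blind search-and-select is so much more expensive than writing the rule down.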
To be honest, I feel like what you describe in the second part is more of a genetic algorithm than a machine learning one, but I get your point.
Quick side note: I wasn’t including energy consumption in the discussion at all, and on that front ML-based algorithms, whatever form they take, will mostly consume more energy (assuming the “classical” algorithms aren’t completely inefficient). I admit I’m not sure how much more (especially after training), but LLMs at least, with their large vector/matrix-based approaches, eat a lot (I mean for cross-checking tokens across different vectors and such). Non-LLM ML may be much more power efficient.
My main point, however, was that people only remember AI from ~2022 onwards and have forgotten the things from before (e.g. non-LLM ML algorithms) that were actively used in code completion. Obviously, there are things like ruff and clang-tidy (as you rightfully mentioned) and more that can work without any machine learning, though I haven’t checked whether they use literally none; I assume so.
On the point of game “AI”, as in AI opponents, I wasn’t talking about that at all (though since DeepMind, they have tended to be a bit more ML-based too, and better at games, see StarCraft 2, instead of cheating only to get an advantage).
You seriously misunderstand what the acronyms you’re using refer to. I’d suggest some reading before commenting, next time.
How so? A Large Language Model is usually a transformer-based approach nowadays, right (correct me if that’s outdated)?
AI is artificial intelligence, a term which has been used and abused for many different things (among others, machine learning), none of which are intelligent right now.
Machine learning is based on linear algebra, e.g. linear regression or other methods depending on what you want to do.
An algorithm is, by definition, anything that follows a recipe, so to speak.
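Both halves of that definition fit in a few lines. Here is a minimal sketch of ML as “an algorithm that tunes its own parameter”: one-variable linear regression fitted by gradient descent, with the data and learning rate made up for illustration:

```python
# Machine learning as a self-tuning recipe: ordinary gradient descent
# on a one-parameter linear model y = w * x. Data and learning rate
# are invented for illustration.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]    # generated by the "true" rule y = 2x

w = 0.0                       # start from a wrong guess
for _ in range(200):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= 0.01 * grad          # the "fine-tunes itself" step

print(round(w, 3))  # 2.0
```

The whole procedure is a fixed recipe, i.e. an algorithm; the only thing “learned” is the number `w`, which is the sense in which ML is a subclass of algorithms rather than something apart from them.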
No because AI replaces a human role.
Code completion does not replace a human role; that’s like saying spell check is AI.
I am not talking about what it does, I am talking about what it is.
And all tools do tend to replace human labor. For example, tractors replaced many farmhands.
The thing we face nowadays, and this is by no means limited to AI, is that new tools create fewer jobs than the old ones they destroy (in my earlier simile, a tractor needs mechanics and such).
The definition of something is (mostly) entirely disconnected from its usage.
And even though everyone now calls LLMs AI, there is plenty of scientific literature on things that were called AI before. When it comes down to it, all of these are algorithms.
The thing with machine learning is just that it is an algorithm that fine-tunes itself. And strictly speaking, LLMs, commonly referred to as AI, are a subclass of ML with new technology.
I have not made, and am not making, any statement about the value of that technology or my stance on it.
But these tools are not mere algorithms or ML products; they are LLM-backed.
Emmet has been around since 2015. So it was definitely not LLM backed.
My friend, nobody says all of them are LLM backed, but some are
Personally speaking I don’t care at all about dev tools, as they have always been used. Vibe coding does bother me though - if you don’t know HOW to code, you probably shouldn’t be doing it.
The real issue though is using AI generated assets. If you have a game that uses human made art, story, and music, no one is going to complain about you using AI. Even if you somehow managed to get there via vibe coding.
Jesus fuck, that’s some goalpost moving.
Here is a frog, please help me split its hairs
The seal looks like this:
Code completion is probably a gray area.
Those models generally have much smaller context windows, so the energy concern isn’t quite as extreme.
You could also reasonably make a claim that the model is legally in the clear as far as licensing, if the training data was entirely open source (non-attribution, non-share-alike, and commercial-allowed) licensed code.
That said, I think the general sentiment is less “what the technology does” and more “who it does it to”. Code completion, for the most part, isn’t deskilling labor, or turning experts into accountability sinks.
Like, I don’t think the Luddites would’ve had a problem with an artisan using a knitting frame in their own home. They were too busy fighting against factories locking children inside for 18-hour shifts, getting maimed by the machines or dying trapped in a fire.
Lovely writing, I agree 👍
This is exactly my thinking. You need to specify. Is a product AI when Windows is used to develop it? Windows is an “AI” product, in the sense that AI assisted in producing it.
Labels are meaningless without sensible rules and enforcement.
Another case of Lemmy users angrily downvoting because they don’t understand how the world works. These are exactly the questions that need to be asked.
Right now, I could slap the label “No AI” on my completely AI-generated game and just claim that I interpret it as “the game doesn’t use LLMs while running.”
I would primarily understand it as being free of generative AI (picture and sound), which is what is most obvious when actually playing a game. I’m personally not against using LLMs for coding if you actually know what you’re doing and properly review the output. However at that point most will come to the conclusion that you could write the code manually anyways and probably save time.
Would using AI to generate samples to get a framework for the product be permitted or not? Is placeholder generation allowed?
Since you would never see it, that’s pretty much irrelevant. Clearly this is about AI-generated art and AI-generated assets.
Whether or not you use AI to grey box something is a pointless distinction given the fact that there’s no way to prove it one way or the other.
But it still removes labor from the working class. My point is that the lines are blurry. You practically cannot draw a useful line based on the tooling used.
Even yesteryear's code completion systems (the ones that didn't rely on LLMs) are, technically speaking, AI systems.
While the term "AI" became the next "crypto" or "Blockchain", in reality we've been using various AI products for the better part of the past 30 years.
They were technically Expert Systems.
AI was the marketing term even then.
Now they are LLMs and AI is still the marketing term.
“AI” has become synonymous with “Generative AI”
We used to call the code that determined NPC behaviour AI.
It wasn’t AI as we know it now but it was intended to give vaguely realistic behaviour (such as taking a sensible route from A to B).
And honestly lightweight neural nets can make for some interesting enemy behavior as well. I’ve seen a couple games using that and wouldn’t be surprised if it caught on in the future.
Used to?
Lol gramps here thinks bots are AI 💀💀 bro
ulterno@programming.dev 4 months ago
I don’t consider clang tools to be AI. They parse the code logically and don’t do blind pattern matching and curve fitting.
The rules they use are properly defined in code.
If that was AI, then all compilers made with LLVM would be AI.