Comment on GOG job listing for a Senior Software Engineer notes "Linux is the next major frontier"
XLE@piefed.social 1 day ago
Doomposting about AI inevitability is only beneficial to AI companies… if your claim is even true. And if it is, we should shame everybody else.
dukemirage@lemmy.world 1 day ago
XLE@piefed.social 1 day ago
Citation needed.
You’re on a post about Linux, an OS that has grown in popularity thanks to Microsoft ruining Windows with the “true aids” you’re promoting here.
dukemirage@lemmy.world 1 day ago
Whatever MS bakes into Windows is not what I listed above. Spin up a local LLM trained on your code base and try using it.
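For anyone who wants to try that, here’s a minimal sketch of querying a locally hosted model through Ollama’s REST API. It assumes `ollama serve` is running and a code model has been pulled; the model name and prompt are illustrative, not anything GOG is known to use.

```python
# Minimal sketch: query a locally hosted LLM via Ollama's REST API.
# Assumes `ollama serve` is running locally and a code model has been
# pulled (e.g. `ollama pull codellama`). Model name and prompt are
# illustrative placeholders.
import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "codellama") -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    # With stream=False the server returns one JSON object whose
    # "response" field holds the full completion.
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

print(ask_local_llm("Explain what this function does: def f(x): return x * 2"))
```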
XLE@piefed.social 1 day ago
No thanks AI bro
Goodeye8@piefed.social 1 day ago
None of what you brought up as positives are things an LLM does. Most of those things existed before modern transformer-based LLMs were even a thing.
LLMs are glorified text prediction engines, and nothing about their nature makes them excel at formal languages. An LLM doesn’t know any rules. It doesn’t have any internal logic. For example, if the training data consistently exhibits the same flawed piece of code, an LLM will spit out that same flawed code, because that’s the most likely continuation of its current “train of thought”. You would have to fine-tune the model around all those flaws and then hope no combination of prompts leads the model back into that flawed data.
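A toy sketch of that failure mode: the corpus and the buggy snippet below are invented, and real models predict over learned distributions rather than raw counts, but the principle is the same.

```python
# Toy illustration of "predict the most likely continuation".
# The "training corpus" is invented: three repos copied the same buggy
# division (no zero check), one got it right.
from collections import Counter

corpus = [
    "def ratio(a, b): return a / b",
    "def ratio(a, b): return a / b",
    "def ratio(a, b): return a / b",
    "def ratio(a, b): return a / b if b else 0",
]

def most_likely_completion(prompt: str) -> str:
    # The degenerate core of next-token prediction: rank every
    # continuation seen in training by frequency and pick the top one.
    continuations = Counter(
        line[len(prompt):] for line in corpus if line.startswith(prompt)
    )
    return prompt + continuations.most_common(1)[0][0]

# No rules, no logic about division by zero: the flawed majority wins.
print(most_likely_completion("def ratio(a, b): return "))
# -> def ratio(a, b): return a / b
```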
I’ve used LLMs to generate SQL, which according to you is something they should excel at, and I’ve had to fix literal syntax errors that prevented the statements from executing. A regular SQL linter would instantly flag that the SQL is wrong, but an LLM can’t catch those errors because it does not understand the syntax.
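For comparison, here’s roughly what that mechanical syntax check looks like; a minimal sketch using Python’s stdlib sqlite3 module (the table and the broken statement are invented examples):

```python
# Minimal sketch of the syntax check a linter performs and an LLM does not.
# Uses Python's stdlib sqlite3; the table and statements are invented examples.
import sqlite3

def check_sql_syntax(statement: str) -> str:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    try:
        # EXPLAIN compiles the statement without executing the query,
        # so pure syntax errors surface immediately.
        conn.execute("EXPLAIN " + statement)
        return "OK"
    except sqlite3.OperationalError as err:
        return f"syntax problem: {err}"
    finally:
        conn.close()

print(check_sql_syntax("SELECT name FROM users WHERE id = 1"))  # OK
print(check_sql_syntax("SELECT name FORM users WHERE id = 1"))  # syntax problem: ...
```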
False@lemmy.world 1 day ago
I’ve seen humans generate code with syntax errors, try to run it, then fix it. I’ve seen LLMs do the same thing; they just do it faster than the human.
Goodeye8@piefed.social 1 day ago
But that extra time is wasted anyway, because humans still have to review the code an LLM generates and fix all the other logical errors it makes; at best, an LLM does exactly what you tell it to do. I’ve worked with a developer who did exactly what the ticket said and nothing more, and it was a pain in the ass because their code always needed double-checking to make sure their narrow focus on one specific problem didn’t break the domain as a whole. I don’t think you gain any productivity with LLMs; you only shift the work from writing code to reviewing code. And I’ve yet to meet a developer who enjoys reviewing code more than writing it, which means the code gets less attention and becomes more prone to bugs.
HarkMahlberg@kbin.earth 1 day ago
We had all of those things before AI, they worked just fine, and they didn’t require 50 exawatts of electricity to run.
stephen01king@piefed.zip 1 day ago
Neither does a locally run LLM.
XLE@piefed.social 1 day ago
Hey Stephen, how do you think they make those models?
(As if you genuinely believe a local model is what GOG would be using anyway.)
4am@lemmy.zip 1 day ago
None of that is “AI”, dumbass. Stop watering down the terminology.
LLMs run from cloud data centers are the thing everyone is against, and that is what the term “AI” means here. No one thinks IntelliSense is AI; no one thinks adding jslint to your CI pipeline is AI.
dukemirage@lemmy.world 1 day ago
I wasn’t talking about existing tools.