If your morals and ethics can be changed by inevitability, what does that say about you?
Comment on GOG job listing for a Senior Software Engineer notes "Linux is the next major frontier"
dukemirage@lemmy.world 1 day ago
If you want to ditch every software company/vendor that uses LLM code tools, you may want to never touch software ever again.
the_q@lemmy.zip 1 day ago
ampersandrew@lemmy.world 1 day ago
Would you have taken a moral stance against automated telephone switchboards or online shopping?
the_q@lemmy.zip 1 day ago
Yeah, if their impact was as negative.
ampersandrew@lemmy.world 1 day ago
Both of those things put a lot of people out of work, but our economy adapted, and there was nothing to be gained by shaming the people embracing the technology that was clearly going to take over. I’m not convinced AI tools are that, but if they are, then nothing can stop it, and you’re shaming a bunch of people who have literally no choice.
dukemirage@lemmy.world 1 day ago
If you think every LLM tool is a product of an overvalued tech bro company, what does that say about you?
the_q@lemmy.zip 1 day ago
The referenced job is clearly talking about the current overvalued tech bro kind, you buffoon.
tjsauce@lemmy.world 1 day ago
That’s only true if HR knew what they were talking about when crafting the listing. I’m not saying GOG will use AI for good, but we don’t know whether the job will involve something like ChatGPT or an in-house tool that isn’t like GPT.
XLE@piefed.social 1 day ago
Doomposting about AI inevitability is only beneficial to AI companies… If your claim is even true. And if it is, we should shame everybody else.
dukemirage@lemmy.world 1 day ago
XLE@piefed.social 1 day ago
Citation needed.
You’re on a post about Linux, an OS that has grown in popularity thanks to Microsoft ruining Windows with the “true aids” you’re promoting here.
dukemirage@lemmy.world 1 day ago
Whatever MS bakes into Windows is not what I listed above. Spin up a local LLM trained on your code base and try using it.
Goodeye8@piefed.social 1 day ago
None of the things you brought up as positives are things an LLM does. Most of them existed before modern transformer-based LLMs were even a thing.
LLMs are glorified text prediction engines, and nothing about their nature makes them excel at formal languages. An LLM doesn’t know any rules. It doesn’t have any internal logic. For example, if the training data consistently exhibits the same flawed piece of code, an LLM will spit out that same flawed code, because that’s the most likely continuation of its current “train of thought”. You’d have to fine-tune the model around all those flaws and then hope no combination of prompts leads it back into that flawed data.
I’ve used LLMs to generate SQL, which according to you is something they should excel at, and I’ve had to fix literal syntax errors that would have prevented the statement from executing. A regular SQL linter would instantly catch that the SQL is wrong, but an LLM can’t, because it does not understand the syntax.
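For what it’s worth, you don’t even need a full linter for that class of error; a minimal sketch in Python (using a throwaway in-memory SQLite schema as an assumption, not whatever database was actually involved) can parse a statement without running it:

```python
import sqlite3


def check_sql_syntax(sql: str):
    """Return an error message if the statement fails to parse, else None."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    try:
        # EXPLAIN forces SQLite to parse and compile the statement
        # without actually running the underlying query.
        conn.execute("EXPLAIN " + sql)
        return None
    except sqlite3.OperationalError as exc:
        return str(exc)
    finally:
        conn.close()


# A broken statement of the kind an LLM might emit: stray comma before FROM.
print(check_sql_syntax("SELECT id, name, FROM users"))  # syntax error message
print(check_sql_syntax("SELECT id, name FROM users"))   # None
```

That’s the whole point: a dumb parser rejects the bad statement deterministically, every time, while a text predictor can happily emit it.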
False@lemmy.world 1 day ago
I’ve seen humans generate code with syntax errors, try to run it, then fix it. I’ve seen LLMs do the same thing; they just do it faster than the human.
HarkMahlberg@kbin.earth 1 day ago
We had all of those things before AI, and they worked just fine and didn’t require 50 exawatts of electricity to run.
stephen01king@piefed.zip 1 day ago
Neither does a locally run LLM.
4am@lemmy.zip 1 day ago
None of that is “AI”, dumbass. Stop watering down the terminology.
LLMs run from cloud data centers are the thing everyone is against, and that’s what the term “AI” means here. No one thinks IntelliSense is AI; no one thinks adding jslint to your CI pipeline is AI.
dukemirage@lemmy.world 1 day ago
I wasn’t talking about existing tools.