Comment on: Why don't these code-writing AIs just output straight up machine code?
naught101@lemmy.world 1 day ago
Strong doubt that AI would be useful for producing improved compilers. That’s a task that would require an extremely detailed understanding of the logical edge cases in translating a given language to machine code. By definition, no content exists that could be useful for training in that context. AIs will certainly try to help, because they are people-pleasing machines. But I can’t see them being actually useful.
JustJack23@slrpnk.net 1 day ago
I agree, but I would clarify that this is true for the current generation of LLMs. AI is a much broader subject.
naught101@lemmy.world 15 hours ago
Yeah, good catch. I know that, but I was forgetting it in the moment.
riskable@programming.dev 1 day ago
Umm… AI has been used to improve compilers dating all the way back to 2004:
github.com/…/Artificial-Intelligence-in-Compiler-…
Sorry that I had to prove you wrong so overwhelmingly, so quickly 🤷
naught101@lemmy.world 15 hours ago
Yeah, as @uranibaba@lemmy.world says, I was using the narrow meaning of AI=ML (as the OP was). Certainly not surprised that other ML techniques have been used.
That Cummins paper looks pretty interesting. I only skimmed the first page, but it looks like they’re using LLMs to estimate optimal compiler parameters? That’s pretty cool. But they also say something about a 91% compliant-code hit rate, and I wonder what’s happening in the other 9%. Noncompliance seems like a big problem? But I only have surface-level compiler knowledge, probably not enough to follow the whole paper properly…
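To make that concrete (this is my own toy sketch, not anything from the paper): I’d guess “compliant” means something like the optimized build still behaving like an unoptimized baseline. If you hand-wave the LLM away and just hardcode a suggested flag set, a naive check might look like this:

```python
# Hypothetical sketch of a "compliance" check for model-suggested
# compiler flags: build at -O0 as the baseline, build again with the
# suggested flags, and compare program output. The flag list, test
# program, and function names here are all made up for illustration.
import os
import subprocess
import tempfile

SOURCE = r"""
#include <stdio.h>
int main(void) {
    long acc = 0;
    for (int i = 0; i < 1000; i++) acc += i * i;
    printf("%ld\n", acc);
    return 0;
}
"""

def compile_and_run(src_path: str, flags: list[str]) -> str | None:
    """Compile src_path with gcc and the given flags; return the
    program's stdout, or None if compilation or execution fails."""
    exe = src_path + ".out"
    try:
        subprocess.run(["gcc", *flags, src_path, "-o", exe],
                       check=True, capture_output=True)
        result = subprocess.run([exe], check=True,
                                capture_output=True, text=True)
        return result.stdout
    except subprocess.CalledProcessError:
        return None
    finally:
        if os.path.exists(exe):
            os.remove(exe)

def main() -> None:
    with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as f:
        f.write(SOURCE)
        src = f.name

    baseline = compile_and_run(src, ["-O0"])

    # Stand-in for the model: in the paper the flag sequence would come
    # from an LLM; here it's just a hardcoded candidate.
    llm_suggested_flags = ["-O2", "-funroll-loops"]

    candidate = compile_and_run(src, llm_suggested_flags)
    if candidate is not None and candidate == baseline:
        print("compliant: optimized output matches baseline")
    else:
        print("non-compliant: this build lands in the other 9%")

    os.remove(src)

if __name__ == "__main__":
    main()
```

Output-equality on one test program is obviously far weaker than whatever the paper actually measures, which is presumably where the hard part lives.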
uranibaba@lemmy.world 1 day ago
Looking at the tags, I only found one entry with the LLM tag, which I assume is the one naught101 meant. I think people here tend to forget that there is more than one type of AI, and that AI techniques have been around for much longer than ChatGPT 3.5.