The Register had an article, a year or two ago, about using AI in the opposite way: instead of writing the code, someone was using it to discover security problems in existing code. They said it was really useful for that, and most of the things it flagged, including one codebase that was sending private information off to some internet server, really were problems.
I wonder if using LLMs as editors, instead of writers, would be a better use for these things?
Whostosay@sh.itjust.works 3 hours ago
A second pair of eyes has always been an acceptable way to use this imo, but it shouldn't be the primary one.