Comment on: Why are people seemingly against AI chatbots aiding in writing code?

OP was able to write a Bash script that works… on his machine 🤷. That's a far cry from writing code that has to be reviewed and shipped to production, whether in FOSS or in private development.
simplymath@lemmy.world 2 months ago
People who use LLMs to write code incorrectly perceived their code to be more secure than code written by expert humans.
petrol_sniff_king@lemmy.blahaj.zone 2 months ago
I also noticed they were talking about passing arguments to a custom function? That's like a day-one lesson if you already program. But this was something they couldn't find with a regular search?
Maybe I misunderstood something.
sugar_in_your_tea@sh.itjust.works 2 months ago
Exactly. If you understand that functions are just commands, then it’s quite easy to extrapolate how to pass arguments to that function:
```bash
function my_func () {
  echo $1 $2 $3  # prints: a b c
}

my_func a b c
```
Once you understand that core concept, a lot of Bash makes way more sense. Oh, and most of the syntax I provided above is completely unnecessary, because Bash…
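(A minimal sketch of what that truncated remark is presumably alluding to: the `function` keyword, among other things, is optional, and Bash also accepts the terser POSIX-style definition.)

```bash
# Presumably what the cut-off comment means: the `function`
# keyword is optional, and "$@" expands to all arguments.
my_func() {
  echo "$@"  # prints: a b c
}

my_func a b c
```

(This form also runs under plain POSIX sh, not just Bash.)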
JasonDJ@lemmy.zip 2 months ago
Hmm, I’m having trouble understanding the syntax of your statement.
Is it

A: (People who use LLMs to write code incorrectly) (perceived their code to be more secure) (than code written by expert humans.)

Or is it

B: (People who use LLMs to write code) (incorrectly perceived their code to be more secure) (than code written by expert humans.)
nfms@lemmy.ml 1 month ago
The “statement” was taken from the study.
We conduct the first large-scale user study examining how users interact with an AI Code assistant to solve a variety of security related tasks across different programming languages. Overall, we find that participants who had access to an AI assistant based on OpenAI’s codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant. Furthermore, we find that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g. re-phrasing, adjusting temperature) provided code with fewer security vulnerabilities. Finally, in order to better inform the design of future AI-based Code assistants, we provide an in-depth analysis of participants’ language and interaction behavior, as well as release our user interface as an instrument to conduct similar studies in the future.
simplymath@lemmy.world 2 months ago
I intended B, but A is also true, no?
sugar_in_your_tea@sh.itjust.works 2 months ago
Lol.
We literally had an applicant use AI in an interview and fail the same step twice. At the end we asked how confident they were in their code, and they said "100%" (we were hoping they'd say they wanted time to write tests). Oh, and my coworker and I each found two different bugs just by reading the code. That candidate didn't move on to the next round. We've had other applicants write buggy code, but they at least said they'd want to write some tests before they were confident, and they didn't use AI at all.

I thought that was just a one-off; it's sad if it's actually more common.