New Vulnerability in GitHub Copilot and Cursor: How Hackers Can Weaponize Code Agents
Submitted 2 days ago by AppleStrudel@reddthat.com to techsploits@reddthat.com
Jayjader@jlai.lu 2 days ago
That is a bit overblown. If your “review” phase happens only once the code is committed, pushed, and done through the GitHub online interface, then sure, but I’d argue in that case that your development process needs to be overhauled. Who commits without reviewing what they’re including in the commit?! An extra script tag with a huge URL like that should jump out at you, scream in your face “this doesn’t feel right”, etc.
At some point people need to be responsible with what they’re doing. There’s no software that can fix laziness nor ignorance.
AppleStrudel@reddthat.com 2 days ago
That was a toy example; a real-life malicious prompt can be engineered to be a whole lot subtler than this, for example:
And when an AI will happily generate 300+ lines of code when you simply ask for some bootstrap you intended to fill in yourself, and will happily continue generating hundreds more if you aren’t careful when chatting with it, subtle little things can and do slip through.
That prompt is a little something I thought up in 10 minutes; imagine what an adversarial actor could come up with after a whole week of brainstorming.
Jayjader@jlai.lu 2 days ago
That little prompt is still clearly telling the LLM to “add a memory leak”.
Not to mention that I don’t trust a 300+ line blob of code no matter who or what writes it.
But I guess this is why the other engineering fields have disdain for “software engineers”: the entire field is falling over itself to stop paying attention to details.