I guess it’s about copilot scanning the code, submitting PRs, reporting security issues, doing code reviews and such.
Comment on Gentoo Linux Begins Codeberg Migration In Moving Away From GitHub, Avoiding Copilot
Lost_My_Mind@lemmy.world 20 hours ago
Hold on …
Are you saying all software hosted on github is infected with copilot? Or am I misreading the situation?
ExLisper@lemmy.curiana.net 13 hours ago
TheSeveralJourneysOfReemus@lemmy.world 5 hours ago
Copilot is everywhere and inescapable on any m$ service.
Ladislawgrowlo@lemy.lol 13 hours ago
reporting security issues
Is this not an advantage? If AI can find new security vulnerabilities reliably?
gwl@lemmy.blahaj.zone 11 hours ago
It cannot
JordanZ@lemmy.world 7 hours ago
I’ve had copilot suggest ‘fixing’ code to something that wasn’t even syntactically correct for the language and would break the build. If it can’t even figure out the super well documented syntax of a language I don’t trust it to find anything. The icing on the cake…it was a Microsoft language(C#).
jjagaimo@sh.itjust.works 12 hours ago
It often makes up non existent vulnerabilities. I think it was curl getting flooded with fake vulnerability reports which drowns out real reports, esp because it can take time to parse through the code or run the poc
sp3ctr4l@lemmy.dbzer0.com 7 hours ago
Basically anywhere that LLMs are implemented… they are a security vulnerability, for any situation in which they are not sandboxed.
Anything they can interface with?
You can probably trick it or exploit it into doing something unintended or unexpected to anything else it is connected to.
Theoretically you could use an LLM to do something like come up with more accurate heuristics for identifying malware.
But… they’re nowhere near ‘intelligent’ enough to like, give it a whole code base for some kind of software, and thoroughly make that software 100% secure.
bananabread@lemmy.zip 12 hours ago
Or it could introduce new ones :)
eronth@lemmy.world 12 hours ago
Yeah, but you can have it scan without implementing.
sin_free_for_00_days@sopuli.xyz 20 hours ago
Copilot steals from all the code on github.
renegadespork@lemmy.jelliefrontier.net 20 hours ago
Your confusion is understandable since MS has called like 4 different products “Copilot”. This refers to the coding assistant built into GitHub for everything from CI/CD to coding itself.
All code uploaded to GitHub is subject to being scraped by Copilot to both train and provide inference context to its model(s).
Zwuzelmaus@feddit.org 16 hours ago
No kidding: That was literally my very first thought back in the days when I heard that M$ has taken over GitHub.
A_norny_mousse@piefed.zip 12 hours ago
Mine too. More precisely: code uploaded to GH won’t be yours anymore. IIRC there were changes to the TOS that supported this. But even if not, predicting the obvious doesn’t make us prophets.
TheOctonaut@mander.xyz 18 hours ago
No, it isn’t.
“Basically” your vibes aren’t an actual answer. Businesses are not forking over millions to give away their code.
You can have conspiracy theories about it using the code anyway (I’m particularly confused about your use of the word “scrape” which tells me you don’t know how AI training works, how hosting a website works, or how scraping works - maybe all three?) but surreptitiously using its competitors’ code to train CoPilot would be a rare existential threat to Microsoft itself.
github.com/features/copilot#faq
kilgore_trout@feddit.it 17 hours ago
FAQs are not legally binding. If you want to quote something, then do privacy policy and terms of service.
TheOctonaut@mander.xyz 16 hours ago
It’s in every enterprise and business contract signed with them. The FAQ was just the first result on Google. Its obviousness shouldn’t even require that much. It’s extremely clear how few of Lemmy’s “technology” crowd have any contact with adult life.
ayyy@sh.itjust.works 5 hours ago
Someday when you’re grown up you will realize how cringe your way of communicating is.
TheOctonaut@mander.xyz 4 hours ago
Sure. Any day now.
Being embarrassed by association with people who say things like “all code uploaded to Github is subject to being scraped” might be childish. Not sure it’s as childish as being embarrassed by “cringe” though. That would imply I care about your opinion on my communication. I don’t.
I do care that you understand that a half dozen people in this thread are actively outing themselves as completely ignorant about the real world of software development and the software industry in general. Probably not surprising given the words “Gentoo” and “Codeberg” in the title of the post.
renegadespork@lemmy.jelliefrontier.net 9 hours ago
Lmao desperately trying to justify sunk cost, I see?
You’re right, it’s not scraping, it’s worse. Most AI bots do scrape sites for data, though since MS has direct access to the GH backend, they don’t even need to scrape the data. You’re giving it to them directly.
The issue here is trust. Microsoft, along with every other company invested in the AI race has proven repeatedly that getting ahead in said race is more important to them than anything else. It’s more important than user privacy, ToS, contracts, intellectual property, and the law itself.
If they stand to make more money screwing you over than they stand to lose from a slap on the wrist in court, the choice is clear. And they will lie to your face about it. Profit machines as big as MS don’t care. They can’t. They are optimized for one thing.
ToTheGraveMyLove@sh.itjust.works 7 hours ago
Don’t forget its more important than human rights!
bearboiblake@pawb.social 16 hours ago
Just to add to what the other commenters said, the quote you highlighted doesn’t even say what you think it does.
It says that Copilot data is not used to train the models, not that code uploaded to Github isn’t used to train the models.
As an aside, your nitpicking of the term “scrape” is cringe, jsyk.
zr0@lemmy.dbzer0.com 13 hours ago
Oh my. The “you are all noobs, I am the only techie here, so I know it” argument is so unnecessary and makes you appear super entitled.
You obviously seem not to have an idea how all that shit works, where OpenAI and Microsoft scrape copyrighted material, which is illegal, to train their models. On top of that, in the US there are many laws where they can circumvent ToS if it helps national security, and we all know with Trump, that he will do everything to support his economy. So we end up with a situation, where the contracts say they will not use the data to train models, while doing this exact thing, and nobody ever will be able to prove it and the whole legal system in the US will protect the corporation. So good luck with that “lawsuit”.
But that is only when Microsoft would play by rules, which they don’t. Which no one does. So they just use the data to train the models, generating billions of value, and just wait for a lawsuit where they pay a fine of 100k.
This all comes to the conclusion that you are not just naive and inexperienced, but also an entitled asshole.
RichardDegenne@lemmy.zip 17 hours ago
If you’re gullible enough to believe an FAQ coming from Github themselves, then I have bad news for you.
TheOctonaut@mander.xyz 16 hours ago
“Gullible” is not a thing you can be when somehow has signed a contract with you… that’s why contracts exist.
ZombieCyborgFromOuterSpace@piefed.ca 17 hours ago
Like Meta and it’s privacy rules, I bet they do even if they’re saying they don’t.
TheOctonaut@mander.xyz 16 hours ago
You aren’t paying enterprise subscriptions to use Facebook, and as bad as they are, Microsoft are not Meta.