Comment on Why Mark Zuckerberg wants to redefine open source so badly
Kompressor@lemmy.world 1 day ago
Desperately trying to tap into the general trust/safety feel that open source software typically has. Trying to muddy the waters because they’ve proven they cannot be trusted whatsoever.
kava@lemmy.world 1 day ago
when the data used to train the AI is copyrighted, how do you make it open source? it’s a valid question.
one thing is the model and the code that trains the AI. the other thing is the data that produces the weights, which determine how the model predicts
of course, the obligatory fuck meta and the zuck and all that but there is a legal conundrum here we need to address
buddascrayon@lemmy.world 8 hours ago
It is actually possible to reveal the source of training data without showing the data itself. But I think this goes a bit deeper, since I’ll bet all of my teeth that the training data they’ve used is literally the 20 years of Facebook interactions and posts that they have just chilling on their servers. Literally 3+ billion people’s lives are the training data.
kava@lemmy.world 1 hour ago
yep. I never thought about it but you’re absolutely right. that is Facebook’s “competitive advantage” that the other AI companies don’t have.
although that’s only part of it. I’m sure they also use web scraping, novels, movie transcripts, college textbooks, research papers, newspapers, etc.
jacksilver@lemmy.world 1 day ago
I mean, you can have open source weights, training data, and code/model architecture. If you’ve opened all three, it’s an open model; otherwise you state which components are open. Seems pretty straightforward to me.
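Something like this, roughly (a minimal sketch, all names and licenses made up just to illustrate the labeling idea):

```python
# Hypothetical example: labeling which components of a model release are open.
# Component names, licenses, and notes are invented for illustration.
release = {
    "weights": {"open": True, "license": "Apache-2.0"},
    "training_code": {"open": True, "license": "MIT"},
    "training_data": {"open": False, "note": "contains licensed/proprietary material"},
}

if all(component["open"] for component in release.values()):
    label = "open model"
else:
    open_parts = [name for name, component in release.items() if component["open"]]
    label = "open " + " + ".join(open_parts)

print(label)  # -> "open weights + training_code"
```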
kava@lemmy.world 23 hours ago
Yes, but that model would never compete with the models that use copyrighted data.
There is an unfathomably large ocean of copyrighted data that goes into the modern LLMs, from scraping the internet to transcripts of movies and TV shows to tens of thousands of novels, etc.
That’s the reason they are useful. If it weren’t for that data, it would be a novelty.
So do we want public access to AI or not? How do we wanna do it? Zuck’s quote from the article, “our legal framework isn’t equipped for this new generation of AI”, I think has truth to it.
jacksilver@lemmy.world 22 hours ago
I mean, using proprietary data has been an issue with models for as long as I’ve worked in the space. It’s always been a mixture of open weights, open data, and open architecture.
I admit it became more obvious when images/videos/audio became more accessible, but everything from facial recognition to pose estimation has used proprietary datasets to build the models.
So this isn’t a new issue, and from my perspective not an issue at all. We just need to acknowledge that not all elements of a model may be open.
WalnutLum@lemmy.ml 1 day ago
The OSI’s definition actually tackles this pretty well:
Providing sufficient information as to the source of the data, so that one could potentially go out, retrieve it, and recreate the model, is enough to fall within the OSAI definition.
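In practice that could be as simple as shipping a provenance manifest alongside the model. A rough sketch, assuming entirely made-up source names and fields:

```python
import json

# Hypothetical data-provenance manifest: enough detail about where the data
# came from that someone could go retrieve it and try to recreate the model,
# without distributing the data itself. All entries are invented examples.
data_sources = [
    {"name": "Common Crawl", "snapshot": "2023-06", "filtering": "English only, deduplicated"},
    {"name": "Project Gutenberg", "cutoff": "2023-01", "filtering": "public-domain books"},
]

with open("data_provenance.json", "w") as f:
    json.dump(data_sources, f, indent=2)
```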
FooBarrington@lemmy.world 1 day ago
When part of my code base belongs to someone else, how do I make it open source? By open sourcing the parts that belong to me, while clarifying that it’s only partially open source.
kava@lemmy.world 23 hours ago
This is essentially what Llama does, no? The reason they are attempting a clarification is that they would be subject to different regulations depending on whether or not it’s open source.
If they open source everything they legally can, then do they qualify as “open source” for legal purposes? The difference can be tens of millions if not hundreds of millions of dollars in the EU according to Meta.
So a clarification on this issue, I think, is not asking for so much. I hate Facebook as much as the next guy, but this is like five-minute-hate material.
FooBarrington@lemmy.world 23 hours ago
No, definitely not! Open source is a binary attribute. If your product is partially open source, it’s not open source; only the parts you open sourced are.
So Llama is not open source, even if some parts are.