FatCrab
@FatCrab@slrpnk.net
- Comment on President Trump: It's Not Doable for AI Companies to Pay for All Copyrighted Input – TorrentFreak 1 week ago:
Richard Stallman created the GPL, a “copyleft” license. The problem is that open source software is entirely reliant on a strong copyright system being enforced. Nothing else prevents companies from just keeping everything proprietary, even if it’s all OSS under the hood.
- Comment on “You can't be expected to have a successful AI program when every single article, book or anything else that you've read or studied, you're supposed to pay for” Donald Trump said 2 weeks ago:
That is not what judges have said. They’ve said that merely training on text is not copyright infringement. However, companies that downloaded enormous amounts of pirated texts (i.e., material they had no license to download in the first place) still infringed copyright just like anybody else. Effectively, the courts have been holding that if you study material you have license to access, you aren’t infringing, but if you pirate that material, even if only to study it, it’s still infringing. For better or worse, this is basically how it’s always been.
I have no idea what Trump is proposing. Like most Republicans, but especially him, he is incapable of even approaching an understanding of nuanced, technical areas of law or technology.
- Comment on A Prominent OpenAI Investor Appears to Be Suffering a ChatGPT-Related Mental Health Crisis, His Peers Say 2 weeks ago:
I agree. I’m generally pretty indifferent to this new generation of consumer models–the worst thing about them is the incredible number of idiots flooding social media, witch hunting them or evangelizing them without any understanding of either the tech or the law they’re talking about–but it’s really unsettling to see people use them so frequently, for so many fundamental things, that it observably diminishes their basic competencies and health.
- Comment on An AI That Promises to “Solve All Diseases” Is About to Test Its First Human Drugs 4 weeks ago:
Diffusion models iteratively convert noise across a space into form; that’s what they are trained to do. In contrast, a GPT basically performs recursive token prediction in sequence. They’re just totally different models, both in structure and mode of operation. Diffusion models are actually pretty incredible imo and I think we’re just beginning to scratch the surface of their power. A very fundamental part of most modes of cognition is converting the noise of unstructured multimodal signal data into something with form and intention, so being able to do this with a model, even if only in very, very narrow domains right now, is a pretty massive leap forward.
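The contrast above can be sketched as a toy: an autoregressive generator extends a sequence one token at a time, while a diffusion-style generator starts from a whole noisy sample and refines every element on each pass. Both functions here are purely illustrative stand-ins, not real model code; the names and the simple update rules are assumptions made up for the sketch.

```python
def autoregressive_generate(start, steps):
    """GPT-style sketch: predict the next item from the sequence so far.

    A real model samples from a learned distribution; this toy just
    appends a deterministic "prediction" to show the sequential shape.
    """
    seq = [start]
    for _ in range(steps):
        seq.append(seq[-1] + 1)  # one new token per step, left to right
    return seq


def diffusion_generate(noise, steps):
    """Diffusion-style sketch: refine the entire sample at once.

    Every iteration nudges all elements toward structure simultaneously,
    so the whole output emerges together rather than token by token.
    """
    sample = list(noise)
    target = [0.0] * len(sample)  # toy "structured" endpoint to denoise toward
    for _ in range(steps):
        # Each pass removes a fraction of the remaining noise everywhere.
        sample = [s + 0.5 * (t - s) for s, t in zip(sample, target)]
    return sample
```

The key structural difference this illustrates: the autoregressive loop grows its output one element per step, while the diffusion loop keeps a fixed-size sample and improves all of it at every iteration.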
- Comment on An AI That Promises to “Solve All Diseases” Is About to Test Its First Human Drugs 4 weeks ago:
A quick search turns up that AlphaFold 3, what they are using for this, is a diffusion architecture, not a transformer. It works more like the image generators than the GPT text generators. It isn’t really the same as “the LLMs”.
- Comment on [deleted] 5 weeks ago:
Do you understand that even just within the category of communism there is an enormous gamut of different approaches, of which you only seem to understand one very specific one? Do you understand that the Bolsheviks, even under Lenin, murdered the “competing” communist groups, effectively regressing right back into authoritarianism and into what would inevitably degrade into Stalinist autocracy? And that even the (mischaracterizing) claim that “communism” went bad is therefore basically nonsensical?
- Comment on Creative Commons is Introducing CC Signals: A New Social Contract for the Age of AI 5 weeks ago:
I imagine not, though I haven’t looked into it.
- Comment on Creative Commons is Introducing CC Signals: A New Social Contract for the Age of AI 5 weeks ago:
There are many open-source, locally executable, free generative models available.
- Comment on Meta wins artificial intelligence copyright case in blow to authors 5 weeks ago:
You are agreeing with the post you responded to. This ruling is only about training a model on legally obtained training data. It does not say it is ok to pirate works–if you pirate a work, no matter what you do with the infringing copy you’ve made, you’ve committed copyright infringement. It does not talk about model outputs, which is a very nuanced issue and likely to fall along similar lines of analysis as music copyright imo. It only addresses whether training a model is intrinsically an infringement of copyright. And it isn’t, because any other conclusion would be insane and functionally impossible to differentiate from learning a writing technique by reading a book you bought from an author. Even in a model that has overfit its training data, the model itself is in no way recognizable as any particular training datum. It’s a hyperdimensional matrix of numbers defining relationships between features, and relationships between relationships.