Mahlzeit
@Mahlzeit@feddit.de
- Comment on OpenAI Suspends ByteDance's Account After It Used GPT To Train Its Own AI Model 11 months ago:
I doubt that it is clear-cut enough to bring down enforcement in any case. However, that does not mean that the clause is enforceable.
It is easy to circumvent such a ban. Eventually, the only option that MS has is suing. Then what?
- Comment on OpenAI Suspends ByteDance's Account After It Used GPT To Train Its Own AI Model 11 months ago:
The issue is that what they are doing there is blatantly anticompetitive.
- Comment on OpenAI Suspends ByteDance's Account After It Used GPT To Train Its Own AI Model 11 months ago:
I wonder if that clause is legal. It could be argued that it legitimately protects the capital investment needed to make the model. I’m not sure if that’s true, though.
- Comment on Google DeepMind used a large language model to solve an unsolvable math problem 11 months ago:
FunSearch (so called because it searches for mathematical functions, not because it’s fun)
I’m probably not the only one who wondered.
- Comment on Mozilla announces their new AI website builder, community reacts appropriately 11 months ago:
Understandable.
- Comment on Mozilla announces their new AI website builder, community reacts appropriately 11 months ago:
Why is it important to you what some corporation does or doesn’t do?
- Comment on Mozilla announces their new AI website builder, community reacts appropriately 11 months ago:
Can I ask why this is important to you? Did you donate and don’t like how your money is used?
- Comment on AI Doomerism: Intelligence Is Not Enough -- “The lack of arms and legs becomes really load-bearing when you want to kill all humans.” 11 months ago:
It’s likely a reference to Yudkowsky or someone along those lines. I don’t follow that crowd.
- Comment on Meta’s new AI image generator was trained on 1.1 billion Instagram and Facebook photos 11 months ago:
This touches several difficult topics.
I think my disagreement with you about AI copyright infringement is that you think that AI can create new things whereas I don’t think that.
I don’t think that matters to copyright law, as it exists.
Copyright law is all about substantial similarity in copyrightable elements. All portraits are similar by virtue of being portraits. Portraits are not copyrighted, nor can one copyright genres and such. A translation of a text has superficially no similarity with the original, but has to be authorized.
What you are saying would mean that similarity is no longer a requirement for infringement. That’s a big change. It is copyright, after all.
Furthermore, it really wouldn’t take a huge change to copyright law; just clear differences between the rules that apply to sentient vs. non-sentient sources.
Non-sentient sources are not new. Take cameras, for example. Cameras have been improved over time so that less skill is necessary to operate one. It’s no longer necessary to manually focus, to set the exposure time, to develop the film, … This also means that photos today have less human creative input. In current smartphone cameras, neural AIs make many decisions and also “photoshop” the result.
It doesn’t really make sense to me to treat modern cameras differently from old ones. Or: someone poses and renders a figure in Blender. What difference does it make if they use an old-fashioned physically based renderer or a genAI?
Nevertheless, the question of whether AIs can create something new can be answered. The formal definition of “information” is that it is a reduction in uncertainty. For example, take the sequence of letters “creativit_”. You probably have a very clear idea what the last, missing letter is, so learning that it is “y” doesn’t give you much information.
But take the sequence: “juubfpvoi_”. The missing letter could be any lower-case letter. You may not feel very informed when you learn that it is “f”, but it does represent a much bigger reduction in uncertainty.
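Information theory makes this precise with “surprisal”: the information carried by an outcome is the negative log2 of its probability. A minimal sketch of the two examples above, with the probabilities being illustrative assumptions (0.99 for the nearly certain “y”, a uniform 1/26 for the random string):

```python
import math

def surprisal_bits(p: float) -> float:
    """Information (in bits) gained when an outcome with probability p occurs."""
    return -math.log2(p)

# "creativit_": the final letter is almost certainly "y".
nearly_certain = surprisal_bits(0.99)    # a tiny fraction of a bit

# "juubfpvoi_": any of the 26 lower-case letters is equally likely.
uniform_guess = surprisal_bits(1 / 26)   # about 4.7 bits

print(f"nearly certain letter: {nearly_certain:.3f} bits")
print(f"uniformly random letter: {uniform_guess:.3f} bits")
```

The predictable letter carries roughly 300 times less information than the random one, which is the sense in which the random string is “more new”.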
When we write texts, we use the same old words in the dictionary; a few tens of thousands at most. We string them together with the same old rules of grammar to tell the same old things. The sky is blue, things fall down, not up; people love and hate, and in the end the good guys win. You can probably think of exceptions to all of these. They are exceptions. We create small variations on the same old themes. We rehash.
If a story does not cater to expectations, then it’s not believable. People should behave as we know people to behave. The laws of nature should be consistent and familiar. Most of all: The conventions of the genre should be followed. As a human, you are supposed to lift ideas from previous works. New ideas may be appreciated, but are not required.
The second string was, in fact, created by a machine; not an AI, but an RNG. Even with many GBs of output, it should be impossible to find any biases or patterns that would let one guess at the next letter. I didn’t make one up myself because humans are not very random, even when we try. And when we write, we do our best to reduce our randomness even further: we try not to invent new spellings, i.e., make spelling errors.
AIs receive input from a pRNG (pseudo-random number generator), which means that they create new things. What they are supposed to do is strip away all that novel information and create something largely predictable. They often fail and, say, create images of humans with an innovative number of fingers. LLMs make continuity errors, or outright start to spout gibberish. The problem is that AIs create too many new things, not that they create none.
- Comment on Meta’s new AI image generator was trained on 1.1 billion Instagram and Facebook photos 11 months ago:
Can we get back to this? I am confused why you believe that AIs like ChatGPT spit out “exact copies”. That they spit out memorized training data is unusual in normal operation. Is there some misunderstanding here?
- Comment on Meta’s new AI image generator was trained on 1.1 billion Instagram and Facebook photos 11 months ago:
Ok, where did GPT-4 copy the ransomware code? You can’t reshuffle lines of code much before the program breaks. Should be easy to find.
- Comment on Meta’s new AI image generator was trained on 1.1 billion Instagram and Facebook photos 11 months ago:
Well, that’s simply not true.
- Comment on Meta’s new AI image generator was trained on 1.1 billion Instagram and Facebook photos 11 months ago:
Well, that is a philosophical or religious argument. It’s somewhat reminiscent of the claim that evolution can’t add information. That can’t be the basis for law.
In any case, it doesn’t matter to copyright law as it exists that you see it that way. The AI is the equivalent of that book on how to write bestsellers in my earlier reply. People extract information from copyrighted works to create new works, without needing permission. A closer example is programmers, who consult copyrighted references while they create.
- Comment on Meta’s new AI image generator was trained on 1.1 billion Instagram and Facebook photos 11 months ago:
I didn’t downvote you. (Just gave you an upvote, though.) You’re reasonable and polite, so a downvote would be very inappropriate. Sorry for that.
Music has ongoing problems with copyright litigation, the Ed Sheeran case most recently. From what I have read, this is blamed on juries without the necessary musical background. As far as I know, higher courts usually strike down these cases, as with Sheeran. Hip hop was neutered, in a blow to (African-)American culture. While it was obviously wrong not to find fair use in that case, samples are copies.
It’s not so bad outside of music. You can write books on “how to write a bestseller”, or “how to draw comics” without needing permission. Of course, you would study many novels and images to get material. The purpose of books is that we learn from them. That we go on to use this to make our own thing is intended (in the US).
What you’re proposing there would be a major change to copyright law, and probably a disastrous one. Even if one could limit the immediate effect to new technologies, it would severely limit authors in adopting those technologies.
- Comment on Meta’s new AI image generator was trained on 1.1 billion Instagram and Facebook photos 11 months ago:
Yes, if it’s new content, it’s obviously not a copy, so no copyvio (unless it’s derivative, like fan fiction, etc.). I was thinking of memorized training data being regurgitated.
- Comment on Meta’s new AI image generator was trained on 1.1 billion Instagram and Facebook photos 11 months ago:
I understand. The idea would be to hold AI makers liable for contributory infringement, reminiscent of the Betamax case.
I don’t think that would work in court. The argument is much weaker here than in the Betamax case, and even then it didn’t convince. But yes, it’s prudent to get the explicit permission, just in case of a case.
- Comment on Meta’s new AI image generator was trained on 1.1 billion Instagram and Facebook photos 11 months ago:
That shouldn’t be an issue. If you look at an unauthorized image copy, you’re not usually on the hook (unless you are intentionally pirating). It’s unlikely that they needed to get explicit “consent” (ie license the images) in the first place.
- Comment on Meta’s new AI image generator was trained on 1.1 billion Instagram and Facebook photos 11 months ago:
The models are deliberately engineered to create “good” images, just like cameras get autofocus, anti-shake and stuff. There are many tools that will auto-prettify people, not so many for the reverse.
There are enough imperfect images around for the model to know what that looks like.
- Comment on Meta’s new AI image generator was trained on 1.1 billion Instagram and Facebook photos 11 months ago:
That ought to satisfy all those who wanted “consent” for training data.
- Comment on If Creators Suing AI Companies Over Copyright Win, It Will Further Entrench Big Tech 11 months ago:
If you want this to be unpopular, then you need to point out some of the implications. Lemme…
Companies hire artists, tell them what to make, and because the artists are on payroll, the company owns the result.
This means that those who think AI training should require a license are not standing up for artists. They are shilling for intellectual-property owners; for the corporations and the rich.
If it requires a license, that means that money must be paid to property owners simply because they are owners. The more someone owns, the more money they get. Rich people own the most property, so rich people get the most money.
And what do employees get? They get to pay.
- Comment on The History and Future of Digital Ownership 11 months ago:
But how often do you install the same game? A streaming movie needs to be (partially) downloaded every time someone watches it. But yes, I shouldn’t jump to the conclusion that this ends up being a higher bandwidth cost per dollar of purchase price.
When you keep a backup, then the download was basically just a way of delivering a physical copy. I answered why we can’t have online property.
As to why many don’t allow you to keep a private copy: for the obvious reason of maintaining control over their property and monetizing it to the highest degree possible.
- Comment on The History and Future of Digital Ownership 11 months ago:
Hah! Yeah, that’s so weird when seen from my culture (Germany). Here, prosecutors must enforce all laws on the books. Anything less would be a criminal offense. The actual day-to-day problems are very similar, though. It is kinda infuriating that the English system works as well as it does.
- Comment on The History and Future of Digital Ownership 11 months ago:
That takes a lot less bandwidth than streaming. All businesses have fixed costs. Blockbuster Video had to pay rent for physical stores, for example. Delivering via the net is relatively cheap compared to stores or physical postage. I’d be surprised if GOG’s costs aren’t much lower than anything physical.
- Comment on The History and Future of Digital Ownership 11 months ago:
If it doesn’t bother you that you are threatened with jail over something you might do with your own property, in your own home, without affecting others, then… Well, I can see that you would be living a very jolly life indeed. Good on ya.
- Comment on If Creators Suing AI Companies Over Copyright Win, It Will Further Entrench Big Tech 11 months ago:
IMO, we need to ask: What benefits the people? or What is in the public interest?
That should be the only thing of importance. That’s probably controversial. Some will call it socialism. It is pretty much how the US Constitution sees it, though.
Maybe you agree with this. But when you talk about “models trained on public data” you are basically thinking in terms of property rights, and not in terms of the public benefit.
- Comment on If Creators Suing AI Companies Over Copyright Win, It Will Further Entrench Big Tech 11 months ago:
The models (ie the weights specifically) may not be copyrightable, anyways. There’s no copyright on the result of number crunching. Once the model is further fine-tuned, there might be copyright, but it’s still unlike anything covered by copyright in the past.
One analogy I have is a 3D engine. The engineers design the look of the typical output by setting parameters, but that does not create a specific copyright on the parameters. There’s copyright on the design documents, the code, the UI, if any and maybe other stuff. It’s not quite the same, though.
Some jurisdictions have IP protection for databases; I think that would cover AI models. The US is not among them, so if I am right, any license agreements that come with models are ineffective in the US.
However, to copy these models, you first need to get your hands on them. They are still trade secrets, so don’t count on leaks.
- Comment on The History and Future of Digital Ownership 11 months ago:
Maybe it’s not just what the rich want.
OP is unhappy about how little control we have over some of our property. The catch is that this property is also the property of someone else. Media is (mostly) the intellectual property of someone, and the owners can decide over it. So, in order for OP to have more control over “digital property”, one would need laws that limit control over property. Tough sell.
If you look at threads on AI, you will find them full of people who want expanded intellectual property. I doubt those are all bots or shills. I think they just want control over their own property, without considering that they are forging their own chains. When you increase the power of property, you increase the power of the rich and diminish your own.
- Comment on The History and Future of Digital Ownership 11 months ago:
Digital media means that there is an ongoing service behind it. The servers use energy. The parts age and break. It requires a continuing feed of labor and resources to keep going.
Imagine a streaming service that is all based on buying media, instead of subscription or rental. Then suppose all the customers somehow decide that the media they own are enough for now (maybe because money is tight due to inflation). With no more cash coming in, the service goes bankrupt.
In principle, you could have a type of license that allows you to get a new copy in any way you can (torrent, etc.). That would be hard to police, though.
FWIW, owning a physical copy isn’t all that, either. There are various built-in ways to make life harder for customers, like geo-blocking. Bypassing them tends to be a criminal offense.
- Comment on Asking ChatGPT to Repeat Words ‘Forever’ Is Now a Terms of Service Violation 11 months ago:
Oh. I see. The attempts to extract training data from ChatGPT may be criminal under the CFAA. Not a happy thought.
I did say “making available” to exclude “hacking”.
- Comment on Asking ChatGPT to Repeat Words ‘Forever’ Is Now a Terms of Service Violation 11 months ago:
Sure, you can put something up and explicitly deny permission to visit the link. But courts rarely back up that kind of silliness.