Twitter enforces strict restrictions against external parties using its data for AI training, yet it freely utilizes data created by others for similar purposes.
The ideological combination of:

if you are allowed to do something, then I must be allowed to do it too

coupled with

just because I can do something doesn’t mean that you can do it

… is the basis of all human chauvinism, be it gender, racial, or national. And now these fictions … these quasi-legal fictions called corporations … are taking the rights of human beings and laying claim to them, while simultaneously declaring that humans don’t have those rights.
What the fuck is going on?
brsrklf@jlai.lu 11 months ago
Yet another reminder that an LLM is not “intelligence” by any common definition of the term. The thing just scraped the responses of other LLMs and parroted them as its own, even though they were completely irrelevant to itself. All in an answer that sounds like it knows what it’s talking about, copying the simulated “personal involvement” of the source.
In this case, sure, who cares? But the problem is that something sold by its designers as an expert of sorts is in reality prone to making shit up or using bad sources, all while using a very good language simulation that sounds convincing enough.
Hyperreality@kbin.social 11 months ago
Meat goes in. Sausage comes out.
The problem is that LLMs are being sold as being able to turn meat into a Black Forest gateau.
brsrklf@jlai.lu 11 months ago
Absolutely true. But I suspect the problem is that the thing is too expensive to make to be sold as a sausage, so if they can’t make it look like a tasty confection, they can’t sell it at all.
CaptainSpaceman@lemmy.world 11 months ago
Soon enough AI will be answering questions with only its own previous answers, meaning any flaws get inherited by all future answers.
samus7070@programming.dev 11 months ago
That’s already happening. What’s more, training an LLM on LLM-generated content degrades the LLM, a phenomenon researchers have dubbed model collapse. It’s becoming a mess.
Fades@lemmy.world 11 months ago
Anyone who needs reminding that LLMs are not intelligent has bigger problems.