Comment on Judge disses Star Trek icon Data’s poetry while ruling AI can’t author works
cattywampas@lemm.ee 2 weeks ago
Data’s poem was written by real people trying to sound like a machine.
ChatGPT’s poems are written by a machine trying to sound like real people.
While I think “Ode to Spot” is actually a good poem, it’s kind of a valid point to make since the TNG writers were purposely trying to make a bad one.
grrgyle@slrpnk.net 2 weeks ago
Lest we concede the point, LLMs don’t write. They generate.
ProfessorScience@lemmy.world 2 weeks ago
What’s the difference?
PlasticExistence@lemmy.world 2 weeks ago
Parrots can mimic humans too, but they don’t understand what we’re saying the way we do.
LLMs like ChatGPT operate on probability. They don’t actually understand anything and aren’t intelligent. They can’t think. They just predict which next word or sentence is most probable and string things together that way.
If you ask ChatGPT a question, it analyzes your words and responds with the series of words it has calculated to be most probable.
The reason that they seem so intelligent is because they have been trained on absolutely gargantuan amounts of text from books, websites, news articles, etc. Because of this, the calculated probabilities of related words and ideas are accurate enough to allow them to mimic human speech in a convincing way.
And when they start hallucinating, it’s because they have no way of knowing when they’re wrong. So far this is a core problem that nobody has been able to solve. The best mitigation involves checking the output of one LLM using a second LLM.
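(To make the “most probable next word” idea concrete, here’s a massively simplified toy sketch. This is not how ChatGPT actually works internally — real LLMs use neural networks over subword tokens, not word-pair counts — but the core “predict the likeliest continuation” loop is the same shape.)

```python
from collections import Counter, defaultdict

# Toy "language model": count how often each word follows each
# other word in a tiny corpus, then generate by always picking
# the most probable next word.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the highest-probability next word seen after `word`."""
    return following[word].most_common(1)[0][0]

# "cat" follows "the" twice, "mat" and "fish" only once each,
# so the model "predicts" cat -- with zero understanding of cats.
print(most_likely_next("the"))
```

The model has no idea what a cat is; it only knows the statistics of which words tend to follow which. Scale the corpus up to most of the internet and the predictions get eerily fluent, which is the commenter’s point.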
ProfessorScience@lemmy.world 2 weeks ago
So, I will grant that right now humans are better writers than LLMs. And fundamentally, I don’t think the way that LLMs work right now is capable of mimicking actual human writing, especially as the complexity of the topic increases. But I have trouble with some of these kinds of distinctions.
So, not to be pedantic, but:
Couldn’t you say the same thing about a person? A person couldn’t write something without having learned to read first, and without having read things similar to what they want to write.
This is kind of the classic Chinese Room thought experiment, though, right? Can you prove to someone that you are intelligent, and that you think? As LLMs improve and become better at sounding like a real, thinking person, does there come a point at which we’d say that the LLM is actually thinking? And if you say no, the LLM is just an algorithm generating probabilities based on training data (or whatever techniques might be used in the future), how can you show that your own thoughts aren’t just some algorithm, formed out of neurons that have been trained on data passed to them over the course of your lifetime?
People do this too, though… It’s just that LLMs do it more frequently right now.
I guess I’m a bit wary about drawing a line in the sand between what humans do and what LLMs do. As I see it, the difference is how good the results are.
obvs@lemmy.world 2 weeks ago
It’s interesting how humanity considers humans smarter than animals, yet the benchmark it uses for animal intelligence is how well they imitate a creature with a completely different type of brain.
As if humans could succeed at imitating other animals, communicating in their languages, or talking about the subjects they find important.
grrgyle@slrpnk.net 2 weeks ago
The writer