DandomRude@lemmy.world 1 week ago

Indeed. A major problem with LLMs is the marketing term “artificial intelligence”: it gives the false impression that these models actually understand their output, which is not the case. In essence, an LLM performs a probability calculation based on what is in its training data and what the user asks. The result is a kind of collage of pieces of information from the training data, mixed and rearranged in a new way based on the query.
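
To make the “probability calculation” point concrete, here is a deliberately toy sketch (a bigram model, vastly simpler than a real LLM, with made-up training text) that does the same basic thing: it picks the next word purely from frequencies in its training data, with no understanding involved.

```python
# Toy illustration (NOT a real LLM): a bigram model that, like an LLM,
# only picks a statistically likely next word given its training data.
import random
from collections import defaultdict

training_text = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the "training data".
follows = defaultdict(list)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev].append(nxt)

def next_word(word):
    # Sample proportionally to how often each continuation was seen;
    # there is no understanding here, only frequency.
    return random.choice(follows[word]) if word in follows else None

# Generate a short "collage" of the training data.
word, out = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    if word is None:
        break
    out.append(word)
print(" ".join(out))
```

Real models predict tokens with neural networks over enormous corpora rather than raw bigram counts, but the principle is the same: output is sampled from learned probabilities, not reasoned about.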

As long as the prompt doesn’t conflict directly with the training data (“Explain why the world is flat”), you get answers that are relevant to the question. However, LLMs can neither decide on their own whether one source is more credible than another, nor make moral judgments, because they do not “think”; they are, so to speak, merely another kind of search engine.

However, many users treat LLMs like a conversation with a human being, and that is not what these models are; it is how they are sold, but not at all what they are designed to do or what they are capable of.

But yes, this will be a major problem in the future, as most models are controlled by billionaires who do not want them to be what they should be: tools that help parse large amounts of information. They want them to be propaganda machines. So as with other technologies: AI itself is not the problem, but rather the ruthless way in which it is being used (by greedy wheelers and dealers).
