Comment on [deleted]
DandomRude@lemmy.world 1 week ago
Indeed. A major problem with LLMs is the marketing term “artificial intelligence”: it creates the false impression that these models actually understand their output, which is not the case. In essence, an LLM performs a probability calculation based on what is in its training data and what the user asks; the result is a kind of collage of different pieces of information from the training data, mixed and arranged in a new way based on the query.
As long as the prompt doesn’t conflict directly with the data set (“Explain why the world is flat”), you get answers that are relevant to the question. However, LLMs can neither decide on their own whether one source is more credible than another, nor make moral decisions, because they do not “think”; they are, so to speak, merely another kind of search engine.
However, the way many users interact with LLMs is more like a conversation with a human being, and that’s not what these models are; it’s just how they’re sold, not at all what they are designed to do or what they are capable of.
But yes, this will be a major problem in the future, as most models are controlled by billionaires who do not want them to be what they should be: tools that help parse great amounts of information. They want them to be propaganda machines. So, as with other technologies, AI itself is not the problem but the ruthless way in which it is being used (by greedy wheelers and dealers).
ZeDoTelhado@lemmy.world 1 week ago
There is a podcast about this called “Better Offline”. I will say right off the bat: the host’s delivery is not always great, but he at least explains clearly what is wrong with AI at this point. From an economic standpoint, it is strange that this whole endeavour is even a thing. However, we have bosses who are SALIVATING to get people replaced, because frankly they cannot think of anything else.
DandomRude@lemmy.world 1 week ago
Yes, that’s right: LLMs are definitely sold that way: “Save on employees because you can do it with our AI,” which sounds attractive to naive employers because personnel costs are the largest expense at almost every company.
And that’s also true: it obscures what LLMs can actually do and where their value lies. This technology is merely a tool that workers in almost any industry can use to work even more effectively, but that’s apparently not enough of a USP: people are so brainwashed that they eat out of the marketing people’s hands, because they hear exactly what they want to hear: I don’t need employees anymore, because now there are much cheaper robot slaves.
In my opinion, all of this will amount to a step backward for humanity, because lots and lots of artists, scientists, journalists, writers, even administrative staff and many other essential members of society will no longer be able to make a living from their professions.
In the longer term, it will lead to the death of innovation and creativity, because it will no longer be possible to make a living from them, and AI is not capable of either.
In other words, AI is the wet dream of all those who do not contribute to value creation but (strangely enough) are paid handsomely to manage the wonderful work of those who actually do contribute to value creation.
Unfortunately, it was predictable how this technology would be used, because sadly, in most societies, the focus is not on contributing to society but on who has made the most money from those contributions, which in the vast majority of cases is not the person who made them. The use of AI follows the same logic; how could it be otherwise?
TheBat@lemmy.world 1 week ago
TCS, one of India’s biggest IT sweatshops, just announced layoffs. If they’re going to use AI to replace their employees, won’t their clients use AI to replace TCS?