DandomRude
@DandomRude@lemmy.world
- Comment on What makes LLMs interesting to investors is not so much their usefulness, but the fact that the technology goes very well with the "just believe in me" approach. 4 days ago:
Thank you, I really appreciate that.
Figures and/or examples would be very interesting for:
- The statement that LLMs will continue to develop rapidly and/or that their output will improve in quality. I currently assume that development will slow down considerably, for example with regard to hallucinations: for some time it was assumed that the problem could be solved with ever more training data, but this has proven to be a dead end.
- The statement that the valuations of the companies involved can be justified in any way by real-world assets, or at least reliable statements about how the existing or planned data centers built for this purpose can be operated economically despite their considerable running costs.
- How you justify your statement that it would be realistic to replace human workers on a large scale. Examples would be interesting; by this I don’t mean figures on where workers have been laid off, but companies where human work has been successfully made obsolete by LLMs. I am not aware of any examples where this has happened at a significant scale and is attributable to the use of LLMs.
- The use of the technology in warfare. I am aware that it is being used there, but not of its significance or the tactical advantages it is supposed to offer. Please provide examples of what you mean.
- Comment on What makes LLMs interesting to investors is not so much their usefulness, but the fact that the technology goes very well with the "just believe in me" approach. 4 days ago:
Considering what LLMs are useful for, I wouldn’t say so. But in terms of how it’s all being marketed, how it’s being pushed on consumers for no apparent reason, I definitely agree.
- Comment on What makes LLMs interesting to investors is not so much their usefulness, but the fact that the technology goes very well with the "just believe in me" approach. 4 days ago:
Do you have any sources with figures that would suggest this? To be honest, I have my doubts, except for the claim that money is being shifted back and forth; but even then, I don’t understand why massive investments in data centers would make sense in this context, other than to generate profit for Nvidia and the like.
As I said, I don’t consider LLMs and image generation to be technologies without use cases. I’m simply saying that the impact of these technologies is being significantly and very deliberately overstated. Take so-called AI agents, for example: they’re a practical thing, but miles away from how they’re being sold.
Furthermore, even OpenAI is very far from being in the black, and I consider it highly doubtful that it ever will be, given the considerable costs involved. In my opinion, the only option would be to turn to advertising, which is the business model of the classic Google search engine, but that would have a very negative impact on the value to users.
- Submitted 4 days ago to showerthoughts@lemmy.world | 19 comments
- Comment on [deleted] 6 days ago:
Thank you for this comment. I completely agree with you: all it takes is people who act according to their conscience; that alone results in a community worth living in.
- Comment on [deleted] 1 week ago:
How can the fascists be prevented from presenting their inhumane, xenophobic ideology as patriotism? How and why would anyone stop people from using a word? How is that supposed to work?
Language is a cultural matter that changes through use. In this context, (social) media are pretty influential these days. However, the problem is that because a few very influential people control what billions of people see, they also have a disproportionate influence on the discourse from which the usage and meaning of terms derive. Therefore, it seems to me that the only people who could prevent others from presenting fascist ideology as patriotism are, unfortunately, the same people who ensure that it is done.
An example: ten years ago, it was unthinkable in Germany to use Nazi slogans in public. People who did so were socially isolated because they were Nazis. Today, however, politicians can stand in front of the camera and quote Goebbels. The reason, in my opinion, is that all this Nazi crap has been pushed so hard by influential media billionaires that it now gives the impression of being a socially acceptable attitude. My point is that this can be an effect created by the media, especially social media: it seems as if you can say these things without running the risk of being socially isolated for your inhumane views, and unfortunately this has now spilled over into the real world.
What I mean by this is that in order to influence discourse, and thus also the usage and meaning of words, you need to influence the media that people use; and these media platforms are controlled by people like Musk.
- Comment on [deleted] 1 week ago:
But stopping things like the flag pledges I mentioned would make the word a less powerful tool for misuse.
Well, I can see that you disagree and I don’t think we’ll ever see eye to eye on this.
My opinion is that patriotism and nationalism cause far more harm than good. Of course, one can disagree, but I haven’t read a single comment in this entire thread that addresses why patriotism is so important or what positive effects it has.
Only references to the fact that nationalism and patriotism are not the same thing, which is clear to me. Still, interestingly, no one has addressed where the difference lies, and no one has addressed my actual statement, namely that both concepts are abused as instruments of power.
That’s a shame.
- Comment on [deleted] 1 week ago:
If people didn’t invoke patriotism so excessively, as they do, for example, in the US, with flag pledges in schools, Stars and Stripes air shows at sporting events, the national anthem played at nearly every one, flags everywhere from houses to TV shows, and constant declarations of love for this proud nation; if all that didn’t happen every day, don’t you think it would be much harder to spread propaganda on this basis?
- Comment on [deleted] 1 week ago:
No, but give them as few opportunities as possible to justify their misdeeds. Patriotism is traditionally the favorite argument of unscrupulous opportunists: they invoke it because it appeals to people and offers them a way out, a way to legitimize morally reprehensible acts, in the sense that you can do whatever you want as long as it is in the service of the fatherland.
How this works can currently be seen in Israel, for example: soldiers commit terrible atrocities and claim that human rights do not apply to enemies of Israel, enemies of their holy fatherland. So they act as ruthlessly as possible because it is supposedly patriotic.
It is important to make it clear that people remain people, even if they have a different nationality. Emphasizing national pride and all that makes this more difficult, because if you constantly emphasize how proud you are of your country, you inherently signal at the same time that people of other nationalities do not belong. For reasonably rational people, it is of course perfectly obvious that this does not imply any judgment of people of other nationalities; on the contrary, many are rightly proud that their country is just and guarantees human rights. The problem, however, is that many people are anything but rational, and some of them are only looking for (spurious) arguments to use against others. Patriotism is ideal for this purpose because it is an abstract concept: there is no universal definition of what it means.
That’s why I believe we should emphasize patriotism as little as possible and instead stick to concrete issues, such as a fair legal system and so on. That makes it less abstract and offers less potential for abuse.
- Comment on [deleted] 1 week ago:
Decent people.
- Comment on [deleted] 1 week ago:
All I want to say is this: if you insist on portraying patriotism as something good and, however desirable that ideal may be, lose sight of reality in the process, this leads to situations like those in Nazi Germany; and history is currently repeating itself in the US. The reason will always be the same: unfortunately, people are not inherently good, and the bad ones know how to exploit this.
With regard to the US, my point is simple: patriotism is an abstract idea that is currently being massively abused by fascists to create an unjust state very similar to Nazi Germany, which fortunately came to an end. They are using exactly the same propaganda techniques that the Nazis used in Germany to establish their reign of terror.
- Comment on [deleted] 1 week ago:
If you agree with me that patriotism has been misused for the most horrific atrocities ever committed by humankind, where do you see the value of this concept? Even if one starts from a purely utilitarian ethic, what could ever outweigh that?
- Comment on [deleted] 1 week ago:
The actual Sturmabteilung (SA) and all other Nazi divisions also claimed to be patriots—they killed millions of people under this premise. That is a fact, and that is what I am getting at.
- Comment on [deleted] 1 week ago:
My argument is that terminology is irrelevant; what matters is how both concepts are used in practice: both are employed and explicitly emphasized to persuade people to serve a centralized power, usually against their own interests. This was the case in the Third Reich and is also the case in the US today (and in many other countries as well).
What I’m getting at is that theoretical distinctions are only relevant in theory, not in practice; and in practice it makes no difference whether someone calls themselves a nationalist or a patriot if both labels can be used to suppress dissenters by force.
It would be nice if people who call themselves patriots were good people, but history teaches us that they are usually not.
- Comment on [deleted] 1 week ago:
If patriotism were practiced in this way, it would be desirable, but that is not the case. The current US administration’s portrayal of its criminal actions as patriotic duty should be example enough. This obviously has nothing to do with what you are saying. And yet, it is the reality.
- Comment on [deleted] 1 week ago:
Here’s an example of what I mean: Every ICE employee in the US will claim to be a patriot. I don’t think there’s much more to say about that.
I’m from Germany myself, and I can assure you that every Nazi in the Third Reich also considered himself a patriot.
Your distinction may be relevant in theory, but it is not in practice.
- Comment on [deleted] 1 week ago:
I still think they have the same effect.
- Comment on [deleted] 1 week ago:
I suppose it’s because those who are on the wrong side of history either benefit from it, believe they are on the right side of history due to their inhumane ideology, or simply don’t know their history or don’t care about it.
- Comment on Elon Musk’s Grok Goes Haywire, Boasts About Billionaire’s Pee-Drinking Skills and ‘Blowjob Prowess’ 1 week ago:
Thank you! I might get back to you on that sometime.
- Comment on Elon Musk’s Grok Goes Haywire, Boasts About Billionaire’s Pee-Drinking Skills and ‘Blowjob Prowess’ 1 week ago:
Yes, that could well be the case. Perhaps I am overly suspicious, but because the potential of LLMs to influence public opinion is so high due to their reach and the way they present information, I think it is highly likely that the companies offering them are already profiting from this, or at least will do so very soon.
Musk is already demonstrating in his clumsy way that it is easily possible to manipulate the output in a targeted manner if you have full control over the model – and this isn’t the first time he has attracted attention for doing so. You almost have to be grateful to him for it, because it’s so obvious. If you do it more subtly, it’s even more dangerous.
In any case, the fact is that the more people use LLMs, the more “interpretive authority” will be centralized, because the development and operation of LLMs is so costly that only a few large corporations can afford it – and they want to make money and are unscrupulous in doing so.
Either way, we will not be able to rely on people’s ability to recognize attempts at manipulation. I think this is already evident from the fact that obvious misinformation on mainstream social media platforms and elsewhere is believed unquestioningly by so many people. Unfortunately, the effects are disastrous: if people were more critical, Trump would never have become US president, for example; certainly not twice.
- Comment on Elon Musk’s Grok Goes Haywire, Boasts About Billionaire’s Pee-Drinking Skills and ‘Blowjob Prowess’ 1 week ago:
Yes, it’s clear that some of this may have to do with the fact that even if cloud LLMs have live browsing capabilities, they often still rely on outdated information from their training data. I am simply describing my impressions from somewhat extensive use of cloud LLMs.
I don’t have a list of examples, but in my comment below I have mentioned two that I find suspicious.
I simply think that these products should be used with skepticism as a matter of principle, because none of the companies that offer them are known for ethical behavior; quite the opposite.
In the case of Google, for example, I don’t think it will be too long before (public) advertising opportunities are implemented in Gemini, because Google’s business model is essentially the advertising business. The other cloud LLMs are also products of purely profit-oriented companies, and manipulating public opinion is a multi-billion-dollar business that they will certainly not want to miss out on. Social media platforms have demonstrated this in the past, as have Google and others with their “classic” search engines, targeting, and data-selling schemes. Whether this raises ethical issues is likely to be of little concern to these companies, as their only concern is profit.
The simple fact is that it is completely unclear what logic the providers use to regulate the output. It is equally unclear what criteria are used to select training data (here, too, the output can already be influenced by deliberately omitting certain information).
What I am getting at is that it can be assumed that all providers are interested in maximizing profits—and it is therefore likely that they will allow themselves to be paid to specifically promote certain topics, products, or even worldviews, or to withhold information that is unwelcome to wealthy interest groups.
As a regular user of cloud LLMs, I have the impression that this is already happening. I cannot prove it, though, as it would require systematic, scientific studies to demonstrate whether and with what effects manipulation occurs. Unfortunately, I do not know whether such studies already exist.
However, it is a fact that in the past, all technologies that could have been used to serve humanity have been massively abused for profit. I don’t understand why it should be any different with cloud LLMs, which are offered exclusively by some of the world’s largest corporations.
- Comment on Elon Musk’s Grok Goes Haywire, Boasts About Billionaire’s Pee-Drinking Skills and ‘Blowjob Prowess’ 1 week ago:
Thx for clarifying.
I once tried a community version from Hugging Face (distilled), which worked quite well even on modest hardware. But that was a while ago. Unfortunately, I haven’t had much time to look into this stuff lately, but I want to check it out again at some point.
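For anyone who wants to try the same thing, here’s a minimal sketch of running a small distilled checkpoint locally with the Hugging Face transformers library. The model id is just one example of a distilled community model, not a recommendation; substitute whatever fits your hardware.
```python
# Minimal sketch: run a small distilled model locally with transformers.
# The checkpoint below is just one example; swap in whatever fits your hardware.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",  # example checkpoint
    device_map="auto",  # uses a GPU if one is available, otherwise CPU
)

result = generator("What is a distilled language model?", max_new_tokens=100)
print(result[0]["generated_text"])
```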
- Comment on Elon Musk’s Grok Goes Haywire, Boasts About Billionaire’s Pee-Drinking Skills and ‘Blowjob Prowess’ 1 week ago:
For example, objective information about Israel’s actions in Gaza. The International Criminal Court issued arrest warrants against leading members of the government a long time ago, and the UN OHCHR classifies the actions of the State of Israel as genocide. However, these facts are by no means presented as clearly as would be appropriate given the importance of these institutions. Instead, when asked whether Israel is committing genocide, one receives vague, meaningless answers. Only when specifically asked whether numerous reputable institutions actually classify Israel’s actions as genocide do most LLMs reveal that much, if not all, evidence points to this being the case.
In my opinion, this is a deliberate method of obscuring reality, as the vast majority of users will not or cannot ask such questions if they are unaware of the UN OHCHR’s assessment or do not know that arrest warrants have been issued against leading members of the Israeli government on suspicion of war crimes (many other reputable institutions have come to the same conclusion as the UN OHCHR and the International Criminal Court).
Another example: if you ask whether it is legally permissible to describe Donald Trump as a rapist, you will be told that this is defamation. However, a judge in the Carroll case has explicitly stated that this description applies to Trump – so it is in fact legally permissible to describe him as such. Again, this information is only available upon explicit request, if at all. This also distorts reality for people who are not yet informed. However, since many people initially seek information from LLMs, this leads to them being misinformed because they lack the background knowledge to ask explicit follow-up questions when given misleading answers.
Given the influence of both Israel and the US president, I cannot help but suspect that there is an intention behind this.
- Comment on Elon Musk’s Grok Goes Haywire, Boasts About Billionaire’s Pee-Drinking Skills and ‘Blowjob Prowess’ 1 week ago:
Yes, that’s true. Running it yourself is resource-intensive, but unlike with other capable LLMs, it is at least somewhat possible: not for most private individuals, given the hardware requirements, but for companies with the necessary budget.
- Comment on Elon Musk’s Grok Goes Haywire, Boasts About Billionaire’s Pee-Drinking Skills and ‘Blowjob Prowess’ 1 week ago:
Ahh, thank you, I had misunderstood that, since DeepSeek is (more or less) an open-source LLM from China that can also be run and fine-tuned on your own hardware.
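For the fine-tuning part, here’s a rough sketch of what a parameter-efficient (LoRA) setup looks like with the Hugging Face peft library; the model id and target modules are assumptions for a Qwen-style distilled checkpoint, not a tested recipe.
```python
# Rough sketch of parameter-efficient fine-tuning (LoRA) on an open-weight
# model with peft. Model id and target modules are assumptions for a
# Qwen-style distilled checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)  # needed once you tokenize a dataset
model = AutoModelForCausalLM.from_pretrained(model_id)

# Attach low-rank adapters so only a tiny fraction of the weights is trained,
# which is what makes fine-tuning feasible on your own hardware.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections in Qwen-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```
From there you would feed tokenized training examples to a standard transformers Trainer; the adapters can then be saved and shared separately from the base weights.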
- Comment on Elon Musk’s Grok Goes Haywire, Boasts About Billionaire’s Pee-Drinking Skills and ‘Blowjob Prowess’ 1 week ago:
You mean DeepSeek on a local device?
- Comment on Elon Musk’s Grok Goes Haywire, Boasts About Billionaire’s Pee-Drinking Skills and ‘Blowjob Prowess’ 1 week ago:
I use various AI models and I repeatedly notice that certain information is withheld or misrepresented, even though it is freely available in abundance and is therefore part of the training data.
I don’t think this is a coincidence, especially since the operators of all cloud LLMs are so business-minded.
- Comment on Elon Musk’s Grok Goes Haywire, Boasts About Billionaire’s Pee-Drinking Skills and ‘Blowjob Prowess’ 1 week ago:
Although Grok’s manipulation is so blatantly obvious, I don’t believe that most people will come to realize that those who control LLMs will naturally use this power to pursue their interests.
They will continue to use ChatGPT and so on uncritically and take everything at face value because it’s so nice and easy, overlooking or ignoring that their opinions, even their reality, are being manipulated by a few influential people.
Other companies are more subtle about it, but from OpenAI to MS, Google, and Anthropic, all cloud models are specifically designed to control people’s opinions—they are not objective, but the majority of users do not question them as they should, and that is what makes them so dangerous.
- Submitted 2 weeks ago to showerthoughts@lemmy.world | 9 comments
- Comment on The Guy Claiming That You Have TDS 3 weeks ago:
I think it’s simply impossible for reasonably rational people to understand this peculiar cult of personality. You either need people who are crazy themselves or psychologists to even begin to understand the motives behind it.
Maybe it’s just me, but I don’t think political interest or commitment to a cause can explain these people’s complete loss of touch with reality.