I understand what you mean and don't entirely disagree. If you use it like a calculator with information YOU'VE ALREADY ASSEMBLED, it can work. If you rely on it to have all the information and/or give an accurate rendering of whatever it's trained on, you're probably gonna have a bad time. I'm working on a book that includes, in part, information about the Mormons' history with slavery. If you ask AI, it will sometimes insist they never had slaves at all. In fact, it will argue with you until you get legitimately pissed. Ask me how I know.
Comment on Missouri AG: Any AI That Doesn’t Praise Donald Trump Might Be “Consumer Fraud” (No, Really)
Vupware@lemmy.zip 2 days ago
Despite the problems with these LLMs, I have found them tremendously valuable when it comes to finding sources regarding historical events.
It’s much easier to find historical evidence that goes against the status quo with LLMs than it is with a conventional search engine. I suspect that this is an unintentional side effect of the technology; the fact that these LLMs are black boxes might have something to do with it.
ivanafterall@lemmy.world 1 day ago
smayonak@lemmy.world 1 day ago
They are both great and also biased. Because LLM training data includes vast libraries of history books, including such works as Dark Alliance, it is capable of running down paths that few historians would walk. I recently used an LLM to find an unimpeachable source for the CIA’s connection to cocaine trafficking aircraft. This was not available through a simple Google search because the site was likely deindexed.
BiteSizedZeitGeist@lemmy.world 1 day ago
Have you asked Grok about the Jewish Holocaust?
Vupware@lemmy.zip 1 day ago
No haha but I see your point.
It is important to use these tools with an abundance of caution, and those who cannot think critically should certainly stay away from them.
LLMs are unequivocally a net negative to humankind, but there are silver linings.
M0oP0o@mander.xyz 1 day ago
The ability to make shit up helps as well.
Vupware@lemmy.zip 1 day ago
Certainly! That’s why you use them to find sources. I don’t immediately trust anything an LLM spits out.
M0oP0o@mander.xyz 1 day ago
The number of times I see people just take what is output as gospel is scary. And on things with real risk, like electrical repair!
Vupware@lemmy.zip 1 day ago
Absolutely. It is essential to use these LLMs with an abundance of caution, and to explore a broader range of sources from various viewpoints.
Caution and breadth of sources should be staples of any historical or contemporary investigation. But alas, the minds of my fellow citizens have been eroded; these practices cannot be assumed to be broadly adopted. As such, when saying something positive about LLMs, we must provide ample disclaimers and remind the general populace of the dangers and harms these technologies expose us and our surroundings to.
13igTyme@lemmy.world 1 day ago
I’ve seen Google AI say one thing and provide a source. When I read the source, sometimes it’s completely different or even the exact opposite.