Librarians Are Tired of Being Accused of Hiding Secret Books That Were Made Up by AI
Submitted 11 hours ago by jandoenermann@feddit.org to technology@lemmy.world
Comments
b_tr3e@feddit.org 3 hours ago
No AI needed for that. These bloody librarians wouldn’t let us have the Necronomicon either. Selfish bastards…
Ensign_Crab@lemmy.world 18 minutes ago
Well maybe if people could just say the three words right, they wouldn’t need to.
smh@slrpnk.net 49 minutes ago
Am librarian. Here you go
RalfWausE@feddit.org 2 hours ago
This one is on you. MY copy of the necronomicon firmly sits in my library in the west wing…
mPony@lemmy.world 1 hour ago
it sits on whatever shelf it sees fit to sit on, on any given day.
Naevermix@lemmy.world 3 hours ago
I swear, librarians are the only thing standing between humanity and true greatness!
b_tr3e@feddit.org 3 hours ago
There’s only the One High and Mighty who can bring true greatness to humanity! Praise Cthulhu!
brsrklf@jlai.lu 7 hours ago
Some people even think that adding things like “don’t hallucinate” and “write clean code” to their prompt will make sure their AI only gives the highest quality output.
Arthur C. Clarke was not wrong but he didn’t go far enough. Even laughably inadequate technology is apparently indistinguishable from magic.
clay_pidgin@sh.itjust.works 7 hours ago
I find those prompts bizarre. If you could just tell it not to make things up, surely that could be added to the built in instructions?
mushroommunk@lemmy.today 6 hours ago
I don’t think most people know there’s built in instructions. I think to them it’s legitimately a magic box.
Tyrq@lemmy.dbzer0.com 6 hours ago
Almost as if misinformation is the product
Wlm@lemmy.zip 3 hours ago
Like a year ago adding “and don’t be racist” actually made the output less racist 🤷.
NikkiDimes@lemmy.world 3 hours ago
That’s more of a tone thing, which is something AI is capable of modifying. Hallucination is more of a foundational issue baked directly into how these models are designed and trained and not something you can just tell it not to do.
InternetCitizen2@lemmy.world 6 hours ago
From, enhance this image
(•_•)
( •_•)>⌐■-■
(⌐■_■)
U7826391786239@lemmy.zip 10 hours ago
i don’t think it’s emphasized enough that AI isn’t just making up bogus citations with nonexistent books and articles, but increasingly actual articles and other sources are completely AI generated too. so a reference to a source might be “real,” but the source itself is complete AI slop bullshit
BreadstickNinja@lemmy.world 10 hours ago
It’s a shit ouroboros, Randy!
tym@lemmy.world 9 hours ago
the movie idiocracy was a prophecy that we were too arrogant to take seriously.
now go away, I’m baitin
IronBird@lemmy.world 7 hours ago
we would be lucky to have a president as down to earth as camacho
CheeseNoodle@lemmy.world 8 hours ago
When is that movie set again? I want to mark my calendar for the day the US finally gets a competent president.
vacuumflower@lemmy.sdf.org 10 hours ago
It’s new quantities, but an old mechanism, though. Humans have been making up shit for as long as they’ve been talking.
In olden days it was resolved by trust and closed communities (hence various mystery cults in Antiquity, or freemasons in relatively recent times, or academia when it was a bit more protected).
Still doable and not a loss - after all, you are ultimately only talking to people anyway. One can build all the same systems on an F2F basis.
wizardbeard@lemmy.dbzer0.com 9 hours ago
The scale is a significant part of the problem though, which can’t just be hand waved away.
U7826391786239@lemmy.zip 9 hours ago
i’m not understanding what you’re saying. “Still doable and not a loss”??
sounds like something AI would say
phutatorius@lemmy.zip 8 hours ago
At a certain point, quantity has a quality of its own.
nulluser@lemmy.world 9 hours ago
Everyone knows that AI chatbots like ChatGPT, Grok, and Gemini can often hallucinate sources.
No, no, apparently not everyone, or this wouldn’t be a problem.
FlashMobOfOne@lemmy.world 7 hours ago
In hindsight, I’m really glad that the first time I ever used an LLM it gave me demonstrably false info. That demolished the veneer of trustworthiness pretty quickly.
SethTaylor@lemmy.world 2 hours ago
I guess Thomas Fullman was right: “When humans find wisdom in cold replicas of themselves, the arrow of evolution will bend into a circle”. That’s from Automating the Mind. One of his best.
MountingSuspicion@reddthat.com 8 hours ago
I believe I got into a conversation on Lemmy where I was saying that there should be a big persistent warning banner stuck on every single AI chat app that “the following information has no relation to reality” or some other thing. The other person kept insisting it was not needed. I’m not saying it would stop all of these events, but it couldn’t hurt.
glitchdx@lemmy.world 6 hours ago
www.explainxkcd.com/…/2501:_Average_Familiarity
People who understand the technology forget that normies don’t understand the technology.
TubularTittyFrog@lemmy.world 4 hours ago
and normies think you’re an asshole if you try to explain the technology to them, and cling to their ignorance of it because it’s more ‘fun’ to believe in magic
eli@lemmy.world 5 hours ago
TIL there is a whole ass mediawiki for explaining XKCD comics.
pHr34kY@lemmy.world 10 hours ago
There’s an old Monty Python sketch that comes to mind when people ask a librarian for a book that doesn’t exist.
palordrolap@fedia.io 9 hours ago
Are you sure that's not pre-Python? Maybe one of David Frost's shows like At Last the 1948 Show or The Frost Report.
Marty Feldman (the customer) wasn't one of the Pythons, and the comments on the video suggest that Graham Chapman took on the customer role when the Pythons performed it. (Which, if they did, suggests that Cleese may have written it, in order for him to have been allowed to take it with him.)
5too@lemmy.world 9 hours ago
Thanks for this, I hadn’t seen this one!
xthexder@l.sw0.com 7 hours ago
It’s always a treat to find a new Monty Python sketch. I hadn’t seen this one either and had a good laugh
brbposting@sh.itjust.works 7 hours ago
Ahahahahaha one of the best I’ve seen thanks
panda_abyss@lemmy.ca 8 hours ago
I plugged my local AI into offline wikipedia expecting a source of truth to make it way way better.
It’s better, but I also can’t tell when it’s making up citations now, because it uses Wikipedia to sort its own world view from pretraining instead of reality.
So it’s not really much better.
Hallucinations become a bigger problem the more info they have (that you now have to double check)
FlashMobOfOne@lemmy.world 7 hours ago
At my work, we don’t allow it to make citations. We instruct it to add in placeholders for citations instead, which allows us to hunt down the info, ensure it’s good info, and then add it in ourselves.
SkybreakerEngineer@lemmy.world 6 hours ago
That’s still looking for sources that fit a predetermined conclusion, not real research
panda_abyss@lemmy.ca 7 hours ago
That probably makes sense.
I haven’t played around since the initial shell shock of “oh god it’s worse now”
Imgonnatrythis@sh.itjust.works 4 hours ago
They really should stop hiding them. We all deserve to have access to these secret books that were made up by AI since we all contributed to the training data used to write these secret books.
Armand1@lemmy.world 5 hours ago
Good article with many links to other interesting articles. Acts as a good summary of the situation this year.
I didn’t know about the MAHA thing, but I guess I’m not surprised. It’s hard to know how much is incompetence and idiocy and how much is malicious.
vacuumflower@lemmy.sdf.org 10 hours ago
This and many other new problems are solved by applying reputation systems (like those banks use for your credit rating, or employers share with each other) in yet another direction. “This customer is an asshole, allocate less time for their requests and warn them that they have a bad history of demanding nonexistent books”. Easy.
Then they’ll talk with their friends about how libraries are all possessed by a conspiracy, similarly to how similarly intelligent people talk about a Jewish plot to take over the world, flat earth and such.
porcoesphino@mander.xyz 9 hours ago
It’s a fun problem trying to apply this to the whole internet. I’m slowly adding sites with obvious generated blogs to Kagi but it’s getting worse
DeathByBigSad@sh.itjust.works 9 hours ago
Skill issue, just use the Library of Babel
PlaidBaron@lemmy.world 6 hours ago
Everybody knows the world is full of stupid people.
SleeplessCityLights@programming.dev 2 hours ago
I had to explain to three separate family members what it means for an AI to hallucinate. The look of terror on their faces afterward is proof that people have no idea how “smart” an LLM chatbot is. They have probably been using one at work for a year thinking they are accurate.
hardcoreufo@lemmy.world 43 minutes ago
Idk how anyone searches the internet anymore. Search engines all turn up junk, so I ask an AI. Maybe one out of 20 times it turns up what I’m asking for better than a search engine. The rest of the time it runs me in circles that don’t work and wastes hours. So then I go back to the search engine and find what I need buried 20 pages deep.