
Librarians Are Tired of Being Accused of Hiding Secret Books That Were Made Up by AI

943 likes

Submitted 3 weeks ago by jandoenermann@feddit.org to technology@lemmy.world

https://gizmodo.com/librarians-arent-hiding-secret-books-from-you-that-only-ai-knows-about-2000698176

Comments

  • U7826391786239@lemmy.zip ⁨3⁩ ⁨weeks⁩ ago

    i don’t think it’s emphasized enough that AI isn’t just making up bogus citations with nonexistent books and articles, but increasingly actual articles and other sources are completely AI generated too. so a reference to a source might be “real,” but the source itself is complete AI slop bullshit

    • BreadstickNinja@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

      It’s a shit ouroboros, Randy!

      • U7826391786239@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

        garbage in, garbage out and back in again

    • tym@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

      The movie Idiocracy was a prophecy that we were too arrogant to take seriously.

      now go away, I’m baitin

      • IronBird@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        we would be lucky to have a president as down to earth as camacho

      • CheeseNoodle@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        When is that movie set again? I want to mark my calendar for the day the US finally gets a competent president.

      • obinice@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Wouldn’t it be batein?

        It’s important we get this right

        for the new national anthem

    • vacuumflower@lemmy.sdf.org ⁨3⁩ ⁨weeks⁩ ago

      It’s new quantities, but an old mechanism, though. Humans were making up shit for all of history of talking.

      In olden days it was resolved by trust and closed communities (hence various mystery cults in Antiquity, or freemasons in relatively recent times, or academia when it was a bit more protected).

      Still doable and not a loss - after all, you are ultimately only talking to people anyway. One can build all the same systems on an F2F basis.

      • wizardbeard@lemmy.dbzer0.com ⁨3⁩ ⁨weeks⁩ ago

        The scale is a significant part of the problem though, which can’t just be hand waved away.

      • U7826391786239@lemmy.zip ⁨3⁩ ⁨weeks⁩ ago

        i’m not understanding what you’re saying. “Still doable and not a loss”??

        sounds like something AI would say

      • phutatorius@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

        At a certain point, quantity has a quality of its own.

  • brsrklf@jlai.lu ⁨2⁩ ⁨weeks⁩ ago

    Some people even think that adding things like “don’t hallucinate” and “write clean code” to their prompt will make sure their AI only gives the highest quality output.

    Arthur C. Clarke was not wrong but he didn’t go far enough. Even laughably inadequate technology is apparently indistinguishable from magic.

    • clay_pidgin@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

      I find those prompts bizarre. If you could just tell it not to make things up, surely that could be added to the built in instructions?

      • mushroommunk@lemmy.today ⁨2⁩ ⁨weeks⁩ ago

        I don’t think most people know there’s built in instructions. I think to them it’s legitimately a magic box.

      • Tyrq@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

        Almost as if misinformation is the product

      • Rugnjr@lemmy.blahaj.zone ⁨2⁩ ⁨weeks⁩ ago

        Testing (including my own) finds some such system prompts effective. You might think it’s stupid. I’d agree - it’s completely bananapants insane that that’s what it takes. But it does work, at least a little bit.

    • InternetCitizen2@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      From, enhance this image

      (•_•)
      ( •_•)>⌐■-■
      (⌐■_■)

    • Wlm@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

      Like a year ago adding “and don’t be racist” actually made the output less racist 🤷.

      • NikkiDimes@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        That’s more of a tone thing, which is something AI is capable of modifying. Hallucination is more of a foundational issue baked directly into how these models are designed and trained and not something you can just tell it not to do.

    • shalafi@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Problem is, LLMs are amazing the vast majority of the time. Especially if you’re asking about something you’re not educated or experienced with.

      Anyway, I picked up my kids (10 & 12) for Christmas and asked them if they use “That’s AI” to call something bullshit. Yep!

      • treadful@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

        Problem is, LLMs are amazing the vast majority of the time. Especially if you’re asking about something you’re not educated or experienced with.

        Don’t you see the problem with that logic?

      • vivalapivo@lemmy.today ⁨2⁩ ⁨weeks⁩ ago

        Especially if you’re asking about something you’re not educated or experienced with

        That’s the biggest problem for me. When I ask about something I’m well educated in, it produces either the right answer, a very opinionated POV, or clear bullshit. When I use it for something I’m not educated in, I’m very afraid I’ll receive bullshit. So here I am, not knowing whether what I’m holding is bullshit or not.

  • nulluser@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

    Everyone knows that AI chatbots like ChatGPT, Grok, and Gemini can often hallucinate sources.

    No, no, apparently not everyone, or this wouldn’t be a problem.

    • FlashMobOfOne@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      In hindsight, I’m really glad that the first time I ever used an LLM it gave me demonstrably false info. That demolished the veneer of trustworthiness pretty quickly.

  • SleeplessCityLights@programming.dev ⁨2⁩ ⁨weeks⁩ ago

    I had to explain to three separate family members what it means for an AI to hallucinate. The look of terror on their faces afterwards is proof that people have no idea how “smart” an LLM chatbot is. They have probably been using one at work for a year thinking it’s accurate.

    • hardcoreufo@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Idk how anyone searches the internet anymore. Search engines all turn up garbage, so I ask an AI. Maybe one out of 20 times it turns up what I’m asking for better than a search engine. The rest of the time it runs me in circles that don’t work and wastes hours. So then I go back to the search engine and find what I need buried 20 pages deep.

      • MrScottyTay@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

        It’s fucking awful, isn’t it. Some day soon, when I can be arsed, I’ll have to give one of the paid search engines a go.

        I’m currently on Qwant but I’ve already noticed a degradation in its results since I started using it at the start of the year.

      • BarneyPiccolo@lemmy.today ⁨2⁩ ⁨weeks⁩ ago

        I usually skip the AI blurb because they are so inaccurate, and dig through the listings for the info I’m researching. If I go back and look at the AI blurb after that, I can tell where they took various little factoids, and occasionally they’ll repeat some opinion or speculation as fact.

      • ironhydroxide@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

        Agreed. And the search engines returning AI generated pages masquerading as websites with real information is precisely why I spun up a searXNG instance. It actually helps a lot.

      • PixelPinecone@lemmy.today ⁨2⁩ ⁨weeks⁩ ago

        I pay for Kagi search. It’s amazing

      • SocialMediaRefugee@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        I’ve asked it for a solution to something and it gives me A. I tell it A doesn’t work so it says “Of course!” and gives me B. Then I tell it B doesn’t work and it gives me A…

    • markovs_gun@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      I legitimately don’t understand how someone can interact with an LLM for more than 30 minutes and come away from it thinking that it’s some kind of super intelligence or that it can be trusted as a means of gaining knowledge without external verification. Do they just not even consider the possibility that it might not be fully accurate, and not bother to test it?

      I asked it all kinds of tough and ambiguous questions the day I got access to ChatGPT and very quickly found inaccuracies, common misconceptions, and popular but ideologically motivated answers. For example (I don’t know if this is still the case), if you ask ChatGPT who wrote various books of the Bible, it gives not only the traditional view, but specifically the evangelical Christian view on most versions of these questions. That makes sense, because evangelicals are extremely prolific writers, but it’s simply wrong to reply “Scholars generally believe that the Gospel of Mark was written by a companion of Peter named John Mark” when that view hasn’t been favored in academic biblical studies for over 100 years, however traditional it is. Similarly, asking it questions about early Islamic history gets you the religious views of Ash’ari Sunni Muslims, not the general scholarly consensus.

      • echodot@feddit.uk ⁨2⁩ ⁨weeks⁩ ago

        I mean, I’ve used AI to write my job-mandated end-of-year self-assessment report. I don’t care about it; it’s not like they’ll give me a pay rise, so I’m not putting effort into it.

        The AI says I’ve led a project related to Windows 11 updates. I haven’t, but it looks accurate and no one else will be able to tell it’s fake.

        So I guess that’s the reason: they’re using the AI to talk about subjects they can’t fact-check, so it looks accurate.

    • SocialMediaRefugee@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      I have a friend who constantly sends me videos that get her all riled up. Half the time I patiently explain to her why a video is likely AI or faked some other way. “Notice how it never says where it is taking place? Notice how they never give any specific names?” Fortunately she eventually agrees with me but I feel like I’m teaching critical thinking 101. I then think of the really stupid people out there who refuse to listen to reason.

    • SocialMediaRefugee@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      The results I get from chatgpt half the time are pretty bad. If I ask for simple code it is pretty good but ask it about how something works? Nope. All I need to do is slightly rephrase the question and I can get a totally different answer.

      • MBech@feddit.dk ⁨2⁩ ⁨weeks⁩ ago

        I mainly use it as a search engine, like: “Find me an article that explains how to change a light bulb” kinda shit.

    • vivalapivo@lemmy.today ⁨2⁩ ⁨weeks⁩ ago

      I’m not using LLMs often, but I haven’t had a single hallucination for 6 months already. I’m inclined to believe this recursive-call stuff works.

      • DireTech@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

        Either you’re using them rarely or just not noticing the issues. I mainly use them for looking up documentation and recently had Google’s AI screw up how sets work in JavaScript. If it makes mistakes on something that well documented, how is it doing on other items?
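        For what it’s worth, the Set behaviour in question is fully specified and takes seconds to check in any JavaScript runtime. A minimal sketch of the documented semantics (not what the AI claimed):

        ```javascript
        // A Set stores unique values: construction de-duplicates,
        // has() checks membership by value, and add() is a no-op
        // for values that are already present.
        const s = new Set([1, 2, 2, 3]);
        console.log(s.size);   // 3 (the duplicate 2 was collapsed)
        console.log(s.has(2)); // true
        s.add(2);              // no effect; 2 is already a member
        console.log(s.size);   // 3
        ```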

      • Lfrith@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

        I got hallucinations when trying to find a book I’d read but didn’t know the title of. It also hallucinated NBA playoff results, with the wrong team winning, and got basic math calculations wrong.

        It’s a language model, so its purpose is to string together words that sound like sentences, but it can’t be fully trusted to be accurate. The best it can do is give you a source so you can go straight to that and read it instead.

        It’s decent at generating basic code, where you can test for yourself whether it outputs what you want. But I don’t trust it as a resource for information when it has served up even wrong sports facts.

    • jtzl@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

      They’re really good.*

      • you just gotta know the material yourself so you can spot errors, and you gotta be very specific and take it one step at a time.

      Personally, I think the term “AI” is an extreme misnomer. I am calling ChatGPT “next-token prediction.” This notion that it’s intelligent is absurd. Like, is a dictionary good at words now???

  • b_tr3e@feddit.org ⁨2⁩ ⁨weeks⁩ ago

    No AI needed for that. These bloody librarians wouldn’t let us have the Necronomicon either. Selfish bastards…

    • smh@slrpnk.net ⁨2⁩ ⁨weeks⁩ ago

      Am librarian. Here you go

      • b_tr3e@feddit.org ⁨2⁩ ⁨weeks⁩ ago

        Limited preview - some pages are unavailable.

        Very funny… Yäääh! Shabb nigurath… wrdlbrmbfd,

      • glitchdx@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Some pages are omitted. Yeah. There’s like four pages of 300. I’m disappointed beyond measure and my day is ruined.

    • Naevermix@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      I swear, librarians are the only thing standing between humanity and true greatness!

      • b_tr3e@feddit.org ⁨2⁩ ⁨weeks⁩ ago

        There’s only the One High and Mighty who can bring true greatness to humanity! Praise Cthulhu!

    • RalfWausE@feddit.org ⁨2⁩ ⁨weeks⁩ ago

      This one is on you. MY copy of the necronomicon firmly sits in my library in the west wing…

      • mPony@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        it sits on whatever shelf it sees fit to sit on, on any given day.

    • Ensign_Crab@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Well maybe if people could just say the three words right, they wouldn’t need to.

    • SocialMediaRefugee@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      The Simpsons showed us the danger of the occult section in the library.

  • pHr34kY@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

    There’s an old Monty Python sketch that comes to mind when people ask a librarian for a book that doesn’t exist.

    • palordrolap@fedia.io ⁨3⁩ ⁨weeks⁩ ago

      Are you sure that's not pre-Python? Maybe one of David Frost's shows like At Last the 1948 Show or The Frost Report.

      Marty Feldman (the customer) wasn't one of the Pythons, and the comments on the video suggest that Graham Chapman took on the customer role when the Pythons performed it. (Which, if they did, suggests that Cleese may have written it, in order for him to have been allowed to take it with him.)

    • 5too@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

      Thanks for this, I hadn’t seen this one!

      • xthexder@l.sw0.com ⁨2⁩ ⁨weeks⁩ ago

        It’s always a treat to find a new Monty Python sketch. I hadn’t seen this one either and had a good laugh

    • brbposting@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

      Ahahahahaha one of the best I’ve seen thanks

  • MountingSuspicion@reddthat.com ⁨2⁩ ⁨weeks⁩ ago

    I believe I got into a conversation on Lemmy where I was saying that there should be a big persistent warning banner stuck on every single AI chat app that “the following information has no relation to reality” or some other thing. The other person kept insisting it was not needed. I’m not saying it would stop all of these events, but it couldn’t hurt.

    • glitchdx@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      www.explainxkcd.com/…/2501:_Average_Familiarity

      People who understand the technology forget that normies don’t understand the technology.

      • TubularTittyFrog@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        and normies think you’re an asshole if you try to explain the technology to them, and cling to their ignorance of it because it’s more ‘fun’ to believe in magic

      • eli@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        TIL there is a whole ass mediawiki for explaining XKCD comics.

  • zanzo@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Librarian here: Good news is that many libraries are standing up AI literacy programs to show people not only how to judge AI outputs but also how to get better results. If your local library isn’t doing this, ask them why not.

    • fruitycoder@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

      Any good examples I could share with my local libraries?

  • SocialMediaRefugee@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Every time I think people have reached maximum stupidity they prove me wrong.

    • PetteriSkaffari@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      “Two things are infinite: the universe and human stupidity; and I’m not sure about the universe.”

      Albert Einstein (supposedly)

  • panda_abyss@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

    I plugged my local AI into offline Wikipedia, expecting a source of truth to make it way, way better.

    It’s better, but I also can’t tell when it’s making up citations now, because it uses Wikipedia to back up the world view from its pre-training instead of reality.

    So it’s not really much better.

    Hallucinations become a bigger problem the more info they have (that you now have to double check)

    • FlashMobOfOne@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      At my work, we don’t allow it to make citations. We instruct it to add in placeholders for citations instead, which allows us to hunt down the info, ensure it’s good info, and then add it in ourselves.

      • SkybreakerEngineer@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        That’s still looking for sources that fit a predetermined conclusion, not real research

      • panda_abyss@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

        That probably makes sense.

        I haven’t played around since the initial shell shock of “oh god it’s worse now”

  • SethTaylor@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    I guess Thomas Fullman was right: “When humans find wisdom in cold replicas of themselves, the arrow of evolution will bend into a circle”. That’s from Automating the Mind. One of his best.

  • Lucidlethargy@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

    Wait, are you guys saying “Of Mice And Men: Lennie’s back” isn’t real? I will LOSE MY SHIT if anyone confirms this!! 1!! 2.!

    • oppy1984@lemdro.id ⁨2⁩ ⁨weeks⁩ ago

      It’s ok, it’s real…now just tell me about the bunnies.

    • Paranoidfactoid@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      I got all hot and bothered by, “Of Mice in Glenn: an ER Doc’s Story”, which turned out to not be the porn I expected.

    • jtzl@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

      Lol. “I came to break some necks and chew some bubblegum – and I’m all out of bubblegum.”

  • Blackmist@feddit.uk ⁨2⁩ ⁨weeks⁩ ago

    Luckily, the future will provide not only AI titles, but the contents of said books as well.

    Given the amount of utter drivel people are watching and reading of late, we’re probably already most of the way there.

    • innermachine@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      I was under the impression there were completely AI-written books for sale on places like Amazon already!

  • Imgonnatrythis@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

    They really should stop hiding them. We all deserve to have access to these secret books that were made up by AI since we all contributed to the training data used to write these secret books.

  • vacuumflower@lemmy.sdf.org ⁨3⁩ ⁨weeks⁩ ago

    This and many other new problems are solved by applying reputation systems (like those banks use for your credit rating, or employers share with each other) in yet another direction. “This customer is an asshole, allocate less time for their requests and warn them that they have a bad history of demanding nonexistent books”. Easy.

    Then they’ll talk with their friends about how libraries are all possessed by a conspiracy, similarly to how similarly intelligent people talk about a Jewish plot to take over the world, flat earth and such.

    • porcoesphino@mander.xyz ⁨3⁩ ⁨weeks⁩ ago

      It’s a fun problem, trying to apply this to the whole internet. I’m slowly adding sites with obviously generated blogs to Kagi, but it’s getting worse.

  • Kolanaki@pawb.social ⁨2⁩ ⁨weeks⁩ ago

    I read that as libertarians at first and wasn’t even fazed.

  • BilSabab@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    As if a huge chunk of the genre section wasn’t already so formulaic it might as well have been written by AI.

  • DeathByBigSad@sh.itjust.works ⁨3⁩ ⁨weeks⁩ ago

    Skill issue, just use the Library of Babel

  • petrjanda@gonzo.markets ⁨2⁩ ⁨weeks⁩ ago

    Good, people need to realise AI is not intelligent. It’s like a program that has memorised millions of books, some truth, some fiction, but it doesn’t really have the intellectual capacity to distinguish the two.

  • jtzl@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

    I really don’t have this experience with ChatGPT. Every once in a while, ChatGPT returns an answer that doesn’t seem legitimate, so I ask, “Really?” And then it returns, “No, that is incorrect.” Which… I really hope the robots responsible for eliminating humans are not so hapless. But the stories about AI encouraging kids to kill themselves or mentioning books that don’t exist seem a little made up. And, like, don’t get me wrong: I want to believe ChatGPT listed glue as a good ingredient for making pizza crust thicker… I just require a bit more evidence.

  • Armand1@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Good article with many links to other interesting articles. Acts like a good summary for the situation this year.

    I didn’t know about the MAHA thing, but I guess I’m not surprised. It’s hard to know how much is incompetence and idiocy and how much is malicious.

  • PlaidBaron@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Everybody knows the world is full of stupid people.

  • abbiistabbii@lemmy.blahaj.zone ⁨2⁩ ⁨weeks⁩ ago

    Oh God now we’re going to have people insisting that librarians are secretly part of a conspiracy.
