lotide

AI chatbots unable to accurately summarise news, BBC finds

541 likes

Submitted 2 months ago by misk@sopuli.xyz to technology@lemmy.world

https://www.bbc.com/news/articles/c0m17d8827ko

source

Comments

  • db0@lemmy.dbzer0.com ⁨2⁩ ⁨months⁩ ago

    As always, never rely on LLMs for anything factual. They’re only good for things with a massive tolerance for error, such as entertainment (e.g. RPGs)

    source
    • kboy101222@sh.itjust.works ⁨2⁩ ⁨months⁩ ago

      I tried using it to spitball ideas for my DMing. I was running a campaign set in a real-life location known for a specific thing. Even if I told it not to include that thing, it would still shoehorn it into random spots. It quickly became absolutely useless once I didn’t need that thing included

      Sorry for being vague, I just didn’t want to post my home town on here

      source
      • homesweethomeMrL@lemmy.world ⁨2⁩ ⁨months⁩ ago

        You can say Space Needle. We get it.

        source
    • 1rre@discuss.tchncs.de ⁨2⁩ ⁨months⁩ ago

      The issue for RPGs is that LLMs have such “small” context windows, and a big point of RPGs is that anything could be important, investigated, or just come up later

      Although, similar to how Deepseek uses two stages (“how would you solve this problem”, then “solve this problem following this train of thought”), you could have an input of recent conversations plus a private/unseen “notebook” which is modified/appended to based on recent events. That would need a whole new model to be done properly, which likely wouldn’t be profitable short term, although I imagine the same infrastructure could be used for any LLM usage where fine details over a long period matter more than specific wording, including factual things
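The “private notebook” idea above can be sketched in a few lines. Everything here is hypothetical illustration — the class name, the window size, and the stand-in `summarize` function (which would be an LLM call in practice) are all made up, not any real API:

```python
def summarize(events):
    """Stand-in for an LLM call that condenses old events into terse notes."""
    return [f"note: {e}" for e in events]

class CampaignMemory:
    """Keeps a short window of verbatim recent turns plus an append-only notebook."""

    def __init__(self, window=3):
        self.window = window
        self.recent = []    # verbatim recent conversation
        self.notebook = []  # condensed long-term notes, never shown to the player

    def add_turn(self, turn):
        self.recent.append(turn)
        # Once a turn falls out of the window, fold it into the notebook
        # instead of dropping it entirely.
        if len(self.recent) > self.window:
            old = self.recent.pop(0)
            self.notebook.extend(summarize([old]))

    def build_prompt(self, question):
        # Long-term notes first, then the verbatim recent turns, then the query.
        return "\n".join(self.notebook + self.recent + [question])
```

The point is only that old material gets compressed rather than truncated, so a detail from session one can still surface later.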

      source
      • db0@lemmy.dbzer0.com ⁨2⁩ ⁨months⁩ ago

        The problem is that the “train of thought” is also hallucinations. It might make the model better with more compute, but it’s diminishing returns.

        RPGs can use LLMs because they’re not critical. If the LLM spews out nonsense you don’t like, you just ask it to redo, because it’s all subjective.

        source
    • Eheran@lemmy.world ⁨2⁩ ⁨months⁩ ago

      Nonsense, I use it a ton for science and engineering, it saves me SO much time!

      source
      • Atherel@lemmy.dbzer0.com ⁨2⁩ ⁨months⁩ ago

        Do you blindly trust the output or is it just a convenience and you can spot when there’s something wrong? Because I really hope you don’t rely on it.

        source
    • kat@orbi.camp ⁨2⁩ ⁨months⁩ ago

      Or at least as an assistant in a field you’re an expert in. Love using it for boilerplate at work (tech).

      source
  • mentalNothing@lemmy.world ⁨2⁩ ⁨months⁩ ago

    Idk guys. I think the headline is misleading. I had an AI chatbot summarize the article and it says AI chatbots are really, really good at summarizing articles. In fact it pinky promised.

    source
  • homesweethomeMrL@lemmy.world ⁨2⁩ ⁨months⁩ ago

    Turns out, spitting out words when you don’t know what anything means or what “means” means is bad, mmmmkay.

    It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants.

    It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.

    Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.

    Introduced factual errors

    Yeah that’s . . . that’s bad. As in, not good. As in - it will never be good. With a lot of work and grinding it might be “okay enough” for some tasks some day. That’ll be another 200 Billion please.

    source
    • chud37@lemm.ee ⁨2⁩ ⁨months⁩ ago

      That’s the core problem though, isn’t it? They are just predictive-text machines that don’t understand what they are saying. Yet we are treating them as if they were some amazing solution to all our problems

      source
      • homesweethomeMrL@lemmy.world ⁨2⁩ ⁨months⁩ ago

        Well, “we” aren’t, but there’s a hype machine in operation bigger than anything in history, because a few tech bros think they’re going to rule the world.

        source
    • devfuuu@lemmy.world [bot] ⁨2⁩ ⁨months⁩ ago

      I’ll be here begging for a miserable 1 million to invest in some freaking trains and bicycle paths. Thanks.

      source
    • desktop_user@lemmy.blahaj.zone ⁨2⁩ ⁨months⁩ ago

      Alternatively: 49% had no significant issues and 81% had no factual errors. It’s not perfect, but it’s cheap, quick and easy.

      source
      • itslilith@lemmy.blahaj.zone ⁨2⁩ ⁨months⁩ ago

        Flip a coin every time you read an article to see whether you get “quick and easy” or “significant issues”

        source
      • Nalivai@lemmy.world ⁨2⁩ ⁨months⁩ ago

        It’s easy, it’s quick, and it’s free: pouring river water in your socks.
        Fortunately, there are other possible criteria.

        source
    • Rivalarrival@lemmy.today ⁨2⁩ ⁨months⁩ ago

      It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.

      How good are the human answers? I mean, I expect that an AI’s error rate is currently higher than an “expert” in their field.

      But I’d guess the AI is quite a bit better than, say, the average Republican.

      source
      • balder1991@lemmy.world ⁨2⁩ ⁨months⁩ ago

        I guess you don’t get the issue. You give the AI some text to summarize the key points. The AI gives you wrong info in a percentage of those summaries.

        There’s no point in comparing this to a human, since this is usually something done for automation, that is, to work for a lot of people or a large quantity of articles.

        source
    • SamboT@lemmy.world ⁨2⁩ ⁨months⁩ ago

      Do you dislike ai?

      source
      • fine_sandy_bottom@discuss.tchncs.de ⁨2⁩ ⁨months⁩ ago

        I don’t necessarily dislike “AI” but I reserve the right to be derisive about inappropriate use, which seems to be pretty much every use.

        Using AI to find petroglyphs in Peru was cool. Reviewing medical scans is pretty great. Everything else is shit.

        source
      • WagyuSneakers@lemmy.world ⁨2⁩ ⁨months⁩ ago

        I work in tech and can confirm that the vast majority of engineers “dislike ai” and are disillusioned with AI tools. Even ones that work on AI/ML tools. It’s fewer and fewer people the higher up the pay scale you go.

        There isn’t a single complex coding problem an AI can solve. If you don’t understand something and it helps you write it I’ll close the MR and delete your code since it’s worthless. You have to understand what you write. I do not care if it works. You have to understand every line.

        “But I use it just fine and I’m an…”

        Then you’re not an engineer and you shouldn’t have a job. You lack the intelligence, dedication and knowledge needed to be one. You are a detriment to your team and company.

        source
    • MDCCCLV@lemmy.ca ⁨2⁩ ⁨months⁩ ago

      Is it worse than the current system of editors making shitty click bait titles?

      source
      • homesweethomeMrL@lemmy.world ⁨2⁩ ⁨months⁩ ago

        Surprisingly, yes

        source
  • Turbonics@lemmy.sdf.org ⁨2⁩ ⁨months⁩ ago

    BBC is probably salty the AI is able to insert the word Israel alongside a negative term in the headline

    source
    • Krelis_@lemmy.world ⁨2⁩ ⁨months⁩ ago

      Some examples of inaccuracies found by the BBC included:

      Gemini incorrectly said the NHS did not recommend vaping as an aid to quit smoking
      
      ChatGPT and Copilot said Rishi Sunak and Nicola Sturgeon were still in office even after they had left
      
      Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed "restraint" *and described Israel's actions as "aggressive"*
      
      source
      • Turbonics@lemmy.sdf.org ⁨2⁩ ⁨months⁩ ago

        Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed “restraint” and described Israel’s actions as “aggressive”

        I did not even read up to there but wow BBC really went there openly.

        source
  • brucethemoose@lemmy.world ⁨2⁩ ⁨months⁩ ago

    What temperature and sampling settings? Which models?

    I’ve noticed that the AI giants seem to be encouraging “AI ignorance,” as they just want you to use their stupid subscription app without questioning it, instead of understanding how the tools work under the hood. They also default to bad, cheap models.

    I find my local thinking models (like QwQ or Deepseek 32B) are quite good at summarization at a low temperature, which is not what these UIs default to. Same with “affordable” API models (like base Deepseek). But small Gemini/OpenAI API models are crap, especially with default sampling, and Gemini 2.0 in particular seems to have regressed.

    My point is that LLMs as locally hosted tools are neat, but how corporations present them as magic cloud oracles is like everything wrong with tech enshittification in one package.
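To illustrate the low-temperature summarization point above, here is a minimal sketch of the request payload one might send to a self-hosted OpenAI-compatible endpoint (llama.cpp’s server, Ollama, etc.). The model name, system prompt, and exact temperature value are assumptions for illustration only; actually sending the request over HTTP is left out:

```python
def make_summary_request(article_text, model="qwq-32b", temperature=0.1):
    """Build a chat-completions style payload for a summarization task.

    A low temperature reduces sampling randomness, which is what you want
    when the output should stick to the source text.
    """
    return {
        "model": model,
        "temperature": temperature,
        "messages": [
            {"role": "system",
             "content": "Summarize the article faithfully. Do not add facts."},
            {"role": "user", "content": article_text},
        ],
    }
```

The contrast with the hosted chat apps is that there you typically cannot see, let alone set, any of these fields.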

    source
    • 1rre@discuss.tchncs.de ⁨2⁩ ⁨months⁩ ago

      I’ve found Gemini overwhelmingly terrible at pretty much everything; it responds more like a 7B model running on a home PC, or a model from two years ago, than a medium commercial model, in how it completely ignores what you ask it and just latches on to keywords. It’s almost like they’ve played with their tokenisation, or trained it exclusively for providing tech support where it links you to an irrelevant article or something

      source
      • brucethemoose@lemmy.world ⁨2⁩ ⁨months⁩ ago

        Gemini Flash Thinking from earlier this year was good, but it regressed a ton.

        Gemini 1.5 is literally better than the new 2.0 in some of my tests, especially long-context ones.

        source
      • Imgonnatrythis@sh.itjust.works ⁨2⁩ ⁨months⁩ ago

        Bing/ChatGPT is just as bad. It loves to tell you it’s doing something and then just ignores you completely.

        source
    • paraphrand@lemmy.world ⁨2⁩ ⁨months⁩ ago

      I don’t think giving the temperature knob to end users is the answer.

      Turning it to max for max correctness and low creativity won’t work in an intuitive way.

      Sure, turning it up from the balanced middle value will make it more “creative” and unexpected, and this is useful for idea generation, etc. But a knob that goes from “good” to “sort of off the rails, but in a good way” isn’t a great user experience for most people.

      Most people understand this stuff as intended to be intelligent. Correct. Etc. Or they at least understand that’s the goal. Once you give them a knob to adjust the “intelligence level,” you’ll have more pushback on these things not meeting their goals. “I clearly had it in factual/correct/intelligent mode, not creativity mode. I don’t understand why it left out these facts and invented a back story to this small thing mentioned…”

      Not everyone is an engineer. Temp is an obtuse thing.
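For reference, the knob in question just rescales the model’s token logits before the softmax: lower temperature sharpens the distribution toward the top token, higher temperature flattens it. A toy standalone sketch (no LLM involved, made-up logits):

```python
import math

def sample_probs(logits, temperature):
    """Softmax over logits / temperature.

    Low T concentrates probability on the highest logit (more deterministic);
    high T spreads it out (more varied / "creative" sampling).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

With logits `[2.0, 1.0, 0.5]`, temperature 0.2 gives the top token almost all the probability mass, while temperature 2.0 leaves it under half — which is why a single user-facing knob maps so poorly to “correctness vs creativity.”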

      source
      • brucethemoose@lemmy.world ⁨2⁩ ⁨months⁩ ago
        • Temperature isn’t even “creativity” per se; it’s more a band-aid to patch looping and dryness in long responses.

        • Lower temperature is much better with modern sampling algorithms, e.g. MinP, DRY, maybe dynamic temperature like Mirostat and such. Ideally structured output, too. Unfortunately, corporate APIs usually don’t offer this.

        • It can be mitigated with finetuning against looping/repetition/slop, but most models are the opposite: massively overtuned on their own output, which “inbreeds” the model.

        • And yes, domain specific queries are best. Basically the user needs separate prompt boxes for coding, summaries, creative suggestions and such each with their own tuned settings (and ideally tuned models). You are right, this is a much better idea than offering a temperature knob to the user, but… most UIs don’t even do this for some reason?

        What I am getting at is this is not a problem companies seem interested in solving.
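For readers unfamiliar with the samplers named above, min-p is the easiest to sketch: drop every token whose probability falls below some fraction of the top token’s probability, then renormalize over what remains. A toy version, not any particular library’s implementation:

```python
def min_p_filter(probs, min_p=0.1):
    """Zero out tokens below min_p * max(probs), then renormalize.

    Unlike a fixed top-k cutoff, the threshold adapts: when the model is
    confident the pool shrinks, when it is uncertain more tokens survive.
    """
    cutoff = min_p * max(probs)
    kept = [p if p >= cutoff else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]
```

This is why low temperature plus min-p behaves better than temperature alone: the filter removes the long tail of junk tokens before any randomness is applied.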

        source
      • Eheran@lemmy.world ⁨2⁩ ⁨months⁩ ago

        This is really a non-issue, as the LLM itself should have no problem setting a reasonable value itself. The user wants a summary? Obviously maximum factual. They want gaming ideas? Etc.
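A cruder, deterministic version of the same idea — route each request type to preset sampling settings instead of exposing a knob — can be sketched like this. The categories and values are made up for illustration, not taken from any product:

```python
# Hypothetical per-task sampling presets; the numbers only illustrate
# the routing idea (low temperature for factual tasks, high for ideation).
PRESETS = {
    "summary":    {"temperature": 0.1},  # stay close to the source text
    "code":       {"temperature": 0.2},
    "brainstorm": {"temperature": 0.9},  # allow more unexpected output
}

def settings_for(task):
    # Fall back to a middle-of-the-road default for unknown task types.
    return PRESETS.get(task, {"temperature": 0.5})
```

In practice the "task" label could come from a cheap classifier pass or from separate prompt boxes in the UI, as suggested above.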

        source
    • jrs100000@lemmy.world ⁨2⁩ ⁨months⁩ ago

      They were actually really vague about the details. The paper itself says they used GPT-4o for ChatGPT, but apparently didn’t even note which versions of the other models were used.

      source
    • Eheran@lemmy.world ⁨2⁩ ⁨months⁩ ago

      Rare that people here argue for LLMs like that; usually it’s the same kind of “uga suga, AI bad, did not already solve world hunger”.

      source
      • brucethemoose@lemmy.world ⁨2⁩ ⁨months⁩ ago

        Lemmy is understandably sympathetic to self-hosted LLMs, but I get chewed out or even banned literally anywhere else.

        In this fandom I’m in, there used to be enthusiasm for a “community enhancement” of a show since the official release looks terrible. Years later, I don’t even mention the word “AI,” just the idea of restoration (now that we have the tools to do it), and I get bombed and threadlocked.

        source
      • heavydust@sh.itjust.works ⁨2⁩ ⁨months⁩ ago

        Your comment would be acceptable if AI was not advertised as solving all our problems, like world hunger.

        source
      • Nalivai@lemmy.world ⁨2⁩ ⁨months⁩ ago

        What a nuanced representation of the position, I just feel trustworthiness oozes out of the screen.
        In case you’re using a random-words-generation machine to summarise this comment for you: that was sarcasm, and I meant the opposite.

        source
    • MoonlightFox@lemmy.world ⁨2⁩ ⁨months⁩ ago

      I have been pretty impressed by Gemini 2.0 Flash.

      It’s slightly worse than the very best on the benchmarks I have seen, but is pretty much instant and incredibly cheap. Maybe a loss leader?

      Anyways, which model of the commercial ones do you consider to be good?

      source
      • brucethemoose@lemmy.world ⁨2⁩ ⁨months⁩ ago

        benchmarks

        Benchmarks are so gamed, even Chatbot Arena is kinda iffy. TBH you have to test them with your prompts yourself.

        Honestly I am getting incredible/creative responses from Deepseek R1; the hype is real. Tencent’s API is a bit underrated. If Llama 3.3 70B is smart enough for you, the Cerebras API is super fast.

        MiniMax is ok for long context, but I still tend to lean on Gemini for this.

        source
  • Petter1@lemm.ee ⁨2⁩ ⁨months⁩ ago

    ShockedPikachu.svg

    source
  • Etterra@discuss.online ⁨2⁩ ⁨months⁩ ago

    You don’t say.

    source
  • TroublesomeTalker@feddit.uk ⁨2⁩ ⁨months⁩ ago

    But the BBC is increasingly unable to accurately report the news, so this finding is no real surprise.

    source
    • MoonlightFox@lemmy.world ⁨2⁩ ⁨months⁩ ago

      Why do you say that? I have had no reason to doubt their reporting

      source
      • StarlightDust@lemmy.blahaj.zone ⁨2⁩ ⁨months⁩ ago

        Look at their reporting of the Employment Tribunal for the nurse from Fife who was sacked for abusing a doctor. They refused to gender the doctor correctly in every article, to the point of avoiding pronouns entirely except when quoting the sacked transphobe referring to her as “him”. They also very much paint it like it is Dr Upton on trial and not Ms Peggie.

        source
      • TroublesomeTalker@feddit.uk ⁨2⁩ ⁨months⁩ ago

        It’s a “how the mighty have fallen” kind of thing. They are well into the click-bait farm mentality now - have been for a while.

        It’s present on the news sites, but far worse on things where they know they steer opinion and discourse. They used to ensure political parties had coverage in line with their support, but for roughly 10 years prior to Brexit they gave Farage and his jackasses hugely disproportionate coverage - like 20x more than their base warranted. This was at a time when the SNP were doing very well yet were seen far less than in 2006 to 2009.

        Current reporting is heavily spun and they definitely aren’t the worst in the world, but the are also definitely not the bastion of unbiased news I grew up with.

        Until relatively recently you could see the deterioration by flipping to the world service, but that’s fallen into line now.

        If you have the time to follow independent journalists, the problem becomes clearer; if not, look at output from parody news sites - it’s telling that Private Eye and Newsthump manage the criticism that the BBC can’t seem to get to

        Go look at the bylinetimes.com front page, grab a random story and compare coverage with the BBC. One of these is crowd-funded reporters and the other a national news site with great funding and legal obligations to report in the public interest.

        I don’t hate them, they just need to be better.

        source
  • Teknikal@eviltoast.org ⁨2⁩ ⁨months⁩ ago

    I just tried it on DeepSeek; it did fine and gave the source for everything it mentioned as well.

    source
    • datalowe@lemmy.world ⁨2⁩ ⁨months⁩ ago

      Do you mean you rigorously went through a hundred articles, asking DeepSeek to summarise them and then got relevant experts in the subject of the articles to rate the quality of answers? Could you tell us what percentage of the summaries that were found to introduce errors then? Literally 0?

      Or do you mean that you tried having DeepSeek summarise a couple of articles, didn’t see anything obviously problematic, and figured it is doing fine? Replacing rigorous research and journalism by humans with a couple of quick AI prompts is the core of the issue the article is getting at. If so, please reconsider how you evaluate (or trust others’ evaluations of) information tools which might help, or help destroy, democracy.

      source
    • Flocklesscrow@lemm.ee ⁨2⁩ ⁨months⁩ ago

      Now ask it whether Taiwan is a country.

      source
      • qaz@lemmy.world ⁨2⁩ ⁨months⁩ ago

        That depends on if you ask the online app (which will cut you off or give you a CCP sanctioned answer) or run it locally.

        source
  • Phoenicianpirate@lemm.ee ⁨2⁩ ⁨months⁩ ago

    I learned that AI chat bots aren’t necessarily trustworthy in everything. In fact, if you aren’t taking their shit with a grain of salt, you’re doing something very wrong.

    source
    • Redex68@lemmy.world ⁨2⁩ ⁨months⁩ ago

      This is my personal take. As long as you’re careful and thoughtful whenever using them, they can be extremely useful.

      source
      • Llewellyn@lemm.ee ⁨2⁩ ⁨months⁩ ago

        Extremely?

        source
      • echodot@feddit.uk ⁨2⁩ ⁨months⁩ ago

        Could you tell me what you use it for because I legitimately don’t understand what I’m supposed to find helpful about the thing.

        We all got sent an email at work a couple of weeks back telling everyone that they want ideas for a meeting next month about how we can incorporate AI into the business. I’m heading IT, so I’m supposed to be able to come up with some kind of answer, and yet I have nothing. Even putting aside the fact that it probably doesn’t work as advertised, I still can’t really think of a use for it.

        The main problem is it won’t be able to operate our ancient and convoluted ticketing system, so it can’t actually help.

        Everyone I’ve ever spoken to has said that they use it for DMing or story prompts. All very nice but not really useful.

        source
    • Knock_Knock_Lemmy_In@lemmy.world ⁨2⁩ ⁨months⁩ ago

      Treat LLMs like a super knowledgeable, enthusiastic, arrogant, unimaginative intern.

      source
      • milicent_bystandr@lemm.ee ⁨2⁩ ⁨months⁩ ago

        Super knowledgeable but with patchy knowledge, so they’ll confidently say something that practically everyone else in the company knows is flat out wrong.

        source
      • Phoenicianpirate@lemm.ee ⁨2⁩ ⁨months⁩ ago

        I noticed that. When I ask it about things that I am knowledgeable about, or simply wish to troubleshoot, I often find myself having to correct it. This does make me hesitant to follow the instructions given on something I DON’T know much about.

        source
  • tal@lemmy.today ⁨2⁩ ⁨months⁩ ago

    They are, however, able to inaccurately summarize it in GLaDOS’s voice, which is a point in their favor.

    source
    • JackGreenEarth@lemm.ee ⁨2⁩ ⁨months⁩ ago

      Surely you’d need TTS for that one, too? Which one do you use, is it open weights?

      source
      • brucethemoose@lemmy.world ⁨2⁩ ⁨months⁩ ago

        Zephyra just came out, seems sick:

        huggingface.co/Zyphra

        There are also some “native” tts LLMs like GLM 9B, which “capture” more information in the output than pure text input.

        source
    • JohnEdwa@sopuli.xyz ⁨2⁩ ⁨months⁩ ago

      Yeah, out of all the generative AI fields, voice generation at this point is like 95% there in its capability of producing convincing speech even with consumer level tech like ElevenLabs. That last 5% might not even be solvable currently, as it’s those moments it gets the feeling, intonation or pronunciation wrong when the only context you give it is a text input.

      Especially voice cloning - the DRG Cortana Mission Control mod is one of the examples I like to use.

      source
  • chemical_cutthroat@lemmy.world ⁨2⁩ ⁨months⁩ ago

    Which is hilarious, because most of the shit out there today seems to be written by them.

    source
  • small44@lemmy.world ⁨2⁩ ⁨months⁩ ago

    BBC finds, lol. No, we already knew about that

    source
  • underwire212@lemm.ee ⁨2⁩ ⁨months⁩ ago

    News station finds that AI is unable to perform the job of a news station

    🤔

    source
  • ininewcrow@lemmy.ca ⁨2⁩ ⁨months⁩ ago

    The owners of LLMs don’t care about ‘accurate’ … they care about ‘fast’ and ‘summary’ … and especially ‘profit’ and ‘monetization’.

    As long as it’s quick, delivers instant content and makes money for someone … no one cares about ‘accurate’

    source
    • Eheran@lemmy.world ⁨2⁩ ⁨months⁩ ago

      Especially after the open source release of DeepSeek… What…?

      source
  • buddascrayon@lemmy.world ⁨2⁩ ⁨months⁩ ago

    That’s why I avoid them like the plague. I’ve even changed almost every platform I’m using to get away from the AI-pocalypse.

    source
    • echodot@feddit.uk ⁨2⁩ ⁨months⁩ ago

      I can’t stand the corporate doublethink.

      Despite the mountains of evidence that AI is not capable of something even as basic as reading an article and telling you what it’s about, it’s still apparently going to replace humans. How do they come to that conclusion?

      The world won’t be destroyed by AI, It will be destroyed by idiot venture capitalist types who reckon that AI is the next big thing. Fire everyone, replace it all with AI; then nothing will work and nobody will be able to buy anything because nobody has a job.

      Cue global economic collapse.

      source
      • vxx@lemmy.world ⁨2⁩ ⁨months⁩ ago

        It’s a race, and bullshitting brings venture capital and therefore an advantage.

        99.9% of AI companies will go belly up when investors ask for results.

        source
    • Opisek@lemmy.world ⁨2⁩ ⁨months⁩ ago

      No better time to get into self hosting!

      source
  • Paradox@lemdro.id ⁨2⁩ ⁨months⁩ ago

    Funny, I find the BBC unable to accurately convey the news

    source
    • bilb@lem.monster ⁨2⁩ ⁨months⁩ ago

      Yeah, that

      Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed “restraint” and described Israel’s actions as “aggressive”

      Perplexity corrected the BBC article.

      source
    • addie@feddit.uk ⁨2⁩ ⁨months⁩ ago

      Dunno why you’re being downvoted. If you want a somewhat right-wing, pro-establishment, slightly superficial take on the news, mixed in with lots of “celebrity” frippery, then the BBC have got you covered. Their chairmen have historically been a list of old Tories, but that has never stopped the Tory party from accusing their news of being “left leaning” when it blatantly isn’t.

      source
  • NutWrench@lemmy.world ⁨2⁩ ⁨months⁩ ago

    But AI is the wave of the future! The hot, NEW thing that everyone wants! ** furious jerking off motion **

    source
  • untorquer@lemmy.world ⁨2⁩ ⁨months⁩ ago

    Fuckin news!

    source
  • badbytes@lemmy.world ⁨2⁩ ⁨months⁩ ago

    Why, were they trained using MAINSTREAM NEWS? That could explain it.

    source
  • Joelk111@lemmy.world ⁨2⁩ ⁨months⁩ ago

    I’m pretty sure that every user of Apple Intelligence could’ve told you that. If AI is good at anything, it isn’t things that require nuance and factual accuracy.

    source
  • ehpolitical@lemmy.ca ⁨2⁩ ⁨months⁩ ago

    I recently had one chatbot refuse to answer a couple of questions, and another delete my question after warning me that it was verging on breaking its rules… never happened before; thought it was interesting.

    source
  • tacosplease@lemmy.world ⁨2⁩ ⁨months⁩ ago

    Neither are my parents

    source
  • Grimy@lemmy.world ⁨2⁩ ⁨months⁩ ago
    [deleted]
    source
    • NoForwardslashS@sopuli.xyz ⁨2⁩ ⁨months⁩ ago

      It is stated as 51% problematic, so maybe your coin flip was successful this time.

      source
  • rottingleaf@lemmy.world ⁨2⁩ ⁨months⁩ ago

    Yes, I think it would be naive to expect humans to design something capable of what humans are not.

    source
    • maniclucky@lemmy.world ⁨2⁩ ⁨months⁩ ago

      We do that all the time. It’s kind of humanity’s thing. I can’t run 60mph, but my car sure can.

      source
      • rottingleaf@lemmy.world ⁨2⁩ ⁨months⁩ ago

        Qualitatively.

        source