lotide

Lawyer caught using AI-generated false citations in court case penalised in Australian first

58 likes

Submitted 4 days ago by Davriellelouna@lemmy.world to australia@aussie.zone

https://www.theguardian.com/law/2025/sep/03/lawyer-caught-using-ai-generated-false-citations-in-court-case-penalised-in-australian-first


Comments

  • naevaTheRat@lemmy.dbzer0.com 4 days ago

    Haha fucking idiot clankerwanker.

    These machines generate plausible text. That’s all.

  • null_dot@lemmy.dbzer0.com 4 days ago

    I’ve been using an AI bot more and more in my own consultancy.

    I don’t use it to draft anything to be issued to a client or regulator, but for internal notes it can be helpful sometimes.

    It’s kind of surprising how often it just confidently spews out sentences which seem plausible but are completely incorrect.

    Legislation seems to be an area in which it’s particularly overconfident.

    The penalties here seem harsh but submitting something to a court that is false and misleading is a big deal, even if it was inadvertent.

    • blind3rdeye@aussie.zone 4 days ago

      The penalties here seem harsh but submitting something to a court that is false and misleading is a big deal, even if it was inadvertent.

      I don’t think the penalties are too harsh at all. This person is supposed to be a trained professional. Their right to practice law is based on their skills and their knowledge. It’s a high barrier that prevents most people from taking that job. And in this case, the person outsourced a key part of their job to an LLM and did not verify the result. Effectively they got someone (something) unqualified to do the job for them, and passed it off as their own work. So the high barrier which was meant to ensure high-quality work was breached. It makes sense to strip the person of their right to do that kind of work. (The suspension is temporary, which is fair too. But these kinds of breaches of trust and reliability are not something people should just accept.)

      • tuff_wizard@aussie.zone 3 days ago

        I’d say that of any highly paid profession, the legal trade is the most likely to be decimated by ‘AI’ and LLMs.

        If you fed every case and ruling, law and statute into an LLM, removed its ‘yes, and’-ing, and had someone who knew how to write an effective prompt, you could answer many, many legal questions and save a lot of time searching for precedent.

        Obviously someone will have to accept liability if poor advice is given, but I can see some hotshot lawyer taking the risk if it means he can handle thousands of cases at once with a few ‘prompt engineers’.

      • sqgl@sh.itjust.works 3 days ago

        The lawyer is still allowed to practice, but only as an employee, under supervision and checked quarterly.

      • null_dot@lemmy.dbzer0.com 3 days ago

        You seem to have a very high expectation of professionalism.

        Trained professionals who are supposed to have skills and knowledge and experience make mistakes all the time, sometimes through ineptitude, but also through laziness.

        Whether it’s doctors, lawyers, accountants, architects, any profession really. In many or most cases the client doesn’t suffer real harm, or if they do, the costs of litigation would be higher than the compensation.

        A referral to a professional body is usually not very serious. Doctors are referred to the board for malpractice all the time.

        I’m a tax consultant. We’re regulated by the Tax Practitioners Board. I find it extraordinarily unlikely that they would take someone’s license over a submission to the ATO that relied on false cases. Basically they only take action in cases where there is little or no doubt that the practitioner sought to intentionally mislead the tax office.

        So, you personally might not think the penalties are harsh, but I can assure you that restricting someone’s license to practice, whatever their profession, is a measure usually reserved for fraudulent behavior.

    • eureka@aussie.zone 4 days ago

      It’s kind of surprising how often it just confidently spews out sentences which seem plausible but are completely incorrect.

      To me, it’s not surprising at all. It’s trained to talk like its training data talks, how people talk. Very loosely speaking, it’s a “common sense” generator, and if there are topics that you’re experienced with and you look at a site like reddit talking about it, you soon realise how normal it is for people to be confidently incorrect.

      And on that note, it’s been seriously worrying to me how readily people trust and anthropomorphise computers. It’s been a problem since at least the ’60s, but the advent of Artificial so-called Intelligence has revealed how dangerous it is.

      Unless a bot is trained with curated data (like some medical imaging ones, for example), it shouldn’t be believed. And even then it shouldn’t be fully trusted.

      • null_dot@lemmy.dbzer0.com 4 days ago

        I agree for the most part.

        “Surprising” is perhaps the wrong word. If you have even a vague understanding of how these work, then nothing is really surprising. However, using a bot day to day and learning how to integrate it into your workflow, you get used to a certain level of quality, but occasionally (regularly?) run into something that doesn’t meet your expectations.

        I agree that the way that some people are interacting with these LLMs is… odd. However, people are engaging in so many odd behaviors I have to say if they’re not harming anyone then have at it.

    • Taleya@aussie.zone 4 days ago

      It’s kind of surprising how often it just confidently spews out sentences which seem plausible but are completely incorrect.

      These things were trained on the 21st century internet. I wouldn’t trust a single fcking thing they say. It’s a Dunning-Kruger machine.

      • null_dot@lemmy.dbzer0.com 3 days ago

        I don’t trust anything they say.

    • Salvo@aussie.zone 3 days ago

      It is useful for Lorem Ipsum text and that is all.

      Honestly, if you are submitting anything using AI-generated content, you may as well just put Lorem Ipsum text instead. That way you are not wasting ridiculous amounts of electricity and potable water.

      en.wikipedia.org/wiki/Lorem_ipsum

      • sqgl@sh.itjust.works 3 days ago

        The energy is indeed wasteful, but cooling water is recovered and reused. Do you know for a fact that much of it evaporates?

      • null_dot@lemmy.dbzer0.com 3 days ago

        I’ve tried to explain in other comments but basically, I don’t “submit” anything using AI generated content.

        It’s a helpful support which can sometimes save time.

  • kandoh@reddthat.com 3 days ago

    To death you say?

  • sqgl@sh.itjust.works 3 days ago

    Since this case, there have been more than 20 other reported cases in Australian courts where lawyers or self-represented litigants have been found to have used artificial intelligence in the preparation of court documents that led to those documents containing fake citations.

    “Since this case”, and yet there are already over a thousand instances of lawyers in Australia using AI output which “hallucinated”.

    old.reddit.com/…/tracker_legal_decisions_where_ge…
