
New York considers bill that would ban chatbots from giving legal, medical advice

979 likes

Submitted 3 weeks ago by NomNom@feddit.uk to technology@lemmy.world

https://statescoop.com/new-york-bill-would-ban-chatbots-legal-medical-advice/


Comments

  • artyom@piefed.social ⁨3⁩ ⁨weeks⁩ ago

    Hell yeah, let’s hold them accountable for disinformation. They’ll be gone completely in a matter of months.

    • iopq@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      It’s a bit different, because a search engine can give you zero results. An AI is trained to give the most correct answers, so it always guesses; that’s the best way to score.

      • XTL@sopuli.xyz ⁨2⁩ ⁨weeks⁩ ago

        Except for refusals, but that’s a kind of answer as well.

  • supersquirrel@sopuli.xyz ⁨3⁩ ⁨weeks⁩ ago

    I think a better solution is to ban techbros from giving serious economic or cultural advice and take computers away from business majors.

    • HeyThisIsntTheYMCA@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

      Please don’t take them entirely away. Maybe just internet access? 30ish years ago I had to do accounting by hand, in those green ledgers. It took approximately twelve times longer to do it by hand than with a computer. And it made me shrimp like 5 times worse. I needed an architect’s table with an angled top in order to work properly, but I could neither get one supplied by the employer nor afford to provide one myself.

      Not all technology is bad

      • isVeryLoud@lemmy.ca ⁨3⁩ ⁨weeks⁩ ago

        Oddly specific gripe, I’ll allow it.

      • WhyJiffie@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

        found the business major!

        what about a typewriter?

    • jaybone@lemmy.zip ⁨3⁩ ⁨weeks⁩ ago

      I don’t get how some of these tech company CEOs who came up as engineers can be pushing this bullshit. I get once the company got big they started hiring business bros. But some big companies still have CEOs that were once engineers. You’d think they would know better.

      • NannerBanner@literature.cafe ⁨2⁩ ⁨weeks⁩ ago

        What kind of engineer? Because while the physical world, with all of its mechanical and civil and aerospace engineers, has its shit figured out with professional standards and very clearly defined responsibilities and duties, the world of social engineers, tire engineers, procurement engineers, supply chain engineers, sandwich engineers, project engineers, lead engineers, and yes, software engineers, is definitely a little too loose with any definition for me to care that these CEOs were once ‘engineers.’

  • HootinNHollerin@lemmy.dbzer0.com ⁨3⁩ ⁨weeks⁩ ago

    Would be nice if regular legal and health advice were in any way affordable, though.

  • tinkermeister@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

    I may have become too cynical but, as is often the case when you dig deeper, this sounds like the result of lobbyists trying to protect licensing rather than people.

    We can be dumb, but we’ve been doing web searches for legal and medical advice for ages because it is too damned expensive and time consuming to go to professionals for every little thing. Not to mention, doctors have so little time for you that it is hard to get them to listen to the whole story to make connections between symptoms.

    The LLMs already tell you that they aren’t licensed professionals and, for many, provide citations for their sources (miles better than your typical health website).

    As a personal anecdote, my son was having stomach pain but was planning to tough it out. He checked with ChatGPT and it recommended he go to the ER. He did, and if he hadn’t, he would likely be dead now. He spent 3 days in the hospital having his bowels unobstructed through a tube in his nose.

    There is value in people having that kind of information at their fingertips.

    Regulation is absolutely needed, but I would rather they focus on protecting us from AI being used for military purposes, mass surveillance, etc.

    • tempest@lemmy.ca ⁨3⁩ ⁨weeks⁩ ago

      Are you in the US? My takeaway here is that American healthcare is bad, but we’re treating the symptom, not the disease.

      • tinkermeister@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

        Yeah, I’m in the US and I agree. Though it is going to take some serious change to treat the problem. In the meantime, this is at least a stopgap solution for people who don’t have a lot of options.

    • HeyThisIsntTheYMCA@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

      Wait, he thought he could sit through that pain at home? Your son is tough as nails.

      • tinkermeister@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

        Yeah, he is pretty tough. I wish I could hug him, he is about a 10 hour drive from me. That tube was nightmarish from what he’s told me.

  • ieGod@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

    I don’t see how you police/enforce this. The technology is out of the bag; people will find ways to access it. Do we need age/location verification for this now too? What if I’m running a local agent? I don’t agree with this.

    • cmnybo@discuss.tchncs.de ⁨2⁩ ⁨weeks⁩ ago

      The law would allow you to sue whoever is running the chatbot. If you run your own LLM locally and take bad advice from it, then it’s your own fault.

      • ieGod@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

        Walk me through how a company not based or operating in New York would be subject to any action under this law.

      • how_we_burned@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

        So who gets sued? The guy who put the chatbot on the server and is running it, or the developer of the chatbot software itself?

        Or both?

  • deathbird@mander.xyz ⁨2⁩ ⁨weeks⁩ ago

    If implemented, that would just ban chatbots that use large language models. It’s not a terrible idea.

    What would actually happen is that so-called AI chatbot systems would try to detect whether someone is from New York and exclude them from receiving medical or legal advice, fail, get sued, and pay a small fine, over and over again forever.

    • architect@thelemmy.club ⁨2⁩ ⁨weeks⁩ ago

      This is a really bad idea.

      First, because healthcare is clearly being gatekept from people.

      Second, because even if you go to a healthcare professional nowadays, there is no guarantee that that person is not a fucking idiot who doesn’t believe in vaccines. I can’t believe I have to actually ask people whether they believe in vaccines before they touch me, and then tell them not to come back into my room if they answer that they don’t believe in science. But that has happened, it has happened to the people I’ve taken care of, and because of this healthcare can’t be trusted now.

      The LLM is not any worse than that. In fact, I would say that it’s already too cautious. No way the model is ever going to tell me vaccines are bad. It’s not going to tell me to take a poison to clear Covid. It’s not going to tell me to drink bleach like the president did. It’s literally not any worse than the bullshit we are dealing with all day every fucking day.

      And I’m getting to the point that if you’re a full-grown human fucking being and you’re going to drink fucking bleach or swallow a fucking lightbulb because something told you to, then that’s nature saying something about you.

      • Doomsider@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Naw, completely disagree. If you had a calculator you knew was defective, you would ban doctors and lawyers from using it.

        You also seem to think that an LLM is going to be inherently more accurate than an expert human. We can see with GrokAI how easy it is to manipulate an AI into saying racist white nationalist garbage. So we are not just trusting the technology but also a layer of unpredictable corporate meddling.

        Why does the LLM recommend this drug but not the other one? We can quickly see how a corporation could favor a certain medication due to behind-the-scenes deals, or even push a medication.

        You can’t trust a black box you are not allowed to look into. Trust in an LLM at this point is pure folly.

      • chunes@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        100% fact.

  • moroninahurry@piefed.social ⁨2⁩ ⁨weeks⁩ ago

    Laws like this are great for these companies. This is how they will justify removing access to useful information and putting it behind paywalls. But oh, you need a prescription, so now the insurance companies are involved (spoiler: they already are), and you don’t even get the option of paying through the nose for medical information.

    Then when Google search has been completely replaced with AI, you won’t even be able to search for medical information.

    Healthcare companies aren’t about to provide anything for free.

    • Soup@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      LLMs and chatbots should not be giving medical advice. You are afraid of the private healthcare system, not the lack of access to the most janky bandaid fix for its failures.

      • douglasg14b@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        The line between medical advice and personal research is pretty freaking gray. If you ban medical advice, does that also ban talking to LLMs about anything that is medical-adjacent?

        Does medical-adjacent mean personal disabilities? Drug-related interests? Pet health?

        …etc

        It’s a slippery slope and we don’t need to be sliding down it

      • moroninahurry@piefed.social ⁨2⁩ ⁨weeks⁩ ago

        Neither should Wikipedia or Google. So I guess by your logic nobody should search or learn about medical conditions on a computer.

  • TropicalDingdong@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

    I mean.

    Is Wikipedia responsible for you reading an article about a law and then taking that as legal advice?

    • LNRDrone@sopuli.xyz ⁨3⁩ ⁨weeks⁩ ago

      Wikipedia doesn’t give “legal advice”; it has information about these laws, with the sources cited.

      That is very different from asking an LLM anything and having it throw random bullshit at you from unknown sources, with no easy way to verify where it came from or whether it is at all accurate.

      • TropicalDingdong@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

        Wikipedia doesn’t give “legal advice”; it has information about these laws, with the sources cited.

        That is very different from asking an LLM anything and having it throw random bullshit at you from unknown sources, with no easy way to verify where it came from or whether it is at all accurate.

        It seems like your argument is that because Wikipedia “gets it right” and has cited sources, it isn’t liable? Which, I promise, is not how liability works.

        What if it was Wikipedia versus “some random sovcit Facebook post” then? Is the sovcit post liable because its sources are bullshit? Since its sources are random bullshit and/or unknown, does it absorb liability? Again, it’s the same case; that is not how liability works.

        People are going to have to acknowledge you can’t have it both ways.

        Also…

        with no easy way to verify where it came from or whether it is at all accurate.

        C’mon. Plenty of LLMs can also hallucinate sources, but those are easily verified. And like with Wikipedia, one could go check them.

    • Passerby6497@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

      Wikipedia isn’t giving you advice, it’s giving you information. There is a big difference between me taking information and forming an opinion, versus being given an opinion by a system that is responding to a specific situation explained to it.

      Also, people get in trouble for giving legal advice, artificial unintelligence('s company) should as well.

      • TropicalDingdong@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

        Wikipedia isn’t giving you advice, it’s giving you information. There is a big difference between me taking information and forming an opinion, versus being given an opinion by a system that is responding to a specific situation explained to it.

        Okay lets try this then:

        Chat bots aren’t giving you advice, it’s giving you information. There is a big difference between me taking information and forming an opinion, versus being given an opinion by a system that is responding to a specific situation explained to it.

        Show me the difference.

        Also, people get in trouble for giving legal advice,

        No, they don’t, unless they are genuinely misrepresenting their positions. Sovcit influencers are well within their rights to make up all kinds of gobbly-gookey-garbage pseudo-legal advice.

        People who get in trouble are those that follow the gobbly-gookey-garbage pseudo-legal advice.

    • JoshuaFalken@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

      I could see the argument for things that aren’t particularly important, but to continue with the legal example, it seems akin to the difference between asking a practicing lawyer a question and asking someone who watched Boston Legal when it aired and can quote James Spader.

      Unfortunately, with the potential for a hallucinatory response, anything beyond quite simplistic queries shouldn’t be relied on with more weight than a crutch of toothpicks.

      • TropicalDingdong@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

        I don’t think you are wrong, but again, that’s not the case.

        You’re making an argument about speech here.

        Let’s say you make a fan website based entirely on a fine-tuned LLM which acts and responds as James Spader from Boston Legal. Are you liable if a user of that website construes that speech as legal advice?

        If you are willing to give up access to speech so easily, I have almost no hope for Americans in the near future.

        What laws like this do is create an incredibly high bar that only those in positions of established power can clear. It’s literally suicidal in regards to freedom of speech on the internet.

        The right answer is that if you are dumb enough to have gotten your legal advice from an AI hallucination of James Spader, you get to absorb those consequences. The wrong answer is to tell people they aren’t allowed to build fan websites of James Spader giving questionable legal advice.

    • WesternInfidels@feddit.online ⁨3⁩ ⁨weeks⁩ ago

      Is Wikipedia responsible for you reading an article about a law and then taking that as legal advice?

      Is the U.S. House of Representatives responsible for you reading the text of a law itself and then taking that as legal advice?

      • TropicalDingdong@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        That’s a totally irrelevant comparison. There is no publisher of the law equivalent to the US House of Representatives. Nothing Wikipedia publishes has legal bearing; everything the House of Representatives publishes does.

  • dhruv3006@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    You can bring in a regulation, but can you really enforce it?

  • willington@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago
    1. Make laws against chatbots.
    2. Demand proof you are not a chatbot.
    3. Surveillance capitalism.

    The real target here is population control.

    The lawmakers, who take billionaire money by the ton and who HAVE NEVER given a shit, suddenly, NOW, want to protect the vulnerable. Abso-fucking-lutely laughable on its face.

    • militaryintelligence@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Agreed. It’s never about protection, just covert exploitation

  • henfredemars@infosec.pub ⁨3⁩ ⁨weeks⁩ ago

    Mixed feelings about this. Let me play devil’s advocate and say that many Americans don’t have access to these resources at all. Having potentially inaccurate resources might be better than nothing, or is that worse?

    • voidsignal@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

      it’s worse. In 4D it’s even worser

    • wewbull@feddit.uk ⁨3⁩ ⁨weeks⁩ ago

      There are billions being sunk into AI. How much health care could that buy? Your logic only makes sense if AI is free. It’s not.

    • JoshuaFalken@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

      ‘Should I use one teaspoon of salt in this recipe, or two?’

      Two is ideal.

      ‘Do dogs like chicken wings?’

      Wild dogs regularly hunt small animals like hare or chicken for food.

      One of these answers results in a bad cake, the other results in a hurt dog. Potentially inaccurate answers aren’t much of a problem when the stakes are low, but even a simple question about what to feed a pet could end with a negative outcome.

      • henfredemars@infosec.pub ⁨3⁩ ⁨weeks⁩ ago

        Hm, good point. Perhaps the overconfidence AI might provide is even worse than knowing you don’t know.

    • thisbenzingring@lemmy.today ⁨3⁩ ⁨weeks⁩ ago

      The AI devices will just have preambles and disclaimers, and word things in ways that refer the user to human resources.

    • Passerby6497@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

      Having potentially inaccurate resources might be better than nothing, or is that worse?

      You pick up a mushroom in the forest and take it home. If you have no information, do you eat it? If something tells you it’s safe do you eat it?

    • Catoblepas@piefed.blahaj.zone ⁨3⁩ ⁨weeks⁩ ago

      If you’re going to be your own lawyer or perform a bit of self surgery, there is no way the AI is helping that situation. Especially if the inherent nature of AI is to validate everything you say.

      • HeyThisIsntTheYMCA@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

        especially if it’s wrong 20-35% of the time

    • Cyteseer@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

      No, misinformation is worse.

    • smh@slrpnk.net ⁨2⁩ ⁨weeks⁩ ago

      We had a medical scare just yesterday. I was in the ER for 8 hours with my partner over a non-life-threatening but still emergency problem.

      An ultrasound, a CT scan, and much poking and prodding later, we still don’t know what is up. The AI was at least able to predict next steps (if A, then discharge and follow up with PCP; if B, then surgery this week; if C, then emergency surgery), something the ER was too busy to do for several hours. It was reassuring. The AI also gave me (working) links to more thorough resources on the topic.

    • Lfrith@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

      The problem is people treat it as reliable when the AI itself isn’t able to verify or know whether what it is generating is correct.

      It would be better if it provided direct links for people to go to and read. A list of citations, if you will, rather than the proclamations it makes now. It’s too “opinionated”, giving advice when it would ideally be neutral, just providing links for people to read further from sources that hopefully aren’t AI.

      AI has even gotten sports trivia I know wrong. I don’t think people realize AI is just generation. It’s not a reliable or trustworthy authority just because it strings together sentences.

  • TheObviousSolution@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

    Just have them add a disclaimer, or have the hosts be liable for what their chatbots say. Stop adding bureaucracy that’s just asking to be selectively prosecuted and abused.

    • deathbird@mander.xyz ⁨2⁩ ⁨weeks⁩ ago

      Section 230 of the Communications Decency Act is designed to allow platforms to exist because people can say whatever the fuck they want. But nobody should make a machine that says things they can’t control, and if they do, they should be disciplined for such irresponsibility.

    • deathbird@mander.xyz ⁨2⁩ ⁨weeks⁩ ago

      Name checks out.

  • melfie@lemy.lol ⁨2⁩ ⁨weeks⁩ ago

    In the US especially, medical professionals are overworked and simply don’t have the time and energy to properly diagnose. If you have a more complex, chronic issue, there’s a good chance you’ll be waiting months at a time to see various specialists who are only going to spend about 10 distracted minutes thinking about your case and might not even have any useful insights, or they might misdiagnose you and make your condition worse. You basically have to do your own research and show them studies. If you’re a person of color or a woman, etc., there’s a good chance you won’t even be taken seriously. In an ideal world, it would work like it does on TV, but in the real world, it’s all about maximizing profits and the patients be damned. Sure, LLMs are unreliable, but they do at least provide ideas to research.

    • SaveTheTuaHawk@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

      That’s not why people are using chatbots; they are using chatbots because they can’t afford healthcare.

      And before we get out the tiny violins for MDs: they gatekeep the system to keep their salaries high.

      • melfie@lemy.lol ⁨2⁩ ⁨weeks⁩ ago

        they are using chatbots because they can’t afford healthcare

        Even if they do spend their limited resources on healthcare, there’s a good chance it’s going to be a waste of money.

        before we get out the tiny violins for MDs

        A lot of MDs are pretty useless, and that’s a big part of the problem. Just because someone can memorize and regurgitate information well, that doesn’t mean they’re going to be effective at their job. It’s often necessary to shop around to find someone who doesn’t suck, which is especially difficult for anyone who can’t afford it.

  • Zink@programming.dev ⁨2⁩ ⁨weeks⁩ ago

    I’m a human being, and I’m pretty sure I am already not allowed to give legal or medical advice to anybody in New York or any other state.

  • ArbitraryValue@sh.itjust.works ⁨3⁩ ⁨weeks⁩ ago

    If you don’t want legal or medical advice from an AI, you can already simply not ask the AI for legal or medical advice. But I don’t want your paternalistic restrictions on what I may ask.

    • moroninahurry@piefed.social ⁨2⁩ ⁨weeks⁩ ago

      Sir did you pay for that medical advice though? That’s what these laws will eventually enforce. Prescription advice.

  • DarrinBrunner@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

    Sounds like a start. More is needed though.

    The bill targets AI chatbots that impersonate licensed professionals — such as doctors and lawyers — and bars them from providing “substantive response, information, or advice” that would violate professional licensing laws or constitute the unauthorized practice of law.

    It also mandates that chatbot owners provide “clear, conspicuous, and explicit” notice to users that they are interacting with an AI system, with the notice displayed in the same language as the chatbot and in a readable font size. However, the bill clarifies that this notice to users, which indicates that they are interacting with a non-human system, does not absolve the chatbot owners of liability.

  • chunes@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Fuck the hell out of this.

    My brothers in Christ, I’m not going to drink bleach because the chatbot tells me to. I’m trying to come up with disease ideas to discuss with my doctors, and it’s invaluable for that.

    • trashgirlfriend@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      then you should just be discussing your issues with your doctors in the first place?

      • chunes@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        They are stumped and so is my second opinion. In this situation, every idea is valuable.

    • militaryintelligence@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      People do it when a president tells them to.

    • LuceVendemiaire@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

      Then use… Google? If you’re going to do your own research, you could at least do it properly.

      Not to mention, if the chatbot told me to drink bleach I would be asking for deletion of my data at the very least, not shrugging and continuing to use it…

      • chunes@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        I… do? You can do more than one thing

  • d3adpaul77@lemmy.org ⁨2⁩ ⁨weeks⁩ ago

    we don’t want the plebs getting around our carefully constructed cartels…

    • Burninator05@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Isn’t this just trading one cartel for another? The difference being that doctors and lawyers can be held accountable for their errors, while an LLM can’t, because no one actually stands behind it.

      • chaotic_ugly@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

        Maybe. LLMs are free(ish), while a single trip to the ER can leave a person destitute. Maybe that’s not so bad (it is) if the ER visit is for something actually urgent, but somewhere between 27% and 40% of ER visits are non-urgent and most could be handled by a PCP. But… ERs have to treat you. But, in the US, a primary care physician can look you right in the eyes and turn you away because you have no money.

        People don’t want to admit that AI does some good because the companies that own these LLMs are as corrupt as any other and the implications of the corruption of this tech are horrifying. But for health care, including mental health, LLMs are an unexpected godsend.

        Uscher-Pines, L., Pines, J., Kellermann, A., Gillen, E., & Mehrotra, A. (2013). Emergency Department Visits for Nonurgent Conditions: Systematic Literature Review. American Journal of Managed Care. pmc.ncbi.nlm.nih.gov/articles/PMC4156292/

        Raven, M. C., et al. (2024). Emergency Department Visits That Could Be Managed at Other Care Sites. JAMA Network Open. jamanetwork.com/journals/…/2813806

      • architect@thelemmy.club ⁨2⁩ ⁨weeks⁩ ago

        Maybe but it’s trading one cartel for one that’s not as bad.

        Which is really saying something considering how bad these companies are.

        But imagine being gatekept from life because you don’t have enough money for it. Imagine going to the doctor over and over and over again, them never being able to find fucking shit, yet managing to charge you hundreds upon hundreds upon hundreds of dollars every fucking time. Until finally, over a decade later, one just randomly says oh, you need this super simple drug to take for a week to clear it. Thousands upon thousands of dollars, years of suffering, and yeah, not one of them could figure it the fuck out? Until one doctor took one look at my skin and knew? But the others were still owed a paycheck for it? So yeah, it is trading one cartel for another, but fuck the healthcare cartel. What the fuck did we expect to happen?

        If you don’t save people’s lives and you don’t give them a way to find healthcare, then you deserve what you fucking get, and we are all going to suffer for this. We are all going to suffer for allowing these quacks all over the place. Selling bullshit all over the place. Telling us vaccines don’t work. Yeah, it’s trading one for another, but at least one isn’t going to charge us a fucking car just to tell us to go fucking home and pass the dead baby by ourselves and to come back if it don’t work out, so they can get another car out of me to save my life.

        Yeah, I’ve got some fucking beef with healthcare professionals.

      • d3adpaul77@lemmy.org ⁨2⁩ ⁨weeks⁩ ago

        Theoretically they can be, but in practice it’s not always so easy. I prefer options. There have already been dozens of cases of AI getting things right when doctors get it wrong. All trades should face the same competition.

      • d3adpaul77@lemmy.org ⁨2⁩ ⁨weeks⁩ ago

        totally respect your position btw.

  • webkitten@piefed.social ⁨3⁩ ⁨weeks⁩ ago

    This bill gave us the “best” interaction:

    https://bsky.app/profile/badmedicaltakes.bsky.social/post/3mghyg5eufk2m

    A Bluesky skeet from @badmedicaltakes.bsky.social:

    “Twitter user eoghan:

    How dare poor people get free medical advice

    <quote tweet from Twitter user Polymarket: BREAKING: New York bill would ban AI from answering questions related to medicine, law, dentistry, nursing, psychology, social work, engineering, & more.>

    Twitter user YBrogard79094: JUST MAKE HEALTHCARE ACCESSIBLE

    Twitter user eoghan:

    AI is literally free healthcare. Being a communist must be exhausting”

    • Hiro8811@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

      You can Google your symptoms and there probably are some reliable sites, but a hallucinating chatbot is a bad idea. Not to mention some people suggested treating COVID with chlorine, vinegar, etc…

  • phx@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

    AI in the legal field could be useful for assisting an actual legal professional in compiling precedent and checking it against on-the-books laws, so long as it cites sources and they verify them.

    In the medical field, it could be useful for spotting anomalies between multiple images such as X-rays or cross-referencing medical documents WHEN USED BY A PROFESSIONAL.

    But the thing is, it should be a tool - carefully used - to enhance the existing profession, not replace actual professionals.

  • AmbitiousProcess@piefed.social ⁨3⁩ ⁨weeks⁩ ago

    I’m not sure I totally agree with this, even as much as I want AI companies to be held accountable for things like that.

    The reason so many people turn to LLMs for legal/medical advice is that those are both incredibly unaffordable, complex, hard-to-parse fields.

    If I ask an LLM what x symptom, y symptom, and z symptom could mean, and it cites multiple reputable sources to tell me it’s probably the flu and tells me to mask up for a bit, that’s probably gonna be better than being told “I’m sorry, I can’t answer that”

    At the same time, I might provide an LLM with all those symptoms, and it might hallucinate an answer and tell me I have cancer, or tell me to inject bleach to cure myself.

    I feel like I’d much rather see a bill that focuses more on how the LLMs come to their conclusions, rather than just a blanket ban.

    Like for example, if an LLM cites multiple medical journals, government health websites, etc, and provides the same information they had up, but it turns out to be wrong later because those institutions were wrong, would it be justified to sue the LLM company for someone else’s accidental misinformation?

    But if an LLM pulls from those sources, gets most of it right, but comes to a faulty conclusion, then should a private right of action exist?

    I’m not really sure myself to be honest. A lot of people rely on LLMs for their information now, so just blanket banning them from displaying certain information, for a lot of people, is just gonna be “you can’t know”, and they’re not gonna bother with regular searches anymore. To them, the chatbot IS the search engine now.

  • SaveTheTuaHawk@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

    Here’s a video about a simple question, in which ChatGPT refuses to accept the correct answer, is shown proof of the correct answer, and even gaslights the user about its wrong answer.

  • NutWrench@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Chatbots should never give medical advice. Chatbots dispense basic, standalone factoids, like “aspirin is a pain reliever.” But they don’t know or care about dosages, comorbid conditions, or whether or not you live or die, so they won’t ask follow-up questions.
