Hospital bosses love AI. Doctors and nurses are worried.

  • Diplomjodler@feddit.de · +60/-1 · 1 year ago

    AI in healthcare could have innumerable benefits if it were primarily viewed as a means to improve quality. But we all know that it will be seen as a cost-cutting tool first and foremost. The consequences are pretty predictable.

      • SCB@lemmy.world · +1/-4 · 1 year ago

        I know I should expect “the Luddite” to be shitty and wrong but this is next-level shitty and wrong:

        > When I call Home Depot because my new washing machine broke, they do not want to help me, because that costs them money. It creates complexity for them, so they put a computer in front of those of us calling to complain about our broken washing machines, putting us through that bad user experience of being simplified

        Not helping you costs them money. They’re already paying for the call center, they already have you as a customer, and your experience directly benefits their brand.

        > Online dating apps like Tinder make their money by users using their app, through subscriptions or other in-app purchases. This incentivizes Tinder to keep you on the app, meaning they are incentivized to sabotage your search for the perfect partner, after which most people will presumably leave Tinder. This is why dating apps are like a game that presents you with as many options as fast as possible. This is not how people actually interact, or how you get to know a potential romantic partner.

        This is similarly nonsensical in that A) Tinder has competition, so they are incentivized to provide a good experience or lose market share, and B) once you match with someone, you interact like humans always have.

        “Any time something doesn’t work it’s because capitalism” is not a real philosophy.

  • HousePanther@lemmy.goblackcat.com · +26/-5 · 1 year ago

    All it will take for AI to go away is one serious mistake that injures or kills a patient. The wrongful-death and negligence lawsuits would run into the high millions, or even more if the patient is a young child. In my opinion, AI is a very bad idea; I would sooner put my trust in a human being than in AI. I can see AI being a tool to assist a doctor in making a diagnosis, but certainly not one to replace doctors or reduce their numbers.

    I am kind of - no, I am really - anti-capitalist. In some evil sort of way, if the hospital ‘boss’ decides to replace enough medical professionals with AI and robots and this causes a patient to die from inadequate care, then not a small part of me believes that the hospital boss should pay with their own life. Yeah, I have an anger management problem when it comes to the wealthy.

    • danielbln@lemmy.world · +24 · 1 year ago

      AI won’t replace doctors, but doctors that use AI might replace doctors that don’t, and I’m ok with that. Keep the human in the loop, by all means, but make use of powerful tooling that might make things better.

      • DragonAce@lemmy.world · +6 · 1 year ago

        I’m of the same mindset. A doctor equipped with all the latest technology will be able to offer far more accurate diagnoses and custom treatment plans than the traditional “make an educated guess and throw shit at it till something works” approach.

      • nous@programming.dev · +4 · 1 year ago

        IMO it is a double-edged sword. On the one hand, a doctor who uses AI to flag something they might not have thought of, and who confirms what the AI says before treatment, can be a big benefit. But on the flip side, people leaning too much on it, not verifying the output, and taking what it says at face value as if it cannot be wrong will lead to some very bad situations.

        I can see most people wanting to pull towards the former, but cost cutting, overworking employees, and maximising profits will pull things towards the latter. And at the moment I don’t know which force is stronger - we really need to get the profit motive out of our healthcare systems.

        • shadowSprite@lemmy.world · +3 · 1 year ago

          I think it’s a more modern version of what we in EMS call “treat the patient, not the monitor.” That is, if your patient looks like they’re in distress, is having trouble breathing, etc., but you throw them on the monitor to get vitals and it reads everything as within normal levels, don’t just sit back and go, well, clearly you’re fine, stop saying you can’t breathe, because my little LIFEPAK says otherwise. Either the monitor is wrong or they’re doing some hard-core compensation to keep themselves within normal ranges, so let’s treat them and not what the computer says.

    • Kalkaline@programming.dev · +5 · 1 year ago

      The penalties have to be high enough to dissuade the use of AI as a replacement for human care. I see AI as an assist for things like intake questionnaires, spotting inconsistencies in charts, and automating some of the research and digging physicians and nurses have to do to get to know their patients. Look at companies like Persyst, who have been doing EEG trending for years and are the gold standard for that stuff: they readily admit their software is not a replacement for a trained physician reading the raw data, and physicians will tell you it’s a huge help in speeding up their read times while agreeing it’s no replacement for them.
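
      To make “EEG trending” concrete, here is a toy sketch of the idea: collapsing hours of signal into a short summary a physician can scan. It just buckets detected spike events into ten-minute windows; the timestamps are made up, and real tools like Persyst are far more sophisticated.

      ```python
      # Toy "trending": bucket detected spike events into fixed windows so
      # hours of EEG reduce to a short trend a physician can scan quickly.
      from collections import Counter

      spike_times_s = [12.4, 30.1, 45.9, 610.0, 615.2, 618.8, 1300.5]  # made up

      def spike_trend(times, window_s=600):
          buckets = Counter(int(t // window_s) for t in times)
          return [(w * window_s, buckets.get(w, 0)) for w in range(max(buckets) + 1)]

      for start, count in spike_trend(spike_times_s):
          flag = "  <- flag for physician review" if count >= 3 else ""
          print(f"{start:>5}s: {count} spikes{flag}")
      ```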

    • revelrous@sopuli.xyz · +3 · 1 year ago

      Well. In hell you’ll find company. The greed of capitalism is a violence; in a fair world it would be punished accordingly.

    • Fubarberry@sopuli.xyz · +3/-1 · 1 year ago

      Thing is, human doctors already make a lot of mistakes that cause wrongful deaths. It wouldn’t surprise me if it ends up like the situation we’re seeing with Tesla’s self-driving cars, which clearly have safety issues but still end up being twice as safe as human drivers.

    • Fallenwout@lemmy.world · +2/-1 · 1 year ago

      Doctors aren’t held accountable for their mistakes. They cover shit up. It is only by accident that a patient finds out and is able to sue.

    • HobbitFoot@thelemmy.club · +1 · 1 year ago

      It probably won’t look like that.

      There are already computer programs in place that assist doctors and nurses, like programs that check for drug interactions.
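
      Those checkers are essentially rule-based lookups. A minimal sketch of the idea, with made-up interaction pairs that are purely illustrative, not clinical guidance:

      ```python
      # Minimal rule-based drug-interaction check; pairs are illustrative only.
      KNOWN_INTERACTIONS = {
          frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
          frozenset({"sildenafil", "nitroglycerin"}): "severe hypotension",
      }

      def check_interactions(prescriptions):
          """Warn on every known interacting pair among the patient's meds."""
          meds = [m.lower() for m in prescriptions]
          warnings = []
          for idx, a in enumerate(meds):
              for b in meds[idx + 1:]:
                  reason = KNOWN_INTERACTIONS.get(frozenset({a, b}))
                  if reason:
                      warnings.append(f"{a} + {b}: {reason}")
          return warnings

      print(check_interactions(["Warfarin", "Lisinopril", "Aspirin"]))
      # ['warfarin + aspirin: increased bleeding risk']
      ```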

      At first, AI will probably be added to assist practitioners rather than replace them: a junior general doctor uses an AI as a diagnostic tool, or a radiologist uses AI to help in diagnosing tumors.

      Over time, costs go down and the tools get better, and less of each doctor’s time is needed as the AI is relied on for more of the grunt work, with the doctor just there to make sure nothing bad happens. By the time anyone proposes outright replacing humans with AI, they are going to have a large body of evidence showing that AI has a better record than people.

    • OrdinaryAlien@lemm.ee · +1/-1 · 1 year ago

      So, you’re saying we should invent a bad AI to kill children? Consider it done. 🤖

  • godzillabacter@lemmy.world · +13 · 1 year ago

    4th year medical student. AI is not ready to be making any diagnostic or therapeutic decisions. What I do think we’re just about ready for is simply making notes faster to write. Discharge summaries especially could be the first real step AI takes into healthcare. For those unaware, a discharge summary is a chronological description of all the major events in a patient’s hospitalization: why they presented, how they were diagnosed, any complications that arose, and how they were treated. They are just summaries of all the previous daily notes written by the patient’s doctors. An AI could feasibly pull data only from those notes, rephrasing for clarity and succinctness, and save doctors 10-20 minutes of writing on every discharge they do.
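
    As a rough illustration of that workflow, here is a sketch assuming the openai Python client; the model name and the toy notes are placeholders. The point is that the prompt restricts the draft to the existing notes and leaves review to the physician.

    ```python
    # Sketch: draft a discharge summary from the existing daily notes only,
    # for a physician to review and sign. Assumes the `openai` client;
    # the notes below are toy placeholders, not real patient data.
    from openai import OpenAI

    client = OpenAI()

    daily_notes = [
        "Day 1: 68M presented with chest pain; troponin elevated; NSTEMI suspected.",
        "Day 2: Cath showed 90% LAD stenosis; drug-eluting stent placed.",
        "Day 3: Stable post-PCI; started dual antiplatelet therapy; discharged.",
    ]

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": (
                "Write a concise discharge summary using ONLY the notes provided. "
                "Do not add findings, dates, or medications that are not in them."
            )},
            {"role": "user", "content": "\n".join(daily_notes)},
        ],
    )

    print(response.choices[0].message.content)  # physician reviews before signing
    ```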

    • SkySyrup@sh.itjust.works · +3 · 1 year ago

      Also, in general, summaries are a strong suit of LLMs right now, and even if the technology doesn’t advance further (which I’m quite skeptical of), it is still an extremely useful tool that will drastically impact so much.

    • theluddite@lemmy.ml · +1/-1 · 1 year ago

      This is how most of the tech industry thinks – looking at the existing process and trying to see which parts can be automated – but I’d argue that it’s actually not that great of a framework for finding good uses for technology. It’s an artifact of a VC-funded industry, which sees technology primarily as a way to save costs on labor.

      In this particular case, I do think LLMs would be great at lowering labor costs associated with writing summaries, but you’d end up with a lot of cluttered, mediocre summaries clogging up your notes, just like all the other bloatware that most of our jobs now force us to deal with.

  • marche_ck@lemmy.world · +12/-2 · 1 year ago

    This is a bad idea. I used to do call-centre customer service, and while it wasn’t implemented on our side, some contacts that got routed to us seemed to have been handled by chatbots first, and people weren’t happy.

    The human condition is complex and organic, often involving unique circumstances. A brain-dead statistical machine like AI cannot be expected to handle these things well.

    Now, if it is for health-related big-data analytics, like epidemic modeling, demographic changes, or the effect of dietary patterns (notoriously hard to model, actually), then I don’t see anything wrong with that. Caring for people? No way.
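
    For a sense of what that kind of modeling looks like, here is a toy SIR epidemic model, the classic starting point; the parameters are illustrative, not fitted to any real outbreak.

    ```python
    # Toy SIR model: each day, infections grow with contact between the
    # susceptible (S) and infected (I) groups, and a fraction of I recovers (R).
    def sir(s, i, r, beta=0.3, gamma=0.1, days=160):
        n = s + i + r
        history = []
        for _ in range(days):
            new_infections = beta * s * i / n
            new_recoveries = gamma * i
            s -= new_infections
            i += new_infections - new_recoveries
            r += new_recoveries
            history.append((s, i, r))
        return history

    trajectory = sir(s=9_999, i=1, r=0)
    peak = max(i for _, i, _ in trajectory)
    print(f"Peak simultaneous infections: {peak:.0f} of 10,000")
    ```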

    • Stovetop@lemmy.world · +4 · 1 year ago

      It’s not going to be about patient interaction, at least not at first. This is about EMR-integrated analytics and diagnostic tools intended to help streamline the workflows of overworked doctors and nurses and to identify patterns that humans may miss.
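
      As a minimal sketch of what such a tool does, here is a simplified early-warning score over vitals pulled from the chart; the thresholds are illustrative, not any real clinical scoring system.

      ```python
      # Simplified early-warning score: abnormal vitals add points, and a
      # high total surfaces the patient for clinician review. Thresholds
      # here are illustrative only.
      def warning_score(heart_rate, resp_rate, spo2, temp_c):
          score = 0
          if heart_rate > 110 or heart_rate < 50:
              score += 2
          if resp_rate > 24:
              score += 2
          if spo2 < 92:
              score += 3
          if temp_c > 38.5 or temp_c < 35.0:
              score += 1
          return score

      vitals = {"heart_rate": 118, "resp_rate": 26, "spo2": 90, "temp_c": 37.2}
      print(warning_score(**vitals))  # 7 -> flag on the unit dashboard
      ```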

  • AutoTL;DR@lemmings.world [bot] · +7 · 1 year ago

    This is the best summary I could come up with:


    NEW YORK — Every day Bojana Milekic, a critical care doctor at Mount Sinai Hospital, scrolls through a computer screen of patient names, looking at the red numbers beside them — a score generated by artificial intelligence — to assess who might die.

    Mount Sinai is among a group of elite hospitals pouring hundreds of millions of dollars into AI software and education, turning their institutions into laboratories for this technology.

    They worry about the technology making wrong diagnoses, revealing sensitive patient data and becoming an excuse for insurance and hospital administrators to cut staff in the name of innovation and efficiency.

    In the 1970s, Stanford University researchers created a rudimentary AI system that asked doctors questions about a patient’s symptoms and provided a diagnosis based on a database of known infections.

    In the 1990s and early 2000s, AI algorithms began deciphering complex patterns in X-rays, CT scans and MRI images to spot abnormalities that the human eye might miss.

    “While artificial intelligence (AI) undoubtedly holds tremendous potential to improve patient care and health outcomes, I worry that premature deployment of unproven technology could lead to the erosion of trust in our medical professionals and institutions,” he said in a statement.


    The original article contains 1,500 words, the summary contains 201 words. Saved 87%. I’m a bot and I’m open source!

  • Fallenwout@lemmy.world · +5/-5 · 1 year ago

    Good, doctors should be worried. Because if what you have isn’t a textbook issue, they won’t search any further; they’ll tell you, with their heads held high, “live with it.” And they get it wrong half the time too - pretty much what AI would do. Characters like Dr. House are a myth.

    Nurses should not be worried, because they are the ones who take the imaging, put on the bandages, and clean up.

    I welcome AI if it means shutting up ivory-tower doctors.

    All the power to nurses!