• whithom
    11 days ago

    Yes! And we should use it when it has been proven effective. But the AI shouldn’t be able to administer drugs.

    • pearsaltchocolatebar
      11 days ago

      For sure. There always needs to be a human in the loop. But this notion people seem to have that all AI is completely worthless just isn’t true.

      What’s scary is the hospital administration that will use AI to deny care to unprofitable patients (I’ve listened in on these conversations).

      • deranger@sh.itjust.works
        11 days ago

        Where’s anyone saying it’s worthless? That’s not in the article nor in these comments.

        The issue is how it’s being used. It’s not being used to detect cancer. It’s being used for “efficiency”, which means more patients being seen by fewer nurses. It’s furthering the goals of the business majors in hospital administration, not the nurses or doctors who are caring for the patient.

        • kryptonidas@lemmings.world
          11 days ago

          AI nearly everywhere is about efficiency: fewer people become more productive, so the owners keep more of the money. A pay raise for that productivity is off the table, since you're now considered “less skilled” anyway.

        • gravitas_deficiency@sh.itjust.works
          10 days ago

          LLMs are largely worthless (in the context of improving human society).

          Neural Nets aimed at much more specific domains (recognizing and indicating metastases or other abnormalities in pathology slides for human review, for example) are EXTREMELY useful and worthwhile.
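
          The “flag abnormalities for human review” workflow that makes these domain-specific models useful can be sketched in a few lines. This is a toy illustration, not any real pathology pipeline: `toy_score` stands in for a trained classifier, and the tiling, threshold, and array shapes are all assumptions for the example.

          ```python
          import numpy as np

          def tile_slide(slide, patch=4):
              """Split a 2D slide array into non-overlapping patch x patch tiles."""
              h, w = slide.shape
              tiles = []
              for y in range(0, h - h % patch, patch):
                  for x in range(0, w - w % patch, patch):
                      tiles.append(((y, x), slide[y:y + patch, x:x + patch]))
              return tiles

          def toy_score(tile):
              """Stand-in for a trained model: mean intensity as 'abnormality'."""
              return float(tile.mean())

          def flag_for_review(slide, threshold=0.6, patch=4):
              """Return (coordinate, score) pairs above threshold for a human to review."""
              flagged = []
              for coord, tile in tile_slide(slide, patch):
                  score = toy_score(tile)
                  if score > threshold:
                      flagged.append((coord, score))
              return flagged
          ```

          The key design point matches the comments above: the model only *flags* regions, and the decision stays with the pathologist reviewing them.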