…“We believe Artificial Intelligence can save lives – if we let it. Medicine, among many other fields, is in the stone age compared to what we can achieve with joined human and machine intelligence working on new cures. There are scores of common causes of death that can be fixed with AI, from car crashes to pandemics to wartime friendly-fire.”

As I type this, the nation of Israel is using an AI program called the Gospel to assist its airstrikes, which have been widely condemned for their high level of civilian casualties…

  • TWeaK@lemm.ee · +22/−6 · 10 months ago

    Medicine relies on verification. AI operates without that.

    AI would be terrible in medicine.

    The Gospel is a good example, although I’d argue it’s intentionally used for that purpose - that, and so that no person can be held to account for their decisions.

    • unexposedhazard@discuss.tchncs.de · +28/−1 · 10 months ago

      I agree that in actual use, medicine needs to verifiably work. I believe “AI”, if you wanna call it that, probably has its place in effectively speedrunning theoretical testing and bruteforcing of results that would take humans much longer to even think of.

      The problem arises when people trust whatever the machine spits out. But that's not a new problem with AI; it's a general problem that any form of media has.

      • TWeaK@lemm.ee · +23 · 10 months ago

        AI is a tool. Just like all tools, it’s only as good as the tool that’s using it.

    • Moobythegoldensock@lemm.ee · +5 · 10 months ago

      Yep, exactly.

      As a doctor who’s into tech, before we implemented something like AI-assisted diagnostics, we’d have to consider what the laziest/least educated/most tired/most rushed doctor would do. The tools would have to be very carefully implemented such that the doctor is using the tool to make good decisions, not harmful ones.

      The last thing you want is a doctor blindly approving an inappropriate order suggested by an AI without applying critical thinking, causing harm to a real person because the machine generated a factually incorrect output.
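
      The safeguard described above could be sketched roughly as follows. This is a minimal illustration, not a real clinical system; all names (`Suggestion`, `place_order`, the example order) are hypothetical, and the key design choice is that there is no auto-approval path at all, regardless of the model's stated confidence.

      ```python
      from dataclasses import dataclass

      @dataclass
      class Suggestion:
          """A hypothetical AI-generated order suggestion."""
          order: str         # the proposed order, e.g. a medication
          confidence: float  # model's self-reported confidence, 0.0-1.0
          rationale: str     # evidence the clinician must review first

      def place_order(s: Suggestion, clinician_confirmed: bool) -> str:
          # No confidence threshold can bypass the human: the order is
          # blocked until a clinician has explicitly signed off.
          if not clinician_confirmed:
              raise PermissionError("clinician sign-off required")
          return f"order placed: {s.order}"
      ```

      The point of the sketch is that the confidence score is shown to the clinician but never consulted by the approval logic, so a tired or rushed doctor cannot end up on a code path where the machine's output goes through on its own.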