Healthcare systems are increasingly adopting Electronic Health Records (EHRs) to store and manage patient health information and history. As hospitals take up the technology, the use of AI to manage these datasets and identify patterns for treatment plans is also on the rise, but not without debate.

Supporters of AI in EHRs argue that it improves diagnostic accuracy, reduces inequities, and eases physician burnout. Critics, however, raise concerns about patient privacy, informed consent, and data bias against marginalized communities. As bills such as H.R. 238 seek to expand the clinical authority of AI, it is important to have discussions surrounding the ethical, practical, and legal implications of AI’s future role in healthcare.

I’d love to hear what this community thinks. Should AI be implemented with EHRs? Or do you think the concerns surrounding patient outcomes and privacy outweigh the benefits?

  • Maeve@kbin.earth · 3 days ago

    Think of all the data breaches of big insurance and clearinghouses. That’s definitely going to be an AI nightmare.

    • TheFogan@programming.dev · 2 days ago

      Well yeah, exactly why I said “the same risk”. Ideally it’s going to be in the same systems… and assuming no one is stupid enough (or the laws don’t let them) to attach it to the publicly accessible forms of existing AIs, it’s not a new additional risk, just the same one. (Though those assumptions are largely their own risks.)