OpenAI’s Whisper tool may add fake text to medical transcripts, investigation finds.

  • ShareMySims@sh.itjust.works · 23 days ago

    Errors and hallucinations are definitely serious concerns, but my biggest concern would be privacy. If my GP is using AI, I no longer see my medical information as private, and that is unacceptable.

  • SpikesOtherDog@ani.social · 23 days ago

    I work in judicial tech and have heard questions about using AI transcription tools. I don't believe AI should be used in this kind of high-risk area. The people asking whether AI is a good fit for court transcripts can be forgiven, because all they see is the hype, but if the people responding greenlight a project like that, there will be some incredibly embarrassing moments.
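    If a court did pilot this anyway, the bare minimum would be machine-flagging every segment for human review before it goes anywhere near the record. A rough sketch of what that could look like with the open-source whisper package; the thresholds happen to match the library's own decoding defaults, but treat them, and the file name, as placeholder assumptions, not settled values:

    ```python
    import whisper

    # Assumed thresholds; they mirror whisper's decoding defaults
    # (logprob_threshold=-1.0, compression_ratio_threshold=2.4) but
    # should be tuned against real courtroom audio before being trusted.
    MIN_AVG_LOGPROB = -1.0        # lower means the model was guessing
    MAX_COMPRESSION_RATIO = 2.4   # very repetitive text is a hallucination tell

    model = whisper.load_model("medium")            # runs on local hardware
    result = model.transcribe("hearing_audio.wav")  # placeholder file name

    for seg in result["segments"]:
        suspect = (seg["avg_logprob"] < MIN_AVG_LOGPROB
                   or seg["compression_ratio"] > MAX_COMPRESSION_RATIO)
        flag = "  [NEEDS HUMAN REVIEW]" if suspect else ""
        print(f'{seg["start"]:8.2f}s  {seg["text"].strip()}{flag}')
    ```

    None of that makes the output trustworthy on its own; it just tells a human reviewer where to look first.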

    My other concern is that the court would have to run the service locally. There are situations where a victim's name or other information is redacted. That information should not sit on an OpenAI server, and it should not be regurgitated back out when the AI misbehaves.
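    For what it's worth, the open-source whisper package already runs entirely on local hardware, so the audio never has to touch a third-party server. A minimal sketch, assuming a placeholder file name and a hardcoded name list standing in for whatever the case record actually says must be redacted:

    ```python
    import re
    import whisper

    # Hypothetical: in practice the protected names would come from the
    # case record, not a hardcoded list.
    PROTECTED_NAMES = ["Jane Doe"]

    # load_model() fetches weights once, then all inference happens on
    # this machine; neither audio nor transcript leaves the building.
    model = whisper.load_model("medium")
    result = model.transcribe("sealed_hearing.wav")  # placeholder file name

    transcript = result["text"]
    for name in PROTECTED_NAMES:
        transcript = re.sub(re.escape(name), "[REDACTED]", transcript,
                            flags=re.IGNORECASE)
    print(transcript)
    ```

    Even then, an exact string match only catches names the model spelled correctly; a hallucinated variant slips straight through, which is exactly the regurgitation problem.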

    • FatCrab@lemmy.one · 23 days ago

      Don’t court stenographers basically use tailored voice models and voice-to-text transcription already?

  • ChihuahuaOfDoom@lemmy.world · 23 days ago

    Regular transcription software is finally respectable (the early days of Dragon NaturallySpeaking were dark indeed). Who thought tossing AI into the mix was a good idea?

    • ladicius@lemmy.world · 23 days ago

      This is the AI plan every healthcare entity worldwide will adopt.

      No joke. They are desperate for shit like this.