I'm not very familiar with the metrics used to evaluate progress in medical fields, so I'm asking in a general sense.

  • Stovetop@lemmy.world · 14 days ago

    Well, I can’t claim to be an expert on the subject, but there are plenty of models that run locally and are required to ground their output directly in the documents they’re given. I’d assume a HIPAA-compliant model would need to be more like an airgapped NotebookLM than ChatGPT.
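
    Roughly the shape of pipeline I mean, as a sketch only: `run_local_model` is a placeholder for whatever on-prem runtime you'd actually wire in (llama.cpp, Ollama, etc.), and the keyword-overlap retrieval stands in for the embedding search a real system would use.

    ```python
    # Sketch of an "airgapped, grounded" summarizer: the model only ever sees
    # excerpts we hand it, and the prompt forces it to cite them.
    from dataclasses import dataclass

    @dataclass
    class SourceChunk:
        doc_id: str   # which chart/document the excerpt came from
        text: str     # the excerpt the model is allowed to use

    def retrieve(query: str, chunks: list[SourceChunk], k: int = 3) -> list[SourceChunk]:
        """Naive keyword-overlap ranking; a placeholder for real embedding search."""
        terms = set(query.lower().split())
        scored = sorted(
            chunks,
            key=lambda c: len(terms & set(c.text.lower().split())),
            reverse=True,
        )
        return scored[:k]

    def summarize_grounded(query: str, chunks: list[SourceChunk]) -> str:
        """Prompt the local model with only the retrieved excerpts, cited by doc_id."""
        context = "\n".join(f"[{c.doc_id}] {c.text}" for c in retrieve(query, chunks))
        prompt = (
            "Summarize ONLY from the excerpts below. Cite the [doc_id] for every "
            f"statement. If information is missing, say so.\n\n{context}\n\nTask: {query}"
        )
        return run_local_model(prompt)

    def run_local_model(prompt: str) -> str:
        # Hypothetical hook: connect this to your isolated, on-prem inference server.
        raise NotImplementedError("wire this to your local model runtime")
    ```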

    But I’d also assume the risk of hallucinations or misinterpretations is exactly why a clinician would still need to review the AI summary and add or correct details before signing off, so some risk probably remains. Whether that risk of error is greater or less than that of an overworked resident writing their notes a couple of days after finishing a 12-hour shift is another question, though.
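
    That review step could be as simple as a status gate; a hypothetical sketch, not any real EHR API:

    ```python
    # Hypothetical sign-off gate: an AI draft never becomes part of the record
    # until a named clinician has reviewed (and possibly edited) it.
    from dataclasses import dataclass

    @dataclass
    class NoteDraft:
        patient_id: str
        ai_text: str                    # what the model produced
        status: str = "pending_review"  # flips to "signed" only via sign_off()
        final_text: str = ""
        reviewer: str = ""

        def sign_off(self, reviewer: str, edited_text: str) -> None:
            """The clinician's (possibly corrected) text is what gets stored."""
            self.final_text = edited_text
            self.reviewer = reviewer
            self.status = "signed"

    draft = NoteDraft(patient_id="123", ai_text="Pt stable, afebrile...")
    draft.sign_off(reviewer="Dr. Smith", edited_text="Pt stable, afebrile; med list corrected.")
    assert draft.status == "signed"
    ```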

    • cecinestpasunbot@lemmy.ml · 14 days ago

      If you end up integrating LLMs in a way that could impact patient care, that’s actually pretty dangerous, considering their training data includes plenty of fictional and pseudoscientific sources. That said, it might be okay for medical research applications where accuracy isn’t as critical.