Note Bloat

Navigating the flaws of AI scribes: A clinical examination

Published in From Clinicians • 5 Min Read • Dec 24, 2024
Reviewed by From Clinicians


As a family physician married to a software engineer, I have a confession to make: I'm terrible at keeping up with new technology. My husband lives and breathes the latest tech innovations, while I'm perfectly content being what you might call a "luddite."

But even technophobes like me can't ignore the AI revolution in healthcare, particularly when it comes to medical documentation.

I was very skeptical when I heard about AI scribing for clinicians. Someone had mentioned that AI can have "hallucinations" – not exactly reassuring when you're already dealing with patients who occasionally experience the real thing. I was worried my notes wouldn’t be accurate, and that the AI would steal all my patients’ data. I didn’t know if I could rely on it or how it would distinguish speakers within a conversation. 

Now, I use an AI scribe every day, and luckily no pink-elephant hallucinations have gotten me yet. As a reformed AI skeptic, here is a breakdown of the main concerns I had when I first started using an AI scribe, and where I eventually landed: namely, that AI scribes work best when we work with them.

Hallucinations (not the kind we learned about in med school) 

When people talk about AI hallucinations, they're referring to instances where the AI tries to fill in gaps with made-up information – similar to how our brains might imagine the lower half of someone's face when they're wearing a mask.

In my experience, these are most likely to occur during very brief or incomplete visits. A study indexed in the National Library of Medicine pointed out that this is why AI could (and should) never replace humans in healthcare. In fact, the more we physicians use and train the technology, the better it understands what we need it to do. The most common issue I've encountered is the AI adding normal findings that were never explicitly stated during the visit. I catch these with a quick post-visit note review.

Accuracy 

"But what if it gets something wrong?" I must have said this twenty times a day when I first started using an AI scribe. It's a valid concern – accuracy in medical documentation is non-negotiable.

We've all belted out "Hold me closer, Tony Danza" when Elton John was actually singing "Tiny Dancer." The best voice-to-text technology still has its own "misheard lyrics" moments, especially with medical terminology. While no system is perfect, these tools are usually trained on medical terms, and they actually adapt as you use them, getting better each time.

Patient data security  

I refuse to use a product that might compromise my patients' information. Before implementing an AI scribe, I did my due diligence – a sort of "chart review" of scribe security practices. What I learned was reassuring: when an AI scribe provider says they're HIPAA and SOC 2 compliant, they're not just throwing around buzzwords. HIPAA compliance requires comprehensive physical, network, and process security measures, and SOC 2 compliance requires strict information security policies verified through regular, independent third-party audits.

HIPAA- and SOC 2-compliant AI scribes only learn from de-identified patient information. That means they can recognize patterns (like knowing to ask about onset, location, duration, and severity when a patient reports leg pain) without compromising individual patient privacy.

Who’s speaking? 

Initially I wondered whether the AI would share our society's biases – would it assume my male patients were the doctors? Interestingly, the AI scribe I use doesn't distinguish between male and female voices at all. 

While it avoids gender bias, AI can still struggle to attribute statements to the correct speaker. For example, if a patient mentions that the internet told them to put jalapeños on their open wound twice a day, the AI might include it as a medical recommendation rather than a patient statement. (Note to self: Add "Do not recommend jalapeño wound therapy" to my standard disclaimers.)

Reliability 

Once I grew accustomed to living the scribe life, I worried about becoming dependent on it. What if it stopped working? In reality, most issues I've encountered have been user error (I’m a luddite, remember?) – forgetting to turn it on or not enabling the microphone.

Devices may die, or environments can be too loud to pick up conversations; in other words, life happens. But these instances are rare and usually manageable. They’re also good reminders to check notes and systems throughout the day.

The bottom line

Looking back at my initial anxieties, some of my concerns were definitely valid. Yes, I need to review my AI scribe's work. But that feels like a minor inconvenience compared to the pre-AI days, when I was handling all the documentation myself.

What I've learned is that AI scribes work best when we work with them — reviewing notes, training tools, and fact-checking accuracy. Rather than viewing them as replacements, I see AI scribes as partners in documentation. This partnership has evolved into something beautiful and unexpected: free weekends. I guess sometimes even luddites need to embrace change.
